5.3. Virtual machine replica sets
Just like a VirtualMachinePool, a VirtualMachineInstanceReplicaSet resource tries to ensure that a specified number of virtual machines are always in a ready state. The VirtualMachineInstanceReplicaSet is very similar to the Kubernetes ReplicaSet.
However, the VirtualMachineInstanceReplicaSet does not maintain any state or provide guarantees about the maximum number of VMs running at any given time. For instance, the VirtualMachineInstanceReplicaSet may initiate new replicas if it detects that some VMs have entered an unknown state, even if those VMs might still be running.
Using a VirtualMachineInstanceReplicaSet
Using the custom resource VirtualMachineInstanceReplicaSet, we can specify a template for our VM. A VirtualMachineInstanceReplicaSet consists of a
VM specification just like a regular VirtualMachine. This specification resides in spec.template.
Besides the VM specification, the replica set requires some additional metadata like labels to keep track of the VMs in the replica set.
This metadata resides in spec.template.metadata.
The number of VMs we want the replica set to manage is specified in spec.replicas. This number defaults to 1 if it is left empty.
If you change the number of replicas while the replica set is running, the controller reacts to it and adjusts the number of VMs in the replica set accordingly.
The replica set controller needs to keep track of the VMs running in this replica set. This is done by specifying a spec.selector. This
selector must match the labels in spec.template.metadata.labels.
A basic VirtualMachineInstanceReplicaSet template looks like this:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: vmi-replicaset
spec:
  replicas: 2 # desired instances in the replica set
  selector:
    # VirtualMachineInstanceReplicaSet selector
  template:
    metadata:
      # VirtualMachineInstance metadata
    spec:
      # VirtualMachineInstance template
      [...]
Note
Be aware that if spec.selector does not match spec.template.metadata.labels, the controller will do nothing except log an error. Further, it is your responsibility not to create two VirtualMachineInstanceReplicaSets whose selectors conflict with each other.

A real world example could look like this:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: vmi-cirros-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/domain: vmi-cirros
  template:
    metadata:
      labels:
        kubevirt.io/domain: vmi-cirros
    spec:
      [...]
When to use VirtualMachineInstanceReplicaSets
You should use VirtualMachineInstanceReplicaSets whenever you want multiple exactly identical instances that do not require persistent disk state. In other words, only use replica sets if your VM is ephemeral and every attached disk is read-only. If the VM writes data, it should only do so to a tmpfs.
Warning
You should expect data corruption if the VM writes data to any storage that is not a tmpfs or an ephemeral volume type.

Volume types which can safely be used with replica sets are (see the sketch below):
- cloudInitNoCloud
- ephemeral
- containerDisk
- emptyDisk
- configMap
- secret
- any other type, if the VM instance only writes to a tmpfs
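As an illustration, a volumes section using only replica-set-safe volume types could look like the following sketch. The volume names and the referenced ConfigMap (my-configmap) are placeholders for illustration only and are not part of this lab:

volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo # ephemeral copy of a read-only base image
  - name: scratch
    emptyDisk:
      capacity: 1Gi # temporary disk, not persisted beyond the VMI lifecycle
  - name: app-config
    configMap:
      name: my-configmap # read-only configuration data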
Note
This is the most important difference from a VirtualMachinePool. If you want to manage multiple unique instances using persistent storage, you have to use a VirtualMachinePool. If you want to manage identical ephemeral instances which do not require persistent storage or differing data sources (startup scripts, config maps, secrets), you should use a VirtualMachineInstanceReplicaSet.

Using a VirtualMachineInstanceReplicaSet
We will create a VirtualMachineInstanceReplicaSet using a CirrOS container disk from a container registry. As we know, container disks are ephemeral, so this fits our use case very well.
Task 5.3.1: Create a VirtualMachineInstanceReplicaSet
Create a file vmirs_lab05-cirros.yaml in the folder labs/lab05 and start with the following boilerplate config:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: lab05-cirros-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/domain: lab05-cirros
  template:
    metadata:
      labels:
        kubevirt.io/domain: lab05-cirros
    spec:
      [...]
Enhance the spec.template.spec block to start a VM matching these criteria:
- Use the container disk quay.io/kubevirt/cirros-container-disk-demo
- Use an empty cloudInitNoCloud block
- Use 2 replicas
- Configure the guest to have 1 core
- Resources:
  - Request 265Mi of memory
  - Request 100m of cpu
  - Limit 300m of cpu
Use this empty cloudInitNoCloud block to prevent CirrOS from trying to retrieve its instance data from a remote URL:
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
Task hint
Your resulting VirtualMachineInstanceReplicaSet should look like this:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: lab05-cirros-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/domain: lab05-cirros
  template:
    metadata:
      labels:
        kubevirt.io/domain: lab05-cirros
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
            memory: 265Mi
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
kubectl apply -f labs/lab05/vmirs_lab05-cirros.yaml --namespace lab-<username>
virtualmachineinstancereplicaset.kubevirt.io/lab05-cirros-replicaset created
Task 5.3.2: Access the VirtualMachineInstanceReplicaSet
The CirrOS disk image does not provide much functionality besides console access to the VMs.
Check the availability of the VirtualMachineInstanceReplicaSet with:
kubectl get vmirs --namespace lab-<username>
NAME                      DESIRED   CURRENT   READY   AGE
lab05-cirros-replicaset   2         2         2       1m
List the created VirtualMachineInstances using:
kubectl get vmi --namespace lab-<username>
NAME                           AGE   PHASE     IP             NODENAME            READY
lab05-cirros-replicasetnc5p5   11m   Running   10.244.3.96    training-worker-0   True
lab05-cirros-replicasetp25s2   11m   Running   10.244.3.149   training-worker-0   True
You can access the VM’s console with virtctl using the name of the VMI:
virtctl console lab05-cirros-replicasetnc5p5 --namespace lab-<username>
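You can leave the console again with the escape sequence Ctrl+]. If you only need the VMI names, for example to pass them to virtctl, a jsonpath query can be handy (this is just a convenience, not required for the lab):

kubectl get vmi --namespace lab-<username> -o jsonpath='{.items[*].metadata.name}'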
Scaling the VirtualMachineInstanceReplicaSet
As the VirtualMachineInstanceReplicaSet implements the standard Kubernetes scale subresource, you can scale the VirtualMachineInstanceReplicaSet using:
kubectl scale vmirs lab05-cirros-replicaset --replicas 1 --namespace lab-<username>
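To verify that the scale operation took effect, list the replica set again; the DESIRED and CURRENT columns should now reflect the new replica count:

kubectl get vmirs lab05-cirros-replicaset --namespace lab-<username>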
Horizontal pod autoscaler
The HorizontalPodAutoscaler (HPA) resource can be used to manage the replica count depending on resource usage:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: lab05-cirros-replicaset
spec:
  maxReplicas: 2
  minReplicas: 1
  scaleTargetRef:
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet
    name: lab05-cirros-replicaset
  targetCPUUtilizationPercentage: 75
This will ensure that the VirtualMachineInstanceReplicaSet is automatically scaled depending on the CPU utilization. The utilization is measured relative to the CPU requests of the underlying virt-launcher pods (100m in our example).
You can check the consumption of your pods with:
kubectl top pod --namespace lab-<username>
NAME                                               CPU(cores)   MEMORY(bytes)
user2-webshell-f8b44dfdc-92qjj                     6m           188Mi
virt-launcher-lab05-cirros-replicasetck6rw-9s8wd   3m           229Mi
Task 5.3.3: Enable the HorizontalPodAutoscaler
Create a file hpa_lab05-cirros.yaml in the folder labs/lab05 with the following content:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: lab05-cirros-replicaset
spec:
  maxReplicas: 2
  minReplicas: 1
  scaleTargetRef:
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet
    name: lab05-cirros-replicaset
  targetCPUUtilizationPercentage: 75
Create the HorizontalPodAutoscaler in the cluster:
kubectl apply -f labs/lab05/hpa_lab05-cirros.yaml --namespace lab-<username>
horizontalpodautoscaler.autoscaling/lab05-cirros-replicaset created
Check the status of the HorizontalPodAutoscaler with:
kubectl get hpa --namespace lab-<username>
NAME                      REFERENCE                                                   TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
lab05-cirros-replicaset   VirtualMachineInstanceReplicaSet/lab05-cirros-replicaset   cpu: 2%/75%   1         2         1          7m44s
Open a second terminal in the webshell and connect to the console of one of your VM instances:
kubectl get vmi --namespace lab-<username>
NAME                           AGE     PHASE     IP             NODENAME            READY
lab05-cirros-replicasetck6rw   9m47s   Running   10.244.3.171   training-worker-0   True
Pick the VMI and open the console:
virtctl console lab05-cirros-replicaset<pod> --namespace lab-<username>
Start to generate some load. Issue the following command inside the VM console you just opened:
load() { dd if=/dev/zero of=/dev/null & }; load; read; killall dd
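For clarity, this is the same one-liner spread over multiple lines with comments:

load() { dd if=/dev/zero of=/dev/null & }   # define a helper that starts dd (pure CPU load) in the background
load                                        # start the load
read                                        # block until you press Enter
killall dd                                  # stop the dd process again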
In the other terminal, regularly check the following commands:
kubectl top pod --namespace lab-<username>
kubectl get hpa --namespace lab-<username>
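Instead of re-running the commands by hand, you can also let kubectl watch the HorizontalPodAutoscaler for changes (stop watching with Ctrl+C):

kubectl get hpa --namespace lab-<username> --watch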
After a short delay, the HorizontalPodAutoscaler kicks in and scales your replica set to 2:
NAME                      REFERENCE                                                   TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
lab05-cirros-replicaset   VirtualMachineInstanceReplicaSet/lab05-cirros-replicaset   cpu: 283%/75%   1         2         2          11m
And you will see that a second VMI has been started:
kubectl get vmi --namespace lab-<username>
After the horizontal pod autoscaler has scaled up your instances, head over to the console where you generated the load.
Hit enter in the console to stop the load generation. By default, the horizontal pod autoscaler uses a scale-down stabilizationWindowSeconds of 300 seconds. This means it will keep the replica set stable for at least 300 seconds before issuing a scale down. For more information about the configuration, head over to the Horizontal pod autoscaler documentation.
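If you want to experiment with a shorter scale-down delay, the stabilization window can be set explicitly. This requires the autoscaling/v2 API; the following is only a sketch of how our HPA could be rewritten with a custom window, not part of the lab:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: lab05-cirros-replicaset
spec:
  scaleTargetRef:
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet
    name: lab05-cirros-replicaset
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60 # scale down after 60 seconds instead of the default 300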
End of lab
Cleanup resources
You have reached the end of this lab. Please stop your running virtual machines to save resources on the cluster.
Delete your VirtualMachinePool:
kubectl delete vmpool lab05-webserver --namespace lab-<username>
Delete your VirtualMachineInstanceReplicaSet:
kubectl delete vmirs lab05-cirros-replicaset --namespace lab-<username>
Delete the HorizontalPodAutoscaler:
kubectl delete hpa lab05-cirros-replicaset --namespace lab-<username>