One of my VMs crashed so it stopped itself. However, it wouldn’t leave the Stopping phase no matter what I did. So I tried deleting it after backing up the volume, and now it’s stuck in the Terminating phase.
Environment
- Harvester ISO version: 1.1.1
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Supermicro HS11 + AMD EPYC 7551P
Additional context
One of my nodes stopped due to a power outage (the VM was running on another node, though). I don’t think that’s the cause, but it seems worth mentioning.
Hey,
I just had a similar issue earlier this week.
The VM could be cloned and launched, but I wanted to properly stop and restart the original VM.
So this is what I did:
# Connect to the node
ssh rancher@<IP>
sudo bash
kubectl get -n <namespace> virtualmachineinstances.kubevirt.io
# Confirmed the VMI is Terminating
# Now: because KubeVirt creates a VM as two separate Kubernetes resources,
# a VirtualMachine and a VirtualMachineInstance, you can just delete the
# corresponding VMI
kubectl delete virtualmachineinstances.kubevirt.io -n <namespace> <yourvmname>
# Kubernetes will spin up a fresh instance of your VM
# I did have to use --force in my case, though I'm not sure what the
# unintended consequences could be
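For reference, the force-delete step above, plus a fallback for a VMI that stays stuck in Terminating, might look like the sketch below. The `<namespace>` and `<yourvmname>` placeholders are assumptions, and clearing finalizers should be a last resort, since it bypasses KubeVirt's normal cleanup:

```shell
# Force-delete the VMI, skipping the graceful termination period
kubectl delete virtualmachineinstances.kubevirt.io \
  -n <namespace> <yourvmname> --force --grace-period=0

# If it still hangs in Terminating, a finalizer is usually blocking
# deletion; inspect the object's finalizers first
kubectl get virtualmachineinstances.kubevirt.io \
  -n <namespace> <yourvmname> -o jsonpath='{.metadata.finalizers}'

# Last resort: clear the finalizers so the API server can remove the
# object (this skips KubeVirt's cleanup logic, so use with caution)
kubectl patch virtualmachineinstances.kubevirt.io \
  -n <namespace> <yourvmname> --type merge \
  -p '{"metadata":{"finalizers":null}}'
```

These commands assume direct kubectl access on the node, as in the steps above; on Harvester the kubeconfig is already set up for root.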