"waiting for 2 etcd machines to delete"

Hi,

I tried to upgrade my cluster, and somehow Rancher decided to delete two machines at once (vSphere provider), causing the upgrade to get stuck.
I tried to restore an etcd backup (that has worked before in similar situations), and that made Rancher “realize” that the nodes it deleted are gone - so that’s good :slight_smile:

However, it is now stuck with this:
“waiting for 2 etcd machines to delete”
How can I proceed from here? I have really struggled to find information online about this particular issue, and I can’t find any finalizers that seem to be blocking this either.
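For reference, this is roughly how I have been looking for machines that are stuck mid-deletion - just a sketch, assuming the provisioned machines live in the fleet-default namespace like mine do:

# sketch: list machines with their phase, deletion timestamp and finalizers
kubectl get machines.cluster.x-k8s.io -n fleet-default \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DELETED:.metadata.deletionTimestamp,FINALIZERS:.metadata.finalizers

The cluster object itself currently looks like this: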


apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  annotations:
    field.cattle.io/creatorId: user-jz9v5
  creationTimestamp: '2023-08-17T07:35:08Z'
  finalizers:
    - wrangler.cattle.io/provisioning-cluster-remove
    - wrangler.cattle.io/rke-cluster-remove
    - wrangler.cattle.io/cloud-config-secret-remover
  generation: 18
  managedFields:
    - apiVersion: provisioning.cattle.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            v:"wrangler.cattle.io/cloud-config-secret-remover": {}
      manager: rancher-v2.7.5-secret-migrator
      operation: Update
      time: '2023-08-17T07:35:09Z'
    - apiVersion: provisioning.cattle.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"wrangler.cattle.io/provisioning-cluster-remove": {}
            v:"wrangler.cattle.io/rke-cluster-remove": {}
        f:spec:
          .: {}
          f:cloudCredentialSecretName: {}
          f:defaultPodSecurityAdmissionConfigurationTemplateName: {}
          f:defaultPodSecurityPolicyTemplateName: {}
          f:kubernetesVersion: {}
          f:localClusterAuthEndpoint:
            .: {}
            f:caCerts: {}
            f:enabled: {}
            f:fqdn: {}
          f:rkeConfig:
            .: {}
            f:chartValues:
              .: {}
              f:rke2-cilium: {}
            f:etcd:
              .: {}
              f:disableSnapshots: {}
              f:snapshotRetention: {}
              f:snapshotScheduleCron: {}
            f:etcdSnapshotRestore:
              .: {}
              f:generation: {}
              f:name: {}
              f:restoreRKEConfig: {}
            f:machineGlobalConfig:
              .: {}
              f:cni: {}
              f:disable-kube-proxy: {}
              f:etcd-expose-metrics: {}
            f:machinePoolDefaults: {}
            f:machinePools: {}
            f:machineSelectorConfig: {}
            f:registries:
              .: {}
              f:configs: {}
              f:mirrors: {}
            f:upgradeStrategy:
              .: {}
              f:controlPlaneConcurrency: {}
              f:controlPlaneDrainOptions:
                .: {}
                f:deleteEmptyDirData: {}
                f:disableEviction: {}
                f:enabled: {}
                f:force: {}
                f:gracePeriod: {}
                f:ignoreDaemonSets: {}
                f:skipWaitForDeleteTimeoutSeconds: {}
                f:timeout: {}
              f:workerConcurrency: {}
              f:workerDrainOptions:
                .: {}
                f:deleteEmptyDirData: {}
                f:disableEviction: {}
                f:enabled: {}
                f:force: {}
                f:gracePeriod: {}
                f:ignoreDaemonSets: {}
                f:skipWaitForDeleteTimeoutSeconds: {}
                f:timeout: {}
      manager: rancher
      operation: Update
      time: '2024-02-14T11:21:46Z'
    - apiVersion: provisioning.cattle.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:agentDeployed: {}
          f:clientSecretName: {}
          f:clusterName: {}
          f:conditions: {}
          f:observedGeneration: {}
          f:ready: {}
      manager: rancher
      operation: Update
      subresource: status
      time: '2024-02-15T01:21:39Z'
  name: dev
  namespace: fleet-default
  resourceVersion: '158698501'
  uid: e9e555af-cfd4-4412-9fda-9dc75c5b650f
spec:
  cloudCredentialSecretName: cattle-global-data:cc-nt5vn
  defaultPodSecurityAdmissionConfigurationTemplateName: ''
  defaultPodSecurityPolicyTemplateName: ''
  kubernetesVersion: v1.26.11+rke2r1
  localClusterAuthEndpoint:
    caCerts: ''
    enabled: false
    fqdn: ''
  rkeConfig:
    chartValues:
      rke2-cilium: {}
    etcd:
      disableSnapshots: false
      snapshotRetention: 25
      snapshotScheduleCron: 0 */5 * * *
    etcdSnapshotRestore:
      generation: 2
      name: dev-etcd-snapshot-dev-pool1-096006f5-vrll6-1707832805-local
      restoreRKEConfig: none
    machineGlobalConfig:
      cni: cilium
      disable-kube-proxy: false
      etcd-expose-metrics: false
    machinePoolDefaults: {}
    machinePools:
      - controlPlaneRole: true
        dynamicSchemaSpec: >-
          {"resourceFields":{"boot2dockerUrl":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          URL for boot2docker
          image"},"cfgparam":{"type":"array[string]","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"nullable":true,"create":true,"update":true,"description":"vSphere
          vm configuration parameters (used for
          guestinfo)"},"cloneFrom":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          you choose creation type clone a name of what you want to clone is
          required"},"cloudConfig":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"Filepath
          to a cloud-config yaml file to put into the ISO
          user-data"},"cloudinit":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          cloud-init filepath or url to add to guestinfo, filepath will be read
          and base64 encoded before
          adding"},"contentLibrary":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          you choose to clone from a content library template specify the name
          of the
          library"},"cpuCount":{"type":"string","default":{"stringValue":"2","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          CPU number for docker
          VM"},"creationType":{"type":"string","default":{"stringValue":"legacy","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"Creation
          type when creating a new virtual machine. Supported values: vm,
          template, library,
          legacy"},"customAttribute":{"type":"array[string]","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"nullable":true,"create":true,"update":true,"description":"vSphere
          custom attribute, format key/value e.g. '200=my custom
          value'"},"datacenter":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          datacenter for virtual
          machine"},"datastore":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          datastore for virtual
          machine"},"datastoreCluster":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          datastore cluster for virtual
          machine"},"diskSize":{"type":"string","default":{"stringValue":"20480","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          size of disk for docker VM (in
          MB)"},"folder":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          folder for the docker VM. This folder must already exist in the
          datacenter"},"hostsystem":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          compute resource where the docker VM will be instantiated. This can be
          omitted if using a cluster with
          DRS"},"memorySize":{"type":"string","default":{"stringValue":"2048","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          size of memory for docker VM (in
          MB)"},"network":{"type":"array[string]","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"nullable":true,"create":true,"update":true,"description":"vSphere
          network where the virtual machine will be
          attached"},"os":{"type":"string","default":{"stringValue":"linux","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          using a non-B2D image you can specify the desired machine
          OS"},"password":{"type":"password","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          password"},"pool":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          resource pool for docker
          VM"},"sshPassword":{"type":"string","default":{"stringValue":"tcuser","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          using a non-B2D image you can specify the ssh
          password"},"sshPort":{"type":"string","default":{"stringValue":"22","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          using a non-B2D image you can specify the ssh
          port"},"sshUser":{"type":"string","default":{"stringValue":"docker","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          using a non-B2D image you can specify the ssh
          user"},"sshUserGroup":{"type":"string","default":{"stringValue":"staff","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"If
          using a non-B2D image the uploaded keys will need chown'ed, defaults
          to staff e.g.
          docker:staff"},"tag":{"type":"array[string]","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"nullable":true,"create":true,"update":true,"description":"vSphere
          tag id e.g.
          urn:xxx"},"username":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          username"},"vappIpallocationpolicy":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          vApp IP allocation policy. Supported values are: dhcp, fixed,
          transient and
          fixedAllocated"},"vappIpprotocol":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          vApp IP protocol for this deployment. Supported values are: IPv4 and
          IPv6"},"vappProperty":{"type":"array[string]","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"nullable":true,"create":true,"update":true,"description":"vSphere
          vApp
          properties"},"vappTransport":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          OVF environment transports to use for properties. Supported values
          are: iso and
          com.vmware.guestInfo"},"vcenter":{"type":"string","default":{"stringValue":"","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          IP/hostname for
          vCenter"},"vcenterPort":{"type":"string","default":{"stringValue":"443","intValue":0,"boolValue":false,"stringSliceValue":null},"create":true,"update":true,"description":"vSphere
          Port for vCenter"}}}
        etcdRole: true
        machineConfigRef:
          kind: VmwarevsphereConfig
          name: nc-dev-pool1-8lmwh
        machineOS: linux
        name: pool1
        quantity: 4
        unhealthyNodeTimeout: 0s
        workerRole: true
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    registries:
      configs: {}
      mirrors: {}
    upgradeStrategy:
      controlPlaneConcurrency: '1'
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: true
        force: true
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
      workerConcurrency: '1'
      workerDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: true
        force: true
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
status:
  agentDeployed: true
  clientSecretName: dev-kubeconfig
  clusterName: c-m-vrtks6ms
  conditions:
    - lastUpdateTime: '2023-08-17T12:40:00Z'
      status: 'False'
      type: Reconciling
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'False'
      type: Stalled
    - lastUpdateTime: '2024-02-14T11:21:46Z'
      status: 'True'
      type: Created
    - lastUpdateTime: '2024-02-15T01:21:39Z'
      status: 'True'
      type: RKECluster
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'True'
      type: BackingNamespaceCreated
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'True'
      type: DefaultProjectCreated
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'True'
      type: SystemProjectCreated
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'True'
      type: InitialRolesPopulated
    - lastUpdateTime: '2023-08-17T07:35:09Z'
      status: 'True'
      type: CreatorMadeOwner
    - lastUpdateTime: '2024-02-14T11:14:18Z'
      message: waiting for 2 etcd machines to delete
      reason: Waiting
      status: Unknown
      type: Updated
    - lastUpdateTime: '2024-02-14T11:14:18Z'
      message: waiting for 2 etcd machines to delete
      reason: Waiting
      status: Unknown
      type: Provisioned
    - lastUpdateTime: '2024-02-14T10:28:10Z'
      message: Cluster agent is not connected
      reason: Disconnected
      status: 'False'
      type: Ready
    - lastUpdateTime: '2023-08-17T07:35:10Z'
      status: 'True'
      type: NoDiskPressure
    - lastUpdateTime: '2023-08-17T07:35:10Z'
      status: 'True'
      type: NoMemoryPressure
    - lastUpdateTime: '2023-08-17T07:35:11Z'
      status: 'True'
      type: SecretsMigrated
    - lastUpdateTime: '2023-08-17T07:35:11Z'
      status: 'True'
      type: ServiceAccountSecretsMigrated
    - lastUpdateTime: '2023-08-17T07:35:11Z'
      status: 'True'
      type: RKESecretsMigrated
    - lastUpdateTime: '2023-08-17T07:35:11Z'
      status: 'True'
      type: ACISecretsMigrated
    - lastUpdateTime: '2024-02-14T10:28:07Z'
      status: 'False'
      type: Connected
    - lastUpdateTime: '2023-08-17T12:39:45Z'
      status: 'True'
      type: GlobalAdminsSynced
    - lastUpdateTime: '2023-08-17T12:39:53Z'
      status: 'True'
      type: SystemAccountCreated
    - lastUpdateTime: '2023-08-17T12:39:53Z'
      status: 'True'
      type: AgentDeployed
    - lastUpdateTime: '2023-08-17T12:40:00Z'
      status: 'True'
      type: Waiting
  observedGeneration: 18
  ready: true

I do find this under “Related resources” > “Mgmt cluster”:

metadata:
  annotations:
    authz.management.cattle.io/creator-role-bindings: '{"created":["cluster-owner"],"required":["cluster-owner"]}'
    field.cattle.io/creatorId: user-jz9v5
    lifecycle.cattle.io/create.cluster-agent-controller-cleanup: 'true'
    lifecycle.cattle.io/create.cluster-provisioner-controller: 'true'
    lifecycle.cattle.io/create.cluster-scoped-gc: 'true'
    lifecycle.cattle.io/create.mgmt-cluster-rbac-remove: 'true'
    management.cattle.io/current-cluster-controllers-version: v1.26.7+rke2r1
    objectset.rio.cattle.io/applied: >-
      H4sIAAAAAAAA/4xSTWvkSAz9K4vOttPf6Rj2EMIewrLZwAwzZ7VLditdVhmV7CYT8t+HcrszTYaEORlL70nv6dULYMffSCMHgRJaFGyoJbGiQjNPBYerYQkZHFgclHDn+2ikkEFLhg4NoXwBFAmGxkFi+q2ZvLsYUCmhBb1PA/pImj/9uBnWkEHYPVFlkaxQDhcETsjqtCof2fQJOhyFNG+GA5TQaRg4mWFpLi3Ms7/+ZXF//9L/+TTBlqAER8MfQWOHVcLXnshyRzX23iD7SA66loWjKRolq6Y9wWsGo1UO8pVbioZtB6X03mfgcUd+vO1HWvYY91DCal0v1/P19W7hFsvVau4WVC9qt5255eZmu9jM5vPtZrbapW2Txypv80HtEDdtTOXYUTWG2pDYfYsN/T+QKrsEhgxQ4pE0iUmKT/f8QpWSTTVHsVLu7PSkYCywkrt9GwglKEq1J72avvm4rRwWxXWxvqD0tn/POPQ7yrHjHHvbl8OsmBfbxODYeXx+uAyOBHeepsxvPamxNFDW6CO96/4XhC3o7/0HsmPQw2PwXD2f8xhz/h70MAY/7XwfPrddUCN3F6TmJp00ST//Qbo1i5EK+redPlToz4J72/8jrgssltgnPW7CvmZwZHHhGB+ValJy57c99V9/BgAA///9exs63AMAAA
    objectset.rio.cattle.io/id: cluster-create
    objectset.rio.cattle.io/owner-gvk: provisioning.cattle.io/v1, Kind=Cluster
    objectset.rio.cattle.io/owner-name: dev
    objectset.rio.cattle.io/owner-namespace: fleet-default
    provisioner.cattle.io/encrypt-migrated: 'true'
    provisioning.cattle.io/administrated: 'true'
    ui.rancher/badge-color: '#00ff04'
    ui.rancher/badge-icon-text: 🧪
    ui.rancher/badge-text: Example Text
  creationTimestamp: '2023-08-17T07:35:08Z'
  finalizers:
    - wrangler.cattle.io/mgmt-cluster-remove
    - controller.cattle.io/cluster-agent-controller-cleanup
    - controller.cattle.io/cluster-scoped-gc
    - controller.cattle.io/cluster-provisioner-controller
    - controller.cattle.io/mgmt-cluster-rbac-remove

The wrangler.cattle.io/mgmt-cluster-remove finalizer looks interesting. I have no idea what it does, but it makes me wonder whether I could remove the finalizers from the .yaml that I posted in the first post? Or at least set a timeout?
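If it comes to that, I assume the finalizers could be stripped with a JSON merge patch like the sketch below, though I have not dared to run it against the cluster object, since force-removing finalizers skips Rancher’s own cleanup:

# sketch only: forcibly clears ALL finalizers on the provisioning Cluster object
kubectl patch clusters.provisioning.cattle.io dev -n fleet-default \
  --type merge -p '{"metadata":{"finalizers":[]}}'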

Hi @wioxjk, do you want to remove the 2 etcd nodes from the cluster?

Hi, those two nodes were already deleted by Rancher in VMware, but Rancher still thinks they are present in the cluster for some reason.

Ah… okay. Can you still see that node present in the cluster? If yes, you can go ahead and edit the YAML of the node and remove the finalizers of the node (not the cluster). This should remove the node.
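Something along these lines, for example - a sketch only, where <node-name> is a placeholder and it assumes the stuck object is the Node itself:

kubectl edit node <node-name>
# or non-interactively, clearing all finalizers at once:
kubectl patch node <node-name> --type merge -p '{"metadata":{"finalizers":[]}}'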

Thanks for your reply!
I tested that with the nodes that are “pending” (screenshot in the first post), but sadly nothing happened - the finalizer entries do not get removed.

Can you try deleting the node, and while it is in the deleting state, remove the finalizers from the node YAML and check whether it then gets deleted?

If that’s not working, and if this is a non-production cluster:
From the local cluster (where Rancher is deployed), you can run:
kubectl get machines.cluster.x-k8s.io -n fleet-default | grep <cluster-name>
This will list all the machines in your cluster. If you find the machine that you want to remove from your cluster, try deleting it using:
kubectl delete machines.cluster.x-k8s.io <custom-somenumber> -n fleet-default
If it still isn’t being deleted, edit the same object using kubectl edit machines.cluster.x-k8s.io <custom-machineID> -n fleet-default, remove the finalizers field together with its values, and see if it then gets deleted.
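If editing interactively is awkward, the same finalizer removal can be done with a patch - sketch only, <custom-machineID> is the placeholder from above:

# sketch: clears all finalizers on the stuck machine object
kubectl patch machines.cluster.x-k8s.io <custom-machineID> -n fleet-default \
  --type merge -p '{"metadata":{"finalizers":[]}}'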


Thanks for a great response! I will note this down for future problems :slight_smile:

Sadly, the machines I deleted with:

> kubectl delete machines.cluster.x-k8s.io dev-pool1-54545967f4-5tzfj -n fleet-default
machine.cluster.x-k8s.io "dev-pool1-54545967f4-5tzfj" deleted
> kubectl delete machines.cluster.x-k8s.io dev-pool1-54545967f4-txrlv -n fleet-default
machine.cluster.x-k8s.io "dev-pool1-54545967f4-txrlv" deleted
> kubectl delete machines.cluster.x-k8s.io dev-pool1-6c57d47756-hvx88 -n fleet-default
machine.cluster.x-k8s.io "dev-pool1-6c57d47756-hvx88" deleted

are still stuck in pending:

dev-pool1-54545967f4-7vr6l               dev                                                                                                         Pending   8s    
dev-pool1-54545967f4-j6qhh               dev                                                                                                         Pending   14s   
dev-pool1-6c57d47756-8wfk5               dev                                                                                                         Pending   7s 

If I check the .yaml of the first node in the list:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  annotations:
    machine.cluster.x-k8s.io/exclude-node-draining: 'true'
  creationTimestamp: '2024-02-16T08:14:15Z'
  finalizers:
    - machine.cluster.x-k8s.io
  generateName: dev-pool1-6c57d47756-
  generation: 1
  labels:
    cattle.io/os: linux
    cluster.x-k8s.io/cluster-name: dev
    cluster.x-k8s.io/control-plane: 'true'
    cluster.x-k8s.io/deployment-name: dev-pool1
    cluster.x-k8s.io/set-name: dev-pool1-6c57d47756
    machine-template-hash: '2713803312'
    rke.cattle.io/cluster-name: dev
    rke.cattle.io/control-plane-role: 'true'
    rke.cattle.io/etcd-role: 'true'
    rke.cattle.io/rke-machine-pool-name: pool1
    rke.cattle.io/worker-role: 'true'
  managedFields:
    - apiVersion: cluster.x-k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:machine.cluster.x-k8s.io/exclude-node-draining: {}
          f:finalizers:
            .: {}
            v:"machine.cluster.x-k8s.io": {}
          f:generateName: {}
          f:labels:
            .: {}
            f:cattle.io/os: {}
            f:cluster.x-k8s.io/cluster-name: {}
            f:cluster.x-k8s.io/control-plane: {}
            f:cluster.x-k8s.io/deployment-name: {}
            f:cluster.x-k8s.io/set-name: {}
            f:machine-template-hash: {}
            f:rke.cattle.io/cluster-name: {}
            f:rke.cattle.io/control-plane-role: {}
            f:rke.cattle.io/etcd-role: {}
            f:rke.cattle.io/rke-machine-pool-name: {}
            f:rke.cattle.io/worker-role: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"68b492d9-a795-401f-bfec-ff601d6a03a3"}: {}
        f:spec:
          .: {}
          f:bootstrap:
            .: {}
            f:configRef: {}
          f:clusterName: {}
          f:infrastructureRef: {}
      manager: rancher
      operation: Update
      time: '2024-02-16T08:14:15Z'
    - apiVersion: cluster.x-k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:lastUpdated: {}
          f:observedGeneration: {}
          f:phase: {}
      manager: rancher
      operation: Update
      subresource: status
      time: '2024-02-16T08:14:15Z'
  name: dev-pool1-6c57d47756-8wfk5
  namespace: fleet-default
  ownerReferences:
    - apiVersion: cluster.x-k8s.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: MachineSet
      name: dev-pool1-6c57d47756
      uid: 68b492d9-a795-401f-bfec-ff601d6a03a3
  resourceVersion: '159719336'
  uid: 6b8400dd-b6b4-4055-a2f2-e8688e7455b9
spec:
  bootstrap:
    configRef:
      apiVersion: rke.cattle.io/v1
      kind: RKEBootstrap
      name: dev-bootstrap-template-tb9w8
      namespace: fleet-default
      uid: 787b7c10-510e-4b66-b430-7068e96a99fd
  clusterName: dev
  infrastructureRef:
    apiVersion: rke-machine.cattle.io/v1
    kind: VmwarevsphereMachine
    name: dev-pool1-096006f5-vxtvk
    namespace: fleet-default
    uid: 0f97213b-2ca3-4eba-80a2-8b37ab9f6c5f
  nodeDeletionTimeout: 10s
status:
  lastUpdated: '2024-02-16T08:14:15Z'
  observedGeneration: 1
  phase: Pending

My untrained eye does not spot any rows that seem relevant here.
The finalizers just get added back again.

After deleting those objects, are they getting recreated? If you run kubectl get machines.cluster.x-k8s.io -n fleet-default, do you still see the machines that you deleted with kubectl delete?

I am still seeing those nodes, but the age is different - it looks like Rancher recreates them immediately after deletion with the same name and configuration.
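Looking at the ownerReferences in the machine YAML above, each machine is owned by a MachineSet (dev-pool1-6c57d47756), so I assume it is the MachineSet/MachineDeployment controller that recreates them. Something like the following should list them - a sketch, assuming everything lives in fleet-default:

kubectl get machinedeployments.cluster.x-k8s.io,machinesets.cluster.x-k8s.io -n fleet-default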

Oh… can you try the same process as above for nodes.management.cattle.io? kubectl get nodes.management.cattle.io -n <clusterID>? You’ll find the clusterID in the Related Resources section when you click on your cluster in the Cluster Management menu. It’s something like c-m-something. You can describe the machines to find the node name, and then delete the machine if the node name in the description matches the name of the node you want to delete.
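For example - sketch only, with c-m-xxxxx and machine-xxxxx as placeholders for your cluster ID and machine name:

kubectl get nodes.management.cattle.io -n c-m-xxxxx
kubectl describe nodes.management.cattle.io machine-xxxxx -n c-m-xxxxx
kubectl delete nodes.management.cattle.io machine-xxxxx -n c-m-xxxxx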

Thanks!
I will try that!

The timestamps did not match at all, but I will try :slight_smile:

> kubectl get nodes.management.cattle.io -n c-m-vrtks6ms
NAME            AGE
machine-2xtb4   2d4h
machine-m8h7n   15d
machine-x75fc   15d
machine-z5r6q   15d

Hi again,

I interpreted it like this:

kubectl delete nodes.management.cattle.io machine-z5r6q -n fleet-default

However, I am greeted with this:

Error from server (NotFound): nodes.management.cattle.io "machine-z5r6q" not found

I don’t think the namespace would be fleet-default. The namespace should be your cluster ID, which is something like c-m-xxxx. You can find it in the Related Resources tab when you click on your cluster on the Cluster Management page.
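So in your case it should be something like this, using the cluster ID from the output you posted earlier:

kubectl delete nodes.management.cattle.io machine-z5r6q -n c-m-vrtks6ms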