Rebooting a vSphere-provisioned boot2docker host adds a new "host" entry / disk not persisting

Loving vSphere provisioning through Rancher/docker-machine into boot2docker - it’s really slick.

I realized, though, that at each reboot, while the boot2docker VM does come up with the same hostname, it always registers as an additional host in Rancher (with the same name). I assume that’s because /var/lib/rancher/state on the VM does not persist between reboots?

Not sure whether anyone uses vSphere/boot2docker regularly, but how do you ensure the local disk persists on these VMs?
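
For context, my understanding is that boot2docker keeps only /mnt/sda1 across reboots and everything else lives on tmpfs, so /var/lib/rancher would be wiped each time. The workaround I’m considering (untested, and the exact paths are my guess) is to bind-mount the state directory from the persistent disk via boot2docker’s boot-time hook:

sudo mkdir -p /mnt/sda1/var/lib/rancher
sudo tee /var/lib/boot2docker/bootsync.sh <<'EOF'
#!/bin/sh
# bootsync.sh runs at every boot, before the docker daemon starts
mkdir -p /mnt/sda1/var/lib/rancher /var/lib/rancher
mount -o bind /mnt/sda1/var/lib/rancher /var/lib/rancher
EOF
sudo chmod +x /var/lib/boot2docker/bootsync.sh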

That is why, yes. What version of rancher/server are you running, and which rancher/agent is going onto the host?

A while ago I put in a change to detect boot2docker that should fix this; it’s in agent:v1.0.2, which ships with server:v1.1.0-dev5 and v1.1.0.

I’m running server:v1.1.0-dev5, and the boot2docker machine came up with agent:v1.0.2.
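
(For completeness, I verified the agent image on the VM itself; rancher-agent is the name of the agent container on the host:)

docker inspect -f '{{ .Config.Image }}' rancher-agent
# rancher/agent:v1.0.2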

I can confirm that run.sh inside my agent container definitely contains your change.

I can also confirm that this returns zero/true on my VM:

lsb_release | grep -i boot2docker
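
My reading of the change, paraphrased as a sketch rather than the actual run.sh code, is that it branches on that same check:

# paraphrase of the detection, not the actual run.sh code
if lsb_release 2>/dev/null | grep -qi boot2docker; then
    echo "boot2docker detected: host state dirs are not persistent"
else
    echo "regular host: state dirs persist across reboots"
fi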

A “docker inspect rancher-agent-state” confirms that the container still has the following volumes. According to your change, I’d assume that should no longer be the case?

"Volumes": {
    "/var/lib/cattle": "/mnt/sda1/var/lib/docker/volumes/d8c[snip]d07/_data",
    "/var/lib/rancher": "/var/lib/rancher",
    "/var/log/rancher": "/var/log/rancher"
},
"VolumesRW": {
    "/var/lib/cattle": true,
    "/var/lib/rancher": true,
    "/var/log/rancher": true
},
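
For what it’s worth, a quick way to see which of those paths would actually survive a reboot is to check the backing filesystem of each one; on boot2docker only /mnt/sda1 is disk-backed:

df /var/lib/rancher    # tmpfs/rootfs here means it is wiped at reboot
df /mnt/sda1           # the persistent disk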