After reboot, can't start containers attached to volumes

We have a single-node Rancher 1.5.10 setup which controls several containers. Rancher itself runs on a separate box from the worker.
The single worker got rebooted during a massive migration of physical boxes. Since coming back online, none of the containers associated with secrets or volumes can run. Trying to start them through Rancher logs a 500 internal server error:
Expected state running but got stopped: /VolumeDriver.Attach 'snowglobe-search-data' (driver 'rancher-secrets') returned status 500: 500 Internal Server Error

Attempting to start the container by hand (docker restart <id>) returns an error:
Error response from daemon: Cannot restart container <id>: VolumeDriver.Mount: Volume Name not given

Attempting to mount the volume associated with that container fails the same way:
docker run -ti -v myvol:/data ubuntu bash
docker: Error response from daemon: VolumeDriver.Mount: Volume Name not given.

Inspecting the volume returns output that looks correct, and I can browse the volume's settings in Rancher. Attempts to clone the container along with the volume fail for the same reason. I'm not sure how to fix this. Can someone point me in the right direction?

A little more info: it seems the volume I was trying to mount reports a driver of 'rancher-secrets', but it is actually a local volume.
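For reference, the driver mismatch above can be seen straight from the Docker CLI. This is just a diagnostic sketch using the example volume name `myvol` from earlier; it needs a running Docker daemon:

```shell
# Print only the driver Docker has recorded for the volume.
# A plain local volume should print "local"; in the broken state
# described above it prints "rancher-secrets" instead.
docker volume inspect --format '{{ .Driver }}' myvol

# Full details, including the mountpoint on disk:
docker volume inspect myvol
```

If the driver field really does disagree with where the data lives, that at least narrows the problem to stale volume metadata rather than the data itself.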