Upgrade: “Error response from daemon: Unable to remove volume” (Solved, Rancher v1.3.2)

After upgrading to the new server version I am getting this error over and over, and I’m not sure how to clear it up.

I am guessing it’s related to a host that might be trying to remove the volume, but I’m not sure which host it might be…

==> cattle-debug.log <==
2016-12-08 07:33:36,629 ERROR [8ad0fea7-8582-45de-940b-7d6dcf3bb6f4:6121655] [volumeStoragePoolMap:30618] [volumestoragepoolmap.remove] [] [cutorService-16] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=8800]: Error response from daemon: Unable to remove volume, volume still in use: remove b1b5f5ce81c0d11364e2cc30d3d39b16521033b8dc08064f92278a3ac94bff01: volume is in use - [d7a589bae90e650ed01414e0d8444f0efd484cad0410bd18a2a6efd8d3712877]
2016-12-08 07:33:36,665 ERROR [:] [] [] [] [cutorService-16] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=8800]: Error response from daemon: Unable to remove volume, volume still in use: remove b1b5f5ce81c0d11364e2cc30d3d39b16521033b8dc08064f92278a3ac94bff01: volume is in use - [d7a589bae90e650ed01414e0d8444f0efd484cad0410bd18a2a6efd8d3712877]

==> cattle-error.log <==
2016-12-08 07:33:36,665 ERROR [:] [] [] [] [cutorService-16] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=8800]: Error response from daemon: Unable to remove volume, volume still in use: remove b1b5f5ce81c0d11364e2cc30d3d39b16521033b8dc08064f92278a3ac94bff01: volume is in use - [d7a589bae90e650ed01414e0d8444f0efd484cad0410bd18a2a6efd8d3712877]
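
For what it’s worth, the long ID in brackets after “volume is in use” appears to be the container that still holds the volume. Assuming direct access to the Docker daemon on each host (and a Docker version new enough to support the volume filter on ps, 1.12+), something like this should show which container, and therefore which host, it is; the IDs below are just the ones from my log:

docker ps -a --filter volume=b1b5f5ce81c0d11364e2cc30d3d39b16521033b8dc08064f92278a3ac94bff01   # list containers still referencing the volume
docker inspect --format '{{.Name}} {{.State.Status}}' d7a589bae90e650ed01414e0d8444f0efd484cad0410bd18a2a6efd8d3712877   # name/state of the container named in the error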

Same story for me; it seems to be a problem with volumes after the new upgrade. It was working fine before, but the problems started occurring with 1.2.2.

A pity it’s just before our production deployment. Help appreciated, because we’re considering moving back to the old docker-compose setup, which is working.

Here’s the trace:
2016-12-29 10:49:02,951 ERROR [db88c6b9-fb8e-4184-8b01-01c0dd1cd0c8:775511] [volumeStoragePoolMap:1231] [volumestoragepoolmap.remove] [] [cutorService-15] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=4]: Error response from daemon: Unable to remove volume, volume still in use: remove 85aae0420902c5749cfbccf41c0aad0dc4bf813917c53629c748876420863b66: volume is in use - [e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a]
2016-12-29 10:49:03,048 ERROR [5c291662-76c4-41d0-a4f2-3ef0d447e192:775111] [volumeStoragePoolMap:1230] [volumestoragepoolmap.remove] [] [ecutorService-4] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=4]: Error response from daemon: Unable to remove filesystem for e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a: remove /var/lib/docker/containers/e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a/shm: device or resource busy
2016-12-29 10:49:03,054 ERROR [:] [] [] [] [cutorService-15] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=4]: Error response from daemon: Unable to remove volume, volume still in use: remove 85aae0420902c5749cfbccf41c0aad0dc4bf813917c53629c748876420863b66: volume is in use - [e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a]
2016-12-29 10:49:03,064 ERROR [:] [] [] [] [ecutorService-4] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=4]: Error response from daemon: Unable to remove filesystem for e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a: remove /var/lib/docker/containers/e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a/shm: device or resource busy
time="2016-12-29T10:49:03Z" level=info msg="Refresh for this catalog community is already in process, skipping"
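
The “device or resource busy” on the shm mount usually means some other process still holds that mount in its namespace. In case it helps anyone, a generic (not Rancher-specific) check on the host should list the PIDs whose mount table still references the container:

grep -l e51ca7133ac41212614fa85cd7fff72f87e84fc9262f7a6a59658f6d47556f6a /proc/*/mounts 2>/dev/null   # prints /proc/<pid>/mounts for each process holding the mount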

Are all of your infrastructure services up to date? If you go to the “Infrastructure” stacks page, do any of them have “Upgrade Available” showing?

@denise All is up to date except the network one, as you can see, but this is new. If I upgrade the network stack, will it restart any of my containers or cause any downtime?

I am guessing it’s an update.