Migrate API questions

I think that I understand the premise behind what I want the migrate action to do, but I can’t find any docs that actually describe what the migrate call does, nor does it seem to actually do anything when I use it.

Due to the lack of documentation (or my inability to find it) around the use of the migrate call, I assume it moves/migrates a container or applicable resource to another host, is this even close to what it is intended to do? (please say yes, because I’m looking forward to using this feature)

Thanks,

Phillip

What resource is that on @Phillip_Ulberg?

This is what I’m doing -

  1. Navigate anywhere you can view containers (I have tried both the “Hosts” page and the “Containers” page).

  2. Click the vertical ellipsis to view the list of available commands for the container, then select “View in API”.

  3. On the “View in API” page, on the right, under the list of “actions”, select “migrate”.

  4. An “action” window will pop up; click on “Show Request”.

  5. You should now see the “API Request” pop-up, which shows the curl command and the HTTP request. In my case it is this:

POST /v1/projects/1a11/containers/1i384/?action=migrate HTTP/1.1

  6. Click on the “Send Request” button. You then see a bunch of output from the HTTP response, and it clearly shows the status:

"state": "migrating"

  7. I then click the “Reload” button on that window, and it now shows:

"state": "running"

I go back to the Hosts view, the container I wanted to migrate is still on the same host. At this point I usually question whether I even understand what the migrate command should do, and I have come to the realization that I should probably ask for assistance (and hope not to be shamed too badly).
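For reference, the same request can be reproduced from the command line. This is a minimal sketch using the project/container IDs from the steps above; the server URL and API keys are placeholders, not values from this thread.

```shell
# Build the action URL the UI posts to (IDs taken from the steps above;
# the server hostname is a placeholder).
RANCHER_URL="https://rancher.example.com"
PROJECT_ID="1a11"
CONTAINER_ID="1i384"
ACTION_URL="$RANCHER_URL/v1/projects/$PROJECT_ID/containers/$CONTAINER_ID/?action=migrate"

# With real API keys, this is the POST the "Send Request" button makes:
#   curl -s -u "$ACCESS_KEY:$SECRET_KEY" -X POST "$ACTION_URL"
echo "$ACTION_URL"
```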

Phillip

By this point I’m thoroughly confused

Wow, so, long story short: migrate does nothing and should not even be in the API. It is a mistake/bug that it is there.

We don’t support explicit migrating between hosts, but you can define services with scheduling rules and healthchecks such that the containers that make up the service automatically get moved if the healthcheck fails.

To force a container that is part of a service to move, you can update its scheduling labels and then upgrade the service. This can be done through the UI or rancher-compose for services.
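A rough sketch of that label-then-upgrade flow, not an authoritative procedure: the stack/service names and the affinity label key here are my assumptions, based on Rancher's scheduling-label convention.

```shell
# Hypothetical stack and service names:
STACK="mystack"
SERVICE="web"

# 1. Edit the service's docker-compose.yml so it carries a host-affinity
#    scheduling label pointing at the target host(s), e.g.:
#        labels:
#          io.rancher.scheduler.affinity:host_label: zone=west
#
# 2. Upgrade the service so the scheduler re-places its containers on
#    hosts matching the new label. Printed rather than run here, since
#    it needs a live Rancher environment:
UPGRADE_CMD="rancher-compose -p $STACK up --upgrade $SERVICE"
echo "$UPGRADE_CMD"
```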

Individual containers (ones not part of a service) cannot be migrated, but they can be cloned.

Thanks for the update, and I get this from the point of view of SOA, but what about scenarios that do not fall into any of the ones you described? I simply want the ability to move a container from one host to another, whether it be for host maintenance, load-leveling, or something else. This concept has been around in the VM world for quite some time, and this seems like a huge miss for Rancher.

Truthfully, trying the “migrate” API call wasn’t my first instinct. You want to know what sort of expectations users have of these kinds of systems? The first thing I did was to try to drag/drop the container onto a different host; that’s what we expect. I can totally understand if the tool isn’t there yet, but you’re telling me that it’s a “mistake/bug”, and that’s very disappointing.

Maybe I got spoiled using ESXi and VMs for so many years (vMotion/Storage vMotion), but this is such a core, base-level requirement for any sort of guest/host management system. I don’t know that it is reasonable to expect that every instance of needing to move a container falls into the “prescribed” method for doing so, and there are plenty of good reasons why users may have some “individual” containers around that are not part of a service and also don’t have or need a healthcheck.

The following GitHub issue has been marked as “release/future” -

Phillip

“Mistake/bug” refers to the fact that an action that doesn’t really do anything was exposed, not that the idea is bad.

It’s actually there because the Cattle framework was originally built for virtual machine orchestration, and some VM drivers have migration that works. Both VM and Container “extend” the Instance resource type, and that is what has the action on it. If there were an easy way to implement this for containers we would, but it’s just not something that’s built into Docker or easy to implement while handling all the edge cases.