Only one process running inside the Docker container

In Rancher v1.6 there were several processes such as auth-service, websocket-proxy, mysql, etc. running inside the Docker container after running the command below:

sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

But in Rancher v2.0 only a single process named rancher is running inside the Docker container. Does Rancher no longer need all the other processes that used to run, or are they all consolidated into the single rancher process? Can someone help me out with this doubt?


Please refer to this document from Rancher to learn more about Rancher v2:


Thanks @amioranza. I already referred to those docs, but was unable to clear up my doubt:
I ran the Rancher server on a k8s cluster and imported one cluster. I saw that a cluster-agent (Deployment) and a node-agent (DaemonSet) are running on the imported cluster. I assume the node-agent is for checking health stats, but I am still not aware of what purpose the cluster-agent serves in the imported cluster. It would be a great help if you could explain.
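For what it's worth, you can see both agents on the imported cluster yourself; a quick check, assuming the usual Rancher v2 agent names and the cattle-system namespace they are deployed into, and that your kubectl points at the imported cluster:

```shell
# Rancher v2 deploys its agents into the cattle-system namespace
# of each imported cluster:
kubectl -n cattle-system get deployment cattle-cluster-agent   # maintains the connection back to the Rancher server
kubectl -n cattle-system get daemonset cattle-node-agent       # runs on every node
```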

So am I interpreting this correctly that, in order to run Rancher on my workstation to manage my application containers, I need the old 1.6 version?
Meaning, when I don't need or use Kubernetes, the rancher2 container is wrong for me and I cannot run it as a single container?

If that is so, I suggest adding some code to the startup of the rancher2 container
that probes the container's execution environment and writes a log entry when it is not finding the essential Kubernetes roles/communication points.

This way you would save all the 1.6 users a lot of frustration trying to use rancher2.
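A minimal sketch of what such a probe could look like (hypothetical, not part of Rancher; it uses the absence of KUBERNETES_SERVICE_HOST, the variable Kubernetes injects into every pod, as the "no k8s" signal):

```shell
# Hypothetical startup probe: warn when the container cannot
# see a Kubernetes environment, instead of failing silently.
probe_k8s_env() {
  if [ -z "${KUBERNETES_SERVICE_HOST:-}" ]; then
    echo "WARN: no Kubernetes environment detected; Rancher 2.x manages k8s clusters, not standalone containers" >&2
    return 1
  fi
  echo "INFO: Kubernetes environment detected at ${KUBERNETES_SERVICE_HOST}"
}

# Log-only at startup: record the warning without blocking boot.
probe_k8s_env || true
```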

Nothing seems to work out of the box the same way it did in 1.6,
and referring to a lengthy document is simply TL;DR :wink:

2.0 is kubernetes. If you don’t want k8s you don’t want 2.0, full stop.

The rancher server itself can still run as a single container which contains the UI, API, database, etc like in 1.x. But the next step to do anything useful is creating a cluster and adding nodes to it, and that is creating a(nother) k8s cluster out of the machines you provide. That will deploy a bunch of containers on those machines.
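For comparison with the 1.6 command above, the v2 single-container install looks similar but uses a different image name (ports per the Rancher v2 quick-start defaults):

```shell
# Single-container Rancher v2 server: UI, API and embedded datastore in one image
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
```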

And managing existing/standalone containers that happen to be on the node is not a thing that k8s does.

So that was my point!
But because I can actually just update my Rancher container to 2.0, and the software lets me do this but then simply doesn't work, I sincerely suggest you add those few lines to warn everyone trying to do that…
I am pretty sure I am not the only one who is or will be trying this.

TL;DR is a common attitude in this field, and a few extra lines from you would probably have a massive impact!
I have been playing around with Rancher for a while now, and what you just said is not very clear from skimming over the documentation (as we all do at first).

So the simple test during startup IS the smart way to handle it…
or do you have enough followers already and don't care?

There is no “update” path between 1.x and 2.x, to the point that we intentionally publish them as different image names so you can’t accidentally end up on 2.x or think it is a newer version of the same thing.

So TBH I have no idea what your point is really, what the software “let [you]” do, how we could prevent that, or what you think should be added to the documentation.

If it’s basically a “Should I just keep using 1.6?” section of the 2.x docs, that’s not going to happen. Nobody should do that. Support extends out a year and a half or so to give customers time to migrate, but the large majority of development is on 2.x and we’re certainly not going to encourage new installs of 1.x or sticking to it indefinitely.

I think the website is pretty clear on what the current product is, starting from the first sentence:

Multi-Cluster Kubernetes Management
Rancher is open-source software for delivering Kubernetes-as-a-Service.

It’s an entirely different product that appeals to an overlapping but different set of people, but this is the way the market has been heading for a while. K8s is powerful but not friendly or easy to use, so we are working on separate projects to bring a simpler experience on top of it later this year.