Node autoscaling?

Hi,

I haven't seen this addressed and a quick search came up empty. Will R2 provide a way to scale the nodes themselves?

I understand that K8s will typically take care of scaling deployment groups, re-assigning pods to nodes, etc. (something which is currently not possible with 1.6 & Cattle without some API and external hacking), but will Rancher be able to autoscale the nodes themselves?

I assume this is either there or coming soon? (i.e. to help with auto-healing of a cluster as well as scaling).


This is not in scope for 2.0.0. We will look at priorities again after 2.0 is done.

Hi @vincent, thanks for the info…

I watched last year's Rancher 2 webcast as well as the one for the Tech Preview launched a couple of weeks ago…

I understood (I think) a lot of the idea of Rancher 2: managing (creating?) multiple clusters in multiple operating scenarios (from minikube to AWS, Google, Azure, bare metal, etc.), as well as providing a single authentication layer (which is really cool and useful in a bunch of use cases, but less so if you don't expect to be creating different clusters all over the place)… Nonetheless, I'm a bit unsure of what the Rancher layer is going to bring to the table for now…

In Rancher 1 this was very clear to me: it allowed easy scaling of workloads, container networking, service/stack management & deployment, and upgrades (rolling, blue-green, etc.). With Rancher 2 this isn't so clear to me. Do you have a document, video, post or something I can use to check that?

As we are considering our move to production containers I really like Cattle and its options (despite it missing a couple of features that would be useful), but when considering K8s I'm having a bit more trouble making the case to the team for the benefits of including Rancher. Am I correct that the app lifecycle in Rancher 2 is handled directly in K8s? (The examples all show using kubectl directly for creating and altering apps)…

Thanks for clearing this up for me!

Yes, workloads directly manage the relevant k8s resources. The project-level stuff like workloads is the least complete part of the preview, but also a fairly known quantity as far as what we need to finish.

In native k8s, RBAC and things like secrets can only be assigned at a namespace level, and if you work with it for a while, switching between namespaces and managing the same secret in multiple namespaces becomes a lot of overhead. So Rancher adds the concept of a Project: a set of namespaces to which users can be assigned RBAC roles, and in which resources like secrets can be shared across all the associated namespaces. There is also generally finer-grained control over what people can do within a Project/Environment beyond the 4 previous levels of Owner/Member/Restricted/Read-Only.
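To make "namespace level" concrete, here is a minimal sketch of what this looks like in plain k8s YAML (the namespace `team-a` and user `alice` are made-up placeholders, not anything from Rancher): a RoleBinding only grants access within its own namespace, so without Projects you repeat it for every namespace a user should see.

```yaml
# Plain k8s: a RoleBinding is scoped to exactly one namespace.
# "team-a" and "alice" are illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-can-edit
  namespace: team-a          # grants access in team-a only
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role, applied per-namespace here
  apiGroup: rbac.authorization.k8s.io
```

A Project effectively lets you make that grant once and have it apply to every namespace in the Project.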

And then there's the UI/CLI/Compose files/API, similar to Cattle in 1.x, providing a lightly-opinionated way to do almost everything while hiding a lot of the complexity of k8s. For example, when you add a Workload you pick how you want to scale it and what volumes you want, like in 1.x. You don't need to know about (or know the nuanced differences between) a Deployment, ReplicaSet, DaemonSet, ReplicationController, Job, and Pod.
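For contrast, here is roughly what the simplest of those options, a Deployment, looks like when written by hand in raw k8s (the name and image are placeholders, not from any Rancher example):

```yaml
# Hand-written equivalent of "run this image at scale 3".
# "web" and the nginx image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # the "scale" you would pick in the UI
  selector:
    matchLabels:
      app: web               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.13  # placeholder image
          ports:
            - containerPort: 80
```

And that is before deciding whether a Deployment was even the right one of those six resource types for the job.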

Rough mapping from 1.x/Cattle:

  • Host -> Node
  • Clusters: new, they “own” the hosts now instead of being tied to a single Environment
  • Environment -> Project (which is what it was called in the API all along), but projects belong to a cluster and share the hosts of that cluster
  • Stack -> Namespace
  • Service -> Workload
  • Container(+ associated sidekicks) -> Pod

I’m in the same boat. I’ve recently dived into Kubernetes Engine and am struggling to understand why I need Rancher now. It’s pretty dead simple and has a great UI.

Secrets across namespaces aren’t an issue if you create node pools. I see access control being far superior in Rancher but I’d imagine this isn’t a big issue for most Rancher users like me. Honestly I’m in love with the slickness and simplicity of Google Kubernetes Engine.

Well, the obvious thing would be that not everybody wants to run their code on GCP and be tied to Google. We are agnostic; Google is one option. And if you're talking about anything beyond creating a cluster, I don't even know how to respond to comparing their mostly read-only UI and occasional YAML editor to our UI, even with half of it broken in the preview…

Secrets are tied to a namespace; if you have ten namespaces that all need an API key or whatever, you need to create ten separate copies of it. Projects own a set of namespaces and can push a single secret (or role binding, etc.) that you manage down to all of the namespaces. This has nothing to do with nodes.
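To illustrate the duplication in plain k8s YAML (the namespace names and value are made up): the same secret literally has to exist once per namespace.

```yaml
# The same API key, copied into each namespace that needs it.
# "frontend"/"backend" and the value are illustrative placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: api-key
  namespace: frontend
type: Opaque
stringData:
  key: "same-value-everywhere"
---
apiVersion: v1
kind: Secret
metadata:
  name: api-key
  namespace: backend         # identical copy, different namespace
type: Opaque
stringData:
  key: "same-value-everywhere"
# ...and so on for the other eight namespaces.
```

A Project-level secret replaces all of those copies with one object that gets pushed down for you.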

You may not care about auth integration and RBAC if you’re one person with one cluster, but it is a large problem for medium-large companies.


Hi Vincent, I wrote this somewhere else before: I personally would appreciate a technical and a not-so-technical overview (with detailed use cases, maybe even examples) of what "Rancher brings to the table" in v2.0.

I do see the RBAC concept, but all the projects vs. namespaces vs. secrets things are a bit beyond me right now because we are just about to move to k8s. So this would help me greatly, and I think I'm not alone :wink: . Especially since secrets and secret management is a "thing" that we happily ignore right now because it's just so effing annoying.

Any idea when workload management is supposed to come into the preview? I'd like to get a feel for it before we make a decision (which at this point is probably to go with pure K8s)…

We will be using a hybrid on-premises and cloud deployment, so Rancher 1.6 was great as we can see the entire app spanning different datacenters… But just for setting up the cluster I think it's overkill…

I get the RBAC benefits, but there are a few other options for that too, and since we run a single application and don't have different environments for different customers etc., we don't really scale in environments.

Oh well :confused:

I was really hoping to have something like Rancher 1.6 where we can easily do deploys, updates, and control the LBs in order to do A/B or blue/green deploys, etc… Anyway, I guess something else will come along for this in k8s sooner or later.

BTW, is there any page comparing Rancher with other tools? I just started looking into Nirmata (no free version, but it seems to bridge a lot of what I'm missing between 1.6 and the roadmap for 2).

A lot of it is "there" already to click on if you go down into a project, but not really working in the current preview; 2.0 will provide largely the same UI/CLI/API experience as Cattle in 1.x, built on top of K8s. There's similar-but-different UI design, different names for most things (partial mapping above), and various new or removed features for what can or can't be done on k8s vs Cattle. But still largely "Cattle". And some longstanding requests, like being able to share nodes (hosts) between projects (environments).

Understood, so basically application/stack management is going to come "up" (or down, topology-wise?) into Rancher in 2.0, correct? Because unless I really missed it, when we have a K8s environment in Rancher 1.6 we don't have any application control within the Rancher UI (admittedly I didn't check the API); we get Stacks/Applications replaced by the K8s UI. So that will change in R2?

Thanks!

This is old now, but yes: in 1.x if you choose k8s we set up the cluster and then do not get involved in workload management. In 2.0 all environments (projects) are k8s and there is UI for all the major stuff, similar to Cattle in 1.x.

@vincent just to circle back, I tested the new Tech Preview and I can now see workload management! Kudos!!

I have a quick question related to nodes (so I don't know if I can just put it here or should start a new thread)… anyway, here goes: will R2 support, now or in the near future, a "hybrid" cluster? i.e. nodes in the cloud & on-premises? Or on different clouds? I apologize if this is a stupid question due to some practicality, but for those of us who want to operate in a hybrid cloud format it is a pertinent point…

In R1 this was a non-issue as each node was separate, but now I only see provisioners for the cluster itself? I assume that adding hosts manually or using some other tool would probably work, and that adopting an existing cluster with nodes spread over multiple environments would also work, but will this be somehow supported in Rancher itself? Is there some other best practice instead? (I'd like to not have to deploy changes to more than one cluster, and to have, for example, scaling from on-premises to the cloud; for this last piece I understand it would involve some custom work using the Rancher API to deploy additional nodes.)

(Based on the example from https://rancher.com/adding-custom-nodes-kubernetes-cluster-rancher-2-0-tech-preview-2/ it should be possible, but I want to make sure I understand correctly: this would join the cluster as an additional node somewhere else, correct? Are there implications if these nodes are on-prem vs cloud?)
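For reference, the join command in that post has roughly this shape, if I'm reading it right (the image tag, server address, token, and checksum here are placeholders; the Rancher UI generates the exact command for you):

```
sudo docker run -d --privileged --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/agent:<tag> \
  --server https://<rancher-server> \
  --token <registration-token> \
  --ca-checksum <checksum> \
  --worker   # and/or --etcd / --controlplane depending on the node's role
```

So presumably the same command works whether the docker host is on-prem or in a cloud, as long as it can reach the server URL?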

PS: the new preview is impressive!!! Some small bugs I've caught so far, but it's extremely polished!!! Congratulations!


TL;DR: Not really, no; Kubernetes is basically designed for a cluster to be in one availability zone of one provider, and the UI reflects that. See my posts in http://forums.rancher.com/t/minimal-cluster/10021/8

No access to that topic… Is there somewhere I need to register for access to other content? I'm quite interested in learning, since I know GKE and kops both have multi-AZ clusters… Please let me know how to gain access to the more advanced content :slight_smile:

Argh, sorry… https://info.rancher.com/rancher2-beta , but I just added you to the forum group.