Can anyone share their setup of multiple environments in Rancher?

I've been chatting with a local incubator, and there's a funny story about how a start-up lost the guy who did the ops work, and so there's no documentation and no deploy steps.

They were considering rewriting and porting to Heroku since no one knew how to deploy. As the mentor put it, "you guys are f-ed".

So I'm curious if I can set up a multi-environment Rancher for that incubator. Has anyone set up a shared environment, and how is the isolation?

Environments are the basic unit of isolation. Each one has its own hosts, isolated overlay network, and set of users who are allowed to use it. You would need one combined access control data source; the simplest is just using local auth and defining users, or using GitHub auth.
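For illustration, here's a minimal sketch of what one-environment-per-startup could look like, written against the Terraform Rancher provider that comes up later in this thread (the provider block mirrors the example further down; the URL, token and names are placeholders):

provider "rancher" {
  url   = "https://rancher.example.com"
  token = "1234"
}

# One isolated environment per startup; each gets its own hosts,
# overlay network and list of member users.
resource "rancher_environment" "startup_a" {
  name = "startup-a"
}

resource "rancher_environment" "startup_b" {
  name = "startup-b"
}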

Okay, that's great. I'm tackling the documentation a bit now (pardon, I should have done this first), and it seems like:
The environment is a unique isolation unit with its own API.

An environment owns its own API endpoint, its hosts, an isolated overlay network, and its set of member users.

The next thing: by "one combined access control data source", do you mean the Rancher Server's database?

Now the issue I see is:
StartUp A: Uses Gitlab
StartUp B: Uses Bitbucket
StartUp C: Uses Github
StartUp F: Does not use source control :scream:

Is there a way to hook up separate authentication mechanisms per environment?

Whilst some documentation is clearly required in terms of the operating model procedures, i.e. who is responsible for each part of the infrastructure build and run, common tasks/solutions and so on, I can't help but feel that you might be better advised to shift your mindset slightly.

First, as much as possible of the infrastructure and infrastructure software config should be fully automated. Then your docs only really need to cover how to execute that automated process through various types of change (new set-up, upgrade of existing parts, debug and troubleshooting, etc.). Second, the automation code that you develop for the first point should (largely) be self-documenting.

You have probably come across the terms 'infrastructure as code' and the 'immutable server pattern'; these are solid (forgive the pun) principles on which to base your operational environment, and they help to significantly reduce the (over) reliance on specific individuals. I'm not saying you don't need smart people, you absolutely do (creating automation like this takes skill and attention to detail), but it does de-risk change and at the same time provides the confidence to move towards continuous delivery at the infrastructure level as well as for applications.

Of course your automation (config management) solution needs to be sufficiently modular to accommodate the types of variability you mention. In my situation we use a combination of tools for this, including Puppet, Packer, Terraform and a number of core support services such as Jenkins, Nexus and the like. The tools you choose, and which ones you might want to maintain yourself versus those you want to buy as a 'service', is clearly a matter for your organisation and those of your partners. The Rancher API allows access to most of the resources that you need to manage.
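As a small, hedged illustration of the immutable server idea in Terraform terms (the AMI ID is a placeholder for an image you would bake beforehand, e.g. with Packer):

# Immutable-server sketch: the image is baked ahead of time, so a change
# means replacing the instance rather than mutating it in place.
resource "aws_instance" "rancher_host" {
  ami           = "ami-12345678"   # placeholder for a pre-baked AMI
  instance_type = "t2.medium"
}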

IMO you don’t really need to document the Rancher product, Rancher themselves do a good job of that already and in the places where you think it is not adequate they are usually completely amenable to suggestions for improvement. So your docs should really just link to Rancher’s at appropriate points.

We all know that maintaining extensive documentation can become a full-time job, and that detracts from the real task of delivering business value. The platforms we use to support that are just that, and not an end in themselves.

Perhaps not the response you were looking for, but sometimes re-thinking an approach can help to focus scarce resources on the things that really provide value.

HTHs

Fraser.

I totally agree with this point. In places where I've worked / interned, the trend I see is that when you have smart developers/hackers leading or managing, a lot of them just disdain infrastructure like it is beneath them, saying 'Just use Heroku' or 'It only takes a day to set up CI, why bother wasting time on this', but then they go on to spend hundreds of hours solving deployment bugs, or actually end up in situations where they can't operate.

I'm trying to offer a way to change this (and also learn the tools to do Infrastructure as Code), and I think Rancher, with its ease of use and friendly UI, would be a great way to offer it. Personally, in every org where I've tried to preach Infrastructure as Code I get faced with trepidation, since it's an investment of engineering time not tied to productivity or the next investment pitch. I totally get that, but it just makes the work environments pretty terrible.

So I'm just doing this outside of work and learning the tools. Would you say that Terraform is worth learning? Currently I'm working on setting up Rancher HA through CloudFormation and Ansible. I hope to set up a proper Rancher HA deployment for myself as a learning experiment, and hopefully get some of the incubator guys to chip in on the infrastructure cost.

Curious if there is a way to include multiple logins per env; I wouldn't mind checking out the code and seeing if a PR is feasible.

Only one active auth provider can be configured.

Supporting multiple simultaneously is very technically messy (e.g. an environment can have users from different providers, but you can't necessarily get info about those users unless the logged-in user has a token to talk to that provider).

And it creates a very poor user experience (ever seen a site that has sign-in buttons for Twitter, GitHub, MySpace, Facebook and LinkedIn, and forgotten which one you used for that site? You'll usually end up with multiple accounts).

We don't currently have GitLab or BitBucket support anyway, so it's somewhat moot. AD/LDAP are probably not suitable for your situation, so it would have to be GitHub (which most developers probably have if they use one of the others) or maintaining your own list with local auth.

Terraform … yes, I would say it's worth the effort. That said, I work in an environment dominated by AWS, so clearly CloudFormation makes sense too. Some people like to play the platform-independence card and, whilst there is something to be said for that, my experience is that typically when you buy into a platform you want to leverage as much functionality as it has to offer. Clearly AWS has more, and a more diverse, set of services than any other, so lowest common denominator would to my mind be wasteful.

We view AWS services in 'bands' where, for example, 'green band' services are those that are simply billing wrappers around generic services with counterparts on other platforms. EFS and ElastiCache would be good examples. When we get to the more specialised services we are a little more cautious (I guess we have all been bitten by vendor lock-in), but they are not outlawed if the business case is good enough. This is one of the strengths of Rancher of course, and possibly a weakness too :wink:

For playing around with Rancher, just set yourself up a local Vagrant/VirtualBox environment. It's quick and easy, and it gives you the create-use-destroy pattern that is so useful for trying out techniques in safety and from a stable starting point. Zero cost as well (so long as you have a laptop with enough grunt).
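Vagrantfiles are Ruby, so just to keep the examples here in Terraform: a comparable throwaway setup (an alternative sketch, not what Fraser described) could use the Docker provider to run the rancher/server container locally and tear it down when you're done:

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

resource "docker_image" "rancher_server" {
  name = "rancher/server"
}

# A disposable local Rancher server: `terraform apply` to create it,
# `terraform destroy` to throw it away and start from a clean slate.
resource "docker_container" "rancher_server" {
  name  = "rancher-server"
  image = "${docker_image.rancher_server.latest}"

  ports {
    internal = 8080
    external = 8080
  }
}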

Ansible is good, and easy to use with Rancher. I would caution against going too far with desired-state approaches for infrastructure provisioning, but some people swear by them. Personally I prefer the 'bake' model and immutability, but as always YMMV.

Regards

Fraser.

Just to throw this out there, I’m currently working on a Rancher provider for Terraform. You can see the ongoing work here: https://github.com/objectpartners/terraform/tree/provider-rancher

The first cut that I submit upstream will be able to manage environments, registration tokens, stacks, and potentially services via Terraform.

This way you could use Terraform to create a Rancher environment and an AWS Autoscaling Group whose instances launch and attach to that environment, all together.

I’m hoping to have this first cut done in the next couple weeks.

I see, okay: GitHub or manual account creation.

Yeah, the multiple-providers issue does make sense; as long as the accounts and environments work, that should be good.

This would be incredibly useful!

We should collaborate. I've been building "uberstack" [1], which uses Terraform under the bonnet and can deploy a Rancher server along with a Docker registry and Jenkins. As I say, it uses Terraform under the bonnet, but then defers to docker-machine for instance creation/management.

It supports AWS and VirtualBox at the moment. I'd be curious to discuss this in a different thread, as it seems we are trying to solve a similar problem and it would be very interesting to see which approach works out best.

[1] github.com/odoko-devops/uberstack

Interesting. PRs are welcome to the repo that I linked above. And just to be clear, the Terraform integration will not deploy and set up a Rancher server. It will instead be able to manage resources within an existing Rancher server. That's Terraform's wheelhouse.

Makes sense @johnrengelman. Except I’m not sure what you mean by “Terraform’s wheelhouse”. It is great for deploying the whole infrastructure, which obviously includes Rancher nodes. Uberstack can do the latter too, and includes a ‘rancher-agent’ class that will register the node with Rancher Server. Maybe there’s info/code you could usefully steal (here).

I mean that Terraform is designed to model resources via associated APIs.
So you would add a provider to Terraform that can manage resource objects in Rancher (i.e. CRUD an environment or stack), but you wouldn't create a Terraform resource that would create a Rancher HA cluster. You'd instead model that using the other various providers to create the machines and provision them (e.g. aws_instance).

Ahh, got it. So you’re saying that you can create a Rancher HA cluster with Terraform, but you’d do that with the tools you already have. You’d just use a Rancher resource to start containers/etc from within Terraform. Now that makes sense.

Yeah, so with what I'm writing you'll have something like this:

provider "aws" {
  region = "us-east-1"
}

provider "rancher" {
  url = "https://rancher.mycompany.com"
  token = "1234"
}

resource "rancher_environment"  "test" {
  name = "test"
  engine = "cattle"
}

resource "rancher_registration_token" "test" {
  environment_id = "${rancher_environment.test.id}"
}

resource "aws_launch_configuration" "cluster" {
  user_data = <<EOF
#cloud-config
runcmd:
  - ${rancher_registration_token.test.command}
EOF
}

This assumes that you already have an existing Rancher server up and running to interact with. If you wanted Terraform to create the server, you’d have to do that in a separate project.
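To round out the Autoscaling Group half mentioned earlier, a hedged sketch of how that launch configuration could be attached (argument names come from the stock AWS provider; the sizes and subnet are placeholders):

resource "aws_autoscaling_group" "cluster" {
  name                 = "rancher-hosts-test"
  launch_configuration = "${aws_launch_configuration.cluster.name}"
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = ["subnet-12345678"]   # placeholder subnet

  # Every instance the group launches runs the registration command from
  # the launch configuration's user_data, so it joins the "test"
  # environment automatically.
}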

Absolutely makes sense. Looks good @johnrengelman