Working towards the release of 2.0, we are gathering ideas for articles/blogs which we can create or which we can help create (write/review). Let us know what you would like to see; it can be anything, and if you are interested in writing, we can help.
I would really be interested in writing an article with your help, or giving a talk at a conference for the 2.0 release.
Here are my ideas for articles/talks so far:
My priority would be to demystify DevOps with Rancher 2.0: how can a normal dev, with little to no experience in ops (especially with Docker and k8s), use Rancher 2.0 to deploy their code? How will Rancher 2.0 help them build (pipelines?), package (a registry?) and put into production something they wrote? How will k8s and Rancher help them produce a stable product that can self-heal and scale in seconds, and that is safe (at least safer than a traditional bare machine managed by a dev: logs, metrics, replicated storage, blue-green deployments)?
Another interesting article would be one on how Rancher 2.0 chose to abstract k8s concepts: what the things that look obscure to newcomers really mean (pod, service, ingress, deployment, daemonset, replicaset, statefulset, volume claim, etc.) and why Rancher abstracts them the way it does, so that a newcomer can get started with k8s in minutes.
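For instance, such an article could put the raw objects side by side with what the Rancher UI shows. A minimal sketch of the two most basic ones (image and names are made up): a Deployment that keeps pods alive, and a Service that gives them a stable address.

```yaml
# Deployment: keeps 3 replicas of a container running and recreates any
# pod that dies; this is the self-healing mentioned above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: a stable name and virtual IP in front of whatever pods match
# the selector, however often they get rescheduled.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```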
Then, in order of my personal priorities:
- storage: a comparison in terms of performance, replication and ease of installation (NFS vs Gluster vs Rook vs others) and integration with Rancher
- k8s on premise: is it worth the cost? How Rancher helps you keep your bearings while building a cluster on your own (RKE, admin tasks, etc.)
- security: how to expose an API over TLS with Docker (see the Ingress sketch after this list); how to secure admin access for a dev team (Rancher's GitHub/OAuth integration and the CRDs managed by Rancher); how the isolation provided by Docker and the k8s abstractions helps, and why these are easy to do in Rancher
- architecture in k8s: a service mesh, perhaps? What does k8s unlock for you? How does Rancher help? How do you go further?
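To make the TLS point concrete, this is roughly what exposing an API over TLS looks like in plain k8s, and what the article could show Rancher generating for you. Hostname, secret and service names are made up, and the API group shown is the pre-networking.k8s.io one from the Rancher 2.0 era:

```yaml
# Ingress that terminates TLS for an API, assuming a Secret named api-tls
# already holds the certificate and key (e.g. issued by Let's Encrypt).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
    - hosts:
        - api.example.com        # hypothetical hostname
      secretName: api-tls        # hypothetical Secret with tls.crt / tls.key
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api   # hypothetical Service in front of the API
              servicePort: 80
```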
What do you think?
I would like to see an article about managing Kubernetes ConfigMaps, Secrets, and YAML files. We are in the process of moving our pipeline from Supervisord-managed processes to containers in Kubernetes. We have several dozen different configuration files and secrets that we need to manage, as well as several dozen Kubernetes YAML files for all of the workers. We want revision control for everything, so it would need to be backed by git, etc. It also needs to be automatable so it can be integrated into CI/CD.
I am working on this now, and my solution is a bit clunky, with several scripts and repos to manage everything. I would love to know about other real-world, working solutions for managing this.
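For what it's worth, the least clunky layout I have tried so far is kustomize: everything lives in git as plain files, and the generators stamp out the ConfigMaps and Secrets at build time, which makes the CI/CD step a one-liner. A sketch, with made-up names and paths (generator fields vary a little between kustomize versions):

```yaml
# kustomization.yaml, checked into git next to the manifests.
namespace: pipeline
resources:
  - deployment.yaml          # the plain Kubernetes YAML files, also in git
  - service.yaml
configMapGenerator:
  - name: worker-config
    files:
      - config/worker.ini    # dozens of config files can be listed here
secretGenerator:
  - name: worker-secrets
    files:
      - secrets/worker.key   # ideally kept encrypted in git, e.g. git-crypt
# CI/CD step:
#   kustomize build . | kubectl apply -f -
```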
I would like your group to revisit logging and monitoring solutions for Rancher 2.0. There are a few articles from 2015/2016, but I am wondering what new and possibly integrated solutions are becoming available.
Information on current best practices for persistent and shared storage/volumes would be welcome (or on the intended 2.0 GA solution). There are many interesting contenders (Longhorn, openEBS, rook.io, …), but all seem to be works in progress / not guaranteed to work in the Rancher ecosystem.
Both non-permanent volumes and host-bound mounts are obviously possible but problematic for certain use cases.
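Whichever backend wins, it would help if the article stressed that workloads only ever see the claim side, so the comparison is really about what sits behind one field. Something like this, where the class name is hypothetical and depends on the chosen backend:

```yaml
# The workload-facing side of storage: a PersistentVolumeClaim.
# Swapping Longhorn / openEBS / rook.io underneath should only change
# the storageClassName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce            # ReadWriteMany is where shared storage gets hard
  storageClassName: longhorn   # hypothetical; depends on the chosen backend
  resources:
    requests:
      storage: 10Gi
```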
I’m with @tychota and @dirkdevriendt – storage.
Though I’d really like to see this from a bare-metal perspective. How do I go from a bare-metal server to RancherOS + Rancher 2.x + Rook.io (or Longhorn) to manage multi-host, multi-tiered, self-healing, self-replicating storage?
Example:
RancherOS: single SSD, 2 SSDs in RAID 1, or netboot
Storage: 1 metadata SSD per x HDDs, rook.io (Ceph cluster)
I realize this is both RancherOS and Rancher 2.x, but I think it would be a great topic.
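To sketch where the storage half would end up, here is a rough, untested stab at how that layout might map onto Rook's Ceph cluster CRD. The device names are invented and the exact fields should be checked against the Rook docs for whatever version the article targets:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14       # placeholder; pick the image per Rook docs
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                   # monitors spread across hosts
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[b-e]"   # hypothetical: the HDDs on each host
    config:
      metadataDevice: "sda"    # hypothetical: the metadata SSD serving them
```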
I completely agree that this would be an amazing idea. If you do make it real, come back with updates!
How to convince your boss to use Rancher.
I am currently in a situation where my boss and the rest of the developers think that Rancher/Kubernetes is overkill and too complicated for us to use. They have become so used to dealing with the current, inefficient setup that they no longer see its pain points, like not being able to update the server without taking down the whole system, or adding a new VM for each new service that runs on port 80…
It would be nice to have an article that shows them how Rancher addresses such common pain points and how it actually makes running the infrastructure easier.
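A concrete before/after might do most of the convincing. For example, the "one VM per port-80 service" problem collapses into a single Ingress on a single IP; hostnames and service names below are made up:

```yaml
# One Ingress, one IP on port 80, many services routed by host name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps
spec:
  rules:
    - host: app1.example.com
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
    - host: app2.example.com
      http:
        paths:
          - backend:
              serviceName: app2
              servicePort: 80
```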
I did think of another deep dive topic that would be nice to see covered – networking in k8s.
Flannel, Calico, Canal, Weave, <insert half a dozen more options?>
CoreDNS, SkyDNS,
Ingress / load balancing …
The second option RKE asks you to fill in is the network plugin… so many choices.
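For reference, in RKE's cluster.yml the whole decision comes down to one field, which is exactly why an article explaining what each choice implies would be useful:

```yaml
# cluster.yml (RKE): the CNI is picked here.
network:
  plugin: canal    # or flannel, calico, weave
```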
And how to secure it!
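Agreed. That part could centre on NetworkPolicy, which only bites if the chosen CNI enforces it (Calico and Canal do, plain Flannel does not). A sketch with made-up labels:

```yaml
# Only pods labelled app=frontend may reach the api pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```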
Rancher 2.0 HA options using an external etcd or the Kubernetes etcd, with a focus on managing multiple Kubernetes clusters where each cluster has only a few nodes. A cluster is a dedicated environment for one DevOps team.
Cost savings from reducing the number of etcd and control-plane nodes for each cluster where possible, since the clusters are small.
One cluster (or a set of nodes) to monitor multiple Kubernetes clusters using Prometheus, Alertmanager and so on.
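On the etcd side, RKE can point a cluster at an external etcd instead of running etcd on the cluster's own nodes, which is one way to save nodes on small clusters. Roughly like this in cluster.yml; the URL and certificates are placeholders, and whether several clusters can safely share one etcd via the path prefix is an assumption worth verifying:

```yaml
services:
  etcd:
    path: /cluster-one                  # keyspace prefix for this cluster
    external_urls:
      - https://etcd.example.com:2379   # placeholder URL
    ca_cert: |-
      -----BEGIN CERTIFICATE-----
      (placeholder)
    cert: |-
      -----BEGIN CERTIFICATE-----
      (placeholder)
    key: |-
      -----BEGIN PRIVATE KEY-----
      (placeholder)
```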
A Rancher 2.0-specific implementation of Ingress with Let's Encrypt, preferably using Helm. The following article uses Helm for nginx-ingress and cert-manager. It's kind of what I have in mind, but adapted to Rancher.
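Roughly, the two pieces a Rancher-flavoured version of that article would need to cover look like this. cert-manager's API group and field layout have changed between releases, so treat it as a v0.x-style sketch with placeholder names and email:

```yaml
# ClusterIssuer: tells cert-manager how to talk to Let's Encrypt.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com           # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
---
# Ingress that requests a certificate through the issuer above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - app.example.com              # placeholder hostname
      secretName: app-tls              # cert-manager populates this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: app         # placeholder Service
              servicePort: 80
```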