If you find good instructions/tools to install a production-ready, on-prem cluster, please share!
I’ve heard about too many problems, and had too many of my own, to go with Rancher 2 in production. And since I was planning to use configuration files rather than point-and-click in the GUI, I don’t see Rancher giving me enough to outweigh the problems I’ve hit and heard about.
@mbuckman yeah, I saw that too, afterwards… well, it was worth a try. I think if we could at least get some feedback from someone there, that would help a lot.
To me, a lot of this worry could be resolved with some information about the direction. We got excited by an announcement that stated (or alluded to) production readiness… then hit some major hurdles, and now we’re simply concerned.
Lack of information just widens the vacuum, leaving us to fill the gap on our own, probably with the wrong answers.
My thoughts exactly! Give us some solid information, provide more complete documentation, and let us know when Rancher is truly ready for production. As of now I have wasted more time than I care to enumerate just trying to get a simple cluster running MongoDB with a NodeJS app, without success. On Rancher 1.x this took no longer than 10-15 minutes.
Rancher: When will 2.0 truly be production ready? When will you update the documentation to reflect the 2.0 changes? Will Rancher ever truly be a complete frontend GUI for k8s, or will we always need to depend on kubectl?
@mbuckman weird, I spent 10 minutes and I got a cluster up with MongoDB and NodeJS. You do need to understand the way Kubernetes works, it’s different from Docker/Cattle. No amount of abstraction will change that.
Sorry guys for replying to this so late. Things were really hectic as we prepared the 2.0 launch, and they certainly did not die down afterwards. The team has been busy with patch releases so we can quickly address issues found by our users and customers. Let me try to summarize and respond to many of the concerns raised in this thread:
Documentation - We admit the documentation was really poor when we launched. Some of that came from poor assumptions on our part, but either way, it was our fault for not making it better. Just last week we ran an internal “doc-a-thon” where every engineer at Rancher dedicated a day to properly documenting the features they worked on. In the coming days you will start seeing the new docs appear as they are reviewed by peers, QA, and our doc writer. Rest assured, this will be corrected soon.
Installation - I think we were a bit ambitious in how many installation paths we wanted to provide. In practice, most of the pain was caused by having to deal with TLS certs. We are currently looking at adding a Helm chart to install Rancher, as well as making cert configuration easier going forward. This should land right after 2.0.3 ships in the coming weeks.
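To give a rough sense of the direction, a Helm-based install might eventually look something like this. The repo URL, chart name, and value keys below are illustrative sketches of a Helm 2 workflow, not a committed interface:

```
# Hypothetical Rancher server chart install (Helm 2 syntax).
# Repo URL, chart name, and --set keys are assumptions for illustration.
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=letsEncrypt   # let the chart manage certs instead of hand-rolled TLS
```

The point of such a chart would be exactly the pain named above: the cert handling moves into chart values instead of a manual TLS setup.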
Kubernetes Workload UX - A lot of the angst was also generated by, frankly speaking, understanding k8s itself. If you already knew k8s, you probably found this easier to use once your cluster was up and running. If you were a previous Cattle user or had never used k8s, it is definitely not as intuitive as 1.x. We can relate, and I can promise you will see improvements in our workload UX (which hopefully will replace the dashboard) in the coming releases. Kubernetes is a very powerful tool, but it hasn’t always been easy to understand how to fully use it, especially compared to Docker/Compose (Cattle). I’ll be honest: this isn’t something that can be instantly addressed as a bug fix, because it requires some thought about how best to present this UX to our users. It has been a challenge for our team as well. We had thought our initial version was good enough, even though we knew it was still harder to use than Cattle, but clearly it wasn’t. However, I can promise you this is something we plan to make better!
Is this ready for primetime? The answer is yes. We built Rancher on top of k8s, leveraging its CustomResourceDefinition (CRD) and controller framework. K8s itself is ready for primetime; that has been proven by our users and the general k8s community, who have been running production workloads on it for some time. I won’t go into exact details, but I can tell you that some of the tools you use every day (from the financial to the media segment, and many others) are already powered by both Rancher 2.0 and Kubernetes.
For those of you who used Rancher 1.0 when we went GA in 2016, you know that we constantly address issues as they are found. Obviously, the scale of our user base has changed since we launched over two years ago, so our response hasn’t been as quick as I’d like. But I assure you we are doing our best to address problems as they surface. That has always been the case, and will continue to be the case going forward. In 2.1, we are providing two things to help our 1.x users: (1) a tool to help migrate Cattle workloads to k8s workloads, and (2) a set of guides/docs explaining what was in 1.x and how to recreate it in 2.0. We are working on these now and will release them as soon as they are ready.
For those using this for the first time, I simply ask for patience. We pride ourselves on creating products that are easy to use. We are not yet satisfied with what we have built, and honestly I’m not sure it will ever meet our own high expectations, but I can promise that we will keep helping you and will quickly address issues as they are found.
Hopefully this addresses some of the concerns you may have. As always, if you have questions, concerns, or improvements you’d like to see, please let us know. Everything is open source, so if you’d like to help us out, please do! While 2.0 was created from our vision of what it should be, just like 1.6, a big part of our product has always been driven by our users and customers. Let me know what else you’d like answered, and I’d be happy to respond.
@dhawton Which Mongo-Replicaset catalog item did you use? Replica sets are not being initialized automatically the way they were in Cattle/Rancher 1.x.
And despite being in the same namespace, I have to use the full .local hostname for the NodeJS app to find a single Mongo instance. A single instance I can get up and running fairly quickly; it’s the replica sets that are not working. The instructions mention running a test.sh script and some helm commands, but helm is not part of kubectl, and I am unsure how it plays along with Rancher.
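For anyone hitting the same two walls: within a namespace a Service should normally resolve by its short name, so needing the full cluster-local FQDN usually points at a cross-namespace lookup or a misnamed Service; and helm is a standalone client binary, not a kubectl subcommand. A quick sanity check might look like this (the service name “mongodb” and namespace “default” are made up for illustration):

```
# DNS check from inside the cluster: the short name should work in the same namespace.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup mongodb
kubectl run -it --rm dns-test2 --image=busybox --restart=Never -- \
  nslookup mongodb.default.svc.cluster.local

# helm is its own CLI (Helm 2 era): install the client, then set up Tiller in the cluster.
helm init          # installs Tiller using your current kubectl context
helm version       # confirms the client and server can talk
```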
From your response I take it that 2.0 is ready for primetime for those of us with previous k8s experience, but for the “k8s newbs,” the caveat is that we’ll either have to learn Kubernetes or wait for a 2.1 release.
I do look forward to more documentation, as I continue to run into issues getting multiple microservices to work together. I am also at a loss with the catalog when it comes to getting a MongoDB replica set up and running on an NFS persistent volume. Even taking the persistent volume out of the equation, the replica set never initializes, either automatically or manually.
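In case it helps others compare notes, a manual initiation attempt typically looks something like this. The pod name “mongo-0” and the “mongo” headless service are assumptions modeled on a typical StatefulSet layout, not the catalog item’s actual names:

```
# Exec into the first replica set member and initiate the set by hand.
# Pod, service, and replica set names are assumed; substitute your own.
kubectl exec -it mongo-0 -- mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo-0.mongo:27017" },
      { _id: 1, host: "mongo-1.mongo:27017" },
      { _id: 2, host: "mongo-2.mongo:27017" }
    ]
  })'
kubectl exec -it mongo-0 -- mongo --eval 'rs.status()'   # verify the members come up
```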
I’ve already gone beyond the time frame I set for migrating our meager stack to Rancher 2.0, based on the email saying this is a production-ready product. After two weeks, I’ve got superiors questioning our choice of Rancher altogether.
If someone could provide a sample doc or video on getting a NodeJS application up and running on 2.0 with a MongoDB replica set as its database, that would go a long way, as the previous method no longer works.
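Until such a doc exists, the NodeJS side usually comes down to a replica-set connection string that lists each member of the headless service. The hostnames and the “rs0” name below are assumptions consistent with the sketch above:

```
# Pass a typical replica-set URI to the Node app via the workload's env vars.
# Hostnames follow the StatefulSet pattern <pod>.<headless-service>; all names assumed.
kubectl set env deployment/node-app \
  MONGO_URL="mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/mydb?replicaSet=rs0"
```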
Thanks @willchan for providing some responses. From what you say, the documentation aspects, obviously a huge point for people who don’t already know k8s, are being addressed in very short order. That’s good news.
Maybe documentation will also resolve some of the key stability issues that I and others have been running into, because maybe we just don’t understand how it works. But I can’t tell from your answer whether this is being addressed. You call this production ready, yet we ran into a major issue: if a worker node is restarted, the whole API becomes unavailable. That’s not a production-ready system; worker nodes may regularly be taken offline and brought back.
@mbuckman It’s definitely a learning curve for those learning k8s for the first time. We understand more than anyone how much complexity k8s adds versus Cattle. However, k8s is also more powerful than Cattle ever was or would have been (from a mind-share and open-source contribution standpoint). We chose to move to k8s for the features it provides, but also because we believe our value-add will be in managing your workloads. In all honesty, 2.0 is definitely geared more toward the IT/ops person who needs a solution to manage and deploy the “kubesprawl” of clusters we see today. Our eventual goal is to provide value on top of k8s (up the stack), whether that means making it easier to use or providing technologies such as serverless or service mesh where you don’t really need to fully understand how k8s works. Some of the team have already prototyped these and are working on them as we speak. You may see some of that soon.
Like I said, documentation is coming, and it will come fast. I promise you’ll see updates this week, continuing until we feel things have been covered to our community’s satisfaction.
In terms of deploying MongoDB, I do have a team working on making those Helm charts easier to deploy. I can ask one of our engineers to see how to improve it beyond the default settings it ships with.
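For those who want to go beyond the defaults today, here is a sketch of overriding the upstream chart’s values. The key names reflect my reading of the stable/mongodb-replicaset chart; please verify them against the chart’s values.yaml before running this:

```
# Install the upstream replica set chart with non-default storage settings (Helm 2 syntax).
# Value keys are assumptions based on the chart's documented values; verify first.
helm install stable/mongodb-replicaset \
  --name mongo \
  --namespace default \
  --set replicas=3 \
  --set persistentVolume.storageClass=nfs \
  --set persistentVolume.size=10Gi
```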
If you are a 1.x user, you are probably waiting for the 1.6 migration guides I promised. It might be easier to wait until 2.1 for that, as we are still working through the delta between the two products and how to make the transition easier. I don’t think we can make this transition dead easy, but we will try our hardest. Some of the migration will require you to learn a little bit of k8s.
@etlweather I will get an engineer to look at this. I think one of these issues is already being looked at for 2.0.3, though I am not sure they are all related. I agree that a cluster’s worker node being down should not affect Rancher itself.
Our engineers looked into this issue and couldn’t reproduce it. We have responded asking for more information, but I agree that a cluster worker node being down should not cause the cluster to become inaccessible or drop out of the active state.
This is definitely reproducible, at least with k8s 1.10. We will re-verify this for other k8s versions to make sure. In the meantime, a fix will definitely be in our next 2.0.3 release of Rancher.
@willchan would you mind updating here once new documentation becomes available, so we don’t have to go hunting for it across GitHub, the Rancher website, and the forums?
“In terms of deploying MongoDB, I do have a team that is working on making those Helm charts easier to deploy. I can ask one of our engineer to see how to improve it beyond the default settings it comes with.”
The MongoDB cluster and nfs-provisioner should be fine now. Please let us know if you encounter any issues, or file an issue at https://github.com/rancher/rancher. Thanks.
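If you want to verify the provisioner end to end, a minimal check might look like this. The storage class name “nfs” is an assumption; list what the catalog item actually created with kubectl get storageclass:

```
# Create a claim against the NFS provisioner's storage class, then confirm it binds.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
EOF
kubectl get pvc mongo-data   # should show Bound once the provisioner creates the PV
```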
I can also say Rancher 1.6 Cattle is very convenient for deploying and managing containers; it has a list of features like rancher-nfs and other services in the catalog. I’ve tried v2.0 and put in long hours just configuring the NFS connection to the host, but no luck; it has fewer service features in the catalog, and I’m having a hard time deploying NFS. I hope they will not stop supporting Rancher 1.6 Cattle, or will just support both 1.6 and 2.0 all the way.