Hi all,
I’m just beginning to experiment with Rancher as a way to manage containers on a cluster of virtual machines on Amazon AWS, but being new to Amazon AWS, Kubernetes and Rancher, I’m finding it a little difficult to join all the dots.
I will say this: actual deployment of Rancher and the underlying Kubernetes infrastructure has, so far, been very slick. Kudos to the development team for pulling that off… deploying the master node really couldn’t be much simpler, and getting the underlying stack going is not much effort.
I’m not sure what “best practice” is with regard to exactly how and where to deploy what; I guess a lot of it is “it depends”. My thinking is that the “worker” nodes will be behind a gateway of sorts, and only a few select nodes will have public IP addresses.
Our current infrastructure, which we hope to replace, runs on Vultr, which we’re quickly outgrowing as a VPS host, hence we’re looking to shift to AWS. Presently, we just install Docker and docker-compose on an Ubuntu virtual host, and use docker-compose.yml scripts to deploy our stack from private images stored on Docker Hub.
For this experiment; I have one master node running Rancher server, which sits on a public subnet with an external IP address pointing at it (both IPv4 and IPv6; although the latter doesn’t seem to want to work… that’s an AWS issue I’ll deal with later), and 5 slave nodes which reside on a separate private network.
We’re running Rancher v2.0.0-alpha16, downloaded this morning.
Three of these nodes run etcd and the management daemons; all operate as workers. There’s a route between the two networks, and the back-end private network has access to the public Internet via a NAT gateway.
The plan is to use Amazon EBS as the volume storage for the entire stack. As I understand it, this is a built-in feature of Kubernetes. So far, I’ve not had a lot of luck deploying containers to the stack. It appears to go through the motions, but gives what appear to be mixed messages as to the status of the containers.
So far, doing it through the Rancher UI fails; if I feed it a docker-compose.yml, it complains Unable to find schema for "volumetemplate". I get the same message if I do a kompose convert -o /tmp/kube-compose.yml and then feed it the resulting file.
The way that has worked at least partially is to:
- Go to the cluster dashboard and create a new Project
- Go back to the cluster dashboard and create a new Namespace; selecting the new project
- Run kompose up --namespace mynamespace
That at least sets a few wheels in motion. I wind up with a stack of pods being created and a few persistent volume claims. However, the pods wind up stuck in either ErrImagePull or ImagePullBackOff states, and the claims remain stuck in the Pending state.
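Checking the events with kubectl describe gives a bit more detail on why the pods and claims are stuck (the names below are just placeholders; the real ones come from kubectl get pods,pvc -n mynamespace):

```
# Substitute the actual pod and claim names reported by "kubectl get pods,pvc -n mynamespace"
kubectl -n mynamespace describe pod <pod-name>
kubectl -n mynamespace describe pvc <pvc-name>
```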
I’ve tried creating the storage class for EBS via kubectl on the command line, using the following definition:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: awsgp2-apse2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: ap-southeast-2a, ap-southeast-2b, ap-southeast-2c
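To test the storage class on its own, separately from whatever kompose generates, I gather a minimal claim like the following (the name and size are arbitrary, just for testing) should get a volume dynamically provisioned if everything is wired up correctly:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs-test-claim          # arbitrary test name
spec:
  storageClassName: awsgp2-apse2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If a claim like that also sits in Pending, then presumably the problem lies with the provisioner or cloud provider configuration rather than with the kompose output.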
I’m not sure where I plug AWS credentials in at this point; the Kubernetes documentation does not make this clear. The above triggers Kubernetes to create PersistentVolumeClaims which have awsgp2-apse2 as the storage class, but they still sit around in the Pending state when I do kubectl get pvc.
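From what I can gather, the in-tree kubernetes.io/aws-ebs provisioner doesn’t take credentials in the StorageClass at all; it relies on the cluster being started with the AWS cloud provider enabled (--cloud-provider=aws) and on the nodes having an IAM instance profile that permits the EC2 volume calls. One way to check whether the cloud provider flag is set, assuming the Kubernetes components run as Docker containers named kubelet and kube-controller-manager (which seems to be how Rancher deploys them, though I’m not certain of the names), would be something like:

```
# Run on a node; the container names are an assumption about the Rancher deployment
docker inspect --format '{{json .Args}}' kubelet | tr ',' '\n' | grep cloud-provider
docker inspect --format '{{json .Args}}' kube-controller-manager | tr ',' '\n' | grep cloud-provider
```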
According to the Kubernetes docs:
Dynamic provisioning can be enabled on a cluster such that all claims are dynamically provisioned if no storage class is specified. A cluster administrator can enable this behavior by:
- Marking one StorageClass object as default;
- Making sure that the DefaultStorageClass admission controller is enabled on the API server.
I believe I’ve done the former; I’m not sure how I check the latter in a Rancher-deployed Kubernetes cluster. Has anyone managed to get EBS storage working with Rancher 2.0/Kubernetes?
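On that second point, my understanding is that the flag to look for on the API server is --admission-control (or --enable-admission-plugins on newer releases) containing DefaultStorageClass. Assuming the API server also runs as a Docker container called kube-apiserver on the nodes running the management daemons (again, an assumption about how Rancher names things), something like this should show it:

```
# On one of the nodes running the management daemons; the container name is an assumption
docker inspect --format '{{json .Args}}' kube-apiserver | tr ',' '\n' | grep -i admission
```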
As for the creation of the pods themselves, I suspect it could be down to Allow specifying imagePullSecret · Issue #897 · kubernetes/kompose · GitHub.
If someone could advise on a work-around to that issue, I’d be most interested.
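One thing I’m considering trying as a workaround is creating a docker-registry secret in the namespace and attaching it to the default service account, so that pods pull with those credentials without kompose needing to emit imagePullSecrets itself. The secret name below is just a placeholder:

```
kubectl -n mynamespace create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>

kubectl -n mynamespace patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'
```

Whether that plays nicely with Rancher’s project/namespace model, I don’t yet know.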
I’ve also tried deploying from the catalogue. If I try this, say by deploying library-kubernetes-dashboard, I get taken to a page that shows library-kubernetes-dashboard with a message: Template version not found, in a supposedly “active” state. Clicking on this gives me an Unable to find schema for "stack" error, which sounds horribly like this issue:
Is that actually merged into the latest build or is there yet to be a build containing that fix?
Thanks in advance.
Regards,
Stuart Longland