Deploying a docker-compose stack to Rancher 2.0 on AWS

Hi all,

I’m just beginning to experiment with Rancher as a way to manage containers on a cluster of virtual machines on Amazon AWS, but being new to Amazon AWS, Kubernetes, and Rancher, I’m finding it a little difficult to join all the dots.

I will say this: actual deployment of Rancher, and the underlying Kubernetes infrastructure, has so far been very slick. Kudos to the development team for pulling that off… deploying the master node really couldn’t be much simpler, and getting the underlying stack going takes little effort.

I’m not sure what “best practice” is regarding exactly how and where to deploy what; I guess a lot is “it depends”. My thinking is that the “worker” nodes will be behind a gateway of sorts, and only a few select nodes will have public IP addresses.

Our current infrastructure, which we hope to replace, runs on Vultr, which we’re quickly outgrowing as a VPS host; hence we’re looking to shift to AWS. Presently, we just install Docker and docker-compose on an Ubuntu virtual host and use docker-compose.yml files to deploy our stack from private images stored on Docker Hub.
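For context, our current deployments look something like this (a minimal sketch; the service and image names here are placeholders, not our real stack):

```yaml
# Sketch of the kind of docker-compose.yml we currently deploy with
# (placeholder names; real stack has more services)
version: "2"
services:
  web:
    image: myorg/private-web:latest   # private image pulled from Docker Hub
    ports:
      - "80:80"
    volumes:
      - webdata:/var/lib/web
volumes:
  webdata:    # named volume; Docker manages the storage on the host
```

We `docker login`, then `docker-compose up -d` on the host, and that’s the whole workflow we’re hoping to reproduce on Kubernetes.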

For this experiment, I have one master node running Rancher server, which sits on a public subnet with an external IP address pointing at it (both IPv4 and IPv6, although the latter doesn’t seem to want to work… that’s an AWS issue I’ll deal with later), and five slave nodes which reside on a separate private network.

We’re running Rancher v2.0.0-alpha16, downloaded this morning.

Three of these nodes run etcd and the management daemons; all five operate as workers. There’s a route between the two networks, and the back-end private network has access to the public Internet via a NAT gateway.

The plan is to use Amazon EBS as the volume storage for the entire stack. As I understand it, this is a built-in feature of Kubernetes. So far, I’ve not had a lot of luck deploying containers to the stack. It appears to go through the motions, but gives what appear to be mixed messages as to the status of the containers.

So far, doing it through the Rancher UI fails; if I feed it a docker-compose.yml, it complains Unable to find schema for "volumetemplate". I get the same message if I do a kompose convert -o /tmp/kube-compose.yml and then feed it the resulting file.

The way that has worked at least partially is to:

  1. Go to the cluster dashboard and create a new Project
  2. Go back to the cluster dashboard and create a new Namespace; selecting the new project
  3. Run kompose up --namespace mynamespace

That at least sets a few wheels in motion. I wind up with a stack of pods being created and a few persistent volume claims. However, the pods wind up stuck in either the ErrImagePull or ImagePullBackOff state, and the claims remain stuck in the Pending state.

I’ve tried creating the storage class for EBS via kubectl on the command line using the following definition:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: awsgp2-apse2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: ap-southeast-2a, ap-southeast-2b, ap-southeast-2c

I’m not sure where I plug AWS credentials in at this point; the Kubernetes documentation does not make this clear. The above triggers Kubernetes to create PersistentVolumeClaims which have awsgp2-apse2 as the storage class, but they still sit around in the Pending state when I do kubectl get pvc.
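For what it’s worth, the claims I’m generating look roughly like this (a sketch; the claim name and size are placeholders). My understanding, which may well be wrong, is that the aws-ebs provisioner doesn’t take credentials in the StorageClass at all; it relies on the kubelet and controller-manager being started with --cloud-provider=aws, and on the nodes having an IAM role permitting EBS volume creation:

```yaml
# Sketch of a PVC bound to the EBS-backed class (placeholder name/size)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydata
spec:
  storageClassName: awsgp2-apse2   # must match the StorageClass name
  accessModes:
    - ReadWriteOnce                # an EBS volume attaches to a single node
  resources:
    requests:
      storage: 10Gi
```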

According to the Kubernetes docs:

Dynamic provisioning can be enabled on a cluster such that all claims are dynamically provisioned if no storage class is specified. A cluster administrator can enable this behavior by:

    Marking one StorageClass object as default;
    Making sure that the DefaultStorageClass admission controller is enabled on the API server.

I believe I’ve done the former; I’m not sure how I check the latter in a Rancher-deployed Kubernetes cluster. Has anyone managed to get EBS storage working with Rancher 2.0/Kubernetes?
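In case it helps anyone following along, this is roughly how I marked the class as default and checked it (assuming the class already exists; the annotation key is the one the Kubernetes docs describe, and the admission-controller check is just my guess at where to look):

```shell
# Flag the StorageClass as the cluster default
kubectl patch storageclass awsgp2-apse2 \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# The default class should show "(default)" next to its name
kubectl get storageclass

# To check the admission controller, I'd inspect the apiserver's flags on
# the master node and look for DefaultStorageClass, e.g.:
ps aux | grep kube-apiserver | grep -o 'admission[^ ]*'
```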

As for the creation of the pods themselves, I suspect it could be down to Allow specifying imagePullSecret · Issue #897 · kubernetes/kompose · GitHub.

If someone could advise on a work-around to that issue, I’d be most interested.
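Regarding the ErrImagePull/ImagePullBackOff side of things, one workaround I’ve seen suggested (not specific to kompose, so treat this as a sketch with placeholder names and credentials) is to create a docker-registry secret by hand and attach it to the namespace’s default service account, so pods pick it up without kompose needing to emit imagePullSecrets:

```shell
# Create registry credentials in the target namespace (placeholder values)
kubectl create secret docker-registry dockerhub-creds \
  --namespace=mynamespace \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com

# Attach the secret to the default service account so all pods in the
# namespace use it when pulling images
kubectl patch serviceaccount default --namespace=mynamespace \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'
```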

I’ve also tried deploying from the catalogue. If I try this, say by deploying library-kubernetes-dashboard, I get taken to a page that shows library-kubernetes-dashboard with the message Template version not found, in a supposedly “active” state. Clicking on this gives me Unable to find schema for "stack", which sounds horribly like this issue:

Is that actually merged into the latest build or is there yet to be a build containing that fix?

Thanks in advance.
Stuart Longland

I have this same issue: template version not found entries sitting in the Catalog Apps with no way to delete them.

I am also on the Rancher 2.0 preview, which I pulled down just two days ago, so I am not sure if your fix made it into that branch.

I was able to recreate this issue by adding
then deploying addons-infra-core-services

I was able to clear the Catalog Apps with:
kubectl delete namespaces addons-infra-core-services
kubectl delete namespaces addons-infra-k8s-support

Importing compose files and the entire catalog in general do not work in the current preview. And those two items specifically are from preview1 and have no relevance now.

I need to set:
- --feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true
Is there a way to get that set in the current Rancher 2.0? The kubelet command line is not currently exposed. (Note that repeating --feature-gates overrides the earlier value, so the two gates need to go in one flag.)

Okay, so if what I’m running is preview1, what preview versions should I be running?

root@ip-172-31-9-154:~# docker pull rancher/server:preview
preview: Pulling from rancher/server
Digest: sha256:9a3396e13b156b875421c643cdd2a654a9822ed14472e8f37c99b258e4372876
Status: Image is up to date for rancher/server:preview

Docker seems to think what I have is the latest.

I meant that those two catalog items were for preview 1 (months ago), and deploying them makes no sense in the current preview (which you are apparently running).

Ahh right. Sorry, still getting familiar with how this all works. :)

In the meantime, while I try to figure out how to get persistent volumes going, I’m just in the process of going through the Kubernetes 101 tutorial. One roadblock I’ve stumbled on is that interactive sessions don’t seem to work:

stuartl@vk4msl-ws ~ $ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template='{{.status.podIP}}')"
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr

I tried running the date command to see if there was any life, but so far, no response. The container is allegedly up, but I have no way of interacting with it.

stuartl@vk4msl-ws ~ $ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"archive", BuildDate:"2018-02-28T22:52:46Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.7-rancher1", GitCommit:"ca92a5ebf0ac155a96027262802c0a47d0148af7", GitTreeState:"clean", BuildDate:"2018-01-23T23:00:06Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

I’ve also tried with kubectl 1.9.2 without much luck either. Is this a known issue?
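One thing that might be worth trying (just a guess on my part) is to bypass kubectl run’s automatic attach step and connect to the pod after it is running:

```shell
# If the automatic attach fails, exec into the already-running pod instead
kubectl exec -it busybox -- sh

# Or retry attaching to the pod's TTY after creation
kubectl attach busybox -i -t
```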

Okay, there’s a workaround:

$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template='{{.status.podIP}}')" -- sh -c 'wget -qO- http://$POD_IP'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Kludgy, but it’ll get me by. :)