How to test the Rancher metadata service, i.e. rancher/confd-base:0.11.0-dev-rancher?

Hi, I'm trying to build a config image based on confd.
You've provided a rancher/confd-base:0.11.0-dev-rancher Docker image.
How can I run this image to test some confd variable resolution?
docker run -i -t rancher/confd-base:0.11.0-dev-rancher /bin/bash
returns

System error: exec: "/bin/bash": stat /bin/bash: no such file or directory

Another question:
I've seen your GitHub repo, rancher/confd ("Manage local application configuration files using templates and data from etcd or consul").
Where is the Dockerfile?

best regards,

Charles.

@clescot, the image is built from scratch, which means there is nothing in there but the binary. We do that to keep the image size as small as possible. If you just want to test confd, I would recommend building a container based on Debian or something similar. You can just change the FROM line in this repo (https://github.com/rancher/compose-templates/tree/master/utils/containers/confd); if you change it to Debian/Ubuntu/etc., you can run the command you describe.
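
For example, something along these lines would get you a shell for poking around (the sed expression assumes that Dockerfile starts with a plain FROM scratch line; adjust as needed):

  # Clone the templates repo, swap the base image for Debian, and rebuild
  # so the resulting image has a shell and coreutils for testing.
  git clone https://github.com/rancher/compose-templates.git
  cd compose-templates/utils/containers/confd
  sed -i 's/^FROM scratch/FROM debian:jessie/' Dockerfile
  docker build -t confd-debug .
  docker run -i -t --entrypoint /bin/bash confd-debug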

Here is an example how we use this container: https://github.com/rancher/compose-templates/tree/master/elasticsearch/containers/0.4.0/elasticsearch-conf

We then deploy it like so: https://github.com/rancher/compose-templates/blob/master/elasticsearch/0.3.1/docker-compose.yml

Alternatively, you could start the rancher/confd-base container and then start another container with bash and --volumes-from=<rancher/confd-base>
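
Roughly like this (container names here are just examples, and it assumes the confd-base image exposes its paths as Docker volumes):

  # Create a (stopped) container from the confd-base image, then borrow its
  # volumes in a throwaway Ubuntu container that does have a shell.
  docker create --name confd-base rancher/confd-base:0.11.0-dev-rancher
  docker run -i -t --volumes-from=confd-base ubuntu:14.04 /bin/bash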

The repo you mention above is just our fork of the official confd project. We tied a release to it; the changes were merged into kelseyhightower/confd master this past weekend, but they are not in an official release yet.

Hope this helps, let us know if you have more questions.

@cloudnautique, thanks for your answer (and your great work!).
I'm trying to adapt a Kafka image for a Rancher cluster.
I've tried to learn from the Zookeeper example you've provided.
I've got a running confd image.
But my difficulty comes from the rancher-metadata association.
My first attempt was to:

  • modify a config file
  • build a Docker image
  • push it to the Docker repository
  • deploy it in a Rancher cluster
  • get very little feedback from the logs when it fails (no way to connect to a container built from confd-base:0.11.0-dev-rancher)
  • and start over…

Which is very long and tedious. A perfect fit would be to run a container with confd associated with a mocked rancher-metadata, running like the integration test in the confd git repository (https://github.com/kelseyhightower/confd/blob/master/integration/rancher/test.sh). I've also tried to use the etcd backend (but building that kind of image is not trivial), and now the env backend to simulate the rancher backend, but some differences remain (naming conventions with underscores); see the sketch below.

So, in short, if you could provide a confd image with a mocked Rancher metadata service to validate config files, it would be very useful.
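
For reference, here is roughly what I tried with the env backend (the variable name is only my guess at how a metadata path would map; confd's env backend turns path separators into underscores):

  # Simulate a single metadata key through the environment and render the
  # templates once; a real rancher backend would expose many more keys.
  export SELF_SERVICE_SCALE=3
  confd -onetime -backend env -confdir ./confd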

best regards,

Charles.
PS: I've seen this last git comment in your Zookeeper example:
"Scale was added to the metadata service, so now clusters can come up
in one shot."
=> Do you have any doc describing this feature, or is it planned for the next release?

I know what you mean about the tedious dev cycle. I typically have an instance running locally and then it just uses the local cache for a registry, but even that is still pretty slow.

I really like the idea of a metadata simulator. That would make it a lot easier. Then you could just bind mount the confd target mounts to your local working dir. I’ll try and work something up soon.

The Zookeeper commit was based on the metadata path /self/service/scale. That tells you how many containers are supposed to be running. It is not perfect; there are some cases where it would lag. In the case of Zookeeper, it needs to know about all of the nodes in the cluster. Before this, what would happen is the first node would know only about itself, the second node about nodes 1 and 2, and the third node would know about all of them. So the cluster was in a bad state on init. Now, what I can do is get the scale and wait until the length of /self/service/containers >= scale; a rough sketch of that wait is below. I believe the path is available in Rancher v0.42.0, which is latest; if not, I know it's in v0.43.0, which is still in RC status.
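
As a rough illustration of that wait (this is not the actual script from our templates; the URL and plain-text response format are assumptions based on the standard rancher-metadata endpoint):

  # Block until the metadata service reports at least "scale" containers for
  # this service; each container shows up as one line in the listing.
  SCALE=$(curl -s http://rancher-metadata/latest/self/service/scale)
  while [ "$(curl -s http://rancher-metadata/latest/self/service/containers | wc -l)" -lt "$SCALE" ]; do
    sleep 2
  done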

Hey cloudnautique, I've tried rancher-metadata & confd in an nginx container. It's really awesome, but I have some questions. If I make the confd container a sidekick, just like zookeeper, but also want to use reload_cmd or check_cmd, it seems confd and nginx must be in the same container, or the reload_cmd cannot be executed from the confd container (the sidekick). In such a case, how could I arrange them in a proper way?

best regards,

Yeah, if you want to run the commands, you'll need to run confd in the same container as the process. You could still do it with the sidekick approach, which keeps you from having to build a new mega-container: you would have to share /etc/confd and the binary, along with a new entrypoint that calls confd and starts the app (see the sketch below). If you can talk to your primary app over a socket, then you have some flexibility, but in a lot of cases you can't send another container a signal :-\ PID namespace sharing would be great.
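
A minimal sketch of what that entrypoint could look like, assuming the confd binary and /etc/confd are shared into the nginx container through a volume (the flags and paths are illustrative, not taken from our templates):

  #!/bin/sh
  # Render the config once before nginx starts, then keep confd watching in
  # the background so reload_cmd can signal the nginx master in the same
  # container.
  /usr/local/bin/confd -onetime -backend rancher -prefix /2015-07-25
  /usr/local/bin/confd -interval 30 -backend rancher -prefix /2015-07-25 &
  exec nginx -g 'daemon off;'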

You have to be careful though, particularly with clustered systems. Take Zookeeper, for instance: if a config change is pushed down, the whole cluster will restart at once. The service will lose quorum and impact all the services depending on it. So we have been thinking about the best way to orchestrate the restart.

I know it's not the best answer at the moment, but believe us, we are thinking about this one. Any thoughts on this would be welcome!

Thank you so much for the help and valuable suggestion about clustered systems.

As you mentioned above, we could use rancher/confd-base and share the confd program/binary with the primary container via a volume. However, it still feels somewhat invasive: the primary container needs to be aware of the confd program's existence and start it up (by a script, maybe). So, in the end, I simply packed the two services together in one container and run them both.

Here's my understanding (maybe superficial, and poorly expressed :) ); please let me know if I make any mistake.

  1. If the primary service just wants to refresh some files dynamically at regular intervals, then a primary service with a confd sidekick may be an elegant way to do it in Rancher. The files can be shared via a volume.
  2. But if we need to actively trigger the service to reload the files, it seems difficult for the primary service to stay unaware of the confd service; at least some action has to happen in the primary service.
  3. nginx's ability to hot-reload its configuration without downtime makes things easier in my case (see the one-liner after this list). Maybe it's more appropriate for such a feature to be implemented by the app itself rather than supplied by Rancher?
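
For point 3, by hot reload I just mean the usual in-place nginx reload, roughly:

  # Validate the freshly rendered config, then ask the running master
  # process to re-read it without dropping connections.
  nginx -t && nginx -s reload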

Finally, there's another little question about how rancher/confd works: I found that it's running on an empty base image.

Isn't there a way to start a confd container outside of Rancher, bind-mounting the working dir into it and using the label io.rancher.container.network=true in order to make the metadata service available?
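
Something like this, maybe (the flags passed to the image are only a guess):

  # Join Rancher's managed network via the label so http://rancher-metadata
  # is reachable, and bind-mount the local confd working dir.
  docker run -i -t \
    --label io.rancher.container.network=true \
    -v "$PWD/confd:/etc/confd" \
    rancher/confd-base:0.11.0-dev-rancher \
    -backend rancher -onetime -prefix /2015-07-25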

Yeah you could do that, but you lose /self resources.

Typically, I create a service with bind mounts and keep a shell open inside the container to run confd. It has a bit of overhead, because you need Rancher running on your development station and you have to create the service.

It would be nice to have a stub of Rancher Metadata runnable as a standalone binary/container.