I've forked rancher catalog to test a kafka entry but it fails

I’ve forked the rancher-catalog repository to test a Kafka entry.

I’ve updated the Rancher cluster to point to my repo (/v1/settings/catalog.url).
The kafka entry is listed in the catalog, but no icon appears, and the configuration detail is empty…
I thought that adding a subdirectory with a similar structure under the ‘templates’ directory would work, but it fails…
Do you have any tips on how to add an entry to the catalog?

best regards,


What version of Rancher are you using?

Have you tried logging out and back in to see if that helps?

Those are the right things to do… I think in the current release you have to restart the rancher-catalog-service daemon (or the rancher/server container in general) for it to pick up changes to the catalog URL correctly.

I’m using Rancher v0.47.0, Cattle v0.115.0.
How can I restart the rancher-catalog-service daemon?
By connecting into the rancher/server container?
Do I need to restart the rancher/server container in all cases?
best regards,


Rancher-catalog-service runs inside the rancher/server container. The easiest way to restart it is to restart the rancher/server container.

Are you running HA? (and therefore have multiple rancher/server containers?)

There is a format error in the rancher-compose.yml file under kafka/0/rancher-compose.yml, due to which the kafka template is not loaded by the catalog service.

A ‘:’ is missing after the top-level .catalog key.

The top-level property should be
name: “Kafka”
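For reference, a minimal well-formed header could look like this (the version, description, and question shown are illustrative, not taken from the actual file):

```yaml
# rancher-compose.yml — sketch of a well-formed .catalog header
.catalog:                                 # note the ':' after the top-level key
  name: "Kafka"
  version: "0.1.0"                        # illustrative version
  description: "Apache Kafka cluster"     # illustrative description
  questions:
    - variable: "kafka_scale"             # illustrative question
      label: "Number of Kafka nodes"
      type: "int"
      default: 3
```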

How can I check for this error?

If you can dig up the logs from the rancher-catalog-service, you will see errors like the following, which indicate that the template parsing failed:

ERRO[2015-12-10T12:09:06+05:30] Error unmarshalling rancher-compose.yml under template: DATA/kafka/templates/kafka/0, error: yaml: line 1: mapping values are not allowed in this context
ERRO[2015-12-10T12:09:06+05:30] Skipping the template version: kafka/0, error: yaml: line 1: mapping values are not allowed in this context

After correcting https://github.com/clescot/rancher-catalog/blob/master/templates/kafka/0/rancher-compose.yml
you can wait for some time; rancher-catalog-service should sync the updates periodically.

You can use something like http://www.yamllint.com/ to check the syntax. We’re working on having some basic checks run as part of CI for pull requests that you could potentially reuse for your own catalog once we have it running for pull requests to ours.

thanks to @denise, @prachi and @vincent for their replies.

@denise: no, I’m not currently running a cluster in HA mode.
I prefer to wait for the Galera master/master replication system that Rancher Labs seems to be working on :wink:.

@prachi: thanks for pointing out this YAML formatting error!
I’ve dug up the logs from the rancher server container.
Now it works.

@vincent: thanks for pointing out this YAML validation service.

@denise @vincent @prachi @cloudnautique:
I’ve spotted another issue, which I’ve since solved (I had specified ‘integer’ in the type field instead of ‘int’):
Can you publish a list of supported question types in the catalog?

I’ve now got a rough working Kafka cluster from this catalog entry, but some questions remain before submitting a pull request to the rancher-catalog:

  • what is the purpose of the catalog?
    I think the catalog exists for testing purposes.
    Since ‘infrastructure as code’ is the way many folks are going, for production we should favor configuration files stored in git over configuration through the UI (or maybe you want to provide a feature to commit to a personal git repository through the UI?)

  • which configuration options should be enabled through the catalog? Kafka has many configuration parameters, and it seems weird to copy them all into rancher-compose.yml: if you agree that the catalog is for testing purposes, a catalog entry’s configuration should expose only the essential options, shouldn’t it?

  • enable external access via catalog entry configuration? Kafka external access implies different configurations: the internal container IP needs to be advertised to the other Kafka containers, but Kafka clients need to know the external IP.
    This problem can be solved through some configuration. Can I make port opening in docker-compose.yml conditional on some option? I.e., if external access is configured, is it possible to open port 9092 in docker-compose via an ‘if’ statement?

  • is container-config always possible? Configuration through a config container seems to be the best way according to your compose-templates repository. I tried this approach for the kafka entry but was blocked, because I need to know the internal IP of the container running Kafka in the configuration file. Since the configuration container runs (and confd/metadata resolution happens) before the volumes_from mechanism is executed, we cannot put the right IP in the config file.

  • what is the intended content of a catalog entry? Is it recommended to provide, in the Kafka use case, a way to create some topics (via the catalog entry configuration), or to provide a container to monitor Kafka (for example a kafka-manager container), or to consume it (hello kafkacat)?

  • do catalog entries need to reference Rancher images? I’ve built a Kafka rancher image (https://github.com/clescot/rancher-kafka).
    If catalog entries for Rancher must reference only rancher images, how should I submit them as a PR?

thanks for your great work.

best regards,


The currently supported question types are a subset of https://github.com/rancher/api-spec/blob/master/specification.md#schema-fields:

  • string
  • int
  • password
  • boolean
  • enum
  • multiline

Plus service to refer to another service.
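A questions block using several of these types might look like the following (the variable names, labels, and the `options` field are illustrative; check the official catalog entries for the exact schema):

```yaml
.catalog:
  questions:
    - variable: "broker_count"
      label: "Number of brokers"
      type: "int"              # 'int', not 'integer'
      default: 3
    - variable: "admin_password"
      label: "Admin password"
      type: "password"
    - variable: "cleanup_policy"
      label: "Log cleanup policy"
      type: "enum"
      options:                 # assumed field name for enum choices
        - "delete"
        - "compact"
    - variable: "zk_service"
      label: "Zookeeper service"
      type: "service"          # reference to another service
```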

@clescot, the catalog service now does validation as well, with a git repo… making sure it can load the whole catalog. You can look at the ./scripts/ci script in the Rancher catalog repo to see how to do it.

Purpose of the catalog… the official Rancher catalog is ideally moving towards production-ready apps. Yes, it’s pretty hard to do :slight_smile: The catalog entries typically cover the 80% use case, but the compose files / confd files are aimed at being 100% customizable. Logstash works pretty well in that you can copy / paste whole sections of the file. Confd / metadata is just one way to get config. If you wanted to go the git route you could pass a config URL and do a checkout in a sidekick. In the next release, you can add multiple git catalog repos at a global level.
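The “checkout in a sidekick” idea could be sketched roughly like this — everything here (the git image, the config_repo question, the mount path) is hypothetical, not an existing Rancher example:

```yaml
# docker-compose.yml — hypothetical git-checkout sidekick for config
kafka:
  image: clescot/rancher-kafka:19
  labels:
    io.rancher.sidekicks: kafka-config
  volumes_from:
    - kafka-config
kafka-config:
  image: alpine/git                        # any image with a git client would do
  command: clone ${config_repo} /opt/conf  # config_repo: hypothetical catalog question
  volumes:
    - /opt/conf
```

The main container then reads its configuration from the shared volume instead of from confd/metadata.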

Which configuration options… this depends… again shooting for the 80% use case of what people would need. We are working on DNS discovery mechanisms, so you could hit service.stack as a DNS entry and get the other containers.

external access via catalog entry… No ‘if’ statement support, but it’s something that has come up. We have also talked about suffixing/prefixing of values. So if you have

  - ${var}8080

and a question like

  - variable: var
    type: string
    suffix: ":"

you could get the user to enter 80 and have it replaced with 80:8080

container config is always possible? I’d say so at this point, though it requires some tricks. There’s a little side project (cloudnautique/giddyup) I have been working on to codify them. The big one is using Rancher metadata; with that you can get at just about everything. The big thing there is network namespacing, because metadata returns responses based on the requesting IP. So you would want to do something like:

  kafka-main:
    labels:
      io.rancher.sidekicks: kafka-data,kafka-config
    volumes_from:
      - kafka-data
  kafka-config:
    net: 'container:kafka-main'
  kafka-data:
    net: none
With that setup, when confd queries /self/container/primary_ip you get the correct Rancher IP.
Also, in systems like this you want to wait for all services to come up first before writing config… so you want (pseudo code…)

  while len(/self/service/containers) < /self/service/scale { wait and poll metadata again }

Take a look at Giddyup for some other use cases, Galera is going to be using it extensively.

what is the intended content?.. we want to be able to deliver production-ready systems. Ideally, once launched from the catalog it is immediately consumable, but still gives you the raw app experience. GlusterFS for instance creates a volume for immediate use, but you could still log in and create new volumes.


@vincent: thanks for this information about the types accepted in a catalog entry.
@cloudnautique: thanks for your detailed answer. I’m pleased that Rancher will permit multiple referenced repositories in the next release.
I’ve not understood this sentence: “If you wanted to go the git route you could pass a config url and do a checkout in a sidekick”.
My question was: how to persist (infrastructure as code) the catalog entry version and answer.txt in a git repository?
I will update my kafka catalog and submit a PR soon with your insightful remarks.
best regards,


@cloudnautique: I’ve tried to follow your recommendations about separating my kafka entry into kafka-data, kafka-config and kafka, but I’ve got an issue:
I use the same template as in the previous successful attempt (where everything was merged into one container and run with a supervisor daemon) to generate the zookeeper connection string,
but here is the generated string: localhost:2181 instead of privateip1:2181,privateip2:2181,privateip3:2181.
The template code is the same:

  zookeeper.connect={{range ls "/services/zookeeper/containers"}}{{ $containerName := getv (printf "/services/zookeeper/containers/%s" .)}}{{getv (printf "/containers/%s/primary_ip" $containerName)}}:2181,{{end}}

Is there any drawback with the primary IP when we use this networking mode?

here is the docker-compose file:

  kafkaconfig:
    image: clescot/rancher-kafka-config:33
    net: 'container:kafka'
    volumes_from:
      - kafka
  kafka:
    image: clescot/rancher-kafka:19
    labels:
      io.rancher.scheduler.affinity:host_label: kafka=true
      io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
      io.rancher.sidekicks: kafkaconfig
    volumes:
      - /data
best regards,


hmm… I haven’t seen localhost returned as a container IP from metadata before. The network mode just shares the interface between the two containers, so it is making requests always as the kafka container. It’s a pattern we follow all the time. If you don’t do this and you separate them out, you end up with answers for the config container, which is seldom what you want.

Is there a difference between the two images? In your repo, I only saw one Dockerfile, so I assumed they came from the same place.


I’ve put the code with two different Dockerfiles in another branch (named ‘divide_and_conquer’), which is not yet pushed to GitHub.
I encountered this situation with rancher/server v0.50 and rancher-compose 0.6.2.
I will push this new branch in a few hours to illustrate this weird case…
thanks for your support and patience!


@cloudnautique:
my issue occurs with this repository:

in the branch ‘divide_and_conquer’ (divide and conquer the rancher containers ;–) ):

I’ve tried to follow your recommendations about separating config from runtime here:

But it seems to fail when the metadata resolution resolves the zookeeper IPs to localhost… (I’ve curled successfully from another container, to check that metadata returns an IP).

any clue?
best regards,

I’ve upgraded rancher/server to 0.50.1 but it fails too.
Do you have any tips to debug this metadata issue?

best regards,


I’ve scrutinized the rancher compose-templates repository, and I see some examples using network mode (net: container:<othercontainer>), but there are no examples that resolve, in a template, another stack as needed for kafka (i.e. zookeeper).
Maybe there is an issue with resolving another stack in metadata when you init the current stack?

best regards,