Alarms in Rancher

If anyone is interested in getting email notifications when services are not operating correctly (I mean monitoring degraded states), I have developed a simple application on top of the Rancher API as a workaround for now. See https://github.com/ndelitski/rancher-alarms


This is awesome! There’s an undocumented feature of Rancher that will make it a bit easier to deploy these types of services. Just add the labels:

labels:
  io.rancher.container.create_agent: 'true'
  io.rancher.container.agent.role: environment

What this will do: when you deploy your container, Rancher will create an API key for the container and set it in the environment variables CATTLE_URL, CATTLE_ACCESS_KEY, and CATTLE_SECRET_KEY. Now that I write this I realize those should probably be RANCHER_*. I’ll change that at some point…
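For example, a process inside such a container can talk to the Rancher v1 API directly with those variables (a minimal sketch; /services is just one of the standard API collections):

# list the services visible to this environment-scoped key
curl -s -u "${CATTLE_ACCESS_KEY}:${CATTLE_SECRET_KEY}" "${CATTLE_URL}/services"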

But honestly we’ve been having fun writing little utilities similar to this. I’m glad to see others doing it.


What keys are used in CATTLE_ACCESS_KEY and CATTLE_SECRET_KEY? Are they related to the ones I created in the UI? What happens if I deactivate all keys for the environment?

I have at least two more ideas in mind:

  1. Docker + host stats collection to a Graphite backend. The main point is to see not the separate metrics of hosts and individual containers in charts, but to look at them as they are represented in Rancher. The service could also pull in some host and service metadata (by the way, can I associate metadata with a host?), and then the alarms service could notify when, for example, a host has been running over 80% CPU for about 5 minutes or is close to running out of disk capacity.
  2. An AWS ELB agent running globally. I have grown a bit tired of manually connecting/disconnecting Rancher hosts to ELBs as they are created or removed, so it would be nice if I could add labels to every Rancher LB pointing to the corresponding ELB. When a host comes up, the agent would connect it to the corresponding ELBs. Or maybe I could solve this problem with an Auto Scaling group in AWS; I should do some research.
    I will work on them when I have spare time…

@ibuildthecloud We have a very complicated infrastructure, and in some places it is better to link containers at the host level rather than at the service level, for optimization reasons (I mean that Rancher links work with DNS entries pointing to the list of all of a service’s container IPs). Is there any possibility of supporting this case in Rancher, or might you look at it in the future?

For example, I have workers pipelined through a queue with a RabbitMQ node. Every host runs one instance of each worker and one instance of a RabbitMQ node. All the RabbitMQ nodes are tied into an HA cluster and replicate data to each other in the background. These hosts may be in different datacenters, but the data is shared. And this is a quite simplified scenario. In Rancher these workers are created as a mail stack, and RabbitMQ as a queue stack. I can’t create such complicated sidekicks (and even if I could, I would like to see them as different services), and I don’t want my workers sending and receiving messages to RabbitMQ nodes placed in other far-faraway locations.

CATTLE_ACCESS_KEY and CATTLE_SECRET_KEY are API keys that are dynamically created for that container. They last as long as the container is around. You don’t see them in the UI’s list of API keys; they are independent of the ones you created there, so deactivating those will not affect the keys generated from the labels.

If you run net: host for your service and then link that service to another service, the IP we report is the “agent IP” of the host, which is the IP that Rancher knows the host by. If you have a container that is net: host and you want to use our DNS, you need to add the label io.rancher.container.dns=true.
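In compose terms, that would look something like this (a sketch; the service name and image are just placeholders):

pinger:
  image: alpine
  net: host
  labels:
    io.rancher.container.dns: 'true'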

Will that work for you?

Hi,

Do these labels also work on K8S containers?

Thanks
-Vishal

Yes, these labels work on k8s containers.

Hi there!

Is this option still valid for Rancher version 1.1.0? And if yes, do I need to deploy that container using rancher-compose?

I just created a container but I can’t see those environment variables. This is how I spin up the container:

docker run -it -l io.rancher.container.create_agent=true -l io.rancher.container.agent.role=environment -l io.rancher.container.dns=true --rm alpine ash

And it appears in Rancher properly with the corresponding labels.

What I’m trying to do is get those ENVs so I can run a curl command to create an API key from my bootstrap script, and then be able to use rancher-compose with RANCHER_ACCESS_KEY and RANCHER_SECRET_KEY from other scripts.

Thanks

Just in case this could be useful for someone else: I managed to solve my issue by using the already-created rancher-agent container, which already had the needed environment variables CATTLE_ACCESS_KEY and CATTLE_SECRET_KEY. It seems that if you use docker-compose to create a container, you won’t get the CATTLE environment variables populated even if you add those labels.

This is my script to create a new Rancher Environment API Key programmatically
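In essence it boils down to something like this (a rough sketch; I’m assuming the v1 apikeys collection endpoint and its name/description fields, which may differ between Rancher versions):

#!/bin/sh
# Create a new environment API key through the Rancher API.
# CATTLE_URL, CATTLE_ACCESS_KEY and CATTLE_SECRET_KEY are taken from
# the rancher-agent container; the key name below is just a placeholder.
curl -s -u "${CATTLE_ACCESS_KEY}:${CATTLE_SECRET_KEY}" \
     -X POST \
     -H 'Content-Type: application/json' \
     -d '{"name": "bootstrap-key", "description": "created from bootstrap script"}' \
     "${CATTLE_URL}/apikeys"

The JSON response should contain publicValue and secretValue, which can then be exported as RANCHER_ACCESS_KEY and RANCHER_SECRET_KEY for rancher-compose.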

Cheers :slight_smile:

@zot24 the labels only work when creating a container through the Rancher API/UI/CLI. If you directly do docker run, the container is already running, and there is no way to inject further environment variables after the fact.
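For example, deploying it through rancher-compose instead would look something like this (a sketch; the service name, image, and command are placeholders), and then the CATTLE_* variables should be injected:

test:
  image: alpine
  command: sleep 3600
  labels:
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment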


That’s what I guessed after doing some tests. Thank you for your answer @vincent