Adding new environment variables to existing stacks

Hi folks,

If my environment variables change between releases, I’m not sure whether

  • rancher-compose up -u

will update the environment variables on the upgraded containers. Does it? Is there a way to do that?

Right now, I’m running a dev/test/prod-style system, where each environment (such as test) is a stack, and I’m using internal load balancers to route traffic to these environments.

If it turns out that the environment variables are NOT updated, it’s easy enough to delete/recreate my stack, but that breaks the load balancer. Is there a way (using rancher-compose) to re-add load-balancer endpoints after deleting/recreating a stack? I can delete/recreate the load balancer itself using rancher-compose, but that brings down the other environments while it’s rebuilt.

Any advice?

Can you share your docker-compose.yml and rancher-compose.yml so that I can understand what you’re trying to do regarding the changing environment variables? When you say environment variables, are you talking about environment interpolation?

Also, your other option is to perform a rolling upgrade of each container, which would automatically update the load-balancer endpoints.

Hi Denise,

As you can see from my docker-compose file (sent privately), I created 3 environments:

  • e2 (engineering test)
  • d2 (development)
  • uat (user acceptance testing)

These are basically the same setup; only the environment variables differ, selecting the database for e2/d2/uat. When I have a new s/w version, I push it to e2 and test (internal engineering validation). When that passes, I upgrade d2 (wider testing), then uat (end-user beta testing), etc.
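To make the setup concrete, the per-environment difference might look something like this hypothetical docker-compose.yml fragment (the service name, image, and variable names are placeholders, not the actual file):

```yaml
# Hypothetical sketch of one environment's service (e2 shown).
# Only the environment section differs between e2/d2/uat.
app:
  image: myorg/myapp:1.2.3          # placeholder image
  environment:
    NODE_ENV: test                  # placeholder values
    DB_HOST: e2-db.internal         # e2 points at the e2 database
    DB_NAME: app_e2
```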

If something goes wrong with an upgrade, or with my first push to e2 (where I test deployments), I sometimes have to do a rancher-compose rm -f in the e2 environment and recreate it. However, when I remove that environment, the labels for e2 are removed from the load balancer.

I did learn yesterday that I can essentially:

  • cd e2
  • rancher-compose rm -f
  • rancher-compose up
  • cd …/loadbalancer
  • rancher-compose up -u
  • rancher-compose up -c

So, by “upgrading” the load balancer, the e2 labels are re-added. However, I’m concerned that this upgrade affects the other environments while it occurs.

Does this help clarify?

When you say environment, do you mean stack (which is called environment in the API)? Based on your docker-compose.yml file, it seems like you’re using different stacks.

Your docker-compose.yml has targets in different stacks:

  labels:
    <target>
    <target>
    …

If you remove a stack (like e2), then the labels for e2 would be removed, as those services no longer exist. That’s expected behavior; the labels shouldn’t reference services that don’t exist in Rancher.
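For reference, a Rancher 1.x load balancer typically targets services in other stacks via external links plus io.rancher.loadbalancer.target.* labels on the LB service, roughly like this (a hedged sketch; the stack names, service names, and hostnames are placeholders):

```yaml
# Hypothetical LB service; e2/d2 names and hostnames are placeholders.
lb:
  image: rancher/load-balancer-service
  ports:
    - 80
  external_links:
    - e2/web:e2web
    - d2/web:d2web
  labels:
    io.rancher.loadbalancer.target.e2web: e2.example.com:80
    io.rancher.loadbalancer.target.d2web: d2.example.com:80
```

In a layout like this, deleting the e2 stack removes e2/web, so its link and target label are dropped from the LB, which matches the behavior described above.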

Man, nomenclature can get confusing. Environment for me is a complete system (dev/test/production). It may span multiple stacks.

You are right that labels are removed from the LB if the stack is deleted – that’s the core of my problem. Let me try to sum up (along with my solution):

  • There are config options driven by environment variables
  • If those environment variables change in the docker-compose.yml file, I don’t think (please confirm) that upgrading a stack will update them. I believe you have to delete/recreate the stack to pick up the new environment variables

My solution was to move to only 2 environment variables (NODE_ENV and ROLE), and to pull the rest of the configuration from my Consul host (an etcd alternative). So I can now change configuration without deleting/recreating the stack.
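A minimal sketch of what that leaves in the compose file, assuming hypothetical names; everything else is fetched from Consul by the container at startup (e.g. via consul-template or Consul’s HTTP KV API):

```yaml
# Hypothetical: only two variables are baked into the stack definition;
# the entrypoint looks up the rest in Consul using NODE_ENV/ROLE as keys.
app:
  image: myorg/myapp:1.2.3      # placeholder image
  environment:
    NODE_ENV: uat               # selects the Consul key prefix
    ROLE: web
```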

Let me sum up:

  • We need to add a new stack to an existing load balancer running in production (so we don’t want interruptions)
  • The LB is in its own stack (since it supports multiple stacks)
  • We update the docker-compose.yml to include the labels for the new environment
  • We do a
    rancher-compose up -u
    to “upgrade” the LB, which effectively adds the new labels to all the LB instances

Is this an acceptable solution for updating a LB w/o impacting existing sites?

When you say “add new stack to a load balancer”, that’s slightly confusing, as we only add services to load balancers, not stacks.

Yes, you should be able to update the targets of the load balancer. This should update which services the load balancer is directing traffic to; you can verify it in /etc/haproxy/haproxy.cfg inside the load balancer container.

The previously existing services/containers would continue to run, so if you have any issues, you should be able to roll back to your previous service.