How are you bridging rancher environments?

It would be nice if Rancher could manage connections between environments and their services. Something like an external link or service, but pointing to a service in a different environment.

Use-case: App environment sending logs to monitoring environment.

ops/monitoring/docker-compose.yml

server: # ...

collector-app:
  image: rancher/lb
  ports:
    - 12345
  links:
    - server
  labels:
    rancher.environment.bridge: 'true'
    # allow only listed environments
    rancher.environment.allowed: devs # name or env id

devs/app/docker-compose.yml

server:
  external_links:
    - ops/monitoring/collector-app

This is exactly the use case that led me to this post. I want a logging environment set up so that all of my stacks can pipe into one place, rather than one logging stack per environment.

This is technically possible with custom security group configurations and a daemon running in each stack, but things would be much simpler if we could use the provided templates (or create our own catalogs) to target services across environments.


Each environment has its own unconnected overlay network & hosts, so this isn’t really possible with internal IPs. You could use external-DNS (Route53, etc in the Catalog) + an External Service in each stack that points to the collector service DNS entry. That works even if the collector is on a completely separate Rancher installation.
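To illustrate the suggestion above: a sketch of what the consuming stack might look like, assuming Rancher 1.x compose conventions (the `rancher/external-service` image and the `hostname` field in rancher-compose.yml are from the Rancher docs as I recall them, so double-check there; `collector.example.com` is a placeholder for the DNS record your external-DNS service publishes).

```
# docker-compose.yml -- declare an External Service in the app stack
collector:
  image: rancher/external-service

# rancher-compose.yml -- point it at the collector's DNS entry
collector:
  hostname: collector.example.com
```

Other services in the stack can then link to `collector` as if it were local, even though it resolves to another environment or another Rancher installation entirely.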

Thanks for the confirmation. I did manage to get a fully isolated logging environment set up, with Logstash listening on a public port, but I could never get the logspout daemon to pipe data in, and I'm not sure how to go about debugging it.

I used the logspout service from the catalog and pointed it at the Logstash instance (via its external IP and port), but as far as I can tell nothing ever came through. Logspout does send the data: I can see a huge jump in network transfer if I hammer one of the containers with traffic, but Logstash never picks it up. Note that I had to use custom compose files for this, since "target service" is compulsory in the catalog UI, which blocks me because the logging stack is isolated away from any other services.
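For reference, the custom compose file I mean is roughly this sketch, using the upstream `gliderlabs/logspout` image rather than the catalog one (the endpoint `logs.example.com:5000` is a placeholder for the collector's external address and port, and the catalog's internals may differ from this):

```
logspout:
  image: gliderlabs/logspout
  # ship every container's stdout/stderr to the remote endpoint
  # over syslog/UDP; swap in syslog+tcp:// to test TCP instead
  command: syslog+udp://logs.example.com:5000
  volumes:
    # logspout reads container logs via the Docker socket
    - /var/run/docker.sock:/var/run/docker.sock
  labels:
    # run one instance on every host in the environment
    io.rancher.scheduler.global: 'true'
```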

Also, to confirm that logs were being picked up correctly, I used the original logspout image from Docker Hub and piped data into a remote syslog, which worked straight away. So my problem now seems to lie with the Logstash catalog service not receiving data. Any recommendations on how to debug this?
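One thing worth ruling out in a situation like this is a protocol mismatch: logspout's syslog route speaks UDP or TCP depending on the route URI, and the Logstash side only sees traffic if the matching input is open on that port. A minimal Logstash debug config that listens on both and prints everything it receives (port 5000 is a placeholder; adjust to whatever the catalog service exposes) might look like:

```
input {
  udp { port => 5000 type => "syslog" }
  tcp { port => 5000 type => "syslog" }
}
output {
  # dump every event to the container's stdout so it shows up
  # in `docker logs` while you generate traffic
  stdout { codec => rubydebug }
}
```

If events appear on one input but not the other, the fix is just aligning the logspout route URI (`syslog+udp://` vs `syslog+tcp://`) with the open input.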