How to locate a container using the Rancher API

I want to access a container directly, but I’m not sure of the best way.

So from the environment API, I can get the list of services. I expected to find something like a “containers” link on the service, but didn’t find it.

I’ve found that containers are part of the project and that they reference a stack (a.k.a. environment). So I could just do a search on the name, but I don’t expect the container “name” to be unique across all environments (UI environments), so I need some advice on the best way to get the list of containers for a given service.

I know you’re gonna ask why…
We have a bunch of services that start up to drive some testing, and we want to be able to access a web service on each one, if needed, to get details about the ongoing status of the testing. We could do this with Mesos by getting the host where the container is running; Mesos maps a generated port to the port exposed on the container. I don’t care about doing things the same way as Mesos, but I do want to be able to hit http://<host>:8080 on each of the containers in a given service.

The link you want is instances… Cattle initially did VMs too, so the base type is instance and container “extends” it.
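Following that pointer, a minimal Python sketch of walking from a service to its containers via the `instances` link. The server URL, API keys, and resource ID below are placeholders, and the exact field names are assumptions based on this thread, not verified against a live Rancher server:

```python
# Sketch: walk from a Rancher v1 service resource to its containers
# via the "instances" link. URL, keys, and IDs are placeholders.
import base64
import json
import urllib.request

def api_get(url, access_key, secret_key):
    """GET a Rancher API resource using API-key basic auth."""
    req = urllib.request.Request(url)
    token = base64.b64encode(("%s:%s" % (access_key, secret_key)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def instances_link(service):
    """A service resource carries an 'instances' link; Cattle's base type
    is 'instance' and container extends it, so following the link returns
    the service's containers."""
    return service["links"]["instances"]

# Usage against a real server (IDs are placeholders):
# service = api_get("https://rancher.example.com/v1/services/1s42", KEY, SECRET)
# containers = api_get(instances_link(service), KEY, SECRET)["data"]
```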

Thanks, I found it.

So now I have the container info, and I can get the host, but what is the Rancher way to actually connect to the container — for example, to reach a web service running in the container? Do I need to expose ports on the host and then keep track of which container is mapped to which port?
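For the port-tracking part, a small sketch of what the bookkeeping looks like. It assumes the container resource reports published ports as `"public:private/protocol"` strings (docker-compose style) and that you already have the host's IP — both field shapes are assumptions, not confirmed API details:

```python
# Sketch: given a container's published ports (assumed to be strings like
# "8081:8080/tcp", i.e. public:private/protocol) and the host's IP,
# build the URL to hit.

def public_url(host_ip, ports, private_port):
    """Find the public port mapped to private_port and return an http URL,
    or None if that private port is not published."""
    for entry in ports:
        spec, _, _proto = entry.partition("/")
        public, _, private = spec.partition(":")
        if int(private) == private_port:
            return "http://%s:%s" % (host_ip, public)
    return None

# e.g. with container["ports"] == ["8081:8080/tcp"] and the host's IP:
print(public_url("10.0.0.5", ["8081:8080/tcp"], 8080))  # http://10.0.0.5:8081
```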

I’m starting to get a Route53 zone set up now. Is this the preferred method going forward? The docs read like I’ll be able to hit the fqdn created for each container, and then Rancher does the routing under the covers. Sound correct?

Yes, the fqdn will do the routing for each service. You will not have to worry about the IP addresses. For any service with a port exposed, we will handle it for you with the Rancher Route 53 service.

Very cool! I can’t wait to give it a try. Thanks.

Have you thought of having Rancher add hosts to Route53? Does this feature already exist?

What would you use DNS entries for the hosts themselves for? SSH?

You may want to check out our new blog post about Rancher’s Route53 service.

Yes, SSH is a good example. But we’re looking at using Rancher to manage the VM resources in addition to the containers running on them. So if we could create VMs with the API and then access them by a name like that, it would be cool.

In the API I see the fqdn field for services, but not for instances. Will Rancher not create an entry in Route53 for each container instance of a given service?

@ebishop Rancher creates a record set per service on Route53. This record set’s name is the service’s name, and its IP addresses are the IP addresses of all the hosts where the service’s containers got deployed.

I guess I’m not connecting the dots. So the service, which is a logical object, has an entry in Route53, but the containers, which are more like real endpoints, don’t have an entry.

So if I have a service named x, in the y stack in the z environment, and it has some container instances running in it that offer up some web service, and their names are web1 and web2. I know that both service instances listen on port 8080. So I want to be able to add the service, then hit web1’s address directly and gain access to the service provided by the web1 instance.

I know your docs indicate that only the service will get a fqdn, so I’m not seeing how I’ll access the containers inside the stack.

Will I be able to do this?

@ebishop DNS resolution, in both internal Rancher DNS and external Route53 DNS, is done on a per-service basis, not per standalone container. So assume you have a service named “x” with port 8080 published to the host, and 2 containers deployed: web1 and web2.

In Rancher internal DNS, service x will get the following record:

x → {web1 container IP, web2 container IP}

Services linked to service “x” in Rancher will be able to resolve “x”, within the Rancher managed network, to 2 IPs: one for web1, another for web2.

In Route53 DNS, it’s gonna be:

x → {host IP where web1 is running, host IP where web2 is running}

So Route53 enables service x to be discoverable outside the Rancher network by mapping the public IPs of the hosts where the service’s containers are running to a domain name.

In both cases, you resolve the “x” service, not a standalone container. That’s the key concept of Rancher’s Service Discovery feature: the service is the endpoint, and discovery is done against the service.
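From a client’s point of view the lookup above is just ordinary DNS resolution of the service name. A minimal Python sketch, assuming you are inside a network where the name resolves (the service name “x” is the example from this thread):

```python
import socket

def resolve_service(name):
    """Return all IPv4 A records for a service name, as Rancher's internal
    DNS (or Route53, from outside) would answer. There is no per-container
    name to look up; you always resolve the service."""
    _, _, ips = socket.gethostbyname_ex(name)
    return ips

# Inside the Rancher managed network:
# resolve_service("x")  ->  e.g. ["10.42.0.2", "10.42.0.3"] (web1, web2)
```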

Ok, I think we are on the same page here. The only way I can connect directly to a single container within a stack is to have that container expose a port on the host.

So according to what you are saying, if web1 was listening on port 8081, I’d need to use something like http://<host>:8081 to access web1. (Then I have to manage avoidance of port conflicts. Yuk.)

I assume the only means of accessing web1 and web2 in Rancher is via a loadbalancer. Is this correct?

One thing that keeps rolling around in my head is that Rancher needs to access each container to do health checks, right? So can I hook into whatever you are doing?

The only way I can connect directly to a single container within a stack is to have that container expose a port on the host.

Yes, if you are connecting to a Rancher container from outside of the Rancher managed network. Within the managed network, all containers are accessible over the IPsec network even without the port published.

Then I have to manage avoidance of port conflicts.

Rancher’s allocator will manage it for you: 2 containers with the same public port will never start on the same host.

I assume the only means of accessing web1 and web2 in Rancher is via a loadbalancer. Is this correct?

Yes, you can register web1 and web2 to a Rancher LB. In this case, only the LB service’s port needs to be published to the host in order for it to be accessible from outside the Rancher managed network.

One thing that keeps rolling around in my head is that Rancher needs to access each container to do health checks, right?

Rancher health checks are done within the Rancher managed IPsec network. The Rancher network-instance performs health checks for containers within the network, so container ports don’t need to be published.

To summarize the above: if your service needs to be accessible only within the Rancher network, you don’t have to publish its ports, and there is no need for the service to get registered in Route53. The service’s DNS resolution will be done using Rancher’s internal DNS, and the service’s name will be resolvable by any service linked to it. Only if your service needs to be exposed to the public do you publish its port; then, if the Route53 service instance is running, it will automatically register your service in Route53 DNS.
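That decision can be sketched in a few lines of Python. The field names (`name`, `fqdn`) are taken from the service resource as discussed in this thread; treating an absent `fqdn` as “not registered in Route53” is an assumption for illustration:

```python
# Sketch: how to reach a service, per the summary above. Assumed fields:
# "name" (resolvable inside the Rancher network), "fqdn" (set only once
# the service publishes a port and Route53 registers it).
def service_urls(service, port=8080):
    """Return (internal_url, external_url) for a service dict."""
    # Inside the Rancher managed network the bare service name resolves.
    internal = "http://%s:%d" % (service["name"], port)
    # From outside, you need a published port plus the Route53 fqdn.
    fqdn = service.get("fqdn")
    external = "http://%s:%d" % (fqdn, port) if fqdn else None
    return internal, external

print(service_urls({"name": "x", "fqdn": "x.mystack.myenv.example.com"}))
```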

I have cracked this one in Python (and Lua, if anyone’s interested in using Nginx). The link below is the source. You can use it as follows:

```
[--url=<RANCHER_URL> --key=<RANCHER_ACCESS_KEY> --secret=<RANCHER_SECRET_KEY>] --stack=<STACK> --service=<SERVICE> [--port=<PORT>] [--one] [--uri=<URI>]
```

Rancher API key details will be collected from the environment if present, same as rancher-compose.
`--port` is the internal port; the script will return the external one.
`--one` says return one IP:PORT pair at random from all those present and active.
`--uri` will make the script return http://IP:PORT/uri so you can use that directly in curl.

e.g.

```
curl "$(<script> --stack=mystuff --service=mywebserver --port=80 --uri=/index.html)"
```

will return the homepage of your website in the ‘mystuff’ stack/project.