Trying to run Eureka (Netflix) under Rancher - resolving issues

We’re trying to run Netflix’s Eureka under Rancher for service discovery. I know there are other ways to do this, but our software is built for Eureka. The problem is that containers identify themselves by their hostname, which is a random hash. Eureka translates this into a .com name, which doesn’t resolve.

Is there a way to make that hostname resolve? Is there another way to do this? I don’t know enough about Rancher’s network stack.

Hi @kiboro, are you running Eureka as a service on Rancher?

I’m not familiar with Eureka, but is there a way to force it to use IP addresses instead of hostnames? Is there a registration process that the user controls? If so, you could use an internal load balancer, link it to your Eureka service, and then create a container/service to register the LB. I’m not sure you can force the hostname to the link name, but that would be required.
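If it happens to be a Spring Cloud Eureka client, there is a property for registering by IP rather than hostname. A minimal sketch, assuming Spring Cloud Netflix property names and a placeholder jar name:

# prefer-ip-address makes the instance register with its IP address
# instead of its hostname (Spring Cloud Netflix Eureka client property);
# your-eureka-client.jar is just a placeholder.
java -Deureka.instance.prefer-ip-address=true -jar your-eureka-client.jar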

As a side note, there is a label you can apply to a service, io.rancher.container.hostname_override: container_name, that will give you prettier hostnames.

Yes, we’re trying to use Eureka as a Rancher service. We pushed it down there because it’s the internal fabric of our application, so we wouldn’t then need host ports to be defined.

We did try to use the IP address. The problem is that the Eureka client picks up the first IP address (172.17.x.x, the Docker bridge), not the second (10.42.x.x, the Rancher managed network). So we get a record that doesn’t resolve across the cluster.

Not the prettiest solution, but yes, we can use the Rancher metadata API to seed the Eureka client with the Rancher IP, which is reachable inside the cluster.

#!/bin/bash

# If this image is deployed to Rancher, get the container's primary IP from the metadata service and pass it to Eureka explicitly.
EUREKA_INSTANCE_HOSTNAME=$(curl http://rancher-metadata/latest/self/container/primary_ip)

# Run CMM and specify the hostname
java -Deureka.instance.hostname="$EUREKA_INSTANCE_HOSTNAME" -jar pi-zuul.jar

Glad metadata could help out. It’s probably a little cleaner than getting it from ip addr show :slight_smile:

If that’s your run script, you might want to consider exec java ..., so that Docker manages the process you really care about.

Sorry, the last part makes some sense, but I don’t completely understand.

I’m running the java jar from the script. If the jar finishes or fails, the whole thing ends and so does the container. What else does Docker do with the exec’d process? Does it still run the entrypoint before doing the exec?

You would still run what you have in the script, but add exec to the line that launches the jar. In most cases it will work fine either way: the java process will end or die, and then the script will follow suit. Where it gets different is signal handling: you don’t get system signals from Docker when it’s time to shut down or stop. Docker only talks to PID 1 inside the container, which is typically the command called by ENTRYPOINT or CMD, and PID 1 is responsible for handling a clean shutdown. By changing the line to exec java -jar ... you make the java process PID 1.
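For example, a minimal sketch of the run script from earlier with exec added (same metadata URL and jar name as in that script):

#!/bin/bash

# Get the container's primary IP from the Rancher metadata service, as before.
EUREKA_INSTANCE_HOSTNAME=$(curl -s http://rancher-metadata/latest/self/container/primary_ip)

# exec replaces the shell with the java process, so java becomes PID 1
# and receives stop signals (SIGTERM) from Docker directly.
exec java -Deureka.instance.hostname="$EUREKA_INSTANCE_HOSTNAME" -jar pi-zuul.jar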

Oh, now I understand. I was confusing docker exec with the shell’s exec. exec is good: java will replace the shell script as PID 1.

This question is very old, but I am facing some Eureka issues too.

My Scenario:

I have deployed a workload eureka in the namespace microservices.
The workload has 4 Pods, eureka-0 to eureka-3, one on each worker node of the cluster (the cluster has one manager node and 4 worker nodes). I have added a load balancer (which I think I have not used yet), and Rancher created a discovery service named eureka.

I know that Rancher/Kubernetes created these hostnames:
eureka-[0…3].eureka.microservices.svc.cluster.local and eureka.microservices.svc.cluster.local. The last one, I believe, is being used by the other Pods running the Eureka client, because the clients are able to register with the different Eureka instances. The thing is, I was trying to use the generated names for each Pod for communication between the Eureka instances, but after opening a shell on each Pod I realized that if I ping any eureka-[0…3].eureka.microservices.svc.cluster.local, only the Pod’s own name resolves. So what is the point of those names if they don’t resolve within the cluster?
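In case it helps, this is roughly how I have been checking it (assuming nslookup is available in the Eureka image; service name and namespace as above):

# Per-Pod records like eureka-0.eureka.microservices.svc.cluster.local normally
# only exist when the governing service is headless (clusterIP: None) and the
# StatefulSet's serviceName points at it, so check what Rancher created:
kubectl -n microservices get svc eureka -o jsonpath='{.spec.clusterIP}'; echo

# Try to resolve a sibling Pod's name from inside another Pod:
kubectl -n microservices exec eureka-0 -- nslookup eureka-1.eureka.microservices.svc.cluster.local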

How can I make the generated names (workload name plus domain) resolve within the cluster?

Thanks in advance,

-Martin

UPDATE:

I finally used each VM node’s own IP address and hostname instead of the ones generated for each Pod. (I run one Pod per node, using a label to make the Eureka Pods get scheduled on specific nodes.)
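For reference, a minimal sketch of how the node address gets handed to Eureka, assuming the workload spec injects it into the Pod as a NODE_IP environment variable via the Downward API (status.hostIP); eureka-server.jar is a placeholder jar name:

#!/bin/sh
# NODE_IP is assumed to be injected by the Downward API (fieldRef: status.hostIP).
# Register with the node's address instead of the generated Pod hostname.
exec java -Deureka.instance.hostname="${NODE_IP}" -jar eureka-server.jar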