Communication issue between pods

We are experiencing problems with communication between pods.
The scenario:
In our Rancher cluster, we have the default project and a namespace called che, in which we have created two workloads. WorkloadA uses a Keycloak image and starts up successfully. WorkloadB uses an image with an application that authenticates through Keycloak. For this to work, WorkloadB needs an environment variable containing the Keycloak host and port.
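For reference, the relevant part of WorkloadB's deployment looks roughly like the sketch below. The workload name, image, and the environment variable name are placeholders, not our exact identifiers:

```yaml
# Sketch of WorkloadB's spec; names and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-b          # placeholder for our actual workload name
  namespace: che
spec:
  replicas: 1
  selector:
    matchLabels:
      app: workload-b
  template:
    metadata:
      labels:
        app: workload-b
    spec:
      containers:
      - name: app
        image: our-app-image        # the application that authenticates through Keycloak
        env:
        - name: KEYCLOAK_HOST_PORT  # hypothetical variable name
          value: "keycloak-host:8080"
```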

We have tried two approaches:

  1. We created a Service that refers to the Keycloak workload and used the generated internal hostname for the Service (e.g., <service>.<namespace>.svc.cluster.local) in the environment variable of WorkloadB. The pod for WorkloadB starts up correctly with this host; however, when we click on the endpoint, the IDE for WorkloadB fails to start. It is supposed to authenticate via a redirect URI for Keycloak, but it never gets that far.
  2. We created an Ingress that refers to the NodePort Service that was generated when we created the Keycloak workload. In the Ingress, we selected the "Specify a hostname to use" option and gave it our cluster host prefixed with keycloak (e.g., keycloak.daeirnd05.k8s.eur.ad.sag). This host is used in the environment variable for WorkloadB. Starting the pod for WorkloadB is then hit or miss: sometimes it starts, and sometimes it doesn't. When it doesn't start, we see an error that it can't connect to Keycloak. When it does start, however, we can always bring up the IDE by clicking on the endpoint.
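For approach 1, the Service we created looks roughly like this (the names are illustrative). Kubernetes then makes it resolvable in-cluster as <service>.<namespace>.svc.cluster.local, which is the hostname we put in the environment variable:

```yaml
# Sketch of the Service for approach 1; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: keycloak        # placeholder for the Service name we used
  namespace: che
spec:
  selector:
    app: keycloak       # matches the Keycloak workload's pods
  ports:
  - port: 8080          # Keycloak's default HTTP port
    targetPort: 8080
```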

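For approach 2, the Ingress is roughly equivalent to the sketch below. The Ingress name and the NodePort Service name are placeholders; Rancher generated the actual NodePort Service for us:

```yaml
# Sketch of the Ingress for approach 2; names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress      # placeholder
  namespace: che
spec:
  rules:
  - host: keycloak.daeirnd05.k8s.eur.ad.sag
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak-nodeport   # the generated NodePort Service (placeholder name)
            port:
              number: 8080
```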
The cluster host daeirnd05.k8s.eur.ad.sag is registered in our domain name server, which is configured such that DNS resolution of *.daeirnd05.k8s.eur.ad.sag returns one random IP of one of the nodes in our Rancher cluster.
This most likely explains the hit-or-miss behavior mentioned above.
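The wildcard setup described above is roughly equivalent to the following zone fragment (the IP addresses are placeholders for our node addresses), so any given lookup can land on any node:

```
; Illustrative wildcard A records, one per cluster node (addresses are placeholders)
*.daeirnd05.k8s.eur.ad.sag.  IN  A  10.0.0.11
*.daeirnd05.k8s.eur.ad.sag.  IN  A  10.0.0.12
*.daeirnd05.k8s.eur.ad.sag.  IN  A  10.0.0.13
```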

We could use some advice on the best approach to use in this scenario.