Accessing a load balancer from other containers on the same host

I have a Docker registry running as a Rancher service, with a Rancher LB in front of it. This works fine when accessing it from elsewhere, but I'm now trying to access it from another container on the same host that is not on the Rancher managed network (a Drone build; Drone runs each build in its own Docker network), and this isn't working.
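For reference, this is roughly how the failure reproduces by hand. registry.example.com stands in for my real registry hostname, and drone_net for the kind of per-build network Drone creates; both names are placeholders:

# Works when run on the host itself:
curl -sv https://registry.example.com/v2/

# Fails when run from a container on a separate bridge network on the same host:
docker network create drone_net
docker run --rm --network drone_net curlimages/curl -sv https://registry.example.com/v2/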

A tcpdump shows the following:

07:30:19.929081 IP 172.18.0.2.45478 > [my public IP address].443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23407920 ecr 0,nop,wscale 7], length 0
07:30:19.929081 IP 172.18.0.2.45478 > 10.42.114.254.443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23407920 ecr 0,nop,wscale 7], length 0
07:30:20.949306 IP 172.18.0.2.45478 > [my public IP address].443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23408176 ecr 0,nop,wscale 7], length 0
07:30:20.949306 IP 172.18.0.2.45478 > 10.42.114.254.443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23408176 ecr 0,nop,wscale 7], length 0
07:30:22.965323 IP 172.18.0.2.45478 > [my public IP address].443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23408680 ecr 0,nop,wscale 7], length 0
07:30:22.965323 IP 172.18.0.2.45478 > 10.42.114.254.443: Flags [S], seq 4236333450, win 29200, options [mss 1460,sackOK,TS val 23408680 ecr 0,nop,wscale 7], length 0

The paired lines show each SYN to the public IP being DNATed to 10.42.114.254 (the LB container on the Rancher managed network), while the source stays 172.18.0.2, the build container's address on the bridge network Drone created. I think what happens is that 10.42.114.254 then tries to reply to 172.18.0.2 directly but can't, since there is no route from the Rancher managed network back to that bridge network. Is there any way to work around this?
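One workaround I'm wondering about is masquerading this traffic on the host, so the LB sees the host as the source and conntrack can steer the replies back. A rough sketch, using the subnets from the dump above (172.18.0.0/16 for the Drone build network, 10.42.0.0/16 for the Rancher managed network; the /16 masks are assumptions, and I haven't verified that this plays nicely with Rancher's own iptables rules):

# On the Docker host: rewrite the source address of packets leaving the
# Drone build bridge for the Rancher managed network, so replies return
# via the host and conntrack reverses the NAT.
iptables -t nat -A POSTROUTING -s 172.18.0.0/16 -d 10.42.0.0/16 -j MASQUERADE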

I found https://github.com/rancher/rancher/issues/1920, which suggests that this hairpin NAT case should work, but apparently it doesn't here; maybe because I'm accessing it from a container on a different Docker network rather than from the host itself? Accessing the registry (via the LB) directly from the host does, in fact, work.
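In case it's relevant, hairpin mode on the bridge ports can be inspected from the host like this (docker0 is an assumption; the containers involved may be attached to a different bridge):

# Prints one line per port attached to the bridge; 1 = hairpin on, 0 = off.
grep -H . /sys/class/net/docker0/brif/*/hairpin_mode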

I also tried upgrading to Rancher 1.6.3 with the new networking stack (promiscuous mode on, hairpin mode off), but this scenario still doesn't work.
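The promiscuous setting itself can be checked in a similar way (the interface name is again an assumption; a non-zero promiscuity counter means something has enabled promiscuous mode on that interface):

# Detailed link info, including the promiscuity counter.
ip -d link show docker0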