How do I expose a container port internally only?

How can I explicitly expose a container TCP port internally only? I’ve tried to set Public Host IP under Port Map in the GUI, but I get:

Allocation failed: No healthy hosts meet the resource constraints: [ports: []

You don’t need to publish ports for internal use. Containers can directly reach each other.

But if the image’s Dockerfile has EXPOSE set, then only those ports are exposed internally. How can I expose a port without editing the image?

All 65535 ports of one container are directly reachable from any other container in the environment (with managed networking, and with default policy manager config). They don’t need to be mapped and don’t need to be EXPOSEd.
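As a sketch of what “directly reachable” means, here is an illustrative Compose file (service names `web` and `client` are made up for this example; Rancher’s managed network behaves similarly for containers in the same environment):

```yaml
# docker-compose.yml (illustrative): no `ports:` section anywhere,
# yet `client` can reach `web` on any port web actually listens on.
services:
  web:
    image: nginx:alpine        # listens on 80 inside the container
  client:
    image: curlimages/curl
    command: ["curl", "-s", "http://web:80/"]   # container-to-container request
```

Nothing here is published to the host; the request works simply because both containers are on the same network.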

Then please explain this:

This lighttpd2 image has EXPOSE 80 in its Dockerfile. I create a lighttpd2 container with no public ports and a load balancer that proxies HTTP requests to it on port 80. This works fine as long as lighttpd2 listens on port 80, but if I change this to e.g. 81 and change the load balancer accordingly, then it stops working. If I then edit the Dockerfile to EXPOSE 81 and pull the image again, it works. If I change it back to 80, it stops working again. If I look at the container in the Rancher web GUI, only the EXPOSEd port is listed (as private).

EXPOSE is just metadata, a hint that says “this image probably has something listening on port 80”. Its presence, correctness, or lack thereof has no actual effect*.
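For illustration (the base image name here is hypothetical), the only runtime effects of EXPOSE are that `docker run -P` will publish the listed ports and tools like `docker inspect` will display them:

```dockerfile
FROM some-lighttpd2-base    # hypothetical image name
EXPOSE 80                   # pure metadata: does not open, block, or filter any port
# The process inside can still listen on 81, 9000, or anything else,
# and other containers on the same network can still reach those ports.
```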

80 works because the lighttpd2 config says to listen on 80.

If you change the balancer to send requests to 81 while the lighttpd2 config file still says to listen on 80, then nothing is listening on 81, the requests fail, and the container is removed from the list of balancer targets.
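Concretely, a line like this in the main config is what actually determines the listening port (a sketch from memory of lighttpd2’s config format; the exact syntax may differ in your version, so check the lighttpd2 docs):

```
# lighttpd.conf (sketch): this setting, not EXPOSE, decides the port
setup {
    listen "0.0.0.0:80";   # change to "0.0.0.0:81" to listen on 81
}
```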

*: except in the container *edit* dialog, where we only show/let you remap the ports that are either published or declared exposed. But this is not relevant to your situation.

I did change the lighttpd2 config file to listen on 81. I’ll try to be a little clearer:

If I change the lighttpd2 config file to listen on e.g. 81, change the load balancer to send requests to port 81, and restart the container, then it stops working. If I then edit the Dockerfile to EXPOSE 81 and pull the image again, it works. If I change the Dockerfile back and pull again, it stops working again.

I’m sorry, but this is simply not true. Unless you’re bind-mounting in the Docker socket, talking to the Docker API, listing the exposed ports, and generating a lighttpd2 config from them, the container has no way to know which ports are marked exposed, and EXPOSE has nothing to do with your issue.

You’re probably conflating some other problem, perhaps forgetting to update the health check target port, or not force-pulling the image, so the container happens to land on a host that already has an older copy.

Sorry, my mistake. lighttpd2 has two config files, lighttpd.conf and angel.conf. I did set it to listen on port 81 in lighttpd.conf, but I also needed to allow that port in angel.conf. Port 80, on the other hand, doesn’t need to be explicitly allowed in angel.conf; it’s implicitly allowed by default.
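So the fix is to change the port in both files. A rough sketch of the idea (the directive names below are approximate, not verified against your lighttpd2 version; consult its documentation for the exact angel.conf syntax):

```
# lighttpd.conf: make the worker listen on 81
setup {
    listen "0.0.0.0:81";
}

# angel.conf: non-default ports must be explicitly allowed here,
# otherwise the angel supervisor refuses the worker's listen request
allow-listen {
    ip "0.0.0.0:81";
}
```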

Thanks for your feedback.