UDP networking stack problems

Hello!

I’m trying to run the ossec host in the rancher ecosystem. I have a docker image that works when I run it with the following command:

sudo docker run -e AUTO_ENROLLMENT_ENABLED=true -p 4214:4214/udp -p 4215:1515 -d --add-host logstash:10.53.3.105 vitals/ossec-raher:0.0.4-RC2

However, it fails with the following docker-compose/rancher-compose setup: https://gist.github.com/tobowers/3d4c763d5cd23015dbc8f0e880defa35

Before I realized that it was “something in rancher”, my debugging had gotten to the point of running tcpdump inside the container (in rancher) and on a vanilla host (not part of the rancher ecosystem) running the ossec agent. The ossec manager (rancher) received a UDP packet and indeed sent a packet in return to the agent (vanilla host). The agent did receive a packet from the manager (rancher), but for some reason the application (the ossec agent) never saw it.

Since the exact same image works under vanilla docker… I have to assume it has something to do with how Rancher manipulates packets for its network? (even though I’m exposing the port on the host, and connecting to the host’s IP address and port).

Any help would be appreciated!

Topper

Are you sure these filenames are correct? It looks like you’re extending docker-compose-base.yml, but docker-compose.yml seems to be the other file present.

Can you give a bit more detail on how it fails?

I updated the file names. Everything deploys fine.

I don’t have much detail except… the protocol ossec uses is: the agent connects to the manager and sends a hello message over UDP (tcpdump and ossec debug logs show this reaching the container). The server sends a response… tcpdump on the agent shows this packet is received. However, the ossec agent sits there as if it never received anything from the server.

Given that this works when run with a docker run… it has to be something in the way rancher changes the packet headers?

I am having the same issue. The problem is that rancher rewrites the source port of the outgoing packet somewhere and doesn’t rewrite it back to the original, so essentially the “internal” port of the container is leaked in the outgoing packet.

I have yet to come up with a solution for this.

So here is a capture I did. The setup is rancher running in AWS, and a t2.micro also running in AWS. I have a go server running in rancher which echoes back whatever comes in. The client part just sends a sequence number and waits for a return value. (code is coming)

As you can see in the capture, the dst port out is 10001 and the src port back is 14593.

Frame 1: 43 bytes on wire (344 bits), 43 bytes captured (344 bits) on interface 0
Internet Protocol Version 4, Src: 192.168.99.119 (192.168.99.119), Dst: x.x.x.x (x.x.x.x)
User Datagram Protocol, Src Port: 33084 (33084), Dst Port: scp-config (10001)
    Source port: 33084 (33084)
    Destination port: scp-config (10001)
    Length: 9
    Checksum: 0x3566 [validation disabled]
        [Good Checksum: False]
        [Bad Checksum: False]
Data (1 byte)

Frame 2: 43 bytes on wire (344 bits), 43 bytes captured (344 bits) on interface 0
Internet Protocol Version 4, Src: x.x.x.x (x.x.x.x), Dst: 192.168.99.119 (192.168.99.119)
User Datagram Protocol, Src Port: 14593 (14593), Dst Port: 33084 (33084)
    Source port: 14593 (14593)
    Destination port: 33084 (33084)
    Length: 9
    Checksum: 0xe052 [validation disabled]
        [Good Checksum: False]
        [Bad Checksum: False]
Data (1 byte)

https://github.com/rancher/rancher/issues/6494