Websockets with Kubernetes and AWS ALB using default ingress controller


We have a Kubernetes cluster running on AWS, managed with Rancher. We use Rancher's default ingress controller (running on two EC2 instances) with an ALB in front. The setup works great up to the point where we want to use websockets (socket.io). Because we have a number of compute nodes (several in production), the websocket client hits a different compute node on each request, which breaks the socket. Is there a way to make this setup work?



Have you turned on sticky sessions in the ALB?

It's on, but it looks like the ingress is ignoring it.
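That would be consistent with affinity only being set at the ALB: Rancher's default ingress controller is nginx-based, and ingress-nginx keeps its own backend selection, so session affinity usually has to be enabled on the Ingress itself via annotations. A sketch of what that could look like, assuming ingress-nginx is in use (the host, resource, and service names here are placeholders, not from the original setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: socketio-ingress          # hypothetical name
  annotations:
    # Cookie-based session affinity: requests from the same client are
    # routed to the same backend pod (needed for socket.io's polling fallback)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "io_affinity"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
    # Generous proxy timeouts so long-lived websocket connections
    # are not closed by nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: example.com           # placeholder host
      http:
        paths:
          - path: /socket.io
            pathType: Prefix
            backend:
              service:
                name: socketio-service   # placeholder service name
                port:
                  number: 80
```

With the affinity cookie handled at the ingress layer, ALB stickiness becomes less critical, since the ingress controller itself pins each client to one pod.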