Rancher on AWS - Kubectl fails to access logs


We deployed Kubernetes on AWS using the latest Rancher 1.6.x. Kubectl fails to access the logs of the containers; all other kubectl commands work.

We were unable to use the “private IP” option in the Kubernetes catalog, as it did not work (the VMs failed to find the network), so I suspect the problem is a missing port.

Any suggestions?

Do you have an ALB or another balancer/proxy in front of it? Logs and exec in Kubernetes use the SPDY protocol, which has limited support (largely because it’s deprecated).
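One way to check whether the streaming connection is what’s failing is to run kubectl with a high verbosity level, which prints the underlying HTTP requests. A minimal sketch, where the pod name is a placeholder:

```shell
# 'kubectl logs' and 'kubectl exec' need a streaming (SPDY/upgrade) connection
# from the API server to the kubelet; plain commands like 'kubectl get' use
# ordinary HTTP, which is why only logs/exec break when that path is blocked.
# POD is a hypothetical pod name -- substitute one from your cluster.
POD="my-pod"

if command -v kubectl >/dev/null 2>&1; then
  # -v=8 makes kubectl print each request/response, so you can see whether
  # the connection toward the kubelet is refused or times out.
  kubectl logs "$POD" -v=8 2>&1 | tail -n 5 || true
else
  echo "kubectl not installed; would run: kubectl logs $POD -v=8"
fi
```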

No ELB or ALB. The whole Rancher deployment of nodes failed when we enabled “private IP”, so as a result the endpoints use the VMs’ public IPs.

What error message are you getting? Also, what Private IP option are you referring to and what did not work about it? That could also be something that needs investigating.

The provisioning error seems similar to this issue: How To: Deploy Rancher/Kubernetes in Amazon VPC private subnet - #3 by dvdcrn.

There is an error under Infrastructure → Hosts saying something about not being able to set the network.

Once the environment is created: Rancher → Add Hosts → Using the Amazon EC2 driver → … → Select Node Options → Use Only Private Address. (See pic.)

If we select “Use only private IP address” then the node provisioning fails.

It seems the kubectl logs command works after opening ports 10250 and 10255 in the EC2 security group rules for all nodes in the K8s cluster.
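For reference, the fix above can be sketched with the AWS CLI. The security group ID below is a placeholder; this assumes all nodes share one security group, and opens the kubelet ports (10250 secure, 10255 read-only) only between members of that group. Shown with `echo` as a dry run; remove the `echo` to apply it for real:

```shell
# Hypothetical security group ID -- replace with the SG attached to your K8s nodes.
SG_ID="sg-0123456789abcdef0"

# Allow kubelet API ports 10250 and 10255 between nodes in the same group.
for PORT in 10250 10255; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$PORT" \
    --source-group "$SG_ID"
done
```

Using `--source-group` rather than an open CIDR keeps the kubelet ports reachable only from other cluster nodes, not the public internet.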

Found the port info here: http://rancher.com/docs/rancher/v1.6/en/kubernetes/