Could not determine network mode for container. Using none networking

I’m relatively new to Rancher/K8s/Docker. I’ve got a k8s cluster running on three hosts, and all of the k8s management containers show green.

When I try to deploy an app from the catalog, or use kubectl to deploy a test app from my workstation, the new app constantly refreshes and never starts. It looks like Rancher tries to launch the container, can’t get networking for it, kills it, and then relaunches it. Lots of messages like this appear in the rancher-server container logs…

2016-09-12 14:43:49,041 WARN [4600b3a1-cc9a-4166-8de1-6732acbc5d54:184547] [containerEvent:51499] [containerevent.create->(ContainerEventCreate)] [] [utorService-335] [i.c.p.p.c.ContainerEventCreate ] Could not determine network mode for container [externalId: 0c7cd70fa242dd52aa6fd32969d704553b09223b974eac473328ccd38ebb9817]. Using none networking.

I can’t figure out what’s going on because the app container is never around long enough for me to get docker logs out of it. I think my networking is OK; if it weren’t, I don’t think the k8s containers would start either, but they seem to be working fine.
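For reference, the “test app” I’m deploying from my workstation is nothing exotic. It’s roughly the following (the image and name are just placeholders, so don’t read anything into them):

    # throwaway test deployment; the specific image doesn't seem to matter
    kubectl run nginx-test --image=nginx --port=80

    # watch the pods; they never reach Running before being replaced
    kubectl get pods -w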

I’m running my host nodes on CentOS Atomic.

Any help is appreciated because I’m stuck at the moment.

I upgraded to 1.2.0-pre3 today with the same result. I’ve been through all the troubleshooting steps I can find. Is there a way to see the logs from the containers that keep getting created/destroyed so quickly? If I could look at those I might be able to figure out what’s going on.
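One idea I’ve had, though I haven’t confirmed it actually catches anything here, so treat it as a sketch: watch docker events on one of the hosts and try to grab the container’s logs the moment it dies, before it gets cleaned up.

    # print an event line (with the container ID) every time a container dies
    docker events --filter 'event=die'

    # in another shell: exited containers are still listed until they're removed
    docker ps -a --filter 'status=exited'

    # if you catch an ID in time, its logs are still readable
    docker logs <container-id>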

Thanks,
Matt


So I think I’ve found where in the code this is coming from: the ContainerEventCreate class, at lines 286/287 of ContainerEventCreate.java.

https://github.com/rancher/cattle/blob/master/code/iaas/logic/src/main/java/io/cattle/platform/process/containerevent/ContainerEventCreate.java

I’m not positive, though, why it isn’t finding the network mode in the config, or even what config the code is referring to.
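What I can do in the meantime is compare what Docker itself reports as the network mode for a container Rancher is happy with versus one of the ones that keeps getting killed. This is only my guess at what that code is keying off of, but the inspect output is easy to check:

    # NetworkMode as Docker sees it (e.g. "default", "bridge", "none", "container:<id>")
    docker inspect --format '{{ .HostConfig.NetworkMode }}' <container-id>

    # full config, including any io.rancher.* labels Rancher might be reading
    docker inspect <container-id>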

Possibly getting closer on this. I’ve set up Rancher and changed the docker.network.subnet.cidr setting to 192.168.0.0/16, since we already use a lot of 10.x addresses internally.

When I launch the Kubernetes cluster, all the k8s nodes come up with 192.168.x.x addresses, but I see the k8s cluster IP is 10.43.0.1. Is that hardcoded in the Rancher configs somewhere?

When I launch the rabbitmq service from the catalog, the new node comes up on 192.168.x.x, but when I dig into the settings for the rabbitmq service it has a 10.43.x.x address.

So my “physical” VMs are on 10.50.x.x. When I start the k8s services and the k8s nodes spin up, or when I launch any new service nodes, the Docker node IPs are in the 192.168.x.x range, per the CIDR setting I changed in the API. But when I launch any actual services like rabbitmq inside those Docker nodes, they pull an IP in the 10.43.x.x range, which is the k8s cluster range.
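To double-check that last part, the service cluster IPs are easy to list, and everything k8s hands out as a ClusterIP is in 10.43.0.0/16 no matter what I set docker.network.subnet.cidr to. I’m assuming Rancher passes that range to the apiserver via --service-cluster-ip-range somewhere in its k8s templates, but I haven’t verified where.

    # the built-in 'kubernetes' service sits at 10.43.0.1
    kubectl get svc kubernetes

    # every other service ClusterIP also lands in 10.43.0.0/16
    kubectl get svc --all-namespaces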

Am I seeing this correctly? Is that where my problem lies or am I going down a rabbit trail here?

Thanks,
Matt

OK, so it seems to be caused entirely by my changing the docker.network.subnet.cidr setting. When I changed it back to the default of 10.42.0.0/16, everything started working, and I can deploy a working app from the catalog.

So apparently something isn’t right in the config that spins up Kubernetes environments when that default CIDR setting is modified?
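For anyone else who trips over this before it’s fixed, putting the setting back is just a matter of editing docker.network.subnet.cidr in the API again, the same place I changed it originally. Something along these lines should do it, though the exact path, port, and keys below are placeholders from my setup and my memory of the API, so double-check them against the API browser on your own server:

    # read the current value (host/port/keys are placeholders for your own setup)
    curl -s -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" \
      http://rancher-server:8080/v1/settings/docker.network.subnet.cidr

    # set it back to the default
    curl -s -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" \
      -X PUT -H 'Content-Type: application/json' \
      -d '{"value": "10.42.0.0/16"}' \
      http://rancher-server:8080/v1/settings/docker.network.subnet.cidr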

I opened an issue for this: https://github.com/rancher/rancher/issues/6113