Security Advisory: containerd CVE-2020-15257 and Kubernetes CVE-2020-8554

This is a security advisory on the following two medium-rated vulnerabilities:

  • CVE-2020-15257: containerd – containerd-shim API Exposed to Host Network Containers
  • CVE-2020-8554: kubernetes - Man in the middle using LoadBalancer or ExternalIPs

To see if your environment is vulnerable, please review the CVE posts in containerd’s security advisory and the kubernetes-security-announce forum, along with the information below in this advisory post.

containerd CVE-2020-15257

Details

Per this technical advisory from NCC Group and containerd’s security advisory, there is a critical bug in containerd related to the hand-off of command/control functionality to the containerd-shim via an abstract Unix domain socket.

This bug allows malicious containers running in the same network namespace as the shim, with an effective UID of 0 but otherwise reduced privileges, to cause new processes to be run with elevated privileges.

Such sockets are essentially permission-less within their network namespaces, which means that an attacking container running with uid/euid 0 (root on the host) and bound to the host network namespace could execute arbitrary commands via the containerd-shim and effect a breakout to the host.

Am I vulnerable?

You are vulnerable if you allow untrusted pods or containers to run in the host network namespace and are on an affected version of Docker or containerd.

RKE relies on Docker. As such, any RKE hosts running Docker 19.03.13 or earlier are affected.

K3s and RKE Government embed containerd. As such, the following versions of K3s and RKE Government are affected:

  • v1.19.4+k3s1 and prior
  • v1.18.12+k3s1 and prior
  • v1.17.14+k3s2 and prior
  • v1.18.12+rke2r1 and prior (for RKE Government)

In Kubernetes, untrusted pods that can set hostNetwork to true can exploit this bug.
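
For example, here is a minimal sketch of how to list pods currently running with hostNetwork enabled (the column layout is illustrative, and output formatting may vary by kubectl version):

# List pods whose spec requests the host network namespace.
kubectl get pods --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork' \
  | grep -w 'true'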

Note that you can also check a host directly to see whether containerd-shim is exposing an abstract socket and is therefore susceptible to this bug. If the following command returns results on a host, then that host is affected by this bug:

cat /proc/net/unix | grep 'containerd-shim' | grep '@'

Note: this bug impacts both Docker and containerd container runtimes.

How do I mitigate this vulnerability?

The best mitigation is to upgrade containerd or Docker, but there is also a workaround. All options are explained below.

For hosts running Docker (RKE clusters)

RKE clusters, including those provisioned by Rancher, will need to upgrade Docker to version 19.03.14. This version is now available via Rancher’s Docker installation script.
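
As a hedged sketch, assuming the script URL follows Rancher’s usual install-docker/<version>.sh naming, the upgrade on a host could look like the following (verify that the exact script exists for your target version before running):

# Assumption: a version-pinned install script is published at this path.
curl https://releases.rancher.com/install-docker/19.03.14.sh | sh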

For hosts running containerd (K3s and RKE Government clusters)

K3s and RKE Government clusters, which embed containerd, will need to be upgraded to one of the following versions:

K3s

  • v1.19.4+k3s2
  • v1.18.12+k3s2
  • v1.17.14+k3s3

RKE Government

  • v1.18.12+rke2r2
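
For K3s, a minimal sketch of upgrading a node is to re-run the install script pinned to a patched release; this assumes the standard get.k3s.io installer, and air-gapped installs follow a different procedure:

# Run on each node, using the patched release for your minor version.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.19.4+k3s2" sh -
# RKE Government (RKE2) has a similar installer at get.rke2.io that uses
# INSTALL_RKE2_VERSION; confirm against its documentation before use.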

If, after upgrading, the command below still returns results, you will want to stop the affected containers; a system reboot will suffice in that scenario.

cat /proc/net/unix | grep 'containerd-shim' | grep '@'

Workaround

You can work around this issue by ensuring that untrusted pods or containers are not running in the host network namespace. For Kubernetes, this means ensuring untrusted pods do not have hostNetwork set to true, which can be controlled through the use of pod security policies.
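
As a minimal sketch (assuming the PodSecurityPolicy admission controller is enabled; the policy name and the permissive defaults below are illustrative, and the policy must still be bound to the relevant users or service accounts via RBAC), a policy that prevents pods from using the host network could look like this:

# Illustrative PodSecurityPolicy denying hostNetwork.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-host-network
spec:
  hostNetwork: false
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF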

See containerd’s security advisory for more details.

Kubernetes CVE-2020-8554

Details

Per this announcement in the kubernetes-security-announce forum, a security issue was discovered with Kubernetes affecting multitenant clusters: If a potential attacker can already create or edit services and pods, then they may be able to intercept traffic from other pods (or nodes) in the cluster.

An attacker that is able to create a ClusterIP service and set the spec.externalIPs field can intercept traffic to that IP. An attacker that is able to patch the status (which is considered a privileged operation and should not typically be granted to users) of a LoadBalancer service can set the status.loadBalancer.ingress.ip to similar effect.
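
To illustrate the first vector, the following is a purely illustrative sketch of a service a tenant could create to claim an external IP (the names and the 203.0.113.10 address are placeholders); in-cluster traffic destined for that IP would then be routed to the pods this service selects:

# Illustrative only; do not deploy.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: intercept-example
spec:
  selector:
    app: attacker-workload
  ports:
    - port: 443
      targetPort: 8443
  externalIPs:
    - 203.0.113.10
EOF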

This issue is a design flaw that cannot be mitigated without user-facing changes.

Am I vulnerable?

All Kubernetes versions are affected. Multi-tenant clusters that grant tenants the ability to create and update services and pods are most vulnerable.

How do I mitigate this vulnerability?

Per the Kubernetes announcement, there is no patch for this issue, and it can currently only be mitigated by restricting access to the vulnerable features. Because an in-tree fix would require a breaking change, a discussion is expected to be opened to consider a longer-term fix or built-in mitigation.

For the immediate term, Rancher recommends two potential solutions to mitigate this vulnerability:

  1. Deploying an admission webhook that allows only an approved set of IPs to be used as external IPs. The Rancher team has released a chart which is available to deploy as an app from the Rancher UI, or via the helm CLI (see the sketch after this list).
  2. Alternatively, if you are using OPA Gatekeeper, there is a sample ConstraintTemplate you can implement.
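
As a hedged sketch of option 1 via the helm CLI (the repository URL, chart name, namespace, and value key below are assumptions; confirm them against the chart Rancher published before running):

# Repo, chart, and value names are assumptions; verify before running.
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-external-ip-webhook rancher-charts/rancher-external-ip-webhook \
  --namespace cattle-externalip-system --create-namespace \
  --set allowedExternalIPCidrs="203.0.113.0/24"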

Note that these solutions only remediate the setting of external IPs (spec.externalIPs) on a service.

There is another attack vector that works by setting LoadBalancer IPs, but since users should not normally be granted permission to patch a service’s status, no remediation avenue exists other than replicating the above solutions to handle that case.

Admins should also audit existing services for statuses that have already been patched, in addition to any existing external IPs.
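
A minimal sketch for that audit, listing services that already declare external IPs or report a load balancer ingress IP (output formatting is illustrative):

# Services with spec.externalIPs set
kubectl get svc --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,EXTERNAL-IPS:.spec.externalIPs' \
  | grep -v '<none>'

# Services reporting a load balancer ingress IP in their status
kubectl get svc --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,LB-IP:.status.loadBalancer.ingress[*].ip' \
  | grep -v '<none>'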
