Projects and CIS mode seem incompatible (please correct me!)

Context:

We are deploying Rancher on RKE2 on-premises, in an industry where security separation between contracts/clients is particularly important. On bare metal we are running RKE2 (v1.21.3+rke2r1 as of Aug 2, 2021) with the CIS-1.6 compliance profile enabled, and Rancher v2.5.9 on top of it. The current documentation was followed carefully in establishing this cluster.

As far as I can tell, Projects are incompatible with the CIS-1.6 option because of the combination of global-restricted-psp and the inability to scope commands to a project.

SCENARIO.

  1. An Admin controls a cluster local at address https://example.com/k8s/clusters/local. The Admin makes a project my-new-project and a Rancher user new-user, then gives new-user Member permissions on the cluster (local) and Owner permissions on the project my-new-project only.
  2. User new-user gets a kubeconfig file. The kubeconfig file refers to https://example.com/k8s/clusters/local as the server address.
  3. User new-user runs kubectl auth can-i create namespace. The answer is “yes”.
  4. User new-user runs kubectl create namespace testspace. The testspace namespace ends up in the “not in a Project” part of the cluster local, and new-user cannot access, move, or delete it. From the user’s perspective, the namespace is now locked. (A command-by-command reproduction is sketched just below.)
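
For concreteness, the reproduction from the user’s side looks roughly like this (the kubeconfig path is a placeholder; everything else is from the scenario above):

# as new-user, using the kubeconfig from step 2
export KUBECONFIG=./new-user-local.yaml     # placeholder path

kubectl auth can-i create namespace         # answers "yes"
kubectl create namespace testspace          # succeeds, but the namespace lands outside every Project
kubectl get namespace testspace             # now forbidden for new-user in this setup
kubectl delete namespace testspace          # also forbidden, so the namespace is stranded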

In response to (4), people might say “use the rancher CLI tool to make the namespace in a project”, but then

  5. User new-user tries to apply a Helm chart or YAML file that creates namespaces. It breaks entirely, and the user has no way to remove any of the namespaces it did create (sketch below).
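
A minimal sketch of what that looks like in practice (the chart name and values are invented for illustration):

# hypothetical chart that templates its own Namespace plus the workloads inside it
helm install unit-tests ./ci-chart --set commitId=abc123
# the Namespace object is either rejected outright for new-user, or it is created
# outside every Project, at which point new-user can no longer touch it; either way
# the release is left half-applied and the user cannot clean it up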

This is rather ridiculous, especially because the “user” is quite likely to be an automated CI/CD tool.

I am unable to deploy scoped installations of the GitLab integration tool or the Ray Helm chart because of this problem. I am unable to separate namespaces for multiple client contracts or use any automated tooling. Namespaces become “pets” instead of “cattle”.

What am I doing wrong, or are Projects and restrictive PSPs incompatible?

PROPOSED SOLUTION.
This could be solved very simply by extending the server address field to incorporate information at the project scope. For example, suppose the Project were exposed through a Cluster API at a well-known URL such as https://example.com/k8s/clusters/local/my-new-project/, where all commands are restricted to that Project. Then this wouldn’t be a problem at all: to any API call, the Project would appear as its own cluster, with its own namespaces and so on. (I thought this was the whole intent of Projects, but it doesn’t seem to work this way.)
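
In kubeconfig terms, the idea is that a project Owner could be handed something like this (hypothetical; the project-scoped server path below does not exist today):

apiVersion: v1
kind: Config
clusters:
- name: my-new-project
  cluster:
    # hypothetical project-scoped endpoint; today Rancher only exposes
    # the cluster-level https://example.com/k8s/clusters/local endpoint
    server: https://example.com/k8s/clusters/local/my-new-project
users:
- name: new-user
  user:
    token: <rancher API token for new-user>
contexts:
- name: my-new-project
  context:
    cluster: my-new-project
    user: new-user
current-context: my-new-project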

Your “very simple” proposal is effectively a multi-year project for hierarchical namespaces, which would have to be built into upstream Kubernetes or it won’t be reliably available anywhere.

Projects are what’s possible today (and have been for years), on arbitrary clusters. If you know the projectid you can set the label when making the namespace. If a Helm chart is creating namespaces, it’s usually something cluster-level that an owner would deploy anyway.

Hi vincent,

Thanks for the reply. I have 3 questions, if you’d be so kind.

  1. Is it really true, then, that there is no possibility of privilege separation, in the sense of allowing a user to be an “Owner” who manages namespaces/pods/etc. in project-A but not in project-B (where that user is using only YAML and Helm)? If that is the case, what is actually the use case for Rancher Projects? More precisely, what does it actually mean to be marked “Owner” of project-A if my API token cannot access it as if I were the Owner of a cluster called project-A? I must be misunderstanding something.

  2. Can you explain the comment “If you know the projectid you can set the label when making the namespace”? I don’t believe this is true.

For example, I know I can make a YAML like this

apiVersion: v1
kind: Namespace
metadata:
  name: test-space
  annotations:
    field.cattle.io/projectId: local:p-x7kk5

But even so, a user marked as Owner of the project with that projectId (and only Member at the cluster level) cannot create the namespace with kubectl apply -f test.yaml.

  3. I understand that the hierarchical namespace problem can be hard, but I think I was suggesting something simpler than that: could Rancher use Ingress and PSP to provide an API URL that forces all commands to act within a certain Project? Basically, using the API URL as a frontend to the “rancher context” command?

Respectfully, I disagree that a Helm chart is typically cluster-level. Helm charts exist precisely to allow construction of YAML with dynamic logic in the details. We might write a Helm chart that says “make an unprivileged compute pod with the current libraries and run the unit tests, in a new namespace of the form testspace-COMMITID-BRANCH-USERNAME”, or something like that. (A toy sketch of such a chart follows.)
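
A toy sketch of such a chart’s templates, just to make the use case concrete (all names and values are invented):

# templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: testspace-{{ .Values.commitId }}-{{ .Values.branch }}-{{ .Values.userName }}
---
# templates/test-pod.yaml: an unprivileged pod that runs the unit tests
apiVersion: v1
kind: Pod
metadata:
  name: unit-tests
  namespace: testspace-{{ .Values.commitId }}-{{ .Values.branch }}-{{ .Values.userName }}
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: tests
    image: {{ .Values.testImage }}
    command: ["make", "test"]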

  1. Projects are effectively a way of managing RBAC for a group of namespaces at a time instead of individually. If you’re an Owner of a project you have full access to the namespaces assigned to that project and the resources in them.

  2. I don’t have time to look it up right now, but it’s definitely possible. One thing, AFAIR, is that it may need to be both a label and an annotation (I don’t recall the exact reason); rough sketch after this list.

  3. No, not with any kind of uniformity. We can’t force ourselves to be in front of a hosted provider (e.g. GKE) or an imported cluster and prevent people from going around to the back door. Direct access to the cluster without Rancher in the middle is a fault-tolerance feature in its own right (the “authorized cluster endpoint”). The only way to do this would be to fork and run only our own kind of special clusters, with special features that only work with our clusters; we are intentionally not OpenShift. Rancher works in and with any k8s cluster.
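
Re (2): if memory serves, it ends up looking roughly like this (untested; projectId copied from your example above):

apiVersion: v1
kind: Namespace
metadata:
  name: test-space
  annotations:
    field.cattle.io/projectId: local:p-x7kk5   # <cluster id>:<project id>
  labels:
    field.cattle.io/projectId: p-x7kk5         # project id only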

Of course you can create a namespace (or ten) in a Helm chart, but for most charts the namespace is an input that is passed in so resources can be applied into a namespace that already exists; it is not one of the resources actually created by the chart. Creating it in the chart causes complications if other things get deployed into that namespace, or when the release is removed. That is also why Helm has its own option to create the namespace you’ve asked it to deploy the chart into, so that the namespace is not a managed resource owned by the installation.
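
Concretely, that’s the --create-namespace flag, e.g.:

# the namespace is created if missing, but is not tracked as a resource owned by the release
helm install my-release ./some-chart --namespace testspace --create-namespace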