Deploying Rancher within an existing Kubernetes cluster

Hi All,

In my environment, there is an existing Kubernetes cluster management system which has already deployed the cluster.

I really like the Rancher UI and the way it provides LDAP integration, role-based access control, and an easy-to-use interface for our developers to see the status of their workloads.

I deployed Rancher as a Deployment and it came up and is running just fine. I then ran the Rancher setup to import an existing cluster and pasted the URL into the Kubernetes master node.
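For context, the import step boils down to applying a manifest that the Rancher UI generates for the target cluster. The server name and token below are placeholders, not my real values:

```shell
# Sketch of Rancher's "Import Existing Cluster" step: the UI hands you a
# generated manifest URL, and you apply it on the cluster being imported
# (run wherever kubectl points at that cluster's API server).
# rancher.example.com and abc123xyz are illustrative placeholders.
kubectl apply -f https://rancher.example.com/v3/import/abc123xyz.yaml
```

Applying that manifest is what creates the cattle-system namespace and the agent that phones home to Rancher.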

It’s working really well, except that it is creating a lot of Namespaces with some metadata, as shown below:

admin@ipdm-ccp-kub01-master3ab5b29a7b:~$ kubectl get namespace
NAME             STATUS   AGE
c-ggwfv          Active   31d
c-jxdc2          Active   2d20h
cattle-system    Active   46d
ccp              Active   49d
default          Active   49d
foobar-n         Active   18d
ipdm-devtest     Active   49d
istio-system     Active   49d
kube-public      Active   49d
kube-system      Active   49d
local            Active   49d
p-42hgl          Active   26d
p-4zmwj          Active   31d
p-6r5xx          Active   26d
p-9vxs4          Active   2d20h
p-b6lrg          Active   2d20h
p-fp7dk          Active   19d
p-kx4sb          Active   2d20h
p-vxv85          Active   49d
p-xlfjf          Active   25d
p-zsq4p          Active   49d
rancher          Active   49d
test             Active   33d
test-namespace   Active   25d
u-3anxqzucwx     Active   18d
u-juk5kmqmp4     Active   38d
u-reuxsn4ugz     Active   38d
user-nlkld       Active   49d
admin@ipdm-ccp-kub01-master3ab5b29a7b:~$

There’s not really a question here, but yes: they store things that are project- (p-) or user- (u-) specific.
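A quick way to separate the Rancher-generated namespaces from your own is to filter on the prefixes visible in the listing above (c- for clusters, p- for projects, u-/user- for users). A sketch, with sample names standing in for live `kubectl get namespace --no-headers` output:

```shell
# Sample namespace names taken from the listing above, standing in for
# `kubectl get namespace --no-headers | awk '{print $1}'`.
# The prefixes are an assumption based on that listing:
#   c- = cluster, p- = project, u-/user- = user.
printf '%s\n' c-ggwfv p-42hgl u-3anxqzucwx default istio-system cattle-system \
  | grep -E '^(c-|p-|u-|user-)'
# Prints: c-ggwfv, p-42hgl, u-3anxqzucwx
```

The same pattern piped from the real `kubectl get namespace` output shows everything Rancher has created in the cluster.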

It is generally recommended not to use the cluster running Rancher for anything else, because you can easily and inadvertently give other users of the cluster it’s running in too much access. The ability to edit those namespaces can allow a user to grant or escalate themselves permissions for all the other clusters managed by the Rancher instance, by manipulating the CRDs that define user roles from underneath us.

My apologies, yes, I hadn’t really asked a clear question.

I only have one Kubernetes cluster to work with at this time, so it’s all in one together.

Is there a better way to run Rancher inside of the Kubernetes cluster?
Is there a way to run Rancher inside the Kubernetes cluster that does not create these Namespaces?

Maybe a single-instance deployment could do the job; at least this is what I did. As long as you don’t need HA (high availability), you would be better off spinning up Rancher on a host where Docker is installed and going from there. If you go this way, make sure the k8s cluster managed by Rancher can reach back to the Rancher instance.
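For the record, by single-instance deployment I mean Rancher’s documented docker run install on a separate host. The ports and image tag here are the usual defaults; adjust for your environment, and note that some newer Rancher versions also require `--privileged`:

```shell
# Single-node Rancher install on a dedicated Docker host (outside the
# managed cluster). Ports 80/443 are where the managed cluster's agents
# must be able to reach back to Rancher.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```

Running it outside the cluster also sidesteps the permission-escalation concern above, since the Rancher CRDs no longer live in a cluster your developers can touch.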

~Rado