System/Default Namespaces are not in a Project

Hi all,

I am trying to automate my Rancher setups. It works quite well, except for one small problem: if I create a new cluster with a few simple API calls, the cluster is created, but the namespaces (kube-system, cattle-system, default) are not in a project after the nodes have joined the new cluster. If I create a new cluster in the web UI, it all works fine. So I think I am missing something, but I can't find it.

Rancher: v2.1.6

Here is an example API call

curl -u "${CATTLE_ACCESS_KEY}:${CATTLE_SECRET_KEY}" \
  -X POST \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"amazonElasticContainerServiceConfig":null, "azureKubernetesServiceConfig":null, "description":"Dev Cluster", "dockerRootDir":"/var/lib/docker", "googleKubernetesEngineConfig":null, "name":"dev", "rancherKubernetesEngineConfig":{}}' \
  'https://localhost:8443/v3/clusters'

HTTP Request:

POST /v3/clusters HTTP/1.1
Host: localhost:8443
Accept: application/json
Content-Type: application/json
Content-Length: 226

{
    "amazonElasticContainerServiceConfig": null,
    "azureKubernetesServiceConfig": null,
    "description": "Dev Cluster",
    "dockerRootDir": "/var/lib/docker",
    "googleKubernetesEngineConfig": null,
    "name": "dev",
    "rancherKubernetesEngineConfig": { }
}

Do you have any idea what is missing? I know I can move the namespaces manually, but I am trying to understand what's going wrong.
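(By "move manually" I mean overwriting the field.cattle.io/projectId annotation on the namespace, roughly like this; <cluster-id> and <project-id> are just placeholders for the real ids:)

kubectl annotate --overwrite namespace cattle-system field.cattle.io/projectId=<cluster-id>:<project-id>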

Thank you very much.

Best regards

I think the problem is that the projectId value and the "field.cattle.io/projectId" label do not match the real project id ("p-vxvb8") and the real cluster id ("c-896jz"):

"id": ["cattle-system"],
"labels": {
  "field.cattle.io/projectId": "p-k9bb4"
},
projectId": "c-sr7hw:p-k9bb4",

"id": ["kube-system"],
"labels": {
  "field.cattle.io/projectId": "p-k9bb4"
},
projectId": "c-sr7hw:p-k9bb4",

But why? :thinking:

Hi, p7k
I have the same problem after importing a cluster into a new Rancher server:
the project id is wrong, I can not move the namespaces, and I can not change it via the web interface or the API.
Did you manage to figure it out?
N

Just an update. My issue happened during a migration from one Rancher cluster to another.

The namespaces seem to contain an invalid projectId, in the format:
"projectId": "existing_cluster:notexisting_project"

I was able to fix it with:

kubectl annotate --overwrite namespace existing_namespace field.cattle.io/projectId=existing_cluster:existing_project
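
To find the right existing_project id, I looked at the Project resources on the Rancher management (local) cluster; something like this should list them, assuming the convention that they live in a namespace named after the cluster id:

kubectl get projects.management.cattle.io -n existing_cluster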

Hi naorw,

I think my problem was related to some old files which I did not clean up during my tests. I wrote an Ansible playbook to clean them, but I missed a directory. I couldn't reproduce it with a new node.
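
In case it helps anyone: the directories I wipe on reused nodes are roughly the usual Rancher/RKE state paths. This is a simplified shell version of what the playbook does, and the exact list may differ depending on your version:

for dir in /etc/kubernetes /etc/cni /opt/cni /opt/rke /var/lib/etcd /var/lib/cni /var/lib/kubelet /var/lib/rancher /var/lib/calico /var/run/calico; do
  rm -rf "$dir"
done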