Updating Cluster Workspace with Rancher API v3

I’m new to Rancher and I’m trying to automate the import of a k3s (v1.25.10+k3s1) single-node cluster with a bash script and several API calls (shown below). I’m running Rancher (v2.9.1) on Azure AKS.

# 1. Get the Login Token
JSON_DATA=$(jq -n --arg passwd "$PASSWD" \
            '{
                "username": "admin",
                "password": $passwd
            }')
LOGINRESPONSE=$(curl -s "https://${SERVER_URL}/v3-public/localProviders/local?action=login" -H 'content-type: application/json' --data-binary "${JSON_DATA}")
LOGINTOKEN=$(echo "$LOGINRESPONSE" | jq -r '.token')
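
A quick sanity check here (optional, not strictly part of the import flow) makes the script fail fast if the login did not return a usable token:

# Optional: abort early if the login failed
if [ -z "$LOGINTOKEN" ] || [ "$LOGINTOKEN" = "null" ]; then
    echo "Login failed: ${LOGINRESPONSE}" >&2
    exit 1
fi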

# 2. Get the API Token
JSON_DATA=$(jq -n '{ "type": "token", "description": "automation" }')
APIRESPONSE=$(curl -s "https://${SERVER_URL}/v3/token" -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary "${JSON_DATA}")
APITOKEN=$(echo "$APIRESPONSE" | jq -r '.token')

# 3. Define the cluster registration

# Here I put some configuration; I would like to import the cluster into the "dev" workspace
# instead of the fleet-default workspace

JSON_DATA=$(jq -n --arg orderId "$ORDER_ID" \
    '{
        "type": "cluster",
        "name": $orderId,
        "import": true,
        "metadata": {
            "labels": {
                "environment": "dev",
                "fleet.cattle.io/workspace": "dev",
                "fleet.cattle.io/cluster-group": "dev"
            },
            "fleetWorkspaceName": "dev"
        },
        "annotations": {
            "fleet.cattle.io/workspace": "dev",
            "fleet.cattle.io/cluster-group": "dev"
        },
        "spec": {
            "fleetWorkspaceName": "dev"
        }
    }')
CLUSTERRESPONSE=$(curl -s "https://${SERVER_URL}/v3/cluster" -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "${JSON_DATA}")
CLUSTERID=$(echo "$CLUSTERRESPONSE" | jq -r '.id')

# 4. Get the cluster registration token 
JSON_DATA=$(jq -n --arg clusterId "${CLUSTERID}" \
    '{
        "type": "clusterRegistrationToken",
        "clusterId": $clusterId
    }')
ID=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}/clusterregistrationtoken" -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary "${JSON_DATA}" | jq -r '.id')
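
The token’s command fields are populated asynchronously by Rancher, so they can still be empty right after creation; a small retry loop (an optional addition) waits for them before the next step:

# Optional: wait until Rancher has populated the registration command
for i in $(seq 1 10); do
    CMD=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}/clusterregistrationtoken/${ID}" -H "Authorization: Bearer $APITOKEN" | jq -r '.insecureCommand')
    if [ -n "$CMD" ] && [ "$CMD" != "null" ]; then break; fi
    sleep 3
done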

# 5. Get the command to enroll the cluster in Rancher
AGENTCOMMAND=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}/clusterregistrationtoken/$ID" -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" | jq -r '.insecureCommand')
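
For completeness, insecureCommand is a one-liner roughly of the form “curl --insecure -sfL https://<server>/v3/import/<token>.yaml | kubectl apply -f -”, and it has to be run against the k3s cluster itself, e.g.:

# Run the registration command against the k3s cluster
# (k3s puts its kubeconfig at /etc/rancher/k3s/k3s.yaml by default)
KUBECONFIG=/etc/rancher/k3s/k3s.yaml bash -c "$AGENTCOMMAND"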

With these commands I can successfully import my cluster.
What I would like to achieve is to import the cluster into my “dev” workspace (previously created, together with the cluster group called “dev”) instead of the “fleet-default” workspace.

Since with my previous API calls the cluster ends up in the “fleet-default” workspace (even though in step 3 I tried to place it in “dev”), I tried to update its configuration with the following commands, without success:

# Get the config of the cluster
current_config=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" -H "Authorization: Bearer ${APITOKEN}")

# Update the fleetWorkspaceName field
PUT_JSON=$(echo "$current_config" | jq --arg ws "dev" \
        '. | .fleetWorkspaceName = $ws' )

# Update the cluster config
curl -s -X PUT "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" -H "Authorization: Bearer ${APITOKEN}" -H "Content-Type: application/json" --data-binary "${PUT_JSON}"
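
Re-reading the cluster afterwards shows that the field did not change:

# Verify whether the update took effect (it still reports "fleet-default")
curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" -H "Authorization: Bearer ${APITOKEN}" | jq -r '.fleetWorkspaceName'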

I have also enabled the “provisioningv2-fleet-workspace-back-population” feature flag, but my cluster is still being registered in the “fleet-default” workspace.
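
As far as I know the feature flag can also be toggled through the API’s /v3/features endpoint; the sketch below assumes that endpoint accepts a PUT with a "value" field (an assumption on my part):

# Sketch: enable the feature flag via the API
# (assumes /v3/features/<name> accepts a PUT with a "value" field)
curl -s -X PUT "https://${SERVER_URL}/v3/features/provisioningv2-fleet-workspace-back-population" \
    -H "Authorization: Bearer ${APITOKEN}" \
    -H "Content-Type: application/json" \
    --data-binary '{"value": true}'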

Am I missing something? Does the Rancher API support this kind of operation?

OK, I found out that a Rancher Fleet workspace name must start with “fleet-”.

If I name the workspace “fleet-dev”, it works.
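
So the working version of the update from above only needs the prefixed name; a minimal sketch, assuming the “fleet-dev” workspace (and its cluster group) already exists:

# Get the current config and set the "fleet-" prefixed workspace name
current_config=$(curl -s "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" -H "Authorization: Bearer ${APITOKEN}")
PUT_JSON=$(echo "$current_config" | jq --arg ws "fleet-dev" '.fleetWorkspaceName = $ws')

# Update the cluster config
curl -s -X PUT "https://${SERVER_URL}/v3/clusters/${CLUSTERID}" -H "Authorization: Bearer ${APITOKEN}" -H "Content-Type: application/json" --data-binary "${PUT_JSON}"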