Unable to upgrade downstream cluster from Kubernetes 1.24 to 1.25 due to PSP issue

When trying to upgrade a downstream cluster from v1.24.17-rancher1-1 to v1.25.16-rancher2-3, we get the error: admission webhook "rancher.cattle.io.clusters.management.cattle.io" denied the request: cannot enable PodSecurityPolicy(PSP) or use PSP Template in cluster which k8s version is 1.25 and above

The following is the cluster YAML we are trying to apply:


answers: {}
default_pod_security_admission_configuration_template_name: rancher-privileged
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: true
fleet_workspace_name: fleet-nogi
local_cluster_auth_endpoint:
  enabled: true
name: nogi-staging
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  authentication:
    strategy: x509|webhook
  authorization: {}
  bastion_host:
    ignore_proxy_env_vars: false
    ssh_agent_auth: false
  cloud_provider: {}
  dns:
    linear_autoscaler_params:
      cores_per_replica: 128
      max: 0
      min: 1
      nodes_per_replica: 4
      prevent_single_point_failure: true
    node_selector: null
    nodelocal:
      node_selector: null
      update_strategy:
        rolling_update: {}
    options: null
    reversecidrs: null
    stubdomains: null
    tolerations: null
    update_strategy:
      rolling_update: {}
    upstreamnameservers: null
  enable_cri_dockerd: true
  ignore_docker_version: false
  ingress:
    default_backend: false
    default_ingress_class: true
    http_port: 0
    https_port: 0
    options:
      allow-snippet-annotations: 'false'
      enable-underscores-in-headers: 'true'
      use-forwarded-headers: 'true'
    provider: nginx
  kubernetes_version: v1.25.16-rancher2-3
  monitoring:
    provider: metrics-server
    replicas: 1
  network:
    mtu: 0
    options:
      flannel_backend_type: vxlan
    plugin: canal
  restore:
    restore: false
  rotate_encryption_key: false
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
        timeout: 300
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube-api:
      admission_configuration:
        api_version: apiserver.config.k8s.io/v1
        kind: AdmissionConfiguration
        plugins:
          - configuration:
              apiVersion: pod-security.admission.config.k8s.io/v1beta1
              defaults:
                audit: privileged
                audit-version: latest
                enforce: privileged
                enforce-version: latest
                warn: privileged
                warn-version: latest
              exemptions: {}
              kind: PodSecurityConfiguration
            name: PodSecurity
            path: ''
      always_pull_images: false
      event_rate_limit:
        enabled: true
      pod_security_policy: false
      secrets_encryption_config:
        enabled: true
      service_node_port_range: 30000-32767
    kube-controller: {}
    kubelet:
      extra_args:
        protect-kernel-defaults: 'true'
      fail_swap_on: false
      generate_serving_certificate: false
    kubeproxy: {}
    scheduler: {}
  ssh_agent_auth: false
  upgrade_strategy:
    drain: true
    max_unavailable_controlplane: '1'
    max_unavailable_worker: '1'
    node_drain_input:
      delete_local_data: true
      force: true
      grace_period: 900
      ignore_daemon_sets: true
      timeout: 10800

We have removed all PSPs from the cluster and there are no PSP templates.

> kubectl get psp -A
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
>

Check the Rancher release notes (you need to go a fair way back, as you are running quite old versions); upgrading past Kubernetes 1.24 is described there.

We did follow the release notes the first time around, but from this post we also found that pod_security_policy_template_id specifically has to be set to null, instead of just being removed from the cluster.yaml.
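For anyone hitting the same webhook error, this is a minimal sketch of what the relevant part of the cluster YAML would look like with that fix applied. It only combines fields already shown in the config above with the explicit-null requirement from this thread; it is not an official Rancher example:

```yaml
# Explicitly set the PSP template reference to null; simply deleting the
# line from cluster.yaml is not enough to clear the old value.
pod_security_policy_template_id: null
default_pod_security_admission_configuration_template_name: rancher-privileged
rancher_kubernetes_engine_config:
  kubernetes_version: v1.25.16-rancher2-3
  services:
    kube-api:
      # PSP admission must also be disabled, since PodSecurityPolicy is
      # removed entirely in Kubernetes 1.25
      pod_security_policy: false
```

With both pod_security_policy: false and pod_security_policy_template_id: null in place, the admission webhook should no longer reject the 1.25 upgrade.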