Kube-proxy settings in custom RKE2 cluster

Hi!

I would like to set up this configuration for MetalLB:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

https://metallb.universe.tf/installation/

I’ve found instructions for RKE1: How to enable IPVS proxy mode for kube-proxy | Support | SUSE.

For RKE2 I can’t find any instructions. I suspect it should be set in cluster.yaml, but I have no idea how.

Additional question: is it possible to add two configs in the "Additional Manifest" field? When I try to do it, I see an error. I find it hard to believe that only one ConfigMap or resource can be added.
Cluster Configuration → AddOn config → Additional Manifest

(screenshot of the error)

When I try to add a second config using apiVersion: v1:

(screenshot of the error)
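
On the two-configs question: the Additional Manifest field should accept a standard multi-document YAML stream, so separating the two resources with a --- line on its own is worth trying. A minimal sketch (names and data are made up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config-one   # hypothetical name
  namespace: kube-system
data:
  foo: "bar"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config-two   # hypothetical name
  namespace: kube-system
data:
  foo: "baz"

If the error persists even with the --- separator, it may be a validation quirk of that field rather than a YAML limitation.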

This is exactly my case:

But I don’t know in which section of cluster.yaml I should add:

kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true

OK, so it looks like it works:

You can pass extra arguments to RKE2 components under machineGlobalConfig:, so for me, in the GUI, in cluster.yaml:

machineGlobalConfig:
  kube-proxy-arg:
    - proxy-mode=ipvs
    - ipvs-strict-arp=true

And after this change, on my Rancher node:
root 3751955 3751907 0 05:42 ? 00:00:00 kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=worker-2 --ipvs-strict-arp=true --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=ipvs
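
To double-check that kube-proxy really switched to IPVS, you can also query its metrics endpoint or inspect the IPVS table directly on the node (the second command assumes the ipvsadm package is installed):

# kube-proxy reports its active mode on the metrics port (default 127.0.0.1:10249)
curl http://localhost:10249/proxyMode
# expected output: ipvs

# list the IPVS virtual server table
ipvsadm -Ln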

Unfortunately, the documentation is unclear, and support couldn’t distinguish between an RKE2 cluster imported into Rancher and a cluster created from the GUI (custom RKE2). They proposed manually creating config.yaml on the node, but in that case the configuration would be deleted after an RKE2 restart (because in custom RKE2 clusters created from the GUI, every change has to be made in the cluster.yaml file).
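
For completeness, on a standalone RKE2 node (not provisioned by Rancher) the same flags would go into /etc/rancher/rke2/config.yaml; a sketch of what support presumably meant:

# /etc/rancher/rke2/config.yaml
# Only valid for standalone RKE2: on a custom cluster created from the
# Rancher GUI this file is managed by Rancher, and manual edits are lost
# on restart, as described above.
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-strict-arp=true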