How to set additional flags on the Kubernetes cluster?

My Rancher cluster is behind a corporate proxy. I need to inject the proxy URL into my pods during installation. Kubernetes can do that using PodPreset. Is there a way in Rancher 2.2 to enable PodPreset in Kubernetes?

Example of how to enable PodPreset: https://kubernetes.io/docs/concepts/workloads/pods/podpreset/
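For the proxy use case, the preset I have in mind would look something like this (a sketch only; the name, label, and proxy URLs are placeholders for my environment):

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: corporate-proxy
  namespace: default
spec:
  # Injected into every pod in this namespace carrying this label
  selector:
    matchLabels:
      inject-proxy: "true"
  env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"  # placeholder proxy URL
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,10.43.0.0/16"
```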

From the Kubernetes documentation: "For example, this can be done by including settings.k8s.io/v1alpha1=true in the --runtime-config option for the API server." In minikube, add the flag --extra-config=apiserver.runtime-config=settings.k8s.io/v1alpha1=true while starting the cluster.

How can I do that in a Rancher 2.2 cluster?

Update: In Rancher 2.2 I found the cluster.yaml in the cluster settings. There it is possible to change the parameters for the API server in the services section:

  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false
    # Add additional arguments to the kubernetes API server
    # This WILL OVERRIDE any existing defaults
    extra_args:
      # Enable audit log to stdout
      audit-log-path: "-"
      # Increase number of delete workers
      delete-collection-workers: 3
      # Set the level of log output to debug-level
      v: 4
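If I understand correctly, each key under extra_args is passed to the kube-apiserver process as a --key=value flag, so the snippet above should correspond to something like:

```
kube-apiserver ... --audit-log-path=- --delete-collection-workers=3 --v=4
```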

It is still unclear to me how to add --runtime-config=settings.k8s.io/v1alpha1=true here.

Can you please extend the sample or give me some advice?

To change the API server args, you can click Edit as YAML when you create or edit (update) an RKE cluster.

  # and the other options...
  kube-api: 
    # and the other kube-api options...
    extra_args:
      runtime-config: "settings.k8s.io/v1alpha1=true"
      # the goal is to add PodPreset admission-controller, here it is appended to the default ones
      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PersistentVolumeLabel,PodPreset"

After that, you can create PodPreset objects by importing YAML in the UI, or do it the kubectl way.
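For example, a pod then opts in by carrying a label that matches your preset's selector (the label key and image here are placeholders, not something Rancher requires):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
  labels:
    inject-proxy: "true"  # must match the PodPreset's matchLabels
spec:
  containers:
    - name: main
      image: busybox
      # print the injected proxy env vars, then idle
      command: ["sh", "-c", "env | grep -i proxy; sleep 3600"]
```

If the preset was applied, kubectl describe pod proxy-test should show a podpreset.admission.kubernetes.io/podpreset-<name> annotation on the pod.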

Thank you for the response. I tried to start the server with your settings: I edited the YAML file and saved it. After updating the cluster I tried:

kubectl get podpreset, but the result is still: error: the server doesn't have a resource type "podpreset"

My configuration is this:

  kube-api:
    always_pull_images: false
    extra_args:
      enable-admission-plugins: “NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PersistentVolumeLabel,PodPreset”
      runtime-config: “settings.k8s.io/v1alpha1=true
    pod_security_policy: false
    service_node_port_range: “30000-32767”

@Eike_H I was able to enable PodPresets by updating the cluster.yml with the following content (Rancher v2.2.1 and Kubernetes 1.12.7):

[other content]
services:
  [other content]
  kube-api: 
    always_pull_images: false
    pod_security_policy: false
    service_node_port_range: "30000-32767"
    extra_args:
      runtime-config: "settings.k8s.io/v1alpha1=true"
      enable-admission-plugins: "DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceLifecycle,NodeRestriction,PersistentVolumeLabel,ResourceQuota,ServiceAccount,PodPreset" 
[other content]

After you have updated the cluster like that, you can verify that the podpreset resource is correctly enabled by running:
$ kubectl api-resources | grep podpreset

Just be sure, if you are copy/pasting these three lines into your cluster config file, that you replace the curly “smart” double-quotes with REAL (straight) double-quotes, or your cluster will error out and you'll have to do it again. Spoken from experience!
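A quick way to catch that mistake before applying the config (cluster.yml is a placeholder path; point it at wherever you saved your config):

```shell
# Scan the config for curly "smart" quotes that break YAML parsing
# (cluster.yml is a placeholder; use your actual config file path)
if grep -n -e '“' -e '”' cluster.yml; then
  echo "smart quotes found - replace them with straight quotes"
else
  echo "no smart quotes found"
fi
```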