Kubernetes HPA query

Hi,
I have installed a 3 node kubernetes cluster using rancher and now we need to set up HPA using custom metrics. The docs on the kubernetes site for this suggest
"The cluster has to be started with ENABLE_CUSTOM_METRICS environment variable set to true".
Set the appropriate flags for kube-controller-manager:
--horizontal-pod-autoscaler-use-rest-clients should be true.
--kubeconfig OR --master
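
For reference, on a vanilla (non-Rancher) installation those settings end up as flags on the kube-controller-manager command, e.g. in its static pod manifest. This is only a sketch; the image tag and kubeconfig path are placeholders:

```yaml
# Sketch only: where these flags would live on a vanilla install
# (kube-controller-manager static pod manifest). Paths/tags are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: gcr.io/google_containers/kube-controller-manager-amd64:v1.8.0  # placeholder tag
      command:
        - kube-controller-manager
        - --horizontal-pod-autoscaler-use-rest-clients=true
        - --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig  # placeholder; --master=<apiserver URL> also works
        # ...plus whatever default flags the installation already passes...
```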

How do I achieve this with Rancher?
PS: I do not see the kube-controller-manager or kube-apiserver installed on my nodes.
Thanks.

I would also like to know how to do this. I know the k8s environment template can be edited to include custom kubelet flags, but that doesn't seem like the right place. All the k8s documentation I can find about these settings is for a standard k8s installation; there is nothing Rancher-specific.

Any and all help is appreciated!
Thanks!

Sorry for the long reply! I've been continuing to research this topic since it is critical I find a way forward. Hopefully my findings here will spark some comments from others who are more knowledgeable about customizing the Kubernetes stack. And for anyone stuck in the same boat, hopefully this will send them down the right path, or at least keep them from going down a dead-end road (river?) 🙂.

So to begin, some k8s articles suggest simply adding the flag “--enable-custom-metrics” to kubelet. This is supposed to cause custom metrics to be gathered from cAdvisor (see this page). Unfortunately, support for this has been removed in recent versions of k8s. I am using Rancher 1.6.14 and Kubernetes 1.8.

It appears that the only way to get custom metrics working with the HPA now is to use the custom metrics API. This requires several steps (sketches of the relevant manifests follow after the list):

  • Enable the API aggregation layer by setting the following flags on kube-apiserver:

--requestheader-client-ca-file=<path to aggregator CA cert?>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert?>
--proxy-client-key-file=<path to aggregator proxy key?>

  • Register the custom metrics API with the API aggregation layer. Not sure how to do this yet (see the APIService sketch after this list).

  • Set appropriate flags on kube-controller-manager:

--horizontal-pod-autoscaler-use-rest-clients
--kubeconfig OR --master
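
For the registration step, from what I can tell it comes down to creating an APIService object that tells the aggregator which in-cluster Service serves the custom metrics API (for example a Prometheus adapter, if that is what ends up providing the metrics). A minimal sketch, where the Service name and namespace are hypothetical:

```yaml
# Sketch: register custom.metrics.k8s.io/v1beta1 with the aggregation layer.
# The Service name and namespace are placeholders for whatever adapter
# actually serves the custom metrics API in the cluster.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  service:
    name: custom-metrics-apiserver   # hypothetical Service fronting the metrics adapter
    namespace: custom-metrics        # hypothetical namespace
  insecureSkipTLSVerify: true        # or provide caBundle for proper TLS verification
  groupPriorityMinimum: 100
  versionPriority: 100
```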
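For completeness, once the custom metrics API is actually being served, the HPA itself consumes it through the autoscaling/v2beta1 API that ships with Kubernetes 1.8. A sketch with made-up deployment and metric names:

```yaml
# Sketch: an HPA scaling on a custom per-pod metric via the custom metrics API.
# "my-app" and "http_requests_per_second" are made-up names for illustration.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metricName: http_requests_per_second
        targetAverageValue: 100
```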

My main obstacle at this time is how to pass these flags to the appropriate k8s components. As stated in my previous post, the environment template only permits passing additional flags to kubelet. I've exported the configuration of the Kubernetes stack and can see several other default flags passed to kube-controller-manager and kube-apiserver, as well as anything I may have passed to kubelet via modification of the template. It looks like I'm on the right track…
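
To make the goal concrete: in the exported docker-compose.yml I would need to find the service that runs kube-apiserver and append the aggregation flags to its command, something along these lines (the service name, image, and certificate paths here are placeholders, not what the Rancher template actually contains):

```yaml
# Hypothetical fragment of the exported kubernetes stack's docker-compose.yml.
# Service/image names and certificate paths are placeholders; the real export
# already contains a long list of default flags that must be kept.
kube-apiserver:
  image: <image from the exported template>
  command:
    - kube-apiserver
    # ...all the default flags from the exported template...
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/requestheader-ca.pem  # placeholder path
    - --requestheader-allowed-names=aggregator
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem            # placeholder path
    - --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem         # placeholder path
```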

Next I tried starting with a new Rancher environment (orchestration set to “cattle”) and adding a stack from the exported k8s configuration (the docker-compose and rancher-compose YAMLs). I made no changes to the configuration at this point. But as hosts came online and services began deploying, things just got stuck, spinning forever. It is on my to-do list to analyze what is struggling to start.

Since that approach hasn't worked, I've also dug around the Rancher API to see if there is some way to modify the default template (stack?) so it has my desired flags to begin with, but no luck there. I even searched the running Rancher Server container to see if the template is stored on disk somewhere, but I'm unable to find it. I'm not sure where to go next. Possibly a custom catalog? I'll probably investigate that next, though even if that works I'm not sure how feasible it would be for me, since our final production environments may not have internet access.

That’s it for now. I’ll keep digging but I’m really beginning to run out of ideas. @Parth6288 and I can’t be the only ones to require this capability… Someone out there must know if I’m on the right track or if it simply isn’t supported at this time.

Thanks for reading and any help that can be provided!

Hi @eckseleven,

A workaround could be to clone the service in the infrastructure stack and add the flags you need to its command.

Hope this helps.

Thanks, @nrf. Other things have come up so I haven’t had a chance to get back in to this. But where I left off, I had noticed that I can modify various K8s infrastructure stack components to add these flags and then upgrade the stack. I think that’s basically what you are describing. I plan to explore this in the coming weeks.

Since my organization is automating the deployment of Rancher and Kubernetes, it does complicate matters to have to upgrade the stack afterwards. I can probably find a way to do it, though.