How to upgrade from a one-master to a multi-master HA solution


Is there a way to move an existing running solution with one master to a multi-master solution (HA etcd etc.) without reinstalling? If I try to reinstall (current version) from scratch, I get an error (see my other post) and I don't get a working Kubernetes cluster at all. At the very least I need a way to redistribute the certificates after including the VIP of the load balancer in the trusted server list. Is there a way (with Salt?) to do this?

Thanks for the help …




Frank, this is not possible in version 2 - you can only add new workers later. With version 3, this will be possible.


[QUOTE=a_jaeger;51923]Frank, this is not possible in version 2 - you can only add new workers later. With version 3, this will be possible.[/QUOTE]


Hi Andreas,

thanks for your help!!!

I'm a little bit "disoriented" by the Master HA implementation from SUSE … there is no real documentation about it :frowning:

For me, it is a real black box …

If I do the initialization of the CaaS cluster with 3 masters (by the way, it seems that the latest patches fixed my kube-dns problem from my other post; now there is no crash any more after cluster init), "kubectl get cs" still shows:

scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Where are my other etcd instances, and how can I check whether the master clustering is OK?

Best regards



To get information about the cluster and its master nodes, run:

kubectl cluster-info

For information about etcd, log in to any node of the cluster and run:

set -a; source /etc/sysconfig/etcdctl; set +a; etcdctl cluster-health
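It may also help to list the etcd members themselves; a minimal sketch, assuming the same etcdctl v2 tooling and the /etc/sysconfig/etcdctl settings file used above:

```shell
# Load the etcdctl connection settings (endpoints, certs) into the environment
set -a; source /etc/sysconfig/etcdctl; set +a

# List all members of the etcd cluster; with three masters you should
# see three member entries here, one per master node
etcdctl member list
```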

OK, I was on the wrong track … (by the way, "kubectl get cs" and "kubectl cluster-info" still only show one master, but etcdctl was correct).
My original problem, which made me suspect some cluster misconfiguration, was on the load balancer side (a wrong haproxy config led to error messages pointing in a false direction). After I found it, everything worked as expected.
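For anyone hitting the same issue, a minimal haproxy sketch for balancing the apiserver VIP across three masters; the frontend/backend names and addresses are hypothetical placeholders, not taken from this thread:

```
frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    # plain TCP health checks; placeholder master addresses
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```

"mode tcp" matters here: terminating TLS on the load balancer instead would break client certificate authentication against the apiserver.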

Best regards and thanks for the help …