Conan
July 26, 2018, 12:21pm
Initially I set up Rancher 2.0.6 HA using RKE. The server-url in the GUI was set to
https://rancher2.abc.com/ and I could access the Rancher 2 UI via that server-url value.
Then I changed server-url to https://dev-rancher2.abc.com/ in the UI and updated the Rancher ingress using kubectl so that the ingress rule host = dev-rancher2.abc.com.
Everything seemed OK and I could access the Rancher 2.x console via https://dev-rancher2.abc.com/
However, when I launch kubectl via the Rancher UI, the ~/.kube/config still has the old server-url value, i.e.
apiVersion: v1
kind: Config
clusters:
What do I need to change so that 'Launch kubectl' from the Rancher UI picks up the latest value?
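One place the old value typically lingers is the `server-url` setting stored in the Rancher management cluster itself. A sketch (unofficial, not from this thread's gist) of checking and updating that setting with kubectl, assuming the `settings.management.cattle.io` resource that Rancher 2.x uses; the URL is the example value from this thread:

```shell
# New URL (example value from this thread)
NEW_URL="https://dev-rancher2.abc.com"
# JSON merge-patch payload for the setting's value field
PATCH="{\"value\":\"$NEW_URL\"}"
echo "$PATCH"

# Inspect the current value, then patch it (requires KUBECONFIG to
# point at the RKE cluster that runs Rancher itself):
# kubectl get settings.management.cattle.io server-url -o jsonpath='{.value}'
# kubectl patch settings.management.cattle.io server-url --type=merge -p "$PATCH"
```

The kubectl commands are commented out here because they need live access to the Rancher cluster; run them only after verifying the payload looks right.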
Hi,
I’m trying to update the Rancher2 server-url field (v2.2.4) through the UI and have Rancher reconfigure my Kubernetes cluster provisioned with the OpenNebula node driver (infrastructure provider), but this only works with the old server-url.
I read in the Rancher2 documentation (https://rancher.com/docs/rancher/v2.5/en/admin-settings/ ):
Important! After you set the Rancher Server URL, we do not support updating it. Set the URL with extreme care.
but is there some way to reconfigure the cluster with the new FQDN?
Thanks,
Finally, by reading this article I was able to update the server-url correctly:
1_README.md
# Generate Rancher 2 cluster/node agents definitions
**This is not official documentation/tooling, use with caution**
This generates the Kubernetes definitions of the `cattle-cluster-agent` Deployment and `cattle-node-agent` DaemonSet, in case they were accidentally removed, the server-url was changed, or certificates were changed. It is supposed to run on every cluster Rancher manages. If you have custom clusters created in Rancher, see `Kubeconfig for Custom clusters created in Rancher` for how to obtain the kubeconfig to talk directly to the Kubernetes API (as it usually doesn't work via Rancher anymore). For other clusters, use the tools provided by the provider to get the kubeconfig.
IMPORTANT: You get the cluster/node agents definitions from Rancher, and you apply them to the cluster that is created/managed so you need to switch kubeconfig to point to that cluster before applying them.
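The kubeconfig switch described above can be sketched as follows. The file names here are hypothetical placeholders, not the ones the gist actually writes; the point is that `KUBECONFIG` must point at the managed (downstream) cluster before applying:

```shell
# Hypothetical paths for illustration only
DOWNSTREAM_KUBECONFIG="$HOME/.kube/downstream-config"
AGENTS_YAML="./agents.yaml"

# An env var set as a command prefix applies only to that one command,
# so the Rancher RKE kubeconfig stays your default afterwards:
# KUBECONFIG="$DOWNSTREAM_KUBECONFIG" kubectl apply -f "$AGENTS_YAML"
```

The `kubectl apply` is commented out because it needs real cluster access; the prefix form is just POSIX per-command environment assignment.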
## Running it
get_agents_yaml_ha.sh
#!/bin/bash
# Usage: ./get_agents_yaml_ha.sh cluster_name
# Needs to have KUBECONFIG environment variable set or ~/.kube/config pointing to the RKE Rancher cluster
# Check that required tools are installed
command -v jq >/dev/null 2>&1 || { echo "jq is not installed. Exiting." >&2; exit 1; }
command -v kubectl >/dev/null 2>&1 || { echo "kubectl is not installed. Exiting." >&2; exit 1; }
# Check if clustername is given
if [ -z "$1" ]; then
  echo "Usage: $0 cluster_name" >&2
  exit 1
fi
get_agents_yaml_single.sh
#!/bin/bash
# Usage: ./get_agents_yaml_single.sh cluster_name
# Needs to be run on the server running `rancher/rancher` container
# Check that required tools are installed
command -v jq >/dev/null 2>&1 || { echo "jq is not installed. Exiting." >&2; exit 1; }
command -v sha256sum >/dev/null 2>&1 || { echo "sha256sum is not installed. Exiting." >&2; exit 1; }
command -v base64 >/dev/null 2>&1 || { echo "base64 is not installed. Exiting." >&2; exit 1; }
command -v md5sum >/dev/null 2>&1 || { echo "md5sum is not installed. Exiting." >&2; exit 1; }
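The repeated `command -v` guards in both scripts could be factored into one helper. A sketch (not part of the original gist):

```shell
# Reusable dependency check: reports which tool is missing and exits
# non-zero, matching the behavior of the inline guards above.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "$1 is not installed. Exiting." >&2; exit 1; }
}

# The inline checks would then become:
#   require jq; require sha256sum; require base64; require md5sum
require sh   # sh always exists, so this call is a harmless demonstration
```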