I created a cluster with rke v0.1.11, then imported it into Rancher 2.1.1. Everything looks good, but when I deploy a workload (say nginx), the URL shown is http://%3Cnil%3E:30524/ (i.e. `http://<nil>:30524/`). The problem seems to be that the public endpoint host is nil. I didn't see any option to specify public endpoint(s) during import or afterward.
I created the cluster this way, with `my.yml` containing:

```
nodes:
- address: node1.xx
  user: someone
  role: [controlplane,worker,etcd]
- address: node2.xx
  user: someone
  role: [controlplane,worker,etcd]
- address: node3.xx
  user: someone
  role: [controlplane,worker,etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
```

then:

```
./rke_linux-amd64 up --config my.yml
```
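My guess at what is going on (an assumption about Rancher's behavior, not its actual code): the public endpoint URL appears to be assembled as `http://<node-address>:<nodeport>`, and when Rancher cannot determine an address for any node, Go formats the missing value as `<nil>`, which is exactly what shows up in the URL. A minimal sketch (`build_endpoint` is an illustrative helper, not a real Rancher function):

```
# Hypothetical sketch of how the endpoint URL is assembled.
build_endpoint() {
  local addr="$1" port="$2"
  # A missing/empty address surfaces the way Go prints a nil value: "<nil>"
  echo "http://${addr:-<nil>}:${port}/"
}

build_endpoint node1.xx 30524   # http://node1.xx:30524/
build_endpoint ""       30524   # http://<nil>:30524/  (the symptom)
```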
I created Rancher this way (on a different machine from the nodes):

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher
```
I ran the following to import:

```
curl --insecure -sfL https://rancher.xx/v3/import/xsqvswzkcm22x9fnjrvmxmmjt6dwsxmhtvtdmtxvhfg5tggsrqx6jv.yaml | kubectl apply -f -
```
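After the import, it may be worth checking whether the nodes report any internal/external addresses and whether the cluster agent came up, since a `<nil>` host usually means Rancher has no usable node address. These are ordinary kubectl commands (guarded so the snippet is safe to paste on a machine without kubectl; run it where the cluster kubeconfig is available):

```
# Guarded diagnostics: only calls kubectl where it exists.
check_cluster() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found; run this where the cluster kubeconfig is available"
    return 1
  fi
  kubectl get nodes -o wide                         # check the INTERNAL-IP / EXTERNAL-IP columns
  kubectl -n cattle-system get pods                 # is cattle-cluster-agent healthy?
  kubectl get svc --all-namespaces | grep NodePort  # the NodePorts behind the broken URLs
}

check_cluster || true
```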
Rancher and the nodes are running Ubuntu 16.04.5 LTS and Docker 17.03.2-ce.
Appreciate any thoughts.
I stumbled upon the same problem, and a similar bug was also reported on GitHub:
(GitHub issue: opened 06 Feb 2019, closed 26 Sep 2019; labels: kind/bug, area/ha, internal)
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
1. Set up Rancher HA as described in https://rancher.com/docs/rancher/v2.x/en/installation/ha/
2. Deploy a Catalog App which creates a public endpoint
**Result:**
The public endpoints have strange URLs, for example:
- http://\<nil>:32285
- http://\<nil>:31871
**Other details that may be helpful:**
The issue does not occur when using single-node Installation of Rancher.
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): **Rancher v2.1.5, but even tried with v2.1.6**
- Installation option (single install/HA): **HA**
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): **Imported**
- Machine type (cloud/VM/metal) and specifications (CPU/memory): **4 VMs, each with 2 vCPUs and 16 GB RAM. Two hold the etcd and Control Plane roles, the other two are workers.**
- cluster.yml (used for creation):
```
# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: dkr9000.blab.ch
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/user/.ssh/id_rsa
  labels: {}
- address: dkr9001.blab.ch
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/user/.ssh/id_rsa
  labels: {}
- address: dkr9002.blab.ch
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/user/.ssh/id_rsa
  labels: {}
- address: dkr9003.blab.ch
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/user/.ssh/id_rsa
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  options: {}
  sans: []
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.18
  alpine: rancher/rke-tools:v0.1.15
  nginx_proxy: rancher/rke-tools:v0.1.15
  cert_downloader: rancher/rke-tools:v0.1.15
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.15
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.10
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.10
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  kubernetes: rancher/hyperkube:v1.11.6-rancher1
  flannel: rancher/coreos-flannel:v0.10.0
  flannel_cni: rancher/coreos-flannel-cni:v0.3.0
  calico_node: rancher/calico-node:v3.1.3
  calico_cni: rancher/calico-cni:v3.1.3
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.1.3
  canal_cni: rancher/calico-cni:v3.1.3
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.1.2
  weave_cni: weaveworks/weave-npc:2.1.2
  pod_infra_container: rancher/pause-amd64:3.1
  ingress: rancher/nginx-ingress-controller:0.16.2-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
  metrics_server: rancher/metrics-server-amd64:v0.2.1
ssh_key_path: /home/user/.ssh/id_rsa
ssh_agent_auth: true
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: "test"
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
monitoring:
  provider: ""
  options: {}
```
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
- Docker version (use `docker version`):
```
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
 Experimental: false
```
- Operating System is CentOS 7:
**Output of cat /etc/os-release:**
```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
**Output of uname -r:**
```
3.10.0-957.1.3.el7.x86_64
```
In that report it seemingly only occurred with an HA setup and worked fine with a single-node install. But they don't seem to have found a solution over there, either.
Assuming you’re notified about this reply:
Did you solve this problem or could you in any way work around it?
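In case it helps anyone landing here in the meantime: even while the displayed endpoint host is `<nil>`, the NodePort itself is real, so the workload should still be reachable at any node's address plus that port (standard NodePort behavior, not a Rancher fix; the node name and port below are just examples taken from this thread):

```
# Build the URL Rancher should have shown: any node address from the
# cluster config plus the NodePort that appeared after "<nil>".
node=dkr9002.blab.ch   # any worker node from the cluster.yml above
port=32285             # the NodePort from the broken endpoint
url="http://${node}:${port}/"
echo "$url"            # http://dkr9002.blab.ch:32285/
# curl -sI "$url"      # verify from a machine that can actually reach the node
```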