Hi, I am trying to set up a single-node Rancher server using Docker, but when I start the worker node with:
sudo docker run -d --name rancher-agent --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.12 --server https://192.168.1.54:8443 --token llt46sqbfxfxqbm7qpfjd6nj8sgvtpjqd5ndjx6678d4bnld6dnrk4 --ca-checksum e0c5deaabae5fb132353d141cb25476c52c44c5960a8a5551ba88ff2bf4e2fe7 --etcd --controlplane --worker
I get this error inside the container:
tls: failed to verify client's certificate: x509: certificate signed by unknown authority
The Rancher server and the Rancher agent both run in Docker. I have tried this with an Ubuntu 20.04 host and a macOS Catalina host.
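A frequent cause of this particular agent error is a mismatch between the `--ca-checksum` value and the CA certificate the Rancher server is currently serving (for example after the server container was recreated and generated a new CA). A minimal sketch for checking this, assuming the server address from the command above and that `curl`, `jq`, `sha256sum`, and `openssl` are available on the host:

```
# Checksum of the CA cert the Rancher server hands out to agents;
# compare it with the value passed to --ca-checksum.
curl -k -s -fL https://192.168.1.54:8443/v3/settings/cacerts \
  | jq -r '.value | select(length > 0)' \
  | sha256sum | awk '{print $1}'

# Certificate the server actually presents on the registration port.
openssl s_client -connect 192.168.1.54:8443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

If the checksum differs from the one in the registration command, copy a fresh registration command from the Rancher UI so the agent trusts the current CA.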
Akito
March 1, 2022, 11:49am
Did you already check out the basics?
GitHub issue (opened 30 Oct 2020, closed 15 Dec 2020, labeled status/more-info):
Hi
I did search on this forum but could not find a working solution.
Rancher version: 2.4.8 HA
Cluster type: Custom, hosted on AWS
Docker version: 1.13.1
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
I deployed Rancher with my own certificates using the private CA option. The Rancher GUI came up, but when I try to create a new cluster it gets stuck in the provisioning state with the error "check etcd logs". The etcd logs say:
tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")", ServerName ""
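When etcd rejects clients because the certificate cannot be verified against "kube-ca", it can help to confirm that the certificates on the node were really issued by the kube-ca the cluster is currently using. A rough sketch; the paths below are the usual ones on an RKE-provisioned node and may differ on other setups:

```
# CA the node currently holds as kube-ca.
openssl x509 -in /etc/kubernetes/ssl/kube-ca.pem -noout -subject -fingerprint -sha256

# Verify the etcd certificates on this node against that CA; a failure
# here usually points at stale certificates from an earlier cluster.
openssl verify -CAfile /etc/kubernetes/ssl/kube-ca.pem /etc/kubernetes/ssl/kube-etcd-*.pem
```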
Steps to reproduce:
cluster.yml:
cluster_name: Ram-Kube
ssh_key_path: /home/ec2-user/.ssh/id_rsa
nodes:
  - address: 10.0.9.205
    internal_address: 10.0.9.205
    user: ec2-user
    role: [controlplane,worker,etcd]
  - address: 10.0.9.197
    internal_address: 10.0.9.197
    user: ec2-user
    role: [worker]
  - address: 10.0.10.177
    internal_address: 10.0.10.177
    user: ec2-user
    role: [worker]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
network:
  plugin: weave
ingress:
  provider: nginx
  options:
    use-forwarded-headers: 'true'
rke up --config ./cluster.yml
kubectl create namespace cattle-system
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key
kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=./cacerts.pem
kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem=./ca-additional.pem
helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=rancher-draco.asc-dev.io --set ingress.tls.source=secret --set privateCA=true --kubeconfig ./kube_config_cluster.yml --set additionalTrustedCAs=true
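One thing worth ruling out before the Helm install is an inconsistency between the three files used above. A minimal sketch with openssl (file names as in the commands above; the modulus comparison assumes RSA keys):

```
# cacerts.pem should be the CA (chain) that issued tls.crt.
openssl verify -CAfile cacerts.pem tls.crt

# tls.crt and tls.key must belong to the same key pair (digests must match).
openssl x509 -noout -modulus -in tls.crt | openssl md5
openssl rsa  -noout -modulus -in tls.key | openssl md5

# The hostname passed to --set hostname must be covered by the cert's CN/SAN.
openssl x509 -noout -text -in tls.crt | grep -A1 'Subject Alternative Name'
```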
Do you see an incorrect configuration parameter in the steps above?
Thanks in advance.
Ram
GitHub issue (opened 02 Feb 2019, closed 02 Feb 2019):
**RKE version:** rke version v0.1.15
**Docker version: (`docker version`, `docker info` preferred)**
```
Containers: 31
Running: 21
Paused: 0
Stopped: 10
Images: 12
Server Version: 17.03.2-ce
Storage Driver: overlay
Backing Filesystem: xfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.701 GiB
Name: localhost.localdomain
ID: ZNEP:KSYX:5MUB:GFES:MA5R:7445:KZGK:ANTF:2MR6:FUQS:4LOW:3HGR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
CentOS Linux release 7.6.1810 (Core)
3.10.0-957.1.3.el7.x86_64
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
vSphere
**cluster.yml file:**
```
nodes:
  - address: 192.168.37.177
    user: devops
    role: [controlplane,worker,etcd]
    ssh_key_path: "/opt/rancher/devops_rsa"
  - address: 192.168.37.178
    user: devops
    role: [controlplane,worker,etcd]
    ssh_key_path: "/opt/rancher/devops_rsa"
  - address: 192.168.37.179
    user: devops
    role: [controlplane,worker,etcd]
    ssh_key_path: "/opt/rancher/devops_rsa"
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
```
**Steps to Reproduce:**
1. Set up nginx load balancer per instructions and default using config at https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/
2. Install HA Rancher on 3 nodes according to https://rancher.com/docs/rancher/2.x/en/installation/ha/
3. Add tls.key and tls.crt, as well as cacerts.pem, using the following (a secret-inspection sketch follows this list):
```
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key
kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem
```
4. Attempt to set up a new cluster with a single node, using a vSphere node template
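A quick way to sanity-check the secrets created in step 3 before provisioning (a sketch; secret names and data keys follow the commands above, and `base64 -d` assumes GNU coreutils):

```
# Decode the server cert from the ingress secret and show who issued it.
kubectl -n cattle-system get secret tls-rancher-ingress \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -issuer -enddate

# Decode the CA stored for agents and show its subject.
kubectl -n cattle-system get secret tls-ca \
  -o jsonpath='{.data.cacerts\.pem}' | base64 -d \
  | openssl x509 -noout -subject
```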
**Results:**
Cluster begins to build, VM is provisioned and goes through most of the build, and errors on bringing up the control plane. First error is
```
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [192.168.37.89]: Get https://localhost:6443/healthz: can not build dialer to c-d2hwt:m-xhr9n, log: I0202 03:54:51.820462 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
```
It then goes back to through trying to provision and throws a different but similar message:
```
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service
[kube-apiserver] on host [192.168.37.89]: Get https://localhost:6443/healthz: can not build dialer to c-d2hwt:m-xhr9n, log: I0202 03:56:35.561803 1 log.go:172] http: TLS handshake error from 192.168.37.89:43581: EOF
```
`docker logs rancher/rancher-agent` repeatedly show
```
time="2019-02-02T03:53:14Z" level=info msg="Connecting to wss://rancher.<mydomainname>.com/v3/connect with token mc6vjxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxmspst2jfs"
time="2019-02-02T03:53:14Z" level=info msg="Connecting to proxy" url="wss://rancher.<mydomainname>.com/v3/connect"
time="2019-02-02T03:53:14Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2019-02-02T03:53:14Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
```
**Notes:**
- I am using a GeoTrust cert with a wildcard domain in the SAN. I've used this cert for many other services and have had no issues.
- I have tried adding the intermediate cert immediately after my regular certificate; it makes no difference (a chain-check sketch follows these notes).
- Since this is a trusted cert from a major CA, I also tried without adding the CA cert; it made no difference.
- Someone with a similar issue said they fixed it by installing cert-manager through Helm; I tried that as well, with no luck.
- Also, strangely, if I go to Settings in the Global view and click "Show cacerts", it simply displays `<none>`; but if I run `kubectl -n cattle-system get secrets`, the CA cert I added earlier is indeed there, and if I download the YAML for the secret I can see the base64-encoded data for the cert.
- I set up a single node install (no load balancer) using a different certificate against the same vSphere cluster and I was able to provision a node just fine
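The chain ordering mentioned in the notes can be checked directly against the load balancer. A minimal sketch; the hostname placeholder is the one from the agent logs above, and `openssl` is assumed to be available on the node:

```
# Show the chain the Rancher ingress/load balancer actually serves and
# whether it verifies against the system trust store; the leaf cert
# should be followed by the intermediate.
openssl s_client -connect rancher.<mydomainname>.com:443 -showcerts </dev/null 2>/dev/null \
  | grep -E '(s:|i:|Verify return code)'
```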
I am trying out Kubernetes and Rancher for the first time and tried launching it via Docker on AlmaLinux 8.4 (a CentOS variant). I am able to get everything else to work just fine, but the etcd node is having some issues. This is from the logs on the etcd Docker node. How do I resolve this? I have been doing some research/googling but am not sure I'm getting the right answers.
Thanks!!!
2021-06-28 14:45:32.863447 I | embed: rejected connection from "10.150.10.227:36827" (error "EOF", ServerName "")…
Hello,
I am quite new to Rancher.
For the last two days I have been trying to create a new cluster using Rancher;
however, I keep getting this in the cluster logs:
use of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")", ServerName "")
2021-06-29 20:27:07.192253 I | embed: rejected connection from "192.168.0.27:51668" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verif…
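In several of the reports above, the kube-ca mismatch comes from re-using a node that still carries state from an earlier cluster attempt. A cleanup sketch along the lines of Rancher's node cleanup guidance; it is destructive, so only run it on a node you intend to wipe, and double-check the paths against the documentation for your version:

```
# Remove all containers left over from the previous cluster attempt.
docker rm -f $(docker ps -qa)

# Remove old Kubernetes/Rancher state, including certificates signed by
# the previous kube-ca.
sudo rm -rf /etc/kubernetes /var/lib/etcd /var/lib/rancher \
            /etc/cni /opt/cni /var/lib/cni /opt/rke

# Afterwards, re-run the node registration command copied from the Rancher UI.
```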
Hi
I reviewed similar topics on this certificate subject but could not find a solution. I installed Rancher 2.5.1 using my own certs. The command I used:
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher-draco.asc-dev.io --set ingress.tls.source=secret --set privateCA=true --kubeconfig ./kube_config_cluster.yaml
The Rancher GUI came up, but when I created a new cluster it gave me this error:
[etcd] Failed to bring up Etcd Plane: etcd cluster is unhea…
Hi. I am using the latest version of Rancher and I am getting this error ("2019-07-26 10:51:25.518141 I | embed: rejected connection from "10.0.8.61:60174" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")", ServerName "") while setting up the master (with the node options etcd and Control Plane selected).