Installing Rancher 2.x from helm chart

Hi everyone.

I am trying to install Rancher 2.x using their helm chart.
I want to point it to some specific domain.

Here are the facts about my cluster to keep in mind:

  • The Kubernetes cluster I am using was created on the Azure cloud platform.

  • I have already installed cert-manager v0.10 and the kong ingress controller in the cluster.

In my deployment I am already using the kong ingress controller and cert-manager to talk to Let's Encrypt and get SSL/TLS certificates for the services exposed via Ingress resources.
I have already done this for a couple of services.
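
For reference, the letsencrypt-prod ClusterIssuer I use for those services looks roughly like this (a sketch from memory, assuming the cert-manager v0.10 certmanager.k8s.io/v1alpha1 API and an HTTP-01 solver bound to the kong ingress class):

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME server
    server: https://acme-v02.api.letsencrypt.org/directory
    email: username@domain.org
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    # solve HTTP-01 challenges through the existing kong ingress controller
    - http01:
        ingress:
          class: kong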

Since my idea is to take advantage of the kong and cert-manager deployments I already have, I am installing Rancher via its helm chart with these flag parameters:

  • --set hostname=rancher.mydomain.org
    I want a specific domain for my Rancher deployment. For this I previously created a public static IP address on the Azure platform, to be assigned as the external IP that forwards traffic to my Rancher instance, and I created an A record for it in my DNS provider (see the sketch after this list).

  • --set ingress.tls.source=letsEncrypt
    This selects the SSL configuration for the Rancher service, telling it that I want to get certificates from Let's Encrypt in order to take advantage of the kong and cert-manager deployments mentioned earlier.

  • --set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=letsencrypt-prod
    As mentioned earlier, I am already using cert-manager and have a ClusterIssuer called letsencrypt-prod, so I included this flag in order to reuse that same ClusterIssuer.
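
(This is the sketch referred to in the first bullet.) The static public IP and the DNS A record were created outside the chart, roughly like this; the resource group and IP names here are just placeholders:

# reserve a static public IP in Azure (names are placeholders)
az network public-ip create \
  --resource-group my-resource-group \
  --name rancher-public-ip \
  --allocation-method Static

# read back the address to create the A record for rancher.mydomain.org
az network public-ip show \
  --resource-group my-resource-group \
  --name rancher-public-ip \
  --query ipAddress --output tsv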

So, I install the Rancher helm chart like this:

helm install rancher-stable/rancher \
        --name rancher \
        --namespace cattle-system \
        --set hostname=rancher.domain.org \
        --set ingress.tls.source=letsEncrypt \
        --set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=letsencrypt-prod \
        --set letsEncrypt.email=username@domain.org

I can see that this deployment automatically creates two Ingresses:

NAME                                           HOSTS                   ADDRESS   PORTS     AGE
ingress.extensions/cm-acme-http-solver-5rv5v   rancher.domain.org                 80        74s
ingress.extensions/rancher                     rancher.domain.org             80, 443   3m28s

As far as I know, ingress.extensions/cm-acme-http-solver-5rv5v is a temporary Ingress resource created while cert-manager and kong contact Let's Encrypt, and once the challenge succeeds it should disappear. But that never happens: this Ingress never disappears.
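
One way I can dig into where it gets stuck is to inspect the cert-manager resources behind that solver (a sketch, assuming the cert-manager v0.10 CRD names):

# list the certificate, order and challenge objects created for the rancher host
kubectl get certificates,orders.certmanager.k8s.io,challenges.certmanager.k8s.io -n cattle-system

# describe the pending challenge to see why the HTTP-01 validation never completes
kubectl describe challenges.certmanager.k8s.io -n cattle-system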

One thing I realized is that, behind the scenes, the Rancher Ingresses created by the helm chart seem to assume NGINX by default, as shown below.

Please see the Annotations attribute:

kubectl describe ingress.extensions/cm-acme-http-solver-5rv5v -n cattle-system
Name:             cm-acme-http-solver-5rv5v
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  rancher.mydomain.org  
                         /.well-known/acme-challenge/q4B4ZFErLEu3wA8i5vICyJY61Wh806pB9vtazjv5jUk   cm-acme-http-solver-j2n8p:8089 (10.244.1.70:8089)
Annotations:
  kubernetes.io/ingress.class:                         kong
  nginx.ingress.kubernetes.io/whitelist-source-range:  0.0.0.0/0,::/0
Events:                                                <none>

You can see that kubernetes.io/ingress.class: kong indicates that ingress.extensions/cm-acme-http-solver-5rv5v will use kong as its controller.

As for the rancher Ingress resource, it is the real Ingress that manages Rancher, and despite pointing to my letsencrypt-prod ClusterIssuer it also uses some nginx annotations and additionally points to another Issuer called rancher.

This means the rancher Ingress resource is pointing to an Issuer and a ClusterIssuer at the same time.

⟩ kubectl describe ingress rancher -n cattle-system  
Name:             rancher
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  tls-rancher-ingress terminates rancher.mydomain.org
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  rancher.mydomain.org  
                            rancher:80 (10.244.0.32:80,10.244.1.69:80,10.244.2.80:80)
Annotations:
  certmanager.k8s.io/issuer:                          rancher
  nginx.ingress.kubernetes.io/proxy-connect-timeout:  30
  nginx.ingress.kubernetes.io/proxy-read-timeout:     1800
  nginx.ingress.kubernetes.io/proxy-send-timeout:     1800
  certmanager.k8s.io/cluster-issuer:                  letsencrypt-prod
Events:
  Type    Reason             Age   From          Message
  ----    ------             ----  ----          -------
  Normal  CreateCertificate  54m   cert-manager  Successfully created Certificate "tls-rancher-ingress"
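To check which issuer actually wins, I can look at the issuerRef of the Certificate that was generated from the ingress annotations (again assuming the cert-manager v0.10 resource names):

# show which issuer the generated Certificate is actually bound to;
# spec.issuerRef should name either the chart's "rancher" Issuer or my
# "letsencrypt-prod" ClusterIssuer, but not both
kubectl get certificate tls-rancher-ingress -n cattle-system -o yaml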

When I include the options mentioned above in the Rancher helm install command, I mean these:

--set ingress.tls.source=letsEncrypt \
--set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=letsencrypt-prod \

does the Rancher helm chart installation create its own specific Issuer and its own specific Ingress, and are they meant to work with nginx rather than with kong?
I would think so.
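
One way to verify that would be to list the issuers next to my ClusterIssuer (a sketch, assuming the cert-manager v0.10 resource names):

# does the chart create its own "rancher" Issuer in the cattle-system namespace?
kubectl get issuers.certmanager.k8s.io -n cattle-system

# my pre-existing cluster-wide issuer
kubectl get clusterissuers.certmanager.k8s.io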

Despite that, the routes for rancher.mydomain.org coming from the Ingresses mentioned above are in fact being created in my kong database.

It is just that the handshake between cert-manager, kong and Let's Encrypt has not been possible, and the proof of that is that the acme-challenge has never been completed.
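
A quick way to test that, assuming the DNS A record already points at kong's external IP, is to request the challenge path from outside (the token is the one from the solver Ingress shown above):

# should return the key authorization string if kong routes the solver correctly;
# a 404 here means Let's Encrypt can never validate the HTTP-01 challenge
curl -v http://rancher.mydomain.org/.well-known/acme-challenge/q4B4ZFErLEu3wA8i5vICyJY61Wh806pB9vtazjv5jUk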

I also think that at no point in the helm installation of Rancher am I specifying that it should use the external IP address I created at the beginning on the Azure platform.

I was expecting the helm chart to have a specific --set parameter to apply the load balancer IP … something like:

helm install rancher-latest/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.mydomain.org \
--set ingress.tls.source=letsEncrypt \

# plus, for that purpose, something like:
--set proxy.type=LoadBalancer \
--set proxy.loadBalancerIP=<myIpAddress>

I say this because kong actually allows you to do this from its helm chart installation.
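
For comparison, this is roughly how the static IP could be handed to kong itself at chart installation time (a sketch; I am assuming the stable/kong chart exposes proxy.type and proxy.loadBalancerIP values):

# sketch: pin kong's proxy Service to the pre-created Azure static IP
helm install stable/kong \
  --name kong \
  --namespace kong \
  --set ingressController.enabled=true \
  --set proxy.type=LoadBalancer \
  --set proxy.loadBalancerIP=<myIpAddress>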

I think this makes sense, because the kong ingress controller does not find a specific endpoint to which it can route the acme-challenge, and that is why the cert-manager pod shows no events beyond its startup messages …

Every 2,0s: kubectl logs pod/cert-manager-79d7495f47-pjlrg -n cert-manager                el-pug: Fri Sep 13 14:21:42 2019

I0909 13:32:36.925765       1 start.go:76] cert-manager "level"=0 "msg"="starting controller"  "git-commit"="f1d591a53" "version"="v0.10.0"
I0909 13:32:36.927603       1 controller.go:184] cert-manager/controller/build-context "level"=0 "msg"="configured acme dns01 nameservers" "nameservers"=["10.0.0.10:53"]
I0909 13:32:36.928677       1 controller.go:149] cert-manager/controller "level"=0 "msg"="starting leader election"
I0909 13:32:36.930078       1 leaderelection.go:235] attempting to acquire leader lease  cert-manager/cert-manager-controller...
I0909 13:32:36.931031       1 metrics.go:201] cert-manager/metrics "level"=0 "msg"="listening for connections on" "address"="0.0.0.0:9402"
I0909 13:33:49.207086       1 leaderelection.go:245] successfully acquired lease cert-manager/cert-manager-controller
I0909 13:33:49.207919       1 controller.go:101] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-acme"
I0909 13:33:49.207934       1 controller.go:101] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-selfsigned"
I0909 13:33:49.207943       1 controller.go:101] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-vault"
I0909 13:33:49.207955       1 controller.go:101] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-venafi"
I0909 13:33:49.209061       1 controller.go:120] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="orders"
I0909 13:33:49.209074       1 controller.go:74] cert-manager/controller/orders "level"=0 "msg"="starting control loop"
I0909 13:33:49.209314       1 controller.go:120] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="certificates"
I0909 13:33:49.209333       1 controller.go:74] cert-manager/controller/certificates "level"=0 "msg"="starting control loop"
I0909 13:33:49.209364       1 controller.go:120] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="clusterissuers"
I0909 13:33:49.209419       1 controller.go:74] cert-manager/controller/clusterissuers "level"=0 "msg"="starting control loop"

In the case described above, the Rancher deployment creates its own specific Ingress, but I would like to have more control over that Ingress in order to point it at the kong ingress controller and also apply basic-auth plugins and other kong features.

I mean, I would like to have something like this for Rancher:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod # letsencrypt-staging
    kubernetes.io/ingress.class: "kong"
    # plugins.konghq.com: rancher-production-basic-auth, rancher-production-acl
  name: production-rancher-ingress
  namespace: cattle-system
spec:
  rules:
  - host: rancher.mydomain.org
    http:
      paths:
      - backend:
          serviceName: rancher
          servicePort: 80
        path: /
  tls: # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - rancher.mydomain.org
    secretName: #  letsencrypt-prod or secret/tls-rancher or whatever created with cert-manager
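
The commented-out plugins.konghq.com annotation above would then point at KongPlugin resources along these lines (a sketch; the name rancher-production-basic-auth is just one I would create, and basic-auth additionally needs KongConsumer credentials which I am leaving out here):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rancher-production-basic-auth
  namespace: cattle-system
plugin: basic-auth
config:
  # do not forward the Authorization header to the rancher backend
  hide_credentials: true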

Is my problem related to the ingress.extraAnnotations chart option here?

If so, how can I manage these custom ingress annotations so that my Rancher installation uses my external IP address and points only to kong and not to nginx?
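
If the extra annotations are indeed the right lever, what I have in mind is something like this (untested), reusing ingress.extraAnnotations to also put the kong ingress class on the chart-generated Ingress:

helm install rancher-stable/rancher \
        --name rancher \
        --namespace cattle-system \
        --set hostname=rancher.mydomain.org \
        --set ingress.tls.source=letsEncrypt \
        --set letsEncrypt.email=username@domain.org \
        --set ingress.extraAnnotations.'kubernetes\.io/ingress\.class'=kong \
        --set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=letsencrypt-prod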
