Rancher 2.9.0 and cert-manager from app catalog

Hello everyone,
I have read about the necessary update for Let's Encrypt certificates. As I am using a single-node install and a cluster on the same server, I run the Rancher server on non-default HTTP/HTTPS ports and use a Kubernetes cluster with cert-manager from the app catalog, following

https://www.2stacks.net/blog/rancher-2-and-letsencrypt/#verify-installation

I have seen that when installing Rancher 2.9.0, the cert-manager in the catalog is still at version 0.5.2. Can I still use it with Let's Encrypt, or do I need to install a newer version of cert-manager manually?

The post here refers to the Rancher docs. Unfortunately, no link to instructions for a fresh installation of cert-manager is given there, and I could not find a description in the documentation yet.

Can anybody provide a link or info on how to install a recent cert-manager version?

Hi @cjohn001,
I used the official Helm chart provided by Jetstack from https://hub.helm.sh/charts/jetstack/cert-manager. You can add the repo to your Rancher catalogs and then install it via the UI if you want (or directly via Helm). My Helm config is then very simple (YAML):

ingressShim:
  defaultIssuerName: "letsencrypt-prod"
  defaultIssuerKind: "ClusterIssuer"
letsencrypt: 
  email: "letsencrypt@yourmail.com"

My default ClusterIssuer is the following:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: letsencrypt@yourmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        serviceType: ClusterIP
        ingress:
          class: nginx
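
To create the issuer (assuming the manifest above is saved as cluster-issuer.yaml):

kubectl apply -f cluster-issuer.yaml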

Then an ingress like the following works:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: foobar.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: yourapp
          servicePort: 8080
  tls:
  - hosts:
    - foobar.yourdomain.com
    secretName: yourapp-tls
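
Once the ingress exists, the ingress-shim should create a Certificate resource for it, and you can watch the issuance with something like this (resource names as in the example above):

kubectl get certificate --all-namespaces
kubectl describe certificate yourapp-tls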

Please be aware that cert-manager 0.11 introduced some breaking changes like renamed annotations and changed apiVersions. See https://github.com/jetstack/cert-manager/releases/tag/v0.11.0 for more information.
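
In practice that means, for example, updating the annotation prefix when moving an ingress to 0.11:

# cert-manager <= 0.10
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
# cert-manager >= 0.11
cert-manager.io/cluster-issuer: letsencrypt-prod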

Cheers

Hello Chris,
thanks a lot. I will document the steps here for other newcomers to Rancher like me:

Delete the old cert-manager
https://docs.cert-manager.io/en/latest/tasks/uninstall/kubernetes.html
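
In short, something like this (assuming the old version was installed as a Helm release named cert-manager; see the linked doc for also removing the old CRDs):

helm delete --purge cert-manager
kubectl delete namespace cert-manager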

Install the CustomResourceDefinition resources separately
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml

Create the namespace for cert-manager
kubectl create namespace cert-manager

Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

Update your local Helm chart repository cache
helm repo update

Prepare config.yaml by copying the ingressShim config into it
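
For reference, config.yaml just contains the shim config from Christian's post above:

ingressShim:
  defaultIssuerName: "letsencrypt-prod"
  defaultIssuerKind: "ClusterIssuer"
letsencrypt:
  email: "letsencrypt@yourmail.com"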

Install the cert-manager Helm chart
helm install -f config.yaml \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.11.0 \
  jetstack/cert-manager
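
Verify the installation (the cert-manager pods should come up):

kubectl get pods --namespace cert-manager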

Hello Chris,
I have one further question that you can maybe help me with. I can now use the ClusterIssuer as you described, and I followed the description here to create an example with an ingress. When I use the following URL, the certificate works well:
https://my-nutri-diary.de/

However, when I use the following URL,

https://www.my-nutri-diary.de/

the ingress does not seem to work. Is the www interpreted as a subdomain here, so that I would need to set up a second ingress for the hostname www.my-nutri-diary.de?

This looks strange in some way.
Thanks for your help!

Best regards,
Christoph

Hi Christoph,
you're welcome. Yes, in a way: you need a certificate that covers both domains (domain.com and www.domain.com). An ingress for your case can look like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
  name: domain-ingress
  namespace: yourns
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: target-service
          servicePort: http
        path: /
  - host: www.domain.com
    http:
      paths:
      - backend:
          serviceName: target-service
          servicePort: http
        path: /
  tls:
  - hosts:
    - domain.com
    - www.domain.com
    secretName: domain-tls

cert-manager will then take care of the certificate.

Note: the issued certificate matches both domains, domain.com and www.domain.com. Maybe there is a possibility for a more sophisticated config for this, but it works, I just tested it.
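
If you want to inspect what was issued (with the namespace yourns from the example above):

kubectl -n yourns get certificate
kubectl -n yourns describe certificate domain-tls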

best regards,
Christian

Hello Chris,
thanks a lot. I solved the issue in a less elegant way by defining a second ingress resource for the www domain, as it was not clear to me that I could simply specify multiple hosts under rules.

I was playing around with nginx.ingress.kubernetes.io/from-to-www-redirect: "true" and added both hosts to the tls hosts section, unfortunately without success: for the host not listed under rules, a wrong certificate was served instead of the one from Let's Encrypt.

I now assume your option is smarter, as it solves the same issue with a single ingress and a common Let's Encrypt certificate.
Anyway, I am now wondering how things work internally in Rancher and what my approach really costs in terms of processing overhead. Did I understand correctly that ingress resources are no more than configuration snippets, which are merged together into the configuration of the system/nginx-ingress-controller, the actual pod responsible for routing (programming a single load balancer per node)? Or is each ingress to be understood as a separate workload running in the background? In other words, would my previous approach with two ingresses mean two running pods and thus additional overhead?


And there is a further topic I am wondering about. The YAML for the ingress you define looks quite comprehensible. If I compare it with the YAML that is generated for the ingress when using the Rancher UI, I see lots of other stuff in there, like:

annotations:
  field.cattle.io/creatorId: user-r5b72
  field.cattle.io/ingressState: '{"bXkuZGUtY3J0":"default:my-diary.de-crt","bmd5LmRlLy8vODA=":""}'
  field.cattle.io/publicEndpoints: '[{"addresses":["185.163.100.11"],"port":443,"protocol":"HTTPS","serviceName":"default:nginx","ingressName":"default:nginx-ingress","hostname":"my-diary.de","path":"/","allNodes":true}]'
  kubernetes.io/tls-acme: '"true"'
creationTimestamp: "2019-10-21T17:23:24Z"
generation: 4
labels:
  cattle.io/creator: norman
name: nginx-ingress
namespace: default
resourceVersion: "396564"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
uid: 732fd20c-dc06-4264-ae0d-dc0f65249fbc

Now I am wondering whether I actually need all this generated extra stuff, and hence whether it is better to work with the Rancher UI in order to get these additional annotations in.
From a repeatability point of view, and in terms of understandability, I would prefer sleek YAML files like the one you provided. I think that way I might also be able to use those manifests to automate the creation of a cluster. So can you comment on which way to prefer: go with the Rancher UI (so I don't miss out on a lot of preset configuration), or go with hand-written manifests (because the extra stuff is not needed, or is created in the background anyway)?

Thanks for the explanation!
Best regards,
Christoph

Yes, you're right. For each nginx ingress controller pod (one per worker node) there is one big nginx config rendered from all ingresses. You can log in to the nginx-ingress-controller pod and look at the file. Sometimes this is good for debugging, because you can create ingresses which break the nginx config.
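
For example (assuming Rancher's default setup, where the controller pods run in the ingress-nginx namespace; the exact pod name will differ on your cluster):

kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx exec -it nginx-ingress-controller-abcde -- cat /etc/nginx/nginx.conf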

I suggest using whichever way you're comfortable with; the extra annotations you see are generated anyway, UI or not. For my projects I keep a simple YAML file with my Kubernetes deployment descriptions, or a Helm chart for more complex projects (with the need for staging environments etc.).

Best,
Christian


Hello Christian,
thanks a lot for the explanations. Then this is the way for me to go as well, as my hope is to be able to automate the entire cluster setup process.

Best regards,
Christoph

I've updated my post with the new information, just in case anyone is still interested in deploying cert-manager via the UI.

Hopefully Iā€™ll stop receiving the LetsEncrypt automated e-mails with ā€œACTION REQUIRED: Letā€™s Encrypt is blocking old cert-manager versionsā€ :slight_smile: