Hello, I got stuck making the Traefik dashboard accessible. I know this topic is not new, so briefly:
I run k3s v1.28.9+k3s1 on a Raspberry Pi 4 / Ubuntu 20.04 cluster, with Traefik installed by the default k3s setup. After a rather standard addition of a Traefik IngressRoute on the web entryPoint, the Traefik dashboard is not reachable from a web browser; the response is "404 page not found". The same happens when curling from any host of the cluster. Notably, when I check the readiness probe with curl http://10.42.2.14:9000/ping from inside the cluster (10.42.2.14 is the Traefik pod's IP on the node where the pod runs), I get HTTP/1.1 200 OK (see below). So the container seems to be alive, but some misconfiguration exists, and I have no idea how to resolve it. Selected Traefik settings are given in the boxes below. Strangely enough, all of this DOES work with the earlier k3s release v1.25.5+k3s2 (and its respective Traefik). I'm currently refreshing a Kubernetes lab for my students, so it would be a pity not to get this working. Just in case, I have also posted this on the Traefik Labs forum. Any help will be appreciated.
Accessing the dashboard fails (browser and curl)
browser
======
http://192.168.2.95/dashboard/
404 page not found
tshark on cni0 after the browser query above:
168 47.305818128 10.42.0.0 → 10.42.2.14 HTTP 442 GET /dashboard/ HTTP/1.1
170 47.306858226 10.42.2.14 → 10.42.0.0 HTTP 230 HTTP/1.1 404 Not Found (text/plain)
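For completeness, this is how I follow the Traefik log while re-sending the /dashboard/ request (access logging is not enabled in the default chart args, so this mostly shows startup and provider messages):

$ kubectl logs -n kube-system deployment/traefik --tail=50 -f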
curl:
======
ubuntu@kpi091:~$ curl http://10.42.2.14:8000/dashboard/ -v
* Trying 10.42.2.14:8000...
* TCP_NODELAY set
* Connected to 10.42.2.14 (10.42.2.14) port 8000 (#0)
> GET /dashboard/ HTTP/1.1
> Host: 10.42.2.14:8000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Sun, 19 May 2024 11:31:47 GMT
< Content-Length: 19
Checking the readiness probe goes smoothly
ubuntu@kpi091:~$ curl http://10.42.2.14:9000/ping -v
* Trying 10.42.2.14:9000...
* TCP_NODELAY set
* Connected to 10.42.2.14 (10.42.2.14) port 9000 (#0)
> GET /ping HTTP/1.1
> Host: 10.42.2.14:9000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sun, 19 May 2024 09:35:19 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
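The same probe can also be reached from a workstation outside the cluster via a port-forward; this is only a convenience sketch, not part of the test above:

$ kubectl -n kube-system port-forward deployment/traefik 9000:9000 &
$ curl -v http://127.0.0.1:9000/ping

Note that /ping is served on the dedicated traefik entrypoint (:9000), so a 200 here says nothing about the routers attached to the web entrypoint (:8000).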
File traefik.yaml
The dashboard is not enabled here, but it is enabled via the container args in the Traefik deployment manifest further below.
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik-crd
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-25.0.3+up25.0.0.tgz
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
name: traefik
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-25.0.3+up25.0.0.tgz
set:
global.systemDefaultRegistry: ""
valuesContent: |-
deployment:
podAnnotations:
prometheus.io/port: "8082"
prometheus.io/scrape: "true"
providers:
kubernetesIngress:
publishedService:
enabled: true
priorityClassName: "system-cluster-critical"
image:
repository: "rancher/mirrored-library-traefik"
tag: "2.10.7"
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
service:
ipFamilyPolicy: "PreferDualStack"
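For the record, one idea I have not applied yet is to pass dashboard-related options through a HelmChartConfig instead of editing traefik.yaml or the deployment by hand; the k3s docs describe this as the supported way to customize the packaged chart. A sketch of what I have in mind (the additionalArguments values are my assumption, not something currently deployed):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--api.dashboard=true"
      # insecure mode serves the dashboard on the traefik entrypoint (:9000);
      # meant only as a temporary test in the lab
      - "--api.insecure=true"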
Traefik deployment (output from kubectl get deployment …)
Is it possible that some args are missing? Maybe --api.dashboard=true alone is not enough and the dashboard should instead be enabled in traefik.yaml or similar? (How I compare the args with the older cluster is sketched right after the manifest.)
$ kubectl get deployment -n kube-system traefik -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "3"
kubectl.kubernetes.io/last-applied-configuration: |
{ ... removed for brevity ...}
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: kube-system
creationTimestamp: "2024-05-13T14:29:05Z"
generation: 3
labels:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-25.0.3_up25.0.0
name: traefik
namespace: kube-system
resourceVersion: "79170"
uid: 1dc70283-c94f-4a16-a599-015c180aa285
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "9100"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-25.0.3_up25.0.0
spec:
containers:
- args:
- --global.checknewversion
- --global.sendanonymoususage
- --entrypoints.metrics.address=:9100/tcp
- --entrypoints.traefik.address=:9000/tcp
- --entrypoints.web.address=:8000/tcp
- --entrypoints.websecure.address=:8443/tcp
- --api.dashboard=true
- --ping=true
- --metrics.prometheus=true
- --metrics.prometheus.entrypoint=metrics
- --providers.kubernetescrd
- --providers.kubernetesingress
- --providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/traefik
- --entrypoints.websecure.http.tls=true
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: rancher/mirrored-library-traefik:2.10.7
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
name: traefik
ports:
- containerPort: 9100
name: metrics
protocol: TCP
- containerPort: 9000
name: traefik
protocol: TCP
- containerPort: 8000
name: web
protocol: TCP
- containerPort: 8443
name: websecure
protocol: TCP
readinessProbe:
failureThreshold: 1
httpGet:
path: /ping
port: 9000
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data
name: data
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroupChangePolicy: OnRootMismatch
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
serviceAccount: traefik
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
volumes:
- emptyDir: {}
name: data
- emptyDir: {}
name: tmp
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2024-05-13T14:29:05Z"
lastUpdateTime: "2024-05-18T23:50:54Z"
message: ReplicaSet "traefik-7d5f6474df" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2024-05-19T08:55:18Z"
lastUpdateTime: "2024-05-19T08:55:18Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 3
readyReplicas: 1
replicas: 1
updatedReplicas: 1
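To compare the container args quickly with the older v1.25.5+k3s2 cluster where the dashboard works, I extract just the args from both deployments (jsonpath sketch, referenced above):

$ kubectl get deployment -n kube-system traefik \
    -o jsonpath='{.spec.template.spec.containers[0].args}'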
Traefik service manifest (output from kubectl get svc ...)
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: kube-system
metallb.universe.tf/ip-allocated-from-pool: first-pool
creationTimestamp: "2024-05-13T14:29:05Z"
labels:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: traefik
helm.sh/chart: traefik-25.0.3_up25.0.0
name: traefik
namespace: kube-system
resourceVersion: "72460"
uid: 38030d1a-3ada-4e74-8299-71d7232c6690
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.43.68.92
clusterIPs:
- 10.43.68.92
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: PreferDualStack
ports:
- name: web
nodePort: 31949
port: 80
protocol: TCP
targetPort: web
- name: websecure
nodePort: 30470
port: 443
protocol: TCP
targetPort: websecure
selector:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 192.168.2.95
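Since the service is a LoadBalancer with a MetalLB address, I can also hit the web NodePort on a node directly, to take the load-balancer layer out of the picture (the node IP below is a placeholder):

$ curl -v http://<node-ip>:31949/dashboard/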
IngressRoute for dashboard (output from kubectl get ingressroute ...)
$ kubectl get ingressroute -n kube-system traefik-dashboard -o yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"traefik.containo.us/v1alpha1","kind":"IngressRoute","metadata":{"annotations":{},"name":"traefik-dashboard","namespace":"kube-system"},"spec":{"entryPoints":["web"],"routes":[{"kind":"Rule","match":"PathPrefix(`/dashboard`) || PathPrefix(`/api`)","services":[{"kind":"TraefikService","name":"api@internal"}]}]}}
creationTimestamp: "2024-05-18T22:35:39Z"
generation: 2
name: traefik-dashboard
namespace: kube-system
resourceVersion: "75367"
uid: 357d17d5-563c-4022-8998-4e326d0c7310
spec:
entryPoints:
- web
routes:
- kind: Rule
match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
services:
- kind: TraefikService
name: api@internal
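One thing I am unsure about: chart 25.0.3 corresponds to Traefik 2.10 and, as far as I understand, installs the CRDs under the traefik.io/v1alpha1 API group, while my IngressRoute still uses the older traefik.containo.us/v1alpha1 group. As an assumption to test rather than a confirmed fix, the same route written against the newer group would look like this:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  entryPoints:
    - web
  routes:
    - kind: Rule
      match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      services:
        - kind: TraefikService
          name: api@internal

Checking which groups are actually served with kubectl api-resources | grep -i ingressroute should tell whether the old group is still recognized.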