It’s installed, the cluster works, and I have deployed an nginx pod for testing.
I created NodePort and ClusterIP services; curl works from the master and from the worker.
OK, but:
How do I access nginx from machine A? (from the Rancher server, ‘ip route’ looks OK)
And by the way, should it be possible for OPNsense to manage the allocation of IPs for services? How would I do that?
Generally speaking you’d set up an Ingress and use the nginx-ingress-controller, which should be on all your worker nodes if you set up RKE2. If your Ingress and Service are set up right, you’ll be able to access it from any of the worker nodes on port 80 or 443 as normal (just keep in mind that whatever hostname or path you set up is what you’ll need to use to reach it, so make sure machine A resolves that hostname to one of your workers).
If you use wildcard DNS or different paths, you don’t actually need any more IPs.
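For example, a minimal Ingress sketch (the hostname nginx.mydomain.com and the Service name my-nginx are assumptions, adjust them to whatever you actually deployed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  namespace: default
spec:
  ingressClassName: nginx        # RKE2's bundled controller normally registers this class
  rules:
  - host: nginx.mydomain.com     # a name machine A resolves to one of your workers
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx       # the ClusterIP service in front of the nginx pod
            port:
              number: 80

With that applied, curl http://nginx.mydomain.com/ from machine A should reach the pod, as long as that hostname resolves (via DNS or /etc/hosts) to a worker node’s IP.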
Doesn’t work from where? That shouldn’t work from outside the cluster, but it should from within; inside the cluster it’d normally be served by the default CoreDNS service.
Also note that RKE2 should’ve installed nginx-ingress-controller (possibly prefixed with rke2-) as well. What you normally do externally is make sure your external DNS or a load balancer points to the nodes running nginx-ingress-controller, then set up Ingress resources that map those DNS-resolvable hosts and paths to your various services.
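On RKE2 you can usually confirm the bundled controller and its ingress class with something like this (exact pod names vary a bit between RKE2 versions):

kubectl get pods -n kube-system | grep ingress-nginx
kubectl get ingressclass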
Why 127.0.0.53? To me it should be the address of the CoreDNS service (I tested that and it works), or the address of a public DNS server like 8.8.8.8.
But 127.0.0.53?
Localhost is commonly used as 127.0.0.1, but network-wise the loopback range is 127.0.0.0/8, which means 127. followed by anything. If you’re poking around inside the container, it might have a local DNS resolver installed that it’s pointing to, and that resolver might be misconfigured so it doesn’t forward to CoreDNS. The comments in the file might tell you; additionally you could do netstat -anp | grep 53 and look for what’s listening on TCP 53 and UDP 53, which tells you the process (which you can then trace back if it isn’t already obvious).
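For example (ss is the modern replacement for netstat; on a stock systemd host you’d expect systemd-resolved to own 127.0.0.53:53):

sudo netstat -anp | grep ':53 '
sudo ss -lntup | grep ':53'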
On the other hand, I’ve never had a need to look at DNS inside my containers, so possibly Kubernetes normally arranges things this way?
rancher@rke2-master1:~$ sudo cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search mydomain.com
rancher@rke2-master1:~$ resolvectl status
Global
LLMNR setting: no
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
168.192.in-addr.arpa
17.172.in-addr.arpa
18.172.in-addr.arpa
19.172.in-addr.arpa
20.172.in-addr.arpa
21.172.in-addr.arpa
22.172.in-addr.arpa
23.172.in-addr.arpa
24.172.in-addr.arpa
25.172.in-addr.arpa
26.172.in-addr.arpa
27.172.in-addr.arpa
28.172.in-addr.arpa
29.172.in-addr.arpa
30.172.in-addr.arpa
31.172.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 8 (caliba83fcc5606)
Current Scopes: none
DefaultRoute setting: no
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 7 (calia5fef82fdda)
Current Scopes: none
DefaultRoute setting: no
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 4 (flannel.1)
Current Scopes: none
DefaultRoute setting: no
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 3 (calibd7b7b49b6f)
Current Scopes: none
DefaultRoute setting: no
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Link 2 (eth0)
Current Scopes: DNS
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: 172.16.0.1
DNS Servers: 172.16.0.1
DNS Domain: mydomain.com
But unless I am mistaken, nothing here is forwarded to CoreDNS.
I can never come up with a worse thing to happen to Linux than systemd. I’ve never knowingly used systemd-resolved, so I’m not 100% certain, but I’d think it should list the CoreDNS internal IP first and 172.16.0.1 second (assuming that’s your normal network DNS for the Kubernetes nodes). Sadly, I’m not sure why it’d be doing that either. I checked a random container in my cluster and resolv.conf just gives me my CoreDNS internal service IP (kubectl get services -n kube-system -o wide | grep coredns).
Maybe there’s something weird with the container? Maybe try deploying an app from the Rancher marketplace to see?
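One hedged way to check from inside the cluster (the image and pod name here are just examples) is a throwaway pod:

kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- sh
# then, inside the pod:
cat /etc/resolv.conf      # should show the CoreDNS ClusterIP (10.43.0.10 on a default RKE2 install)
nslookup kubernetes.default.svc.cluster.local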
I had a problem with my VPN and… I didn’t touch K8s/RKE2 over the Christmas and New Year period.
OK, so I destroyed my whole RKE2 cluster and rebuilt two clusters:
the first with kubeadm,
the second with RKE2.
And… I understood the notion I was missing.
When you build a new cluster with kubeadm, you don’t get this Ingress controller out of the box. But you can install MetalLB… and it works (see the sketch below).
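For reference, a minimal MetalLB layer-2 sketch (the address range here is an assumption; pick one on the node network that OPNsense isn’t already handing out via DHCP). After installing MetalLB per its docs, apply something like:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.0.200-172.16.0.220   # example range, keep it out of the DHCP scope
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool

A Service of type LoadBalancer then gets an external IP from that pool; OPNsense doesn’t allocate the IPs itself, it just routes and firewalls them like any other address on that network.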
And as for our DNS problem: there isn’t one!
It is normal not to be able to resolve the names of services, pods, or deployments from outside the cluster. Deployments and pods are not meant to be reached via their internal names from outside.
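Concretely (the names here are illustrative): inside the cluster the service name resolves via CoreDNS, while machine A reaches the same pods through the NodePort or the Ingress:

# from a pod inside the cluster:
nslookup my-nginx.default.svc.cluster.local
# from machine A, via the NodePort (the port is whatever 'kubectl get svc' shows):
curl http://<worker-node-ip>:<nodeport>/
# or via the Ingress hostname:
curl http://nginx.mydomain.com/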