Problem creating a cluster with VMs

Trying to set up an environment on my laptop that is as close to real life as possible.
The host (laptop) runs CentOS 8 and has two VMs (Ubuntu 20.04): one master, one agent.
Goal: one master server, with more VMs added later to act as K8s nodes.

I’ve installed the UI on the host using the quick-start command:

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
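
Before involving the VMs, it’s worth confirming the server answers locally; Rancher exposes a /ping endpoint that should return pong once the quick-start container is up:

sudo docker ps --filter ancestor=rancher/rancher
curl -sk https://localhost/ping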

Using the UI I tried to set up a new cluster and ran the generated command on the two VMs.
Below I refer just to the master VM.

Problem: No connection between the VMs and the host

Alternative 1: --server localhost

when I use the following docker run command with --server localhost:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 --server https://localhost --token tsxgjh2xfcsknlxtjm5blgrrdvchgzdxbrlwk8bvhnxqbr9md4fwq4 --ca-checksum bcbb425e24f4f330321565f85e4f499c7187925efa02548f65f8a3d60843f1ec --etcd --controlplane --worker

I get this error:

INFO: Arguments: --server https://localhost --token REDACTED --ca-checksum bcbb425e24f4f330321565f85e4f499c7187925efa02548f65f8a3d60843f1ec --etcd --controlplane --worker
INFO: Environment: CATTLE_ADDRESS=172.16.101.128 CATTLE_INTERNAL_ADDRESS= CATTLE_NODE_NAME=ubuntu CATTLE_ROLE=,etcd,worker,controlplane CATTLE_SERVER=https://localhost CATTLE_TOKEN=REDACTED
INFO: Using resolv.conf: nameserver 172.16.101.2 search localdomain
ERROR: https://localhost/ping is not accessible (Failed to connect to localhost port 443: Connection refused)
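
As far as I can tell this particular failure is expected: the agent container runs with --net=host, so localhost inside it is the VM itself, and nothing on the VM listens on port 443 (hence the connection refused). The --server value has to be an address of the host that the VM can route to. A quick way to test a candidate address from the VM (172.16.101.1 is my guess, based on the interface listings below):

curl -vk https://172.16.101.1/ping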

Alternative 2: --server 192.168.122.1

Docker on the master VM keeps restarting, probably because it can’t connect to the management service on the host. I changed the docker run command from https://localhost to the virbr0 IP address (192.168.122.1).
The container log on the master VM shows:
time="2020-06-04T05:48:22Z" level=fatal msg="Get https://192.168.122.1: x509: certificate is valid for 127.0.0.1, 172.17.0.2, not 192.168.122.1"

Host’s interfaces:

[liran@localhost ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether f8:75:a4:31:51:b1 brd ff:ff:ff:ff:ff:ff
3: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4c:1d:96:05:98:1c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 brd 10.0.0.255 scope global dynamic noprefixroute wlp0s20f3
       valid_lft 2522sec preferred_lft 2522sec
    inet6 fe80::595c:2b38:6a3f:ef95/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.1/24 brd 172.16.1.255 scope global vmnet1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fec0:1/64 scope link 
       valid_lft forever preferred_lft forever
5: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
    inet 172.16.101.1/24 brd 172.16.101.255 scope global vmnet8
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fec0:8/64 scope link 
       valid_lft forever preferred_lft forever
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:05:78:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
7: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:05:78:06 brd ff:ff:ff:ff:ff:ff
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:d8:f7:59:3f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d8ff:fef7:593f/64 scope link 
       valid_lft forever preferred_lft forever
10: veth2ad56f8@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:51:ea:c7:43:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::cc51:eaff:fec7:436c/64 scope link 
       valid_lft forever preferred_lft forever
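
One thing that stands out: the master VM (172.16.101.128) sits in the vmnet8 subnet (172.16.101.1/24), which looks like VMware’s NAT network, while virbr0 (192.168.122.1) is libvirt’s bridge and shows NO-CARRIER. So my assumption (not verified) is that the host should be reachable from the VMs at 172.16.101.1, not at 192.168.122.1. Basic reachability check from the master VM:

ping -c 3 172.16.101.1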

VM Master:

docker0 - 172.17.0.1
ens33 - 172.16.101.128

liran@ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d5:5b:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.101.128/24 brd 172.16.101.255 scope global dynamic noprefixroute ens33
       valid_lft 1621sec preferred_lft 1621sec
    inet6 fe80::f2fb:7d5f:d422:8c3f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:cd:98:a1:ff brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

liran@ubuntu:~$ ip route
default via 172.16.101.2 dev ens33 proto dhcp metric 100 
169.254.0.0/16 dev ens33 scope link metric 1000 
172.16.101.0/24 dev ens33 proto kernel scope link src 172.16.101.128 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 

liran@ubuntu:~$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.101.2    0.0.0.0         UG        0 0          0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 ens33
172.16.101.0    0.0.0.0         255.255.255.0   U         0 0          0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
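
The routes confirm 172.16.101.0/24 is directly connected on ens33, so the host’s vmnet8 address should be one hop away. To check whether anything answers on the Rancher ports specifically (again assuming 172.16.101.1 is the right host address):

nc -vz 172.16.101.1 443
nc -vz 172.16.101.1 80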

liran@ubuntu:~$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0
search localdomain

IPTABLES

On the host:

[liran@localhost ~]$ sudo iptables --list
[sudo] password for liran: 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
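
One caveat about the empty listing above: CentOS 8 uses firewalld with the nftables backend by default, so iptables --list does not necessarily show the active ruleset, and traffic from the VM network to ports 80/443 could still be filtered. A quick check (and, if firewalld turns out to be running, a way to open the ports):

sudo firewall-cmd --state
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
sudo firewall-cmd --reload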

On the master VM:

liran@ubuntu:~$ sudo iptables --list
[sudo] password for liran: 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

VM Agent:
docker0 - 172.17.0.1
ens33 - 172.16.101.129

My assumption was wrong:
I thought that rancher/rancher was just a UI from which I would install the server on one VM
and the agent on the other.
The main problem of communication between the host and its internal VMs was not solved.
I guess there are some configurations to edit first in order to enable this communication.
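
If I understand it correctly now, rancher/rancher on the host is the management server, and each VM should only run rancher/rancher-agent, registered against an address the VMs can actually route to. A sketch of what I would try next (untested; 172.16.101.1 is my assumption for the host’s vmnet8 address, and the token/checksum come from the cluster registration page in the UI):

sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.4.4 \
  --server https://172.16.101.1 \
  --token <token from the UI> --ca-checksum <checksum from the UI> \
  --etcd --controlplane --worker

Presumably the certificate error from alternative 2 would still apply unless Rancher’s certificate covers that address, so the Rancher server URL set on first login should match it.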
This post can be deleted.