I am having an interesting issue with configuring the MTU on my KVM guest VMs. The VMs have two NICs: eth0 for management, which connects to virbr0, and eth1, which connects to a host macvtap device (configured for MTU 9000) and will carry NFS/iSCSI traffic. As such, I need to set the MTU on that interface to 9000.
I am able to ping from the host through the macvtap device at 9000 bytes. Unfortunately, the guest ignores the configuration and reports the MTU as 1500. I believe I need to change something in the virtual network configuration, but I do not know where to look. I found someone who had the same issue, but there was no concrete answer.
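For reference, on SLES the guest-side MTU is normally set in the interface's ifcfg file. A sketch of what I mean (filename and static address are from my setup; adjust to yours):

```
# /etc/sysconfig/network/ifcfg-eth1 (inside the guest)
BOOTPROTO='static'
IPADDR='11.11.11.33/24'
STARTMODE='auto'
MTU='9000'
```

This is the setting the guest appears to be ignoring.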
kvm-host # ping -I vlan100 11.11.11.12 -s 9000
PING 11.11.11.12 (11.11.11.12) from 11.11.11.22 vlan100: 9000(9028) bytes of data.
9008 bytes from 11.11.11.12: icmp_seq=1 ttl=255 time=0.186 ms
9008 bytes from 11.11.11.12: icmp_seq=2 ttl=255 time=0.183 ms
9008 bytes from 11.11.11.12: icmp_seq=3 ttl=255 time=0.184 ms
9008 bytes from 11.11.11.12: icmp_seq=4 ttl=255 time=0.162 ms
9008 bytes from 11.11.11.12: icmp_seq=5 ttl=255 time=0.139 ms
kvm-guest # ping -I eth1 11.11.11.12 -s 1490
PING 11.11.11.12 (11.11.11.12) from 11.11.11.33 eth1: 1490(1518) bytes of data.
1498 bytes from 11.11.11.12: icmp_seq=1 ttl=255 time=0.413 ms
1498 bytes from 11.11.11.12: icmp_seq=2 ttl=255 time=0.464 ms
1498 bytes from 11.11.11.12: icmp_seq=3 ttl=255 time=0.290 ms
1498 bytes from 11.11.11.12: icmp_seq=4 ttl=255 time=0.279 ms
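A side note on the numbers above: ping reports the payload size followed by the full IP datagram size in parentheses, so -s 9000 produces a 9028-byte packet and replies show payload plus the ICMP header. A quick sketch of the arithmetic (IPv4, no IP options):

```python
# Size accounting behind "ping -s N": why -s 9000 prints
# "9000(9028) bytes of data" and replies show "9008 bytes".
ICMP_HEADER = 8   # ICMP echo header, bytes
IP_HEADER = 20    # IPv4 header without options, bytes

def ping_sizes(payload):
    on_wire = payload + ICMP_HEADER + IP_HEADER  # total IP datagram size
    reply = payload + ICMP_HEADER                # size ping reports per reply
    return on_wire, reply

print(ping_sizes(9000))  # (9028, 9008) -- matches the host transcript
print(ping_sizes(1490))  # (1518, 1498) -- matches the guest transcript
```

Note that a 9028-byte datagram slightly exceeds a 9000-byte MTU, so even on the jumbo path this ping is fragmented once unless a smaller payload is chosen; with a 1500 MTU it fragments into many pieces.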
Just let me summarize your setup, and please correct me if I’m wrong:
host has IP 11.11.11.22 on interface vlan100
guest1 has IP 11.11.11.12 (probably on interface eth1)
guest2 has IP 11.11.11.33 on interface eth1
the host (11.11.11.22, interface “vlan100”) is pinging guest1 (11.11.11.12) successfully with packet size 9000
guest2 (11.11.11.33, interface eth1) is pinging guest1 (11.11.11.12) successfully with packet size 1490
guest2 (11.11.11.33, interface eth1) fails to ping guest1 (11.11.11.12) with packet size 9000
all machines are SLES12 (not SP1).
I assume you have Linux bridging set up inside the host, with vlan100 and the KVM VMs’ interfaces connected to a bridge instance. If so, you might want to set the MTU for all affected interfaces (vlan100 plus the virtual VM interfaces) to 9000 on the host; IIRC a Linux bridge uses the smallest MTU of all interfaces in the bridge.
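Something along these lines on the host, as a sketch (the tap and bridge names, vnet0 and br0, are guesses; take the real names from the brctl output):

```
# list bridges and which interfaces are enslaved to them
brctl show

# raise the MTU on every member; the bridge itself takes the
# smallest MTU of all its ports
ip link set dev vlan100 mtu 9000
ip link set dev vnet0   mtu 9000   # guest's tap interface, name may differ
ip link set dev br0     mtu 9000   # bridge name as shown by brctl
```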
No, it was just for clarification of the environment.
Again, not that I see any problem; it’s just for clarity’s sake.
Then how’s your guest’s eth1 connected to the outside world (IOW: how’s the network setup on the host with regard to the guest connection)? I’m no KVM guy, but from what I know, it’s using Linux bridge if you decide not to use “user-mode networking”. Since you’re using an IP right out of the “external” network, routing/NATing doesn’t sound applicable… and a bridge to connect vlan100 and the guest’s interface seems the expected way to go.
You seem to know your way around the subject, but let me ask nevertheless: “brctl show” on the host doesn’t by chance list a bridge to which the guest’s vif (and the host’s vlan100) is connected?
Regards,
Jens
Hi Jens,
You raised an interesting point. I created a bridge (br1) and used vlan100 as the outbound interface; unfortunately, the device behaved exactly the same. This led me down a different path, and I believe I may have found the issue. I will post my results shortly, once I finish testing.
Regards,
WS
Update:
I have isolated the issue to the device model of the virtual network interface. The default device model, rtl8139, does not allow MTU 9000 regardless of the source mode (VEPA or Bridge). Once I changed the device model to virtio, I was able to pass traffic without issue. In addition, if the source mode is configured as Bridge, I can pass traffic between guests. If VEPA is used, both guests can reach storage at 9k but cannot pass traffic between each other. I am not sure if this is expected behavior, but it can be an advantage in a multi-tenant situation. Furthermore, I cannot pass traffic between guest and host; however, I believe this is because I am not using a bridge on the host interface. Again, I believe this is desirable for multi-tenancy. Nevertheless, it would be an interesting test and I plan to give it a try sometime next week.
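For reference, the working interface definition in the domain XML looks roughly like this (a sketch; the source device name is from my setup, and mode='vepa' is the alternative I tested):

```xml
<!-- guest eth1: macvtap onto the host's vlan100, virtio device model -->
<interface type='direct'>
  <source dev='vlan100' mode='bridge'/>  <!-- mode='vepa' isolates guests -->
  <model type='virtio'/>
</interface>
```

Newer libvirt versions also accept an explicit `<mtu size='9000'/>` child element on `<interface>`, which would make the intent clearer if your version supports it.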
guest to storage device: # ping -I eth1 11.11.11.12 -s 9000
PING 11.11.11.12 (11.11.11.12) from 11.11.11.33 eth1: 9000(9028) bytes of data.
9008 bytes from 11.11.11.12: icmp_seq=1 ttl=255 time=12.9 ms
9008 bytes from 11.11.11.12: icmp_seq=2 ttl=255 time=0.356 ms
9008 bytes from 11.11.11.12: icmp_seq=3 ttl=255 time=0.356 ms
9008 bytes from 11.11.11.12: icmp_seq=4 ttl=255 time=0.371 ms
guest .33 to guest .44:
ping -I eth1 11.11.11.44 -s 9000
PING 11.11.11.44 (11.11.11.44) from 11.11.11.33 eth1: 9000(9028) bytes of data.
9008 bytes from 11.11.11.44: icmp_seq=1 ttl=64 time=0.863 ms
9008 bytes from 11.11.11.44: icmp_seq=2 ttl=64 time=0.367 ms
9008 bytes from 11.11.11.44: icmp_seq=3 ttl=64 time=0.549 ms
9008 bytes from 11.11.11.44: icmp_seq=4 ttl=64 time=0.313 ms
guest .44 to guest .33:
ping -I eth1 11.11.11.33 -s 9000
PING 11.11.11.33 (11.11.11.33) from 11.11.11.44 eth1: 9000(9028) bytes of data.
9008 bytes from 11.11.11.33: icmp_seq=1 ttl=64 time=0.429 ms
9008 bytes from 11.11.11.33: icmp_seq=2 ttl=64 time=0.272 ms
9008 bytes from 11.11.11.33: icmp_seq=3 ttl=64 time=0.290 ms
Regards,
WS
[QUOTE=sorgenfw;31567] Final Update:
I cannot pass traffic between guest and host; however, I believe this is because I am not using a bridge on the host interface. Again, I believe this is desirable for multi-tenancy. Nevertheless, it would be an interesting test and I plan to give it a try sometime next week.[/QUOTE]
Test complete. I temporarily reinstated the bridge on VLAN100 for testing. As I surmised, traffic was able to pass between the host and guest interfaces.
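For the record, the temporary bridge amounted to roughly the following on the host (a sketch, run as root; br1 as in my earlier post):

```
# create a bridge, enslave vlan100, and raise MTUs so jumbo frames pass
brctl addbr br1
brctl addif br1 vlan100
ip link set dev vlan100 mtu 9000
ip link set dev br1 mtu 9000
ip link set dev br1 up
```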