running instances in external VLANs directly

Hi,

I’m trying to explore the possibilities available in SUSE OpenStack Cloud 5; one of the areas I’m looking at is networking.

Our physical network is divided into VLANs. The switch ports for the compute nodes are configured to carry untagged traffic for the “cloud admin VLANs” and to accept 802.1q-tagged frames for the other VLANs configured for the Cloud as well as for those of the non-cloud environment.

One tenant will be creating instances that are part of the standard production network, and I would like to configure the Cloud so that traffic for those instances does not need to cross the control node; instead, the instance network interfaces should sit directly in the corresponding VLAN.

Other tenants ought to work similarly - we would have one VLAN per tenant, and an external router would do all the routing.

In other words: I want the packet flow to be “instance” - “bridge on compute node” - “VLAN trunk to physical switch” (and then, if needed, on to the physical router). No control node, GRE tunnels or the like involved in the packet flow.

The SUSE Cloud documentation mentions external routers, but so far I have not been able to set things up in a way that puts the instance interfaces into the VLANs right on the compute nodes. I don’t even see corresponding bridges on the compute node, neither via brctl nor via the ovs commands.

Could somebody give me a push in the right direction, please?

Regards,
Jens

Right now there’s no automatic way to put a physical router into the virtual topology to take care of the traffic above L2.
The external router mentioned in the documentation is the router that is connected to the external network and can act as a gateway.
You should be able to see the software switches on the compute node. Which ML2 mechanism driver are you using for Neutron? If you are using ovs, run ‘ovs-vsctl show’ on the compute node; you should see br-int and br-tun.
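
For example (assuming the default Open vSwitch agent layout - bridge names may look different on your setup):

[CODE]
# list all Open vSwitch bridges on the compute node
ovs-vsctl list-br

# show the full bridge/port layout, including any GRE tunnel ports on br-tun
ovs-vsctl show

# list only the ports attached to the integration bridge
ovs-vsctl list-ports br-int
[/CODE]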

[QUOTE=jmozdzen;28292]Hi,

I’m trying to explore the possibilities available in SUSE OpenStack Cloud 5; one of the areas I’m looking at is networking.

Our physical network is divided into VLANs. The switch ports for the compute nodes are configured to carry untagged traffic for the “cloud admin VLANs” and to accept 802.1q-tagged frames for the other VLANs configured for the Cloud as well as for those of the non-cloud environment.

One tenant will be creating instances that are part of the standard production network, and I would like to configure the Cloud so that traffic for those instances does not need to cross the control node; instead, the instance network interfaces should sit directly in the corresponding VLAN.

Other tenants ought to work similarly - we would have one VLAN per tenant, and an external router would do all the routing.

In other words: I want the packet flow to be “instance” - “bridge on compute node” - “VLAN trunk to physical switch” (and then, if needed, on to the physical router). No control node, GRE tunnels or the like involved in the packet flow.

The SUSE Cloud documentation mentions external routers, but so far I have not been able to set things up in a way that puts the instance interfaces into the VLANs right on the compute nodes. I don’t even see corresponding bridges on the compute node, neither via brctl nor via the ovs commands.

Could somebody give me a push in the right direction, please?

Regards,
Jens[/QUOTE]

Hi rsblendido,

thank you for your quick response.

[QUOTE=rsblendido;28310]Right now there’s no automatic way to put a physical router into the virtual topology to take care of the traffic above L2.
The external router mentioned in the documentation is the router that is connected to the external network and can act as a gateway.
You should be able to see the software switches on the compute node. Which ML2 mechanism driver are you using for Neutron? If you are using ovs, run ‘ovs-vsctl show’ on the compute node; you should see br-int and br-tun.[/QUOTE]

Yes, I can see that configuration on the compute node. I’m new to ovs, so I’m still hunting down the details, but especially the GRE setup seems to imply that all instance traffic is sent to the control node first and is only forwarded to the external network from there.

If there’s no automatic way, to what extent may I modify the network configuration on the compute node manually without the configuration being overwritten upon reboot?

Prior to enabling ovs, I saw a “traditional” bridged configuration via brctl on the compute node - to get the desired effect, all I’d have to do is add a VLAN interface and include it in the corresponding bridge. I’d know how to do that - roughly as sketched below; we’re running a similar SLES11 SP3 Xen server cluster that way - but I’m unsure to what extent I may interfere manually (or rather “by startup scripts on the compute node”).
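
Roughly along these lines (interface name, VLAN ID and bridge name below are just placeholders from our Xen setup, not the actual cloud configuration):

[CODE]
# create a VLAN interface on top of the physical trunk port (here: eth1, VLAN 100)
ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 up

# attach the VLAN interface to the bridge the instances are plugged into
brctl addif br-prod eth1.100
[/CODE]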

Regards,
Jens

[QUOTE=jmozdzen;28315]Hi rsblendido,

thank you for your quick response.

Yes, I can see that configuration on the compute node. I’m new to ovs, so I’m still hunting down the details, but especially the GRE setup seems to imply that all instance traffic is sent to the control node first and is only forwarded to the external network from there.
[/QUOTE]

All the traffic above L2 is forwarded to the network node. That node has the logic to route the packets and also the access to the external network.
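
You can see that on the network node itself: the Neutron L3 agent creates one network namespace per virtual router, and that is where the routing happens (the qrouter- naming is the upstream default; replace the UUID with one from your own list):

[CODE]
# on the network node: list the namespaces created by the Neutron agents
ip netns list

# inspect the routing table inside one of the router namespaces
ip netns exec qrouter-<router-uuid> ip route
[/CODE]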

If you are using SUSE Cloud, you can modify the network configuration in the Crowbar UI.
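
After the proposal has been applied, you can check what Neutron actually ended up with on the node (the ml2_conf.ini path below is the usual upstream location and may differ slightly on your installation):

[CODE]
# show the mechanism driver, type drivers and VLAN ranges Neutron is configured with
grep -E 'mechanism_drivers|type_drivers|tenant_network_types|network_vlan_ranges' \
    /etc/neutron/plugins/ml2/ml2_conf.ini
[/CODE]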

[QUOTE=jmozdzen;28315]
Prior to enabling ovs, I saw a “traditional” bridged configuration via brctl on the compute node - to get the desired effect, all I’d have to do is add a VLAN interface and include it in the corresponding bridge. I’d know how to do that (we’re running a similar SLES11 SP3 Xen server cluster that way) but I’m unsure to what extent I may interfere manually (or rather “by startup scripts on the compute node”).

Regards,
Jens[/QUOTE]

You can keep using linuxbridge if you are more familiar with it. Just configure Neutron to use linuxbridge in the Crowbar UI.
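
Once Neutron is set up for VLAN networking, an admin can create a provider network that maps straight onto one of your existing VLANs, so instances attach to it on the compute node without any tunnelling involved. Something along these lines might be what you’re after (network name, physical network label, VLAN ID and subnet are just examples):

[CODE]
# create a provider network bound to VLAN 120 on the physical network "physnet1"
neutron net-create prod-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 120 --shared

# add a subnet; disable DHCP if the addresses are managed outside the cloud
neutron subnet-create prod-net 192.168.120.0/24 --name prod-subnet \
    --gateway 192.168.120.1 --disable-dhcp
[/CODE]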