I’m working on an NSX-T 2.5 design for a customer. Since there are some changes in the load balancing options and the design scenarios for Edge Node VMs and the N-VDS, I first wanted to know what exactly has changed and how it works.
Named Teaming Policies is a feature that was introduced in NSX-T 2.3, and since NSX-T 2.4 an N-VDS can have multiple Named Teaming Policies attached. This topic has become even more interesting now that the recommended NSX-T Edge Node design has changed with NSX-T 2.5. See this blogpost from Rutger Blom to learn more about that change.
In this blogpost I’m focusing only on the load balancing behavior itself when you make use of multiple teaming policies, not on how to configure them.
Before we head to the bits themselves, it’s good to know what my setup looks like. I have a physical ESXi host with an N-VDS backed by two physical network interfaces in the hypervisor: vmnic2 and vmnic3.
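From the host’s point of view there is nothing special about these NICs; they are plain physical uplinks that you can list with esxcli (names and NIC order will of course differ per host):

```
# List the physical NICs on the ESXi host; vmnic2 and vmnic3 are the
# two interfaces backing the N-VDS in my lab.
esxcli network nic list
```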
On top of that I have an NSX Edge Node VM with a tier-0 router that uses two separate uplink VLANs to peer with the physical world.
As you can see, the uplinks of this tier-0 router are connected to separate uplink logical switches in NSX-T. Those segments use Named Teaming Policies within NSX-T to send the uplink traffic over dedicated NICs. In this example all traffic on uplink-vlan-A will be sent over the vmnic2 physical adapter, and all traffic on uplink-vlan-B will be sent over the vmnic3 physical adapter.
So first an uplink profile for the ESXi host is created with additional named teaming policies, each defining an active and a standby network adapter.
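I did this through the NSX-T UI, but as a rough sketch, the same uplink profile could also be created via the Manager API. The endpoint and field names below are based on my understanding of the NSX-T 2.5 API, and the manager hostname, profile name, policy names and uplink names are just lab examples:

```
# Sketch: uplink profile with two extra named teaming policies, each using
# FAILOVER_ORDER with one active and one standby uplink.
curl -k -u admin -X POST https://nsx-manager/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' -d '{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "esxi-uplink-profile",
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
      { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
    ]
  },
  "named_teamings": [
    {
      "name": "vmnic2-active",
      "policy": "FAILOVER_ORDER",
      "active_list":  [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ],
      "standby_list": [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ]
    },
    {
      "name": "vmnic3-active",
      "policy": "FAILOVER_ORDER",
      "active_list":  [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ],
      "standby_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
    }
  ]
}'
```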
When that is done, those named teaming policies need to be attached to the transport zone used for the VLAN-based segments/logical switches.
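Again a sketch rather than a how-to: assuming the Manager API field names, this comes down to listing the named teaming policy names on the VLAN transport zone object (in practice you would GET the transport zone first and PUT it back including its current _revision):

```
# Sketch: reference the named teaming policies on the VLAN transport zone.
# <tz-id>, the display name and the host switch name are lab placeholders.
curl -k -u admin -X PUT https://nsx-manager/api/v1/transport-zones/<tz-id> \
  -H 'Content-Type: application/json' -d '{
  "display_name": "tz-vlan",
  "host_switch_name": "nvds-1",
  "transport_type": "VLAN",
  "uplink_teaming_policy_names": [ "vmnic2-active", "vmnic3-active" ]
}'
```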
The last step is to apply the teaming to the logical switches. This is done by selecting a specific Teaming Policy at the Logical Switch/Segment level. In the screenshot below you can see that I selected the vmnic2 Teaming Policy for the VLAN-trunk-A segment, and the vmnic3 Teaming Policy for the VLAN-trunk-B segment.
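In the UI this is just a dropdown on the segment, but for completeness here is a sketch of the same selection through the Policy API, assuming the advanced_config field name; the segment name matches my lab, while the transport zone path and VLAN range are placeholders:

```
# Sketch: pin the VLAN-trunk-A segment to the "vmnic2-active" named teaming policy.
curl -k -u admin -X PATCH https://nsx-manager/policy/api/v1/infra/segments/VLAN-trunk-A \
  -H 'Content-Type: application/json' -d '{
  "vlan_ids": [ "0-4094" ],
  "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-id>",
  "advanced_config": { "uplink_teaming_policy_name": "vmnic2-active" }
}'
```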
One of the interesting parts is how this differs from doing the same on a VDS-based port group. When I select one of the uplink VLAN logical switches in vSphere, it looks as if the uplink policy is not applied, because both vmnics are shown as uplinks.

So, how does this work then? I will demonstrate it with a simple ping to one of the peering IP addresses on the tier-0 router. That address lives on uplink-vlan-A, so in a normal situation the traffic should leave the host through vmnic2.
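The test itself is nothing more than a continuous ping from a machine behind the physical network towards that peering address (the address below is a documentation placeholder; substitute your own uplink-vlan-A peer IP):

```
# Continuous ping towards the tier-0 peering IP on uplink-vlan-A.
ping 192.0.2.1
```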
With esxtop I can see in detail which vmnics and vnics are generating traffic. With no ping running, esxtop looks like this: en01.eth1 is connected to uplink-vlan-A and therefore pinned to vmnic2, and en01.eth2 is connected to uplink-vlan-B and therefore pinned to vmnic3. So far so good, the policy looks implemented. Let’s test this!
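A quick note first for anyone reproducing this: the network view in esxtop is reached by pressing ‘n’, and the columns to watch are USED-BY, TEAM-PNIC and PKTTX/s / PKTRX/s:

```
# On the ESXi host (SSH session):
esxtop              # start esxtop
# press 'n'         -> switch to the network view
# USED-BY           -> the port/vnic, e.g. en01.eth1
# TEAM-PNIC         -> the vmnic the vnic is currently pinned to
# PKTTX/s, PKTRX/s  -> packets transmitted/received per second
```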
With a constant ping towards the IP address, you can see below in the PKTTX/s column that there is actual traffic traversing vmnic2.
With the physical switchport connected to vmnic2 disabled, you can see that en01.eth1 is still able to send and receive network traffic, but that the hypervisor has switched the vnic over to vmnic3 (as defined in the NSX-T Named Teaming Policy, which uses the other NIC as standby).
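For reference, “disabling the switchport” was simply an administrative shutdown on the physical switch; on a Cisco-style switch (the interface name is just an example from my lab) that looks like this:

```
! Cisco-style example: administratively disable the port connected to vmnic2.
configure terminal
 interface GigabitEthernet1/0/2
  shutdown
! Later the port is re-enabled with "no shutdown".
```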
After enabling the switchport again, you can see that the hypervisor switches back to the original vmnic2 for en01.eth1 and that traffic is still flowing.
During the switchover back to vmnic2 there was some ping loss.
The use of ‘standby uplinks’ in NSX-T is preemptive: whenever a NIC comes back up, traffic moves back to the initially defined active interface. This also means that with this configuration you can’t protect yourself against flapping interfaces.
*NOTE: When using standby interfaces for your uplink VLANs, make sure both VLANs are tagged on both interfaces.
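In other words, on the physical side both switchports need to carry both uplink VLANs, otherwise the standby path has nowhere to go. A Cisco-style example, with interface names and VLAN IDs as placeholders:

```
! Trunk BOTH uplink VLANs on BOTH ports (the ones connected to vmnic2 and vmnic3).
interface GigabitEthernet1/0/2
 switchport mode trunk
 switchport trunk allowed vlan 100,200
!
interface GigabitEthernet1/0/3
 switchport mode trunk
 switchport trunk allowed vlan 100,200
```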
sources:
vmware.com
rutgerblom.com
versions:
vSphere 6.7 U1
NSX-T 2.5
If you have any questions or remarks, feel free to reach out!