
Thursday, August 28, 2014

What’s New in VMware NSX For vSphere 6.1


NSX for vSphere 6.1 is expected to become available during VMworld US or the week after. During a presentation by Brad Hedlund, an engineering architect at VMware, we had the chance to see all the features being introduced.
There are quite a few of them, and it seems that VMware is also going to come out with NSX for Multi-Hypervisor as well as OpenStack integration. Also interesting in this post is micro-segmentation, a feature of NSX for vSphere 6.1 that protects the inside of the datacenter.
With a product name of NSX for vSphere 6.1, one would think that vSphere 6.1 is already out. In reality, vSphere 6 (or 6.1) hasn't been announced yet; for now there is only the public beta of vSphere 6, which you can join but can't talk about. The only information from the public beta that is out concerns Virtual Volumes (VVols), which encapsulate virtual disks and other virtual machine files and store them natively on the storage system.


DHCP Relay

This feature is new in NSX for vSphere 6.1 and allows integration with external DHCP servers in the physical world. Several DHCP servers can be configured per logical router port.
You can have a distributed router, where a kernel module in every hypervisor provides the default gateway for the VMs as well as routing from one logical switch to another right in the kernel, or routing between logical switches and port groups.
It's possible to set up external DHCP servers to respond to DHCP requests from VMs attached to logical switches or distributed port groups to which the distributed router is connected.

NSX for vSphere 6.1 - DHCP Relay: how it works
It’s possible to define several DHCP servers.
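Just to illustrate, here is a minimal Python sketch of how such a relay configuration could be pushed to a distributed logical router through the NSX Manager REST API. The Manager hostname, credentials, edge ID, addresses and the exact XML fields are assumptions; verify them against the NSX for vSphere API guide before using anything like this.

    # Hypothetical sketch: configure DHCP relay on a distributed logical router
    # via the NSX Manager REST API (endpoint path and payload are assumptions).
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed NSX Manager address
    DLR_ID = "edge-1"                       # assumed distributed logical router ID

    relay_config = """
    <relay>
      <relayServer>
        <ipAddress>10.10.10.5</ipAddress>   <!-- external DHCP server #1 -->
        <ipAddress>10.10.10.6</ipAddress>   <!-- external DHCP server #2 -->
      </relayServer>
      <relayAgents>
        <relayAgent>
          <vnicIndex>10</vnicIndex>         <!-- internal DLR interface -->
          <giAddress>192.168.10.1</giAddress>
        </relayAgent>
      </relayAgents>
    </relay>
    """

    resp = requests.put(
        f"https://{NSX_MANAGER}/api/4.0/edges/{DLR_ID}/dhcp/config/relay",
        data=relay_config,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,  # lab only; use proper certificate validation in production
    )
    resp.raise_for_status()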

Two Stage ECMP Support

Equal-cost multi-path (ECMP) routing support for the distributed logical router and also for NSX Edges. The distributed router can have multiple upstream NSX Edges to egress traffic through, and in the upper layer the NSX Edges can have multiple upstream physical routers to send traffic through, and receive it from, using equal-cost multi-path routing.
vSphere 6.1 and NSX - Two Stage ECMP support
This allows high availability and scale-out.
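For illustration, enabling ECMP is essentially a flag in the routing configuration. The sketch below shows how it could be set through the NSX Manager REST API; the endpoint, edge ID and element names are assumptions based on the NSX-v API, so double-check them in the API guide.

    # Hypothetical sketch: enable ECMP in the global routing configuration of a
    # distributed logical router or NSX Edge (endpoint and schema are assumptions).
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed
    EDGE_ID = "edge-2"                      # assumed DLR or Edge ID

    global_routing = """
    <routingGlobalConfig>
      <routerId>192.168.1.1</routerId>
      <ecmp>true</ecmp>   <!-- allow multiple equal-cost next hops -->
    </routingGlobalConfig>
    """

    resp = requests.put(
        f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/global",
        data=global_routing,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,
    )
    resp.raise_for_status()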

L2 VPN: Enterprise Migration (NSX on both sides)

This feature is an enhancement over the previous version of NSX, where only one VLAN could be trunked. It can be used for migrating workloads between datacenters: you can trunk multiple VLANs or multiple VXLANs from one datacenter to the other.
VMware vSphere 6.1 and NSX - L2 VPN - migration of workloads between datacenters
This function brings functionality similar to Cisco OTV, which provides a "MAC in IP" technique for carrying Layer 2 VLANs over any transport. The L2 VPN trunks an L2 network from one side to the other over an L3 network, encrypted by an SSL VPN tunnel.
Another use case would be, for example, to extend an NSX datacenter to a non-NSX datacenter. See the image below… In this case the remote site is not running NSX. You can deploy an NSX Edge at the remote site to provide the VLAN trunking and extend the L2 networks from the remote site into the NSX deployment in the datacenter.
  • Both sites run an NSX Edge; on the non-NSX side it can be a free standalone Edge
VMware vSphere 6.1 - NSX new features
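As a quick sanity check after setting this up, the stretched networks can be read back from the Edge acting as the L2 VPN endpoint. The Python sketch below assumes an NSX-v style REST endpoint and placeholder credentials; treat the path as an assumption and confirm it in the API guide.

    # Hypothetical sketch: read back the L2 VPN configuration of an NSX Edge to
    # see which trunked sub-interfaces (VLANs/VXLANs) are stretched to the other
    # site. The endpoint path is an assumption based on the NSX-v API.
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed
    EDGE_ID = "edge-3"                      # assumed Edge acting as L2 VPN server

    resp = requests.get(
        f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/l2vpn/config",
        auth=("admin", "password"),
        verify=False,
    )
    resp.raise_for_status()
    print(resp.text)   # XML with server/client settings and stretched interfaces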

NSX for vSphere 6.1 Load Balancing Enhancements

The load balancer can be turned on at the NSX Edge, and you're now able to do UDP load balancing in addition to TCP.
NSX vSphere 6.1 load Balancing Enhancements
Just a few enhancements here.
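To give an idea of what UDP balancing looks like in practice, here is a hedged Python sketch that defines a UDP pool and virtual server on the Edge load balancer (a syslog VIP in this example). The endpoint and XML element names are assumptions based on the NSX-v load balancer API and may differ in your version.

    # Hypothetical sketch: UDP pool and virtual server on the NSX Edge load
    # balancer (endpoint and schema are assumptions; adjust to your environment).
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed
    EDGE_ID = "edge-4"                      # assumed Edge with load balancing enabled

    lb_config = """
    <loadBalancer>
      <enabled>true</enabled>
      <pool>
        <name>syslog-pool</name>
        <algorithm>round-robin</algorithm>
        <member>
          <name>syslog-1</name>
          <ipAddress>192.168.20.11</ipAddress>
          <port>514</port>
        </member>
        <member>
          <name>syslog-2</name>
          <ipAddress>192.168.20.12</ipAddress>
          <port>514</port>
        </member>
      </pool>
      <virtualServer>
        <name>vip-syslog</name>
        <ipAddress>10.0.0.50</ipAddress>
        <protocol>udp</protocol>   <!-- UDP balancing is new in 6.1 -->
        <port>514</port>
        <defaultPoolId>pool-1</defaultPoolId>
      </virtualServer>
    </loadBalancer>
    """

    resp = requests.put(
        f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config",
        data=lb_config,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,
    )
    resp.raise_for_status()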

F5 and NSX

VMware is partnering with ecosystem vendors such as F5 (load balancing) and Palo Alto Networks (next-generation firewall) so their services can be inserted into NSX deployments.
When customers want to deploy a load balancer for their application, they have the choice between the load balancer built into NSX and an F5 load balancer.
NSX for vSphere 6.1 + F5 Palo Alto Networks
Concerning the deployment within vCenter, the admin will be able to check a box to allow service insertion, which enables the F5 integration.

Firewall Enhancements in NSX

  • Firewall Reject action (not only allow or deny; see the sketch after this list)
  • Troubleshooting and monitoring
  • Advanced filtering of rules (you can filter to find rules)
  • CPU/Memory thresholds (if CPU thresholds are reached, the admin gets notified)
  • IPFIX support in DFW (distributed firewall)
  • Combined Edge and DFW management (a single place to manage rules for the distributed firewall, the Edge, or both)
  • Network-oriented service insertion (NetSec partner redirection)
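The sketch below illustrates the Reject action from the first bullet: a Python snippet that adds a rule to an existing distributed firewall section through the NSX Manager API. The section ID, endpoint and XML schema are assumptions; the sketch also fetches the section first so it can pass its ETag back in an If-Match header, which the DFW API uses for optimistic locking.

    # Hypothetical sketch: add a DFW rule with the new "reject" action, which
    # answers with a RST/ICMP unreachable instead of silently dropping traffic.
    # Endpoint, section ID and XML schema are assumptions based on the NSX-v API.
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed
    SECTION_ID = "1003"                     # assumed layer-3 section ID
    AUTH = ("admin", "password")
    BASE = f"https://{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config"

    # Fetch the section first to obtain its ETag (generation number).
    section = requests.get(f"{BASE}/layer3sections/{SECTION_ID}",
                           auth=AUTH, verify=False)
    section.raise_for_status()
    etag = section.headers.get("ETag", "")

    reject_rule = """
    <rule disabled="false" logged="true">
      <name>reject-telnet</name>
      <action>reject</action>
      <services>
        <service>
          <protocol>6</protocol>
          <destinationPort>23</destinationPort>
        </service>
      </services>
    </rule>
    """

    resp = requests.post(f"{BASE}/layer3sections/{SECTION_ID}/rules",
                         data=reject_rule,
                         headers={"Content-Type": "application/xml",
                                  "If-Match": etag},
                         auth=AUTH, verify=False)
    resp.raise_for_status()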

NSX Multi-Hypervisor 4.2

  • NSX Multi-Hypervisor is a minor release
  • Controller HA / hitless upgrade
  • DHCP Relay
  • OVS performance enhancements
  • Security profile scale enhancements
  • Scale Targets unchanged
  • Upgrade from any 4.1x release is supported
  • Heartbleed issue fixed in the 4.2 release
  • GA in Q3 2014

Micro Segmentation

Apparently micro-segmentation is the feature that actually sells a lot of NSX, and clients are buying NSX to use this particular feature in their datacenters. In a traditional datacenter, a single firewall protects the whole environment, which becomes a problem if someone breaks in: once inside, the attacker can do whatever they want, because only that single firewall is protecting everything.
A typical datacenter has two firewalls, but the number of VMs counts in the hundreds or thousands.

VMware NSX for vSphere 6.1 Micro Segmentation
That's why micro-segmentation (isolation) provides the best results. A firewall inside each VM does not really help.
A physical firewall per workload is not cost-effective (too expensive).

Micro Segmentation challenges
The solution is to provide firewall services through the hypervisor's kernel module. The distributed firewall kernel module protects VMs: when a VM is created, a firewall policy is attached to it, and if the VM is moved to another host, the policy follows.
If the VM is deleted, the policy gets deleted as well. It's not a VLAN-centric security deployment; instead you create security groups, which can be static or dynamic, attach VMs to those groups, and apply the policy to the group. This simplifies the topology.
Achieving Micro Segmentation with NSX
It works by identifying workloads, using attributes to create security groups, and then applying policies to those security groups. Here is another screenshot to illustrate.
VMware NSX for vSphere 6.1 configuration of policy with security groups
Micro-segmentation provides better security inside the datacenter.
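As a small illustration of the "attributes to security groups" idea, the Python sketch below creates a dynamic security group whose membership is driven by a VM-name attribute; a firewall policy can then be applied to the group instead of to individual VMs. The endpoint, schema and criteria names are assumptions based on the NSX-v grouping API.

    # Hypothetical sketch: create a dynamic security group (membership based on
    # VM name) via the NSX Manager API. Endpoint and schema are assumptions.
    import requests

    NSX_MANAGER = "nsx-manager.lab.local"   # assumed

    security_group = """
    <securitygroup>
      <name>sg-web-tier</name>
      <description>All web front-end VMs</description>
      <dynamicMemberDefinition>
        <dynamicSet>
          <operator>OR</operator>
          <dynamicCriteria>
            <operator>OR</operator>
            <key>VM.NAME</key>
            <criteria>contains</criteria>
            <value>web-</value>   <!-- any VM whose name contains "web-" joins -->
          </dynamicCriteria>
        </dynamicSet>
      </dynamicMemberDefinition>
    </securitygroup>
    """

    resp = requests.post(
        f"https://{NSX_MANAGER}/api/2.0/services/securitygroup/bulk/globalroot-0",
        data=security_group,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,
    )
    resp.raise_for_status()
    print("New security group ID:", resp.text)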

Example 1:

Micro Segmentation Use Case 1

Example 2:

Segmentation between tenants

vMotion Enhancements for vSphere 6.0


  • vMotion across vCenter Servers (VCs)
  • vMotion across virtual switches: Virtual Standard Switch (VSS), Virtual Distributed Switches (VDSs)
  • vMotion using routed vMotion networks
  • Long-distance vMotion for use cases such as:
    • Permanent migrations (often for Datacenter expansions, acquisitions, consolidations)
    • Disaster avoidance
    • SRM and disaster avoidance testing
    • Multi-site capacity utilization
    • Follow-the-sun scenarios
    • Onboarding onto vSphere-based public clouds (including VMware vCloud Air)

  • vMotion Across Virtual Switches

    You're no longer restricted to the network created on a single switch: vMotion now works across virtual switches (standard switch or VDS). It transfers all the metadata of the VDS ports (port groups etc.) with the VM during the vMotion process. It's transparent to the VMs (they are not aware they're being moved), so there is no downtime for applications.

    Requirements:
    • L2 VM network connectivity
    It’s possible to move VMs:
    • from VSS to VSS
    • from VSS to VDS
    • from VDS to VDS

    vMotion Across vCenters

    This allows changing compute, storage, networks and management in a single operation: you can move a VM from vCenter 1, where it runs on a certain host, sits on some datastore and belongs to some resource pool, to vCenter 2, where it ends up on a different host, a different datastore and a different resource pool.
    As vSphere engineers, we're usually limited to vMotion domains bounded by a vCenter Server construct (or, more specifically, by the datacenter in many cases due to network configurations). vSphere 6 will allow VMs to vMotion across datacenter and vCenter boundaries using a new workflow. You'll also be able to take advantage of a workflow that lets you hop from one network (source) to another (destination), eliminating the need for a single vSwitch construct spanning the two locations.
    vMotion Diagram
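    For the scripting-minded, here is a hedged pyVmomi sketch of what such a move could look like: one RelocateSpec carries the new host, datastore, resource pool and, via a ServiceLocator, the destination vCenter (the same spec can also carry a new network backing for the cross-virtual-switch case). Hostnames, credentials, the thumbprint and object names are placeholders, and the lookup helper is simplified; this is a sketch under vSphere 6.0 assumptions, not a production script.

        # Hypothetical pyVmomi sketch of a cross-vCenter vMotion (vSphere 6.0+).
        # All names, credentials and the SSL thumbprint are placeholders.
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # lab only

        def find_by_name(si, vimtype, name):
            """Return the first managed object of the given type with that name."""
            view = si.content.viewManager.CreateContainerView(
                si.content.rootFolder, [vimtype], True)
            return next(obj for obj in view.view if obj.name == name)

        # Connect to both vCenter Servers.
        si_src = SmartConnect(host="vc1.lab.local", user="administrator@vsphere.local",
                              pwd="***", sslContext=ctx)
        si_dst = SmartConnect(host="vc2.lab.local", user="administrator@vsphere.local",
                              pwd="***", sslContext=ctx)

        vm = find_by_name(si_src, vim.VirtualMachine, "web-01")
        dest_host = find_by_name(si_dst, vim.HostSystem, "esx10.lab.local")
        dest_pool = find_by_name(si_dst, vim.ResourcePool, "Resources")
        dest_ds = find_by_name(si_dst, vim.Datastore, "datastore2")

        # The ServiceLocator tells the source vCenter how to reach the destination one.
        service = vim.ServiceLocator(
            instanceUuid=si_dst.content.about.instanceUuid,
            url="https://vc2.lab.local",
            sslThumbprint="AA:BB:CC:...",        # destination vCenter SSL thumbprint
            credential=vim.ServiceLocator.NamePassword(
                username="administrator@vsphere.local", password="***"),
        )

        # One RelocateSpec changes host, datastore, resource pool and vCenter at once.
        spec = vim.vm.RelocateSpec(service=service, host=dest_host,
                                   pool=dest_pool, datastore=dest_ds)
        task = vm.RelocateVM_Task(spec=spec,
                                  priority=vim.VirtualMachine.MovePriority.defaultPriority)
        print("Started cross-vCenter vMotion task:", task.info.key)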

    Long-Distance vMotion

    This type of vMotion can move VMs from your datacenter into a cloud datacenter. This makes me think about when to use this kind of vMotion, and why.

    vMotion using routed networks

    vSphere 6.0 will allow using routed networks for vMotion, which currently isn't supported.
    When to use long-distance vMotion?
    Some of the use cases would be…
    • Disaster Avoidance
    • Permanent Migrations
    • SRM/DA testing
    • Multi-site Load balancing

    Routed vMotion and Increased RTT Tolerance

    And while you can file an RFQ (request for qualification) to use Layer 3 for vMotion, most of us are limited to (or comfortable with) Layer 2 vMotion domains. Essentially, this means one large subnet and VLAN stretched between compute nodes for migrating workloads. An upcoming feature will allow VMs to vMotion over routed vMotion networks without the need for special qualification. In addition, another useful planned feature will revolve around the ability to vMotion or clone powered-off VMs over NFC networks.
    And finally, the supported latency for vMotion is being increased tenfold. Enterprise Plus customers today can tolerate vMotion RTTs (round-trip times) of 10 ms or less; in the new release, vMotion can withstand an RTT of 100 ms.

    VLAN handling in virtual switches

     
    There are three modes of handling VLANs in vSwitches on ESXi.
     
    • EST (External Switch Tagging)
    • VST (Virtual Switch Tagging)
    • VGT (Virtual Guest Tagging)
    EST (External Switch Tagging)
     
    In this method the physical switch port is configured as an access port and no VLAN is configured on the virtual port group; the physical switch handles the VLAN tagging, so the vSwitch receives untagged traffic. The downside of this method is that it consumes a lot of NICs if you want to use many different VLANs.
     
     
    VST (Virtual Switch Tagging)
     
    This is the most common, popular and recommended method. Virtual port groups are configured with a VLAN ID. For this design to work, the connected physical switch port should be configured as a trunk port carrying either one or multiple VLANs. Tagged traffic is sent down to the vSwitch, which forwards it to the relevant port group after stripping the VLAN tag; the tag is added again when traffic leaves the vSwitch through the uplink port. Only a few CPU cycles are involved in using this technique.
     
     


    VGT (Virtual Guest Tagging)

    Configuration at the physical switch is the same as for VST: the switch port should be configured as a trunk. The actual VLAN is configured on the VM in the virtual NIC settings, and VLAN 4095 is configured on the virtual port group. (VLAN 4095 passes traffic from all VLANs and is generally used for monitoring or sniffing traffic.)
     
    Inside the guest, the VLAN ID option is available with the vmxnet3 Ethernet adapter only.
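    To tie the three modes together, the hedged pyVmomi sketch below creates a standard vSwitch port group where the vlanId value decides the mode: 0 for EST, a specific ID (100 here) for VST, or 4095 for VGT. The vCenter name, host selection and port group/vSwitch names are placeholders.

        # Hypothetical pyVmomi sketch: the VLAN ID on a standard vSwitch port group
        # selects the tagging mode (0 = EST, 1-4094 = VST, 4095 = VGT/trunk).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # lab only
        si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                          pwd="***", sslContext=ctx)

        # Take the first host just for the sake of the example.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]

        spec = vim.host.PortGroup.Specification(
            name="VM-VLAN100",
            vswitchName="vSwitch0",
            vlanId=100,       # VST: the vSwitch strips/adds the 802.1Q tag
            # vlanId=0     -> EST: tagging handled entirely by the physical switch
            # vlanId=4095  -> VGT: tags are passed through to the guest OS
            policy=vim.host.NetworkPolicy(),
        )
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)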