Network virtualization is certainly one of the big abstractions introduced by Windows Server 2012 and System Center 2012 Service Pack 1 Virtual Machine Manager (VMM). If you are involved, like I am, in private and hybrid cloud deployments, network virtualization is one of the key strategic decisions to be taken. This article is a collection of notes from the field and useful links, primarily thought of as my own reference book.
Currently I’m not working with hosters, so these notes will only incidentally take network isolation into account. In any case, one thing is sure: network virtualization is badly needed for a private cloud deployment; this technology is not only for hosters. In fact, thanks to network virtualization it is possible to manage network connections in a standardized way and optimize physical network utilization.
The key terms for Network Virtualization in VMM in the order they should be defined are:
Logical Network – abstraction of the underlying network. Logical networks are “network roles” such as “production”, “dmz”, “management”, and so on. A VM is connected to a Logical Network via a VM Network reference and a Logical Switch.
Network Site – a location-specific implementation of a Logical Network. A Logical Network is typically implemented in multiple sites, so logical networks contain multiple sites; each site has its own associated subnets (and VLANs). Network sites are typically scoped to specific host groups.
IP Address Pool – a range of IP addresses associated with a network site (MAC address pools work the same way). At provisioning time a VM can allocate an IP address from the pool of the network site it is connected to.
Native Uplink Port Profile – defines the characteristics of the physical connection between a logical switch and the physical network; it’s an “uplink” in networking terms. It defines the teaming behavior, if any, and which logical networks are reachable by means of the port.
Native Virtual Network Adapter Port Profile – defines the virtual network adapter behavior (offloading, security, bandwidth). The profile applies both to VM virtual NICs *and* host virtual NICs.
Port Classification – a friendly name for a Virtual Port. It is commonly associated with a Native Virtual Network Adapter Port Profile during the definition of Virtual Ports in Logical Switches.
Logical Switch – defines a switch (it’s the *new* virtual switch). It should be thought of like a network switch: it defines Uplink Ports and Virtual Ports, it defines whether teaming is supported, and it can support extensions such as filtering or capture. Uplink ports are the connections towards the physical network (via a Network Site and, as a consequence, a Logical Network). Uplink ports are defined through Native Uplink Port Profiles; Virtual Ports are defined through Port Classifications and Native Virtual Network Adapter Port Profiles.
Virtual Machine Network – the abstraction for the network the VMs connect to. It can be a standard network or an isolated network; in both cases it refers to a Logical Network. Isolated networks are commonly used in hosting scenarios, where multiple IP subnets in the same IP address space need to coexist in an isolated manner.
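The chain of objects above can be sketched with the VMM PowerShell cmdlets. This is a minimal sketch, not a production script: it assumes the VMM console module is loaded with an open connection to the VMM server, and all names, subnets and address ranges are illustrative.

```powershell
# Assumes a connection to the VMM server; names, subnets and ranges are illustrative.

# 1. Logical Network
$ln = New-SCLogicalNetwork -Name "Production"

# 2. Network Site (logical network definition), scoped to a host group,
#    with its subnet/VLAN pair
$hostGroup  = Get-SCVMHostGroup -Name "All Hosts"
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
$site = New-SCLogicalNetworkDefinition -Name "Production - Milan" `
    -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# 3. IP Address Pool associated with the site
New-SCStaticIPAddressPool -Name "Production Pool" `
    -LogicalNetworkDefinition $site -Subnet "192.168.10.0/24" `
    -IPAddressRangeStart "192.168.10.50" -IPAddressRangeEnd "192.168.10.99"

# 4. VM Network bound to the Logical Network (standard, i.e. no isolation)
New-SCVMNetwork -Name "Production VM Network" -LogicalNetwork $ln -IsolationType NoIsolation
```

Uplink port profiles, port classifications and the logical switch itself are created the same way (New-SCNativeUplinkPortProfile, New-SCPortClassification, New-SCLogicalSwitch) or, more commonly, through the VMM console wizards.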
Let’s start with a relationship diagram between all the physical and logical entities involved in network virtualization with VMM.
I designed this reference schema to try to give a logical order to network virtualization; from the schema we can deduce a few clear rules:
– Every VM can have one or more Virtual Network Adapters
– Every Virtual Network Adapter is bound to a VM Network and to a Logical Switch with an optional Port Classification
o The VM Network can be a standard or an isolated one, in the latter case an isolated subnet needs to be chosen
o The Logical Switch sets which network sites are reachable through the uplink port profile
o The Port Classification sets the properties of the Virtual Network Adapter and is optional
– Every VM Network is bound to a Logical Network
– Every Logical Network can have multiple sites, which in turn can have multiple subnets and IP Address Pools. Sites can be bound to specific host groups.
– Logical Switches need to be defined at the host level; they set which physical NICs are used and, as a consequence, which Network Sites and Logical Networks are reachable from the Logical Switch. Multiple NICs can be teamed if the Logical Switch is configured to support teaming and the NICs are bound to the same uplink profile.
– To use network convergence, i.e. to be able to use one network channel for multiple distinct purposes, Virtual Network Adapters need to be added to the Logical Switch on the host; every Virtual Network Adapter is shown as a virtual NIC at the host level. Since every Virtual Network Adapter can have its own classification, network traffic and bandwidth can be different for each Virtual Network Adapter. Important: Virtual Network Adapters created for the Logical Switches at the host level are used for exposing different networks to the host; these adapters are not used by VMs.
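Under the hood, the host-level Virtual Network Adapters that a Logical Switch creates correspond to Hyper-V management-OS vNICs. A rough equivalent with the plain Hyper-V cmdlets looks like the sketch below; on a VMM-managed host these objects are created by VMM, not by hand, and all adapter, switch and VLAN values here are illustrative.

```powershell
# Illustrative only: on a VMM-managed host, VMM creates these objects.
# External switch with weight-based QoS; the management OS gets no default vNIC.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Host vNICs for management and live-migration traffic on the same switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Per-vNIC VLAN and bandwidth weight: the equivalent of a port classification
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```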
A VM can be connected only to a Virtual Machine Network for which the host defines a Logical Switch with an uplink port to a Network Site contained in the Logical Network referenced by that Virtual Machine Network. OK, just follow the green arrows.
Lessons learned and FAQ
Every physical NIC can only be associated with one Logical Switch. 1 NIC : 1 Logical Switch.
Every Logical Switch can use multiple NICs (teamed or not): 1 Logical Switch : n NICs.
As a best practice, define at least one logical switch for each different physical connection needed. For example, if I want a host and the hosted VMs to communicate using three different NICs without teaming (production, management, dmz), I would define three different logical switches. The logical switch definition is also determined by the physical NIC configuration on the hosts.
Once a Logical Switch is used by a VM or a host, its properties cannot be changed.
The IP address pools associated with Network Sites are used only when a new VM is provisioned starting from a VM Template or a Service Template.
Currently it is not possible to have overlapping IP address ranges between the provider and the customer.
Currently there’s no way to hide from VMs a logical switch used for converged networks at the host level.
Network teams must be configured and used via VMM, not via the OS.
Migrating from legacy virtual switches to logical switches and network virtualization
While for greenfield deployments it’s only a matter of proper design, in the case of an existing virtualization infrastructure things are a little more complicated. I’m assuming a Hyper-V 2008 R2 to Hyper-V 2012 migration.
I didn’t find a way to migrate from the legacy model (virtual-switch based) to the network virtualization model (logical-switch based) without disrupting service operations: at the very least, VMs need to be stopped before the virtual network adapter can be reconfigured to use a logical switch. Two migration models are possible:
– Side by side or leap frog – using a spare brand new Windows Server 2012 server
– In place using the existing hosts
The first model is very similar to a greenfield deployment: once the virtualized network infrastructure has been defined in VMM, a WS2012 host is prepared and configured. The VMs are migrated to the new host using storage migration (typically with a downtime of a few dozen seconds). Once the first legacy host is evacuated, it can be upgraded and integrated into network virtualization. VMs can then be moved back without downtime using shared-nothing Live Migration. The process is repeated for every host to be upgraded. The same process can be applied to failover clusters.
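The move-back step can be sketched with the Hyper-V Move-VM cmdlet (VMM can drive the same operation with Move-SCVirtualMachine). VM, host and path names below are illustrative.

```powershell
# Shared-nothing live migration of a VM back to an upgraded WS2012 host.
# Names and paths are illustrative; the hosts must be configured for live
# migration (e.g. constrained delegation or CredSSP) for this to succeed.
Move-VM -Name "WebServer01" -DestinationHost "HV12-HOST01" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\WebServer01"
```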
The in-place model requires downtime. If there’s spare capacity in the data center, the situation can be brought back to the side-by-side model; if this is not an option, the risk goes up and the procedure is basically the following:
– Upgrade the host to WS2012 (in-place upgrade is supported). Downtime.
– Free up a NIC, for example by removing a NIC from an existing team, and build the first Logical Switch.
– Using PowerShell, rebind the VMs to the new switch. At this point the NIC(s) used by the legacy switch can be recycled. Downtime: to rebind the virtual network adapter from the virtual switch to the logical switch, the VMs need to be turned off.
– Repeat the process for every existing Virtual Switch.
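The rebind step above can be sketched like this. It is a minimal sketch under stated assumptions: VM, switch, VM Network and classification names are illustrative, the VM must be off, and the VMM part assumes an open connection to the VMM server.

```powershell
# Illustrative rebind of a VM's adapter from a legacy virtual switch to the
# new Hyper-V switch backing the logical switch. The VM must be stopped first.
Stop-VM -Name "WebServer01"
Connect-VMNetworkAdapter -VMName "WebServer01" -SwitchName "LogicalSwitch-Production"
Start-VM -Name "WebServer01"

# From the VMM side, the adapter can then be bound to a VM Network
# and a port classification:
$vm             = Get-SCVirtualMachine -Name "WebServer01"
$vmNetwork      = Get-SCVMNetwork -Name "Production VM Network"
$classification = Get-SCPortClassification -Name "Medium bandwidth"
Get-SCVirtualNetworkAdapter -VM $vm |
    Set-SCVirtualNetworkAdapter -VMNetwork $vmNetwork -PortClassification $classification
```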
If you want to migrate an existing dedicated management NIC to a converged network model, you cannot use a teaming Logical Switch and migrate the existing NIC configuration: when the team is created by VMM, the DNS configuration is lost, the new network is identified as public, and basically any communication with the host is lost, thus leaving the host only partially configured.