Life of Interrupts: Remapping

Introduction


In early versions of Hyper-V, we started HV after the NT kernel in the host had already loaded. At that point the APIC had already been initialized by NT and was serving interrupts to the NT kernel. Before starting HV, we disabled interrupts, copied the APIC state to the virtual APIC, and then started the hypervisor. From that point onwards, HV owned the physical APIC, and the NT kernel in the host OS managed only the virtual APIC state. HV therefore needed to provide an identity mapping for interrupts, because NT had already programmed various devices to generate specific interrupt vectors and HV had no way to reprogram those devices. So even though the hypervisor intercepted an interrupt, it simply delivered it to NT. For its own use, HV relied on NT to reserve interrupt vectors, or it used NMIs; for example, we used NMIs for inter-processor synchronization. HV kept track of whether it had requested an NMI, and if a phantom NMI was received, it was delivered to the NT kernel for processing. This ensured that if an NMI was delivered due to a fault or other critical event that did not belong to HV, NT received the NMI and handled it correctly.
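To make the NMI bookkeeping concrete, here is a minimal, hypothetical C sketch of the idea; it is not the actual HV code, and the structure and function names are assumptions made for illustration.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative per-processor state; the real HV structures differ. */
typedef struct {
    atomic_bool nmi_requested_by_hv;   /* set when HV sends itself an NMI */
} per_cpu_state_t;

/* Called by HV right before it sends an NMI IPI for its own use,
 * e.g. inter-processor synchronization. */
static void hv_request_nmi(per_cpu_state_t *cpu)
{
    atomic_store(&cpu->nmi_requested_by_hv, true);
    /* ... program the physical APIC ICR to deliver the NMI ... */
}

/* Hypothetical NMI intercept handler. Returns true if HV consumed it. */
static bool hv_on_nmi(per_cpu_state_t *cpu)
{
    /* If HV asked for this NMI, handle it internally. */
    if (atomic_exchange(&cpu->nmi_requested_by_hv, false)) {
        /* ... perform the HV-side synchronization work ... */
        return true;
    }

    /* Phantom NMI: not requested by HV, so reflect it into the root
     * partition's virtual APIC so NT can handle the critical event. */
    /* inject_nmi_into_root_partition(); */
    return false;
}
```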


Windows Network Controller Architecture

Overview


Windows Network Controller (WNC) is an SDN controller built for the next version of Windows Server. It is designed as a scalable and highly reliable distributed application that programs the physical and virtual elements of a datacenter. The north star of WNC is to provide autonomous datacenter network management, such that human intervention is needed only when there is a hardware failure.


Dynamic VMQ

After working in the hypervisor team for a few years, during the Windows 8 time frame I decided to move back to networking as a lead, to drive the increased investments in that area. We built a lot of features, such as SR-IOV support, Dynamic VMQ, the Extensible Virtual Switch, etc.

In this post I will talk about a feature we built called dynamic VMQ, designed to provide optimal processor utilization across changing workloads, which was not possible with the static processor allocation for VMQ done in Windows 7 (the Windows Server 2008 R2 release). However, before we dig deeper into dynamic VMQ, let me recap processor utilization for the no-VMQ and static VMQ cases.
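As a purely conceptual illustration of what "dynamic" implies here (this is not the Windows implementation; the names, thresholds, and rebalancing policy are assumptions), the sketch below periodically moves a queue's processing to a less loaded processor when its current processor saturates, whereas static VMQ keeps the initial assignment forever.

```c
#include <stddef.h>

#define NUM_CPUS   8     /* illustrative counts */
#define NUM_QUEUES 16

/* Illustrative state: per-CPU utilization (0-100) and the CPU
 * currently assigned to process each VMQ. */
static unsigned cpu_load[NUM_CPUS];
static unsigned queue_cpu[NUM_QUEUES];

static unsigned least_loaded_cpu(void)
{
    unsigned best = 0;
    for (unsigned c = 1; c < NUM_CPUS; c++)
        if (cpu_load[c] < cpu_load[best])
            best = c;
    return best;
}

/* Hypothetical periodic rebalance: if the CPU serving a queue is
 * saturated, retarget that queue's receive processing to a lighter
 * CPU. Static VMQ would simply keep the initial queue_cpu[] mapping. */
static void rebalance_vmq(unsigned saturation_threshold)
{
    for (unsigned q = 0; q < NUM_QUEUES; q++) {
        if (cpu_load[queue_cpu[q]] > saturation_threshold) {
            unsigned target = least_loaded_cpu();
            if (target != queue_cpu[q]) {
                queue_cpu[q] = target;
                /* ... ask the NIC/miniport to retarget the queue's
                 * interrupt/DPC processing to 'target' ... */
            }
        }
    }
}
```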


Virtual Machine Queue (VMQ)

In my last post, I talked about how various NIC offloads are supported in VMSWITCH to provide high-performance network device virtualization. In this post, I will talk about another networking performance technique called virtual machine queues (or VMQ).

Background

In the Windows networking stack, a feature called RSS (receive side scaling) is used to utilize multiple processors in a machine. This feature was co-developed by Microsoft working with hardware partners. It provides two main capabilities:

  • It allows incoming traffic to be placed on different queues that get processed on different processors, based on TCP/UDP stream information, i.e. source and destination IPs and ports (a simplified hashing sketch follows this list).
  • It allows sent traffic to be placed on specific queues, and completions for sent traffic to be handled on a specific processor, based on the same TCP/UDP stream information.
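To make the stream-to-processor mapping concrete, here is a simplified C sketch. Real RSS uses a Toeplitz hash with a per-NIC secret key and an indirection table; the toy hash and queue count below are assumptions for illustration only, showing how the 4-tuple deterministically selects a queue and hence a processor.

```c
#include <stdint.h>

#define NUM_QUEUES 8   /* illustrative; real NICs expose their own queue count */

/* Toy 4-tuple hash: NOT the Toeplitz hash real RSS uses, just an
 * illustration that the same TCP/UDP stream always lands on the
 * same queue (and therefore the same processor). */
static uint32_t stream_hash(uint32_t src_ip, uint32_t dst_ip,
                            uint16_t src_port, uint16_t dst_port)
{
    uint32_t h = 2166136261u;                 /* FNV-1a style mixing */
    uint32_t words[3] = { src_ip, dst_ip,
                          ((uint32_t)src_port << 16) | dst_port };
    for (int i = 0; i < 3; i++) {
        h ^= words[i];
        h *= 16777619u;
    }
    return h;
}

static unsigned select_queue(uint32_t src_ip, uint32_t dst_ip,
                             uint16_t src_port, uint16_t dst_port)
{
    /* Real RSS maps the hash through an indirection table; a simple
     * modulo is enough to show the idea. */
    return stream_hash(src_ip, dst_ip, src_port, dst_port) % NUM_QUEUES;
}
```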



Virtual Switch Performance using Offloads

In my last post, I talked about the architecture of Hyper-V Virtual Switch (VMSWITCH), which powers some of the largest data centers in the world, including but not limited to Windows Azure. In this post I will talk about how it is able to meet the networking performance requirements of the demanding workloads that run in these data centers.

VMSWITCH provides an extremely high-performance packet processing pipeline by using various techniques such as a lock-free data path, pre-allocated memory buffers, batch packet processing, etc. In addition, it leverages the packet processing offloads provided by the underlying physical NIC hardware. These offloads perform some of the packet processing in NIC hardware, thereby reducing overall CPU usage and providing high-performance networking. If you are unfamiliar with NIC offloads, you may want to first read about them here and here.
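As a hedged illustration of two of those techniques, pre-allocated buffers and batch packet processing, here is a minimal C sketch. It is not VMSWITCH code; the names, sizes, and free-list design are assumptions for the example, and locking is ignored for brevity.

```c
#include <stddef.h>

#define POOL_SIZE  256    /* illustrative pool and batch sizes */
#define BUF_BYTES  2048

/* Illustrative packet buffer drawn from a pre-allocated pool, so the
 * hot path never calls a general-purpose allocator per packet. */
typedef struct packet {
    struct packet *next;
    size_t         len;
    unsigned char  data[BUF_BYTES];
} packet_t;

static packet_t  pool[POOL_SIZE];
static packet_t *free_list;

static void pool_init(void)                  /* run once at startup  */
{
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].next = free_list;
        free_list    = &pool[i];
    }
}

static packet_t *pool_alloc(void)            /* O(1), no heap call   */
{
    packet_t *p = free_list;
    if (p)
        free_list = p->next;
    return p;
}

static void pool_free(packet_t *p)           /* O(1), no heap call   */
{
    p->next   = free_list;
    free_list = p;
}

/* Hypothetical forwarding entry point: packets are handed over as a
 * batch, amortizing per-call overhead (table lookups, synchronization,
 * cache warm-up) across many packets instead of paying it per packet. */
static void process_batch(packet_t *batch[], size_t count)
{
    for (size_t i = 0; i < count; i++) {
        /* ... parse headers, apply policy, deliver to the vPort ... */
        pool_free(batch[i]);
    }
}
```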


Architecture of Hyper-V Virtual Switch

Hyper-V Virtual Switch (also referred to as VMSWITCH) is the foundational component of network device virtualization in Hyper-V. It powers some of the largest data centers in the world, including Windows Azure. In this post, I will talk about its high-level architecture.

Standards Compliant Virtual Switch

As I mentioned in my previous post, the main goal in building VMSWITCH was to build a standards-compliant, high-performance virtual switch. There are many ways to interpret "standards compliant" because there are many standards, so to be specific, our goal was to build packet forwarding based on 802.1Q and to mimic physical network semantics (such as link up/down) as much as possible. This was to make sure that whatever works in a physical network works in the virtual environment as well. We defined three main objects in VMSWITCH: vSwitch, vPort, and NIC. A vSwitch is an instance of a virtual switch that provides packet forwarding and various other features provided by a switch, such as QoS, ACLs, etc. A vPort is analogous to a physical switch port and has configuration associated with it for various features. And finally, a NIC object acts as the endpoint connecting to a vPort. This is similar to the physical network, where a host has a physical NIC that connects to a physical port on a physical switch.
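A minimal sketch of how those three objects might relate, purely as an illustration of the model described above; the field names and types are assumptions, not the VMSWITCH definitions.

```c
#include <stdbool.h>

#define MAX_VPORTS 64   /* illustrative limit */

/* Illustrative object model following the description in this post:
 * a vSwitch owns vPorts; each NIC endpoint connects to one vPort. */
typedef struct nic {
    const char   *name;          /* e.g. a VM's NIC or the host's physical NIC */
    struct vport *connected_to;  /* the vPort this endpoint plugs into */
} nic_t;

typedef struct vport {
    unsigned  id;
    bool      link_up;           /* mimics physical link up/down semantics */
    unsigned  vlan_id;           /* 802.1Q-style per-port configuration */
    /* ... QoS, ACL, and other per-port policy would hang off here ... */
    nic_t    *endpoint;          /* the NIC connected to this port, if any */
} vport_t;

typedef struct vswitch {
    const char *name;
    vport_t     ports[MAX_VPORTS];   /* packet forwarding happens between these */
    unsigned    port_count;
} vswitch_t;
```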


Hyper-V Inception

Back in 2003, I joined the team that was venturing into the world of server virtualization, and little did I know that it would take me on a journey that remains as exciting today as it was back then. This was the time when leaders in our team were contemplating building a brand new hypervisor-based virtualization solution. Who knew, back then, that one day it would become the defining feature of our server and cloud solutions? I remember, new to the team, wondering what role I would play in this product, which sounded like rocket science at the time. There were intense architectural meetings, long discussions on finding a code name for the project, and both excitement and nervousness about what I would get to do. Eventually Viridian was born, and I, along with one of my colleagues, Jeffrey, was given the charter to build network device virtualization for Viridian.
