Abstract

In recent years, there has been rapid growth in the adoption of virtual machine technology in data centers and cluster environments. This trend towards server virtualization is driven by two main factors: the hardware cost savings that virtualization makes possible, and the increased flexibility it offers for managing hardware resources in a cluster environment. An important consequence of server virtualization, however, is its negative impact on the networking performance of server applications running in virtual machines (VMs). In this thesis, we address the problem of efficiently virtualizing the network interface in Type-II virtual machine monitors (VMMs). In the Type-II architecture, the VMM relies on a special 'host' operating system to provide the device drivers for I/O devices, and executes those drivers within the host operating system. Using the Xen VMM as an example of this architecture, we identify fundamental performance bottlenecks in the network virtualization architecture of Type-II VMMs. We show that locating the device drivers in a separate host VM is the primary cause of performance degradation in Type-II VMMs, for two reasons: a) the switching between the guest and the host VM required to invoke the device driver, and b) the I/O virtualization operations required to transfer packets between the guest and host address spaces. We present a detailed analysis of the virtualization overheads in the Type-II I/O architecture, followed by three solutions that explore the performance achievable when network virtualization is performed at three different levels: in the host OS, in the VMM, and in the NIC hardware.

Our first solution is a set of packet aggregation optimizations that explores the performance achievable while retaining the Type-II I/O architecture of the Xen VMM. This solution keeps the core functionality of I/O virtualization, including device driver execution, in the Xen 'driver domain'. With these optimizations, we improve the networking performance of Xen guest domains by a factor of two to four.

In our second solution, we move the task of I/O virtualization and device driver execution from the host OS into the Xen hypervisor. We propose a new I/O virtualization architecture, called the TwinDrivers framework, which combines the performance advantages of Type-I VMMs with the safety and software engineering benefits of Type-II VMMs. (In a Type-I VMM, the device driver executes directly in the hypervisor, which gives much better performance than a Type-II VMM.) The TwinDrivers architecture yields another factor-of-two improvement in the networking performance of Xen guest domains.

Finally, our third solution is a hardware-based approach to network virtualization, in which we move the task of network virtualization onto the network interface card (NIC). We develop a specialized network interface (CDNA) that allows guest operating systems running in VMs to directly access a private, virtual context on the NIC for network I/O, bypassing the host OS entirely. This approach yields performance benefits similar to those of the software-only TwinDrivers approach.

Overall, our solutions significantly narrow the gap between network performance in a virtualized environment and in a native environment, eventually achieving network performance in a virtual machine within 70% of the native performance.
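
The following is a minimal, self-contained sketch, not code from the thesis, that illustrates the intuition behind the packet aggregation optimizations described above: fixed per-transfer costs on the guest/driver-domain boundary (notifications, grant operations) are paid once per batch of packets rather than once per packet. All identifiers and constants are hypothetical.

```c
/*
 * Illustrative sketch only: aggregate several packets into one transfer
 * unit before crossing the guest/driver-domain boundary, so the fixed
 * per-transfer overhead is amortized over many packets.
 * Names and constants are hypothetical, not taken from the Xen sources.
 */
#include <stdio.h>
#include <string.h>

#define MTU          1500   /* per-packet payload size (bytes)   */
#define AGG_MAX_PKTS 32     /* packets aggregated per transfer   */

struct agg_unit {
    int    npkts;
    size_t len;
    char   data[AGG_MAX_PKTS * MTU];
};

static int transfers;       /* counts boundary crossings */

/* Stand-in for handing a buffer to the driver domain over the I/O channel. */
static void transfer_to_driver_domain(struct agg_unit *u)
{
    transfers++;            /* one notification + grant setup per call */
    u->npkts = 0;
    u->len   = 0;
}

/* Queue one packet; flush the aggregation unit only when it is full. */
static void send_packet(struct agg_unit *u, const char *pkt, size_t len)
{
    if (u->npkts == AGG_MAX_PKTS)
        transfer_to_driver_domain(u);
    memcpy(u->data + u->len, pkt, len);
    u->len += len;
    u->npkts++;
}

int main(void)
{
    struct agg_unit u = { 0 };
    char pkt[MTU] = { 0 };

    for (int i = 0; i < 1000; i++)
        send_packet(&u, pkt, sizeof pkt);
    if (u.npkts > 0)
        transfer_to_driver_domain(&u);   /* flush the partial tail */

    printf("1000 packets sent in %d boundary crossings "
           "(vs. 1000 without aggregation)\n", transfers);
    return 0;
}
```

In Xen terms, the analogous batching would occur on the I/O channel between the guest domain and the driver domain; the sketch only shows why the per-packet share of the transfer overhead drops as the aggregation factor grows.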
