VMware ESX Server

VMware ESX Server
Developed by: VMware, Inc.
Latest release: 3.5 Update 4 (build 153875) / March 30, 2009[1]
Platform: x86-compatible
Type: Virtual machine
License: Proprietary
Website: VMware ESX Server

VMware ESX Server is an enterprise-level virtualization product offered by VMware, Inc. ESX Server is a component of VMware's larger offering, VMware Infrastructure, which adds management and reliability services to the core server product. The basic server requires some form of persistent storage (typically an array of hard disk drives) for storing the virtualization kernel and support files. A variant of this design, called ESXi Embedded, does away with this requirement by moving the server kernel into a dedicated hardware device. Both variants support the services offered by VMware Infrastructure.[2]

Technical description

Terms and wording

VMware, Inc. refers to the hypervisor used by VMware ESX Server as "vmkernel".

Architecture

VMware states that the ESX Server product runs on "bare metal".[3] In contrast to other VMware products, it does not run atop a third-party operating system,[4] but instead includes its own kernel. Up through the current ESX version 3.5, a Linux kernel is started first[5] and is used to load a variety of specialized virtualization components, including VMware's 'vmkernel' component. This previously booted Linux kernel then becomes the first running virtual machine and is called the service console. Thus, at normal run-time, the vmkernel runs on the bare computer and the Linux-based service console runs as the first virtual machine (and cannot be terminated or shut down without shutting down the entire system).

The vmkernel itself, which VMware claims is a microkernel,[6] has three interfaces to the outside world:

  • hardware
  • guest systems
  • service console (Console OS)

Interface to hardware

The vmkernel handles CPU and memory directly, using Scan-Before-Execution (SBE) to handle special or privileged CPU instructions.[7]
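The cited source does not detail how SBE works internally. Purely as an illustration of the general pattern such a technique implies (scanning a block of guest code for privileged instructions before it runs and patching them to trap into the monitor, which then emulates their effect), here is a minimal C sketch. The opcode handling is deliberately simplified, since real x86 instructions are variable-length, and none of this reflects VMware's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define OP_HLT  0xF4   /* privileged: halt CPU         */
#define OP_CLI  0xFA   /* privileged: disable IRQs     */
#define OP_STI  0xFB   /* privileged: enable IRQs      */
#define OP_TRAP 0xCC   /* int3: traps into the monitor */

static bool is_privileged(uint8_t op)
{
    return op == OP_HLT || op == OP_CLI || op == OP_STI;
}

/* Scan a guest code block once before execution; replace privileged
 * instructions with a trapping opcode and remember the original so
 * the monitor can emulate its effect when the trap fires. */
static void scan_before_execute(uint8_t *code, size_t len, uint8_t *saved)
{
    for (size_t i = 0; i < len; i++) {
        if (is_privileged(code[i])) {
            saved[i] = code[i];   /* remembered for later emulation */
            code[i]  = OP_TRAP;   /* guest now traps instead        */
        }
    }
}

int main(void)
{
    uint8_t code[]  = { 0x90, OP_CLI, 0x90, OP_HLT };  /* nop cli nop hlt */
    uint8_t saved[sizeof code] = { 0 };
    scan_before_execute(code, sizeof code, saved);
    /* code[] now traps wherever a privileged instruction stood */
    return 0;
}
```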

Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux implements the Linux module interface. Excerpt from the README: "This module contains the linux emulation layer used by the vmkernel."[8]
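The vmklinux interface itself is not public; the README excerpt above only names the layer. Purely as a hedged sketch of the pattern, the following C fragment shows how such an emulation layer can work: it re-implements a slice of the Linux driver API (here, PCI driver registration) on top of the host kernel's own primitives, so that a Linux-style driver can register itself unmodified. The vmk_* name and all signatures are invented for illustration.

```c
#include <stdio.h>
#include <stddef.h>

struct pci_dev;                     /* opaque device handle, as in Linux */

struct pci_driver {                 /* subset of the Linux structure */
    const char *name;
    int (*probe)(struct pci_dev *dev);
    void (*remove)(struct pci_dev *dev);
};

/* Stub standing in for the host kernel's native registration call;
 * whatever the real vmkernel primitive looks like would go here. */
static int vmk_register_device_driver(const char *name,
                                      int (*attach)(void *device))
{
    printf("vmkernel: driver '%s' registered\n", name);
    (void)attach;
    return 0;
}

/* The emulation layer's job: adapt the Linux-style callback
 * convention to the host kernel's, then forward the registration. */
static struct pci_driver *current_drv;

static int vmklinux_attach_thunk(void *device)
{
    return current_drv->probe((struct pci_dev *)device);
}

int pci_register_driver(struct pci_driver *drv)   /* Linux-API entry */
{
    current_drv = drv;
    return vmk_register_device_driver(drv->name, vmklinux_attach_thunk);
}

static int demo_probe(struct pci_dev *dev) { (void)dev; return 0; }
static struct pci_driver demo_drv = { "demo", demo_probe, NULL };

int main(void) { return pci_register_driver(&demo_drv); }
```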

The vmkernel uses the following device drivers:[8]

  1. net/e100
  2. net/e1000
  3. net/bnx2
  4. net/tg3
  5. net/forcedeth
  6. net/pcnet32
  7. block/cciss
  8. scsi/adp94xx
  9. scsi/aic7xxx
  10. scsi/aic79xx
  11. scsi/ips
  12. scsi/lpfcdd-v732
  13. scsi/megaraid2
  14. scsi/mptscsi_2xx
  15. scsi/qla2200-v7.07
  16. scsi/megaraid_sas
  17. scsi/qla4010
  18. scsi/qla4022
  19. scsi/vmkiscsi
  20. scsi/aacraid_esx30
  21. scsi/lpfcdd-v7xx
  22. scsi/qla2200-v7xx

These drivers mostly equate to those described in VMware's hardware compatibility list.[9] All of these modules fall under the GPL. Programmers have adapted them to run with the vmkernel: VMware, Inc. changed the module-loading mechanism and made other minor modifications.[8]

Guest systems

The vmkernel offers an interface to guest systems that simulates hardware. This happens in such a way that a guest system can run unmodified atop the hypervisor. Because unmodified drivers in the guest system consume extra resources, VMware, Inc. offers special drivers for various operating systems to increase performance.[citation needed] These enhanced drivers are typically installed in the guest OS as part of VMware Tools, which also adds utilities that better connect the guest OS with the underlying vmkernel and/or service console, for functions such as clock synchronization and automatic guest OS shutdown.
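The mechanism behind these enhanced drivers is not spelled out here, but the usual reason paravirtual drivers are faster can be sketched: instead of touching emulated device registers (where each access traps to the hypervisor), the driver queues work in memory shared with the hypervisor and signals once per batch. The layout and names below are invented for illustration; this is not the VMware Tools driver interface.

```c
#include <stdint.h>

#define RING_SLOTS 256

struct tx_ring {
    volatile uint32_t head;          /* written by the guest driver */
    volatile uint32_t tail;          /* written by the hypervisor   */
    struct { uint64_t addr; uint32_t len; } slot[RING_SLOTS];
};

/* Stand-in for a single trapping operation that tells the hypervisor
 * "new work is queued": one exit per batch instead of one exit per
 * emulated register access. */
static void hypervisor_doorbell(void) { /* would trap here */ }

static int pv_transmit(struct tx_ring *ring, uint64_t buf_addr, uint32_t len)
{
    uint32_t next = (ring->head + 1) % RING_SLOTS;
    if (next == ring->tail)
        return -1;                       /* ring full, back off */
    ring->slot[ring->head].addr = buf_addr;
    ring->slot[ring->head].len  = len;
    ring->head = next;                   /* publish the descriptor */
    hypervisor_doorbell();               /* single trap for the batch */
    return 0;
}

int main(void)
{
    static struct tx_ring ring;          /* would live in shared memory */
    return pv_transmit(&ring, 0x1000, 64);
}
```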

Service console

The Service Console is a vestigial general-purpose operating system, most significantly used as the bootstrap for the VMware kernel, vmkernel, and secondarily as a management interface. Both of these Console OS functions are being deprecated as VMware migrates exclusively to the 'embedded' ESX model, the current version of which is ESXi.[citation needed]

Linux dependencies

ESX Server uses a Linux kernel to load additional code, often referred to by VMware, Inc. as the "vmkernel". The dependencies between the "vmkernel" and the Linux part of the ESX server have changed drastically over different major versions of the software. The VMware FAQ[10] states: "ESX Server also incorporates a service console based on a Linux 2.4 kernel that is used to boot the ESX Server virtualization layer". The Linux kernel runs before any other software on an ESX host.[5] On ESX versions 1 and 2, no VMkernel processes run on the system during the boot process.[11] After the Linux kernel has loaded, the S90vmware script loads the vmkernel.[11] VMware, Inc. states that the vmkernel does not derive from Linux, but acknowledges that it has adapted certain device drivers from Linux device drivers. The Linux kernel continues running under the control of the vmkernel, providing functions including the proc file system used by ESX and an environment to run support applications.[11] ESX version 3 loads the VMkernel from the Linux initrd, thus much earlier in the boot sequence than in earlier ESX versions.

In traditional systems, a given operating system runs a single kernel. The VMware FAQ mentions that ESX has both a Linux 2.4 kernel and the vmkernel, hence the confusion over whether ESX has a Linux base. An ESX system starts a Linux kernel first, but that kernel loads the vmkernel (also described by VMware as a kernel), which according to VMware 'wraps around' the Linux kernel and (according to VMware, Inc.) does not derive from Linux.

The ESX userspace environment, known as the "Service Console" (also called "COS" or "vmnix"), derives from a modified version of Red Hat Linux (Red Hat 7.2 for ESX 2.x and Red Hat Enterprise Linux 3 for ESX 3.x). In general, this Service Console provides management interfaces (CLI, webpage MUI, Remote Console). The VMware ESX hypervisor approach provides lower overhead and better control and granularity in allocating resources (CPU time, disk bandwidth, network bandwidth, memory utilization) to virtual machines than so-called "hosted" virtualization, where a base OS handles the physical resources. It also increases security, positioning VMware ESX as an enterprise-grade product.
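VMware's resource-management documentation describes this granularity in terms of per-VM "shares": each virtual machine receives a resource in proportion to its share count. The C sketch below shows only that proportional arithmetic, with made-up numbers; it is not ESX's actual scheduler.

```c
#include <stdio.h>

/* Proportional-share allocation: each VM gets CPU in proportion to
 * its configured shares.  All values are illustrative only. */
struct vm { const char *name; unsigned shares; };

int main(void)
{
    struct vm vms[] = { {"web", 2000}, {"db", 1000}, {"batch", 500} };
    const int n = sizeof vms / sizeof vms[0];
    const double cpu_mhz = 8000.0;       /* total host CPU capacity */
    unsigned total = 0;

    for (int i = 0; i < n; i++) total += vms[i].shares;
    for (int i = 0; i < n; i++)
        printf("%-6s gets %4.0f MHz\n", vms[i].name,
               cpu_mhz * vms[i].shares / total);
    return 0;
}
```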

A further detail differentiates ESX from other VMware virtualization products: ESX supports VMFS, VMware's proprietary cluster file system. VMFS enables multiple hosts to access the same SAN LUNs simultaneously, while file-level locking protects file-system integrity.

Related products

Two other products operate in conjunction with ESX Server: VirtualCenter and Converter.[12]

  • VirtualCenter allows monitoring and management of multiple ESX or GSX servers. In addition, users must install it to run infrastructure services such as:
    • VMotion (transferring virtual machines between servers on the fly, with almost zero downtime)
    • SVMotion (transferring virtual machines between Shared Storage LUNs on the fly, with almost zero downtime)
    • DRS (automated VMotion based on host/VM load requirements/demands)
    • HA (restarting of Virtual Machine Guests in the event of a physical ESX Host failure)
  • Converter allows users to create VMware ESX Server- or Workstation-compatible virtual machines from either physical machines or from virtual machines made by other virtualization products. Converter replaces the VMware "P2V Assistant" and "Importer" products — P2V Assistant allowed users to convert physical machines into virtual machines; and Importer allowed the import of virtual machines from other products into VMware Workstation.

Known limitations

Known limitations of VMware ESX Server, as of December 2007, include the following:

Infrastructure limitations

Some limitations in ESX Server 3 may constrain the design of data centers:[13]

  • Guest system maximum RAM: 64 GB
  • Number of hosts in a HA cluster: 32
  • Number of hosts in a DRS cluster: 32
  • Maximum number of processors per virtual machine: 4

Performance limitations

In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization overhead.
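One way to get a feel for this effect (not taken from the cited sources) is to time a trivial system call natively and again inside a virtual machine; each privileged transition is precisely what the hypervisor must intercept or translate. A minimal sketch, assuming a POSIX system:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const long iterations = 1000000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++)
        (void)getppid();     /* cheap syscall; not cached by libc */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    /* Run this natively and inside a guest; the per-call difference
     * approximates the virtualization overhead discussed above. */
    printf("%.1f ns per syscall\n", ns / iterations);
    return 0;
}
```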

Paravirtualization and other virtualization techniques can help with these issues. VMware and XenSource invented the Virtual Machine Interface (VMI) for this purpose, and selected operating systems currently support it. A comparison between full virtualization and paravirtualization for ESX Server[14] shows that in some cases paravirtualization is much faster.
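The core idea of a paravirtual interface such as VMI can be sketched as follows: rather than executing privileged instructions for the hypervisor to catch, the guest kernel calls through a table of operations that is bound at boot either to native instructions or to hypervisor entry points. The structure and names below are invented for illustration and do not follow the actual VMI specification.

```c
#include <stdio.h>

/* Operations a guest kernel needs but may not execute directly when
 * running paravirtualized. */
struct hypervisor_ops {
    void (*cli)(void);                 /* disable interrupts */
    void (*sti)(void);                 /* enable interrupts  */
};

/* Example backend: on bare hardware these would wrap the native
 * instructions; under a hypervisor they would be direct calls into
 * it.  The stubs here just log. */
static void native_cli(void) { puts("irqs off"); }
static void native_sti(void) { puts("irqs on"); }

static struct hypervisor_ops native_ops = { native_cli, native_sti };
static struct hypervisor_ops *hv = &native_ops;    /* bound at boot */

static void guest_disable_irqs(void)
{
    hv->cli();   /* one predictable call, no trap-and-emulate cycle */
}

int main(void)
{
    guest_disable_irqs();
    hv->sti();
    return 0;
}
```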

References