US10146589B2 - Processor unplug in virtualized computer systems - Google Patents
- Publication number
- US10146589B2 (application US15/049,814)
- Authority
- United States (US)
- Prior art keywords
- virtual
- processor
- system load
- measure
- hypervisor
- Prior art date: 2012-11-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires 2032-12-23
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5022—Workload threshold
Abstract
A system and method for withdrawing virtual processors from virtual machines (VMs) are disclosed. A hypervisor executed by a physical processor detects a measure of system load pertaining to a virtual processor of a virtual machine and withdraws the virtual processor from the virtual machine in view of the measure of system load.
Description
This application is a continuation of U.S. patent application Ser. No. 13/677,261, filed on Nov. 14, 2012, the entirety of which is incorporated herein by reference.
This disclosure relates to computer systems, and more particularly, to virtualized computer systems.
A virtual machine (VM) is a portion of software that, when executed on appropriate hardware, creates an environment allowing the virtualization of an actual physical computer system (e.g., a server, a mainframe computer, etc.). The actual physical computer system is typically referred to as a “host machine,” and the operating system of the host machine is typically referred to as the “host operating system.”
A virtual machine may function as a self-contained platform, executing its own “guest” operating system and software applications. Typically, software on the host machine known as a “hypervisor” (or a “virtual machine monitor”) manages the execution of one or more virtual machines, providing a variety of functions such as virtualizing and allocating resources, context switching among virtual machines, etc.
A virtual machine may comprise one or more “virtual processors,” each of which maps, possibly in a many-to-one fashion, to a central processing unit (CPU) of the host machine. Similarly, a virtual machine may comprise one or more “virtual devices,” each of which maps, in either a one-to-one or one-to-many fashion, to a physical device of the host machine (e.g., a network interface device, a CD-ROM drive, etc.). The hypervisor typically manages these mappings in a transparent fashion, thereby enabling the guest operating system and applications executing on the virtual machine to interact with the virtual processors and virtual devices as though they were actual physical entities.
The present disclosure is illustrated by way of example, and not by way of limitation, and can be more fully understood with reference to the following detailed description when considered in connection with the figures in which:

FIG. 1 depicts an illustrative computer system architecture, in accordance with an embodiment of the present disclosure.

FIG. 2 depicts a flow diagram of one embodiment of a method by which a hypervisor withdraws a virtual processor from a virtual machine.

FIG. 3 depicts a flow diagram of one embodiment of a method by which a hypervisor monitors and detects system load conditions.

FIG. 4 depicts a block diagram of an illustrative computer system operating in accordance with embodiments of the disclosure.
Described herein is a system and method by which a hypervisor may withdraw a virtual processor from a virtual machine (VM), a technique referred to as “dynamic processor unplug.” More particularly, in accordance with one embodiment, the hypervisor monitors one or more measures of system load, and when the hypervisor detects that a measure of system load has fallen below a threshold, the hypervisor withdraws a virtual processor from the virtual machine.
In one embodiment, the hypervisor includes a system load monitor that monitors various measures of system load, detects when a measure of system load falls below a threshold, and, when appropriate, withdraws a virtual processor from the virtual machine. It should be noted that some alternative embodiments may not employ such a system load monitor, and may instead perform these functions via some other mechanism.
In one embodiment, the measure(s) of system load may include one or more measures for a particular virtual processor of the virtual machine, or one or more measures for a set of virtual processors belonging to a particular virtual non-uniform access (NUMA) node of the virtual machine, or one or more measures for all of the virtual processors of the virtual machine, or some combination of these measures. In one embodiment, measures for a particular virtual processor may include the number of physical CPU cycles consumed executing a particular virtual processor, the number of context switches (storing and restoring the state of a processor so that execution can be resumed from the same point at a later time) per unit of time between threads executed by a particular virtual processor, the percentage of processor cycles consumed by context switches between threads executed by a particular virtual processor, the number of processes in a runnable state executing concurrently by a particular virtual processor, and so forth. Similarly, measures of system load across virtual processors belonging to a particular group (e.g., virtual processors belonging to a particular virtual NUMA node, all of the virtual processors in the virtual machine, etc.) may include the number of physical CPU cycles consumed executing virtual processors of the group; the number of context switches per unit of time between threads belonging to the same process, regardless of which virtual processor of the group executes the process; the percentage of processor cycles consumed by context switches between threads executed by the same virtual processor, regardless of which virtual processor of the group executes the threads; the number of threads in a runnable state executing concurrently (e.g., the average number of threads in a runnable state executed concurrently by the virtual processors of the group, the maximum number of threads in a runnable state executed concurrently by the virtual processors of the group, etc.); and so forth.
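For concreteness, one of the per-processor measures named above (context switches per unit of time) can be computed from a count of switch events over a sampling window. The following Python sketch is purely illustrative; the timestamp list and function name are assumptions standing in for scheduler statistics a hypervisor would actually read, not part of the patented implementation.

```python
# Hedged sketch: context switches per second for one virtual processor,
# computed from simulated switch-event timestamps.
def context_switch_rate(switch_timestamps: list[float],
                        window_start: float, window_end: float) -> float:
    """Context switches per second observed within [window_start, window_end)."""
    in_window = [t for t in switch_timestamps if window_start <= t < window_end]
    return len(in_window) / (window_end - window_start)

# Four switches fall inside the 2-second window, so the rate is 2.0 per second.
print(context_switch_rate([0.1, 0.5, 1.2, 1.9, 2.5], 0.0, 2.0))  # 2.0
```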
In one embodiment, when a measure of system load is for a plurality of virtual processors, the hypervisor may select one of the virtual processors for withdrawal (e.g., the least-loaded virtual processor, etc.). Alternatively, when a measure of system load is for a single virtual processor, in one embodiment that particular virtual processor is withdrawn when the measure falls below a threshold.
Embodiments of the present disclosure are thus capable of determining when it may be advantageous to withdraw virtual processors from a virtual machine and, as a result, have those virtual processors available to improve the performance of applications executing on other virtual machines. Advantageously, embodiments of the present disclosure enable this functionality to be provided dynamically, without requiring virtual machines to be shut down and restarted. Similarly, when a virtualized computer system hosts multiple virtual machines, embodiments of the present disclosure may enable more efficient mapping of the virtual processors of the virtual machines to the physical CPUs of the system.
FIG. 1 depicts an illustrative architecture of the salient elements of a computer system 100, in accordance with an embodiment of the present disclosure. Other architectures for computer system 100 are possible, and the implementation of a computer system utilizing embodiments of the disclosure is not necessarily limited to the specific architecture depicted by FIG. 1.

As shown in FIG. 1, the computer system 100 is connected to a network 150 and comprises one or more central processing units (CPUs) 160-1 through 160-N, where N is a positive integer, main memory 170, which may include volatile memory devices (e.g., random access memory (RAM)), non-volatile memory devices (e.g., flash memory) and/or other types of memory devices, and a storage device 180 (e.g., one or more hard disk drives, solid-state drives, etc.). In some embodiments, main memory 170 may be non-uniform access (NUMA), such that memory access time depends on the memory location relative to CPUs 160-1 through 160-N.
The computer system 100 may be a server, a mainframe, a workstation, a personal computer (PC), a mobile phone, a palm-sized computing device, etc. The network 150 may be a private network (e.g., a local area network (LAN), a wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet).
Computer system 100 runs a host operating system (OS) 120, which manages the hardware resources of the computer system and provides functions such as interprocess communication, scheduling, memory management, and so forth. In one embodiment, host operating system 120 also comprises a hypervisor 125, which provides a virtual operating platform for virtual machine 130 and manages its execution. It should be noted that in some alternative embodiments, hypervisor 125 may be external to host OS 120, rather than embedded within host OS 120, or may replace host OS 120.

Virtual machine (VM) 130 is a software implementation of a machine that executes programs as though it were an actual physical machine. As shown in FIG. 1, virtual machine 130 comprises a guest operating system (OS) 135 and one or more virtual processors 138-1 through 138-K, where K is a positive integer. Guest operating system (OS) 135 comprises software that manages the execution of programs within virtual machine 130. Each virtual processor 138 is software that emulates a physical processor, and that maps to one of central processing units (CPUs) 160, possibly in a many-to-one fashion. It should be noted that the number of virtual processors may or may not be the same as the number of CPUs (i.e., K may or may not equal N). In one embodiment, hypervisor 125 manages these mappings in a transparent fashion, so that guest OS 135 and applications executing on virtual machine 130 interact with virtual processors 138 as though they were actual physical processors. It should further be noted that although, for simplicity, a single virtual machine 130 is depicted in FIG. 1, in some embodiments computer system 100 may host a plurality of VMs 130.
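The possibly many-to-one mapping of virtual processors to CPUs can be pictured with a short illustrative sketch; the modulo placement policy below is an assumption for demonstration only, not the hypervisor's actual placement algorithm.

```python
# Six virtual processors (K = 6) mapped onto two physical CPUs (N = 2);
# K need not equal N, and several vCPUs may share one physical CPU.
K, N = 6, 2
vcpu_to_cpu = {vcpu: vcpu % N for vcpu in range(K)}
print(vcpu_to_cpu)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```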
In accordance with one embodiment, hypervisor 125 includes a system load monitor 127 that monitors one or more measures of system load (e.g., the number of physical processor cycles consumed executing a particular virtual processor of VM 130, the number of context switches per unit of time between threads executed by VM 130, the percentage of physical processor cycles consumed by context switches between threads executed by VM 130, the number of threads in a runnable state executing concurrently, etc.) for one or more of virtual processors 138-1 through 138-K. System load monitor 127 also comprises software that detects when a measure of system load falls below a threshold, and, upon such detection, withdraws a virtual processor from virtual machine 130. Embodiments of system load monitor 127 are described in more detail below with respect to the flow diagrams of FIGS. 2 and 3 .
FIG. 2 depicts a flow diagram of one embodiment of a method 200 by which a hypervisor withdraws a virtual processor from a virtual machine. The method is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. In one embodiment, the method is performed by the computer system 100 of FIG. 1, while in some other embodiments, some or all of the method might be performed by another machine. It should be noted that blocks depicted in FIG. 2 can be performed simultaneously or in a different order than that depicted.

At block 201, hypervisor 125 monitors one or more measures of system load and detects that a measure of system load has fallen below a threshold. Embodiments of operations involved in performing block 201 are described in more detail below with respect to FIG. 3. In one embodiment, block 201 is performed by system load monitor 127.
At optional block 202, hypervisor 125 selects a virtual processor 138-i, where i is an integer between 1 and K inclusive, for withdrawal from virtual machine 130. More particularly, when the measure of system load of block 201 pertains to a plurality of virtual processors of VM 130 (e.g., all virtual processors 138-1 through 138-K of VM 130, a set of virtual processors belonging to a virtual NUMA node of VM 130, etc.), hypervisor 125 may select one of these virtual processors for withdrawal from VM 130. In some embodiments, a least-loaded virtual processor may be selected, while in some other embodiments, some other strategy may be used to select the virtual processor (e.g., selecting a virtual processor from a least-loaded NUMA node, etc.), while in yet other embodiments a virtual processor may be selected at random. It should be noted that when the measure of system load of block 201 pertains to a single particular virtual processor, then in some embodiments, that particular virtual processor is automatically the one withdrawn from VM 130, while in some other embodiments, a different virtual processor might possibly be selected for withdrawal based on some set of one or more criteria. In one embodiment, block 202 is performed by system load monitor 127.
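The alternative selection strategies described above (least-loaded virtual processor, a virtual processor from the least-loaded NUMA node, or a random choice) might be expressed as interchangeable policies, as in this sketch; the per-vCPU load numbers and NUMA grouping are assumed inputs for illustration, not a statement of how hypervisor 125 is structured.

```python
import random

loads = {0: 0.40, 1: 0.05, 2: 0.22, 3: 0.01}  # assumed per-vCPU load measures
nodes = {"node0": [0, 1], "node1": [2, 3]}    # assumed virtual NUMA grouping

def least_loaded(loads: dict[int, float]) -> int:
    # Pick the vCPU with the smallest load measure.
    return min(loads, key=loads.get)

def from_least_loaded_node(loads: dict[int, float], nodes: dict) -> int:
    # Pick the least-loaded vCPU within the node whose average load is lowest.
    node = min(nodes, key=lambda n: sum(loads[v] for v in nodes[n]) / len(nodes[n]))
    return min(nodes[node], key=loads.get)

def at_random(loads: dict[int, float]) -> int:
    return random.choice(list(loads))

print(least_loaded(loads))                   # 3
print(from_least_loaded_node(loads, nodes))  # 3 (node1 averages 0.115 vs 0.225)
```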
At block 203, hypervisor 125 withdraws a virtual processor from virtual machine 130. In embodiments in which block 202 has been executed, the virtual processor that was selected at block 202 is withdrawn from VM 130.
FIG. 3 depicts a flow diagram of one embodiment of a method by which a hypervisor monitors and detects system load conditions. It should be noted that blocks depicted in FIG. 3 can be performed simultaneously or in a different order than that depicted.

At block 301, hypervisor 125 monitors one or more measures of system load for each individual virtual processor of VM 130 (e.g., for virtual processor 138-1, for virtual processor 138-2, etc.). In some embodiments, the monitoring may comprise measuring the percentage of time during which a virtual processor is executing on a physical CPU (for example, by measuring the times at which a virtual processor starts and stops running, computing the differences of the stop and start times, summing the differences over a time interval, and dividing by the length of the time interval). In some other embodiments, the monitoring may comprise measuring the amount of time spent by hypervisor 125 executing instructions on behalf of a virtual processor, or measuring the amount of system load (e.g., disk blocks accessed, network packets sent, page faults, etc.) caused by hypervisor 125 on behalf of a virtual processor, or both. In yet other embodiments, the monitoring may comprise measuring CPU load (e.g., number of CPU cache misses, branch mispredictions, cross-NUMA node memory accesses, etc.) via one or more CPU performance counters.
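The start/stop accounting described above reduces to a few lines of arithmetic. This worked sketch (with invented timestamps) sums the run slices of a vCPU over an interval and divides by the interval length:

```python
def cpu_time_percent(run_slices: list[tuple[float, float]],
                     interval_start: float, interval_end: float) -> float:
    """Percentage of [interval_start, interval_end] a vCPU spent executing."""
    busy = sum(stop - start for start, stop in run_slices)
    return 100.0 * busy / (interval_end - interval_start)

# The vCPU ran from t=10s to t=12s and from t=15s to t=16s inside a
# 10-second window, so it was executing for 3s out of 10s, i.e., 30%.
print(cpu_time_percent([(10.0, 12.0), (15.0, 16.0)], 10.0, 20.0))  # 30.0
```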
In one embodiment, the measures of system load may include one or more of the following, each of which may be computed by system load monitor 127 of hypervisor 125:
-
- the number of physical processor cycles consumed executing a particular virtual processor;
- the number of context switches per unit of time between threads executed by a particular virtual processor;
- the number of context switches per unit of time between threads belonging to the same process executed by a particular virtual processor;
- the number of context switches per unit of time between threads belonging to different processes executed by a particular virtual processor;
- the percentage of physical processor cycles consumed by context switches between threads executed by a particular virtual processor;
- the percentage of physical processor cycles consumed by context switches between threads belonging to the same process executed by a particular virtual processor;
- the percentage of physical processor cycles consumed by context switches between threads belonging to different processes executed by a particular virtual processor;
- the number of threads in a runnable state executing concurrently by a particular virtual processor;
- the number of processes in a runnable state executing concurrently by a particular virtual processor; or
- the number of applications in a runnable state executing concurrently by a particular virtual processor.
At block 302, hypervisor 125 checks whether any measure for any of the virtual processors has fallen below a respective threshold. It should be noted that in some embodiments, the respective thresholds may be defined by an administrator via a configuration file or graphical user interface, while in some other embodiments, the respective thresholds may be hard-coded into system load monitor 127, while in yet other embodiments, the respective thresholds may be defined in some other manner. If any measure for any of the virtual processors falls below a respective threshold, then execution continues at block 202 of FIG. 2, otherwise execution proceeds to block 303.
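A minimal sketch of this block-302 check, under the assumption that per-measure thresholds are supplied by an administrator (for example, parsed from a configuration file); all names and numbers below are illustrative:

```python
# Illustrative thresholds an administrator might configure for block 302.
thresholds = {"cycles": 1e7, "ctx_switches_per_s": 50}

def below_any_threshold(measures: dict[str, float]) -> bool:
    # True if any monitored measure has fallen below its configured threshold.
    return any(measures[k] < thresholds[k] for k in thresholds if k in measures)

per_vcpu = {0: {"cycles": 4e8, "ctx_switches_per_s": 120},
            1: {"cycles": 2e6, "ctx_switches_per_s": 9}}
idle = [v for v, m in per_vcpu.items() if below_any_threshold(m)]
print(idle)  # [1] -- vCPU 1 is below both thresholds, so control passes to block 202
```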
At block 303, hypervisor 125 computes one or more measures of system load for each virtual NUMA node of VM 130 (e.g., for each set of virtual processors 138 belonging to a respective virtual NUMA node of VM 130), at least some of which may be derived from the measures monitored at block 301. In one embodiment, the measures of system load may include one or more of the following:
-
- the number of physical processor cycles consumed executing virtual processors of the virtual NUMA node;
- the number of context switches per unit of time between threads executed by the same virtual processor (e.g., the number of context switches per unit of time for each individual virtual processor that belongs to a given virtual NUMA node, averaged over the virtual processors belonging to the NUMA node, etc.);
- the number of context switches per unit of time between threads belonging to the same process (e.g., averaged over the virtual processors belonging to the NUMA node);
- the number of context switches per unit of time between threads belonging to different processes (e.g., averaged over the virtual processors belonging to the NUMA node);
- the percentage of physical processor cycles consumed by context switches between threads executed by the same virtual processor (e.g., the average of the percentages for each of the virtual processors belonging to the NUMA node, etc.);
- the percentage of physical processor cycles consumed by context switches between threads belonging to the same process (e.g., averaged over the virtual processors belonging to the NUMA node);
- the percentage of physical processor cycles consumed by context switches between threads belonging to different processes (e.g., averaged over the virtual processors belonging to the NUMA node);
- the number of threads in a runnable state executing concurrently (e.g., an average of the virtual processors belonging to the NUMA node, a maximum of the virtual processors belonging to the NUMA node, etc.);
- the number of processes in a runnable state executing concurrently (e.g., an average of the virtual processors belonging to the NUMA node, a maximum of the virtual processors belonging to the NUMA node, etc.); or
- the number of applications in a runnable state executing concurrently (e.g., an average of the virtual processors belonging to the NUMA node, a maximum of the virtual processors belonging to the NUMA node, etc.).
It should be noted that a NUMA node is a specific type of processor group known as a proximity domain, and that in some other embodiments, block 303 may be performed more generally for each of a plurality of proximity domains of VM 130. In one embodiment, block 303 is performed by system load monitor 127.
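As an illustration of block 303, a per-virtual-NUMA-node measure can be derived by averaging (or taking the maximum of) the per-vCPU measures monitored at block 301; the topology and numbers below are assumptions for demonstration only:

```python
node_vcpus = {"node0": [0, 1], "node1": [2, 3]}  # assumed virtual NUMA topology
runnable = {0: 3, 1: 5, 2: 0, 3: 1}              # runnable threads per vCPU (block 301)

# Two aggregation choices mentioned above: average and maximum over each node.
node_avg = {n: sum(runnable[v] for v in vs) / len(vs) for n, vs in node_vcpus.items()}
node_max = {n: max(runnable[v] for v in vs) for n, vs in node_vcpus.items()}
print(node_avg)  # {'node0': 4.0, 'node1': 0.5} -- node1 is the lightly loaded node
print(node_max)  # {'node0': 5, 'node1': 1}
```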
At block 304, hypervisor 125 checks whether any of the measures computed at block 303 has fallen below a respective threshold. If so, execution proceeds to block 202 of FIG. 2, otherwise execution proceeds to block 305.
At block 305, hypervisor 125 computes one or more measures of system load across all virtual processors of VM 130, at least some of which may be derived from the measures monitored at block 301. In one embodiment, the measures of system load may include one or more of the following:
-
- the number of physical processor cycles consumed executing all of the virtual processors of VM 130 (i.e., virtual processors 138-1 through 138-K);
- the number of context switches per unit of time between threads executed by the same virtual processor (e.g., the number of context switches per unit of time for each individual virtual processor, averaged over all K virtual processors, etc.);
- the number of context switches per unit of time between threads belonging to the same process (e.g., averaged over all K virtual processors);
- the number of context switches per unit of time between threads belonging to different processes (e.g., averaged over all K virtual processors);
- the percentage of physical processor cycles consumed by context switches between threads executed by the same virtual processor (e.g., the average of the percentages for each of the K virtual processors, etc.);
- the percentage of physical processor cycles consumed by context switches between threads belonging to the same process (e.g., averaged over all K virtual processors);
- the percentage of physical processor cycles consumed by context switches between threads belonging to different processes (e.g., averaged over all K virtual processors);
- the number of threads in a runnable state executing concurrently (e.g., an average of all K virtual processors, a maximum of all K virtual processors, etc.);
- the number of processes in a runnable state executing concurrently (e.g., an average of all K virtual processors, a maximum of all K virtual processors, etc.); or
- the number of applications in a runnable state executing concurrently (e.g., an average of all K virtual processors, a maximum of all K virtual processors, etc.).
In one embodiment, block 305 is performed by system load monitor 127.
At block 306, hypervisor 125 checks whether any of the measures computed at block 305 has fallen below a respective threshold. If so, execution proceeds to block 202 of FIG. 2 , otherwise execution continues back at block 301.
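Putting blocks 301-306 together, the scan order (per-vCPU checks, then per-virtual-NUMA-node checks, then VM-wide checks) might be sketched as follows; every input, name, and threshold here is an assumption standing in for state kept by system load monitor 127:

```python
def any_below(measures: dict[str, float], thresholds: dict[str, float]) -> bool:
    return any(v < thresholds[k] for k, v in measures.items() if k in thresholds)

def scan(per_vcpu: dict, per_node: dict, vm_wide: dict, thresholds: dict) -> bool:
    # Block 302: does any per-vCPU measure fall below its threshold?
    if any(any_below(m, thresholds) for m in per_vcpu.values()):
        return True
    # Block 304: does any per-virtual-NUMA-node measure fall below its threshold?
    if any(any_below(m, thresholds) for m in per_node.values()):
        return True
    # Block 306: does any VM-wide measure fall below its threshold?
    return any_below(vm_wide, thresholds)

thresholds = {"util_pct": 5.0}
per_vcpu = {0: {"util_pct": 3.0}, 1: {"util_pct": 40.0}}  # vCPU 0 is nearly idle
per_node = {"node0": {"util_pct": 21.5}}
vm_wide = {"util_pct": 21.5}
print(scan(per_vcpu, per_node, vm_wide, thresholds))  # True -> proceed to block 202
```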
FIG. 4 depicts an illustrative computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. The machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in a client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.

The illustrative computer system 400 includes a processing system (processor) 402, a main memory 404 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 406 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 416, which communicate with each other via a bus 408.

Processor 402 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 402 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 402 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 402 is configured to execute instructions 426 for performing the operations and steps discussed herein.
The computer system 400 may further include a network interface device 422. The computer system 400 also may include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 412 (e.g., a keyboard), a cursor control device 414 (e.g., a mouse), and a signal generation device 420 (e.g., a speaker).
The data storage device 416 may include a computer-readable medium 424 on which is stored one or more sets of instructions 426 (e.g., instructions corresponding to the method of FIG. 2 , etc.) embodying any one or more of the methodologies or functions described herein. Instructions 426 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting computer-readable media. Instructions 426 may further be transmitted or received over a network via the network interface device 422.
While the computer-readable storage medium 424 is shown in an illustrative embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order, or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner.
In the foregoing description, numerous details have been set forth. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
Some portions of the detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the foregoing discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving”, “transmitting”, “detecting”, “withdrawing”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. In addition, embodiments of the present disclosure are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Embodiments of the present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.)), etc.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims (19)
1. A method comprising:
detecting, by a hypervisor executed by a physical processor, a measure of system load for each of a plurality of virtual non-uniform access (NUMA) nodes associated with one or more virtual machines, wherein a respective measure of system load pertains to a load of multiple virtual processors belonging to a respective virtual NUMA node;
selecting, by the hypervisor executed by the physical processor, a least-loaded virtual NUMA node from the plurality of virtual NUMA nodes in view of the respective measure of system load; and
withdrawing, by the hypervisor, a virtual processor belonging to the least-loaded virtual NUMA node in view of a measure of system load of the virtual processor.
2. The method of claim 1 wherein the measure of system load of the least-loaded virtual NUMA node pertains to a plurality of virtual processors of one of the one or more virtual machines.
3. The method of claim 2 further comprising selecting, by the hypervisor, the virtual processor to withdraw from the one or more virtual machines.
4. The method of claim 3 wherein the virtual processor is a least-loaded virtual processor of the least-loaded virtual NUMA node and is selected for withdrawal from the virtual machine.
5. The method of claim 1 wherein the particular virtual processor is the virtual processor withdrawn from the one or more virtual machines.
6. The method of claim 1 wherein the one or more virtual machines comprise memory comprising non-uniform memory access (NUMA).
7. An apparatus comprising:
a memory to store a virtual machine; and
a physical processor operatively coupled to the memory, the physical processor to:
execute the virtual machine;
execute a hypervisor;
detect, via the hypervisor, a measure of system load for each of a plurality of virtual non-uniform access (NUMA) nodes associated with one or more virtual machines, wherein a respective measure of system load pertains to a load of multiple virtual processors belonging to a respective virtual NUMA node;
select a least-loaded virtual NUMA node from the plurality of virtual NUMA nodes in view of the respective measure of system load; and
withdraw, via the hypervisor, a virtual processor belonging to the least-loaded virtual NUMA node in view of a measure of system load of the virtual processor.
8. The apparatus of claim 7 wherein the measure of system load of the least-loaded virtual NUMA node is associated with the withdrawn virtual processor.
9. The apparatus of claim 7 wherein the measure of system load of the least-loaded virtual NUMA node pertains to all virtual processors belonging to the least-loaded virtual NUMA node.
10. The apparatus of claim 7 wherein the measure of system load of the virtual processor is in view of a percentage of processor cycles consumed by context switches between threads executed by a virtual machine.
11. The apparatus of claim 10 wherein the context switches are between threads belonging to different processes.
12. The apparatus of claim 10 wherein the context switches are between threads belonging to a same process.
13. The apparatus of claim 10 wherein the measure of system load of the virtual processor is in view of a number of threads in a runnable state executing concurrently.
14. The apparatus of claim 10 wherein the measure of system load of the virtual processor is in view of a number of processes in a runnable state executing concurrently.
15. The apparatus of claim 10 wherein the measure of system load of the virtual processor is in view of a number of applications in a runnable state executing concurrently.
16. A non-transitory computer readable storage medium, comprising instructions stored therein, which when executed, cause a physical processor to:
detect, by a hypervisor executed by the physical processor, a measure of system load for each of a plurality of virtual non-uniform memory access (NUMA) nodes associated with one or more virtual machines, wherein a respective measure of system load pertains to a load of multiple virtual processors belonging to a respective virtual NUMA node;
select a least-loaded virtual NUMA node from the plurality of virtual NUMA nodes in view of the respective measure of system load; and
withdraw, by the hypervisor, a virtual processor belonging to the least-loaded virtual NUMA node in view of a measure of system load of the virtual processor.
17. The non-transitory computer readable storage medium of claim 16 wherein the measure of system load of the virtual processor is in view of a number of context switches per unit of time between threads executed by the one or more virtual machines.
18. The non-transitory computer readable storage medium of claim 17 wherein the context switches are between threads belonging to different processes.
19. The non-transitory computer readable storage medium of claim 17 wherein the context switches are between threads belonging to a same process.
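For illustration only, the following minimal sketch shows one way the flow recited in claims 1, 7, and 16 above could be realized in software, with a per-processor load measure assembled from the context-switch and runnable-thread signals recited in claims 10-15 (the variant of claim 17 would substitute context switches per unit of time for the percentage figure). Every class name, field, and the composite load formula here is a hypothetical assumption of this sketch, not the claimed implementation or any hypervisor's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VirtualProcessor:
    """Hypothetical per-vCPU record; field names are assumptions."""
    vcpu_id: int
    context_switch_pct: float  # % of cycles consumed by context switches (claim 10)
    runnable_threads: int      # threads in a runnable state (claim 13)

    def load(self) -> float:
        # Illustrative composite measure: the claims recite these inputs
        # but do not mandate any particular weighting or formula.
        return self.context_switch_pct + float(self.runnable_threads)


@dataclass
class VirtualNumaNode:
    """Hypothetical virtual NUMA node holding its virtual processors."""
    node_id: int
    vcpus: List[VirtualProcessor] = field(default_factory=list)

    def load(self) -> float:
        # Claim 1: a node's measure pertains to multiple vCPUs of the node.
        return sum(v.load() for v in self.vcpus)


def unplug_one_vcpu(nodes: List[VirtualNumaNode]) -> Optional[VirtualProcessor]:
    """Select the least-loaded virtual NUMA node, then withdraw its
    least-loaded vCPU (claims 1-4). Nodes already reduced to a single
    vCPU are skipped so no node is emptied (an assumption of this sketch)."""
    candidates = [n for n in nodes if len(n.vcpus) > 1]
    if not candidates:
        return None
    target = min(candidates, key=lambda n: n.load())
    victim = min(target.vcpus, key=lambda v: v.load())
    target.vcpus.remove(victim)  # stands in for the hypervisor's hot-unplug
    return victim


# Example under the same assumptions: node 0 carries far less load than
# node 1, and within node 0 the vCPU with id 1 is the lighter of the two.
nodes = [
    VirtualNumaNode(0, [VirtualProcessor(0, 12.0, 3), VirtualProcessor(1, 2.5, 1)]),
    VirtualNumaNode(1, [VirtualProcessor(2, 30.0, 7), VirtualProcessor(3, 22.0, 4)]),
]
withdrawn = unplug_one_vcpu(nodes)  # withdraws vCPU 1 from node 0
```

Under these assumptions, a management policy could invoke unplug_one_vcpu whenever aggregate guest load stays below some threshold; the function returns the withdrawn virtual processor, or None once every node has been reduced to a single vCPU.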
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/049,814 US10146589B2 (en) | 2012-11-14 | 2016-02-22 | Processor unplug in virtualized computer systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/677,261 US9286131B2 (en) | 2012-11-14 | 2012-11-14 | Processor unplug in virtualized computer system |
US15/049,814 US10146589B2 (en) | 2012-11-14 | 2016-02-22 | Processor unplug in virtualized computer systems |
Related Parent Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/677,261 Continuation US9286131B2 (en) | 2012-11-14 | 2012-11-14 | Processor unplug in virtualized computer system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20160170810A1 (en) | 2016-06-16 |
US10146589B2 (en) | 2018-12-04 |
Family
ID=50683040
Family Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/677,261 Active 2033-08-22 US9286131B2 (en) | 2012-11-14 | 2012-11-14 | Processor unplug in virtualized computer system |
US15/049,814 Active 2032-12-23 US10146589B2 (en) | 2012-11-14 | 2016-02-22 | Processor unplug in virtualized computer systems |
Family Applications Before (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/677,261 Active 2033-08-22 US9286131B2 (en) | 2012-11-14 | 2012-11-14 | Processor unplug in virtualized computer system |
Country Status (1)
Country | Link |
---|---|
US (2) | US9286131B2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9286131B2 (en) * | 2012-11-14 | 2016-03-15 | Red Hat Israel, Ltd. | Processor unplug in virtualized computer system |
US9280375B1 (en) * | 2014-04-30 | 2016-03-08 | Google Inc. | Dynamically adjustable virtual machine |
US10514950B2 (en) * | 2015-04-14 | 2019-12-24 | Huawei Technologies Co., Ltd. | Interface switching method, apparatus, and device |
US10831556B2 (en) | 2015-12-23 | 2020-11-10 | Intel IP Corporation | Virtual CPU consolidation to avoid physical CPU contention between virtual machines |
CN112579257B (en) * | 2020-12-14 | 2024-02-23 | 深信服科技股份有限公司 | Scheduling method and device for virtual central processing unit core and related equipment |
- 2012-11-14: US application US13/677,261 filed; granted as US9286131B2 (status: Active)
- 2016-02-22: US application US15/049,814 filed; granted as US10146589B2 (status: Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030236816A1 (en) | 2002-06-20 | 2003-12-25 | Lakshminarayanan Venkatasubramanian | Spin-yielding in multi-threaded systems |
US20050080753A1 (en) | 2003-10-14 | 2005-04-14 | Microsoft Corporation | Systems and methods for instruction sequence compounding in a virtual machine environment |
US20090077564A1 (en) | 2007-09-13 | 2009-03-19 | Microsoft Corporation | Fast context switching using virtual cpus |
US20100250868A1 (en) * | 2009-03-26 | 2010-09-30 | Microsoft Corporation | Virtual non-uniform memory architecture for virtual machines |
US20100332658A1 (en) | 2009-06-29 | 2010-12-30 | Red Hat Israel, Ltd. | Selecting a host from a host cluster to run a virtual machine |
US20110010713A1 (en) | 2009-07-13 | 2011-01-13 | Hitachi, Ltd. | Computer system, virtual machine monitor and scheduling method for virtual machine monitor |
US8527988B1 (en) | 2009-07-31 | 2013-09-03 | Hewlett-Packard Development Company, L.P. | Proximity mapping of virtual-machine threads to processors |
US20120278800A1 (en) | 2011-04-27 | 2012-11-01 | Microsoft Corporation | Virtual Processor Allocation Techniques |
US20120291029A1 (en) | 2011-05-13 | 2012-11-15 | International Business Machines Corporation | Operating virtual switches in a virtualized computing environment |
US20130347002A1 (en) | 2012-06-21 | 2013-12-26 | Microsoft Corporation | Performant runtime pause with no cpu utilization |
US9286131B2 (en) * | 2012-11-14 | 2016-03-15 | Red Hat Israel, Ltd. | Processor unplug in virtualized computer system |
Non-Patent Citations (5)
Title |
---|
De George, Andy, "How to Autoscale an Application", https://azure.microsoft.com/en-in/documentation/articles/cloud-services-how-to-scale/, 10 pages, updated on Jul. 12, 2015. |
Rao, "Towards Fair and Efficient SMP Virtual Machine Scheduling", Feb. 2014, ACM, pp. 273-285. (Year: 2014). * |
USPTO, Final Office Action for U.S. Appl. No. 13/677,261, dated May 15, 2015. |
USPTO, Notice of Allowance for U.S. Appl. No. 13/677,261, dated Nov. 9, 2015. |
USPTO, Office Action for U.S. Appl. No. 13/677,261, dated Dec. 19, 2014. |
Also Published As
Publication number | Publication date |
---|---|
US9286131B2 (en) | 2016-03-15 |
US20140137108A1 (en) | 2014-05-15 |
US20160170810A1 (en) | 2016-06-16 |
Similar Documents
Publication | Title |
---|---|
US9164809B2 (en) | Virtual processor provisioning in virtualized computer systems |
US10365936B2 (en) | Idle processor management by guest in virtualized systems |
US11157302B2 (en) | Idle processor management in virtualized systems via paravirtualization |
US10146589B2 (en) | Processor unplug in virtualized computer systems |
US8312195B2 (en) | Managing interrupts using a preferred binding between a device generating interrupts and a CPU |
US9858098B2 (en) | Hypervisor modification of system tables |
US9804876B2 (en) | Hypervisor-driven hibernation |
US9697029B2 (en) | Guest idle based VM request completion processing |
US10275268B2 (en) | Providing entropy to a guest operating system |
US9489228B2 (en) | Delivery of events from a virtual machine to a thread executable by multiple host CPUs using memory monitoring instructions |
US9201823B2 (en) | Pessimistic interrupt affinity for devices |
US9600314B2 (en) | Scheduler limited virtual device polling |
US20130054860A1 (en) | Optimistic interrupt affinity for devices |
US20150347166A1 (en) | Paravirtualized migration counter for migrating a virtual cpu to a different physical cpu |
US9256455B2 (en) | Delivery of events from a virtual machine to host CPU using memory monitoring instructions |
US20140149979A1 (en) | Virtual machine wakeup using a memory monitoring instruction |
US9575788B2 (en) | Hypervisor handling of processor hotplug requests |
US10503659B2 (en) | Post-copy VM migration speedup using free page hinting |
US9766917B2 (en) | Limited virtual device polling based on virtual CPU pre-emption |
US9378057B2 (en) | Paravirtualized migration counter |
US10671425B2 (en) | Lazy timer programming for virtual machines |
US9104634B2 (en) | Usage of snapshots prepared by a different host |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: RED HAT ISRAEL, LTD., ISRAEL. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: TSIRKIN, MICHAEL; LAOR, DOR; SIGNING DATES FROM 20121112 TO 20121113; REEL/FRAME: 037789/0499 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |