US11822970B2 - Identifier (ID) allocation in a virtualized computing environment
- Publication number
- US11822970B2 (Application No. US15/297,172)
- Authority
- US
- United States
- Prior art keywords: ids, node, cache, batch, response
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F9/00—Arrangements for program control
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- Virtualization allows the abstraction and pooling of hardware resources to support virtual machines in a virtualized computing environment, such as a Software-Defined Datacenter (SDDC).
- Virtual machines running different operating systems may be supported by the same physical machine (referred to as a “host”).
- Each virtual machine is generally provisioned with virtual resources to run an operating system and applications.
- The virtual resources may include central processing unit (CPU) resources, memory resources, storage resources, network resources, etc.
- hosts in the virtualized computing environment may be managed by a cluster of nodes, such as management components on a management plane, etc. Such nodes are configured to facilitate the configuration of objects in the virtualized computing environment, including allocating identifiers (IDs) to those objects.
- ID allocation may not be performed efficiently.
- FIG. 1 is a schematic diagram illustrating an example virtualized computing environment in which identifier (ID) allocation may be performed
- FIG. 2 is a schematic diagram illustrating an example distributed firewall implementation in the virtualized computing environment in FIG. 1 ;
- FIG. 3 is a flowchart of an example process for a node to perform ID allocation in a virtualized computing environment
- FIG. 4 is a flowchart of an example detailed process for a node to perform ID allocation in a virtualized computing environment.
- FIG. 5 is a schematic diagram illustrating example ID retrievals from a pool to a cache associated with a node in the virtualized computing environment in FIG. 1 .
- FIG. 1 is a schematic diagram illustrating example virtualized computing environment 100 in which ID allocation may be performed. It should be understood that, depending on the desired implementation, virtualized computing environment 100 may include additional and/or alternative components than that shown in FIG. 1 .
- Virtualized computing environment 100 includes multiple nodes forming cluster 102 , such as node-A 110 A, node-B 110 B and node-C 110 C that are connected via physical network 104 .
- each node 110 A/ 110 B/ 110 C may be implemented using a virtual entity (e.g., virtual appliance, virtual machine, etc.) and/or a physical entity.
- Each node 110 A/ 110 B/ 110 C is supported by hardware 112 A/ 112 B/ 112 C that includes components such as processor(s) 114 A/ 114 B/ 114 C, memory 116 A/ 116 B/ 116 C, network interface controller(s) 118 A/ 118 B/ 118 C, storage disk(s) 119 A/ 119 B/ 119 C, etc.
- cluster 102 represents a distributed cluster having node-A 110 A, node-B 110 B and node-C 110 C operating as management components on a management plane of a network virtualization platform, such as VMware's NSX (a trademark of VMware, Inc.), etc.
- the network virtualization platform is implemented to virtualize network resources such as physical hardware switches to support software-based virtual networks.
- each node 110 A/ 110 B/ 110 C may represent a network virtualization manager (e.g., NSX manager) via which the software-based virtual networks are configured by users.
- node-A 110A, node-B 110B and node-C 110C may be associated with different sites, each site representing a geographical location, business unit, organization, etc.
- Each node 110 A/ 110 B/ 110 C implements ID allocation module 120 A/ 120 B/ 120 C to provide an ID allocation service (IDAS) and/or ID generation service (IDGS) to any suitable ID consumer, such as first ID consumer 126 A/ 126 B/ 126 C, second ID consumer 128 A/ 128 B/ 128 C, etc.
- Persistent storage 170 is configured to store pool of IDs 172 that is shared across cluster 102 . For example, to meet ID allocation requests from ID consumer 126 A/ 128 A, ID allocation module 120 A of node-A 110 A may retrieve ID(s) from pool of IDs 172 .
- The term “ID consumer” may refer generally to any component that requests IDs from ID allocation module 120A/120B/120C.
- an ID consumer may reside on the same physical machine as node 110 A/ 110 B/ 110 C (as shown in FIG. 1 ) or on a different physical machine depending on the desired implementation.
- An ID consumer may be a physical entity, or a virtual entity supported by the physical entity.
- The term “persistent storage” 170 may refer generally to a storage device in which stored information is not lost when the storage device fails or is powered down.
- FIG. 2 is a schematic diagram illustrating an example distributed firewall implementation 200 in virtualized computing environment 100 in FIG. 1 .
- node-B 110B and node-C 110C from cluster 102 are not shown in FIG. 2, but it should be understood that they may be similarly configured to implement the distributed firewall.
- node-A 110 A implements ID consumer 126 A in the form of a distributed firewall controller.
- Host 210 includes suitable virtualization software (e.g., hypervisor 211 ) and hardware 212 to support various virtual machines, such as “VM 1 ” 221 and “VM 2 ” 222 .
- Hypervisor 211 maintains a mapping between underlying hardware 212 of host 210 and virtual resources allocated to virtual machine 221 / 222 .
- Hardware 212 includes physical components (some not shown for simplicity) such as Central Processing Unit (CPU), memory, storage disk(s), and physical network interface controllers (NICs) 214 , etc.
- the virtual resources are allocated to virtual machine 221/222 to support application(s) running on top of a guest operating system executing at virtual machine 221/222.
- the virtual resources may include virtual CPU, virtual memory, virtual disk, virtual network interface controller (vNIC), etc.
- Virtual machine monitors (VMMs) 231, 232 implemented by hypervisor 211 emulate hardware resources, such as “VNIC1” 241 for “VM1” 221 and “VNIC2” 242 for “VM2” 222.
- Hypervisor 211 further supports virtual switch 250 to handle packets to and from virtual machine 221 / 222 .
- A distributed firewall is implemented to filter packets to and from the virtual machines.
- each host 210 implements local firewall engine 260 to filter packets for “VM 1 ” 221 or “VM 2 ” 222 according to firewall rules 262 .
- firewall engine 260 may allow some packets to be delivered to “VM 1 ” 221 (see “PASS” 270 ), while dropping other packets that are destined for “VM 2 ” 222 (see “DROP” 280 ).
- Firewall rules 262 may be configured via distributed firewall controller (see 126 A), which interacts with host 210 to apply or update firewall rules 262 .
- One aspect of firewall rule configuration is the assignment of unique IDs for identifying firewall rules 262 across cluster 102, such as 30-bit monotonically increasing IDs. For example, when a virtual machine (e.g., “VM1” 221) is migrated from a source site associated with node-A 110A to a target site associated with node-B 110B, the same IDs may be used without having to reconfigure firewall rules 262. This increases the mobility of virtual machines within cluster 102 and facilitates disaster recovery in virtualized computing environment 100.
- ID allocation generally involves node 110A/110B/110C retrieving IDs from shared pool 172 responsive to each and every ID allocation request from ID consumer 126A/128A. In a database environment, this may involve sending a query to, and receiving a result from, persistent storage 170. Each query results in a network round trip. In the example distributed firewall in FIG. 2 and other applications that require high-volume and frequent ID allocations, the delay resulting from the network round trips of the queries may be detrimental to the performance of node 110A/110B/110C.
- ID allocation may be performed more efficiently by reducing or minimizing access to persistent storage 170 .
- a pre-allocation approach is used by retrieving a batch of IDs from shared pool 172 to service future ID allocation requests.
- first batch 124 A is retrieved from pool 172 to cache-A 122 A at node-A 110 A; second batch 124 B to cache-B 122 B at node-B 110 B; and third batch 124 C to cache-C 122 C at node-C 110 C.
- each node 110 A/ 110 B/ 110 C may perform ID allocation in a distributed and concurrent manner using the retrieved IDs in its own cache 122 A/ 122 B/ 122 C.
- the term “cache” may refer generally to memory (or an area of memory) storing IDs locally (i.e., “local” to particular node 110 A/ 110 B/ 110 C) to improve the speed of allocation of such IDs and reduce the number of accesses made to persistent storage 170 .
- each batch of IDs 124 A/ 124 B/ 124 C may be stored temporarily in cache 122 A/ 122 B/ 122 C (e.g., in-memory cache) for future access by ID allocation module 120 A/ 120 B/ 120 C.
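The pre-allocation approach above can be sketched as follows. This is a minimal, single-process illustration, not the patented implementation: the class names `SharedPool` and `NodeCache` are assumptions, while `next`, `remaining` and `last_allocated` mirror the cache.next, cache.remaining and pool.lastAllocated attributes described for FIG. 5.

```python
import threading

class SharedPool:
    """Sketch of shared pool 172: hands out contiguous batches of IDs
    by advancing last_allocated (here under a lock; the described system
    instead uses optimistic concurrency with conflict exceptions)."""
    def __init__(self, start=1, batch_size=1024):
        self.last_allocated = start - 1   # pool.lastAllocated
        self.batch_size = batch_size
        self._lock = threading.Lock()

    def allocate_from_pool(self):
        # Mirrors allocateFromPool(): returns (batchStart, batchSize).
        with self._lock:
            batch_start = self.last_allocated + 1
            self.last_allocated += self.batch_size
            return batch_start, self.batch_size

class NodeCache:
    """Sketch of a per-node in-memory cache (e.g., cache-A 122A) holding
    a batch of IDs retrieved from the shared pool."""
    def __init__(self, pool):
        batch_start, batch_size = pool.allocate_from_pool()
        self.next = batch_start        # cache.next
        self.remaining = batch_size    # cache.remaining

    def allocate(self, m):
        # Serve up to m IDs locally, without touching the shared pool.
        k = min(m, self.remaining)
        ids = list(range(self.next, self.next + k))
        self.next += k
        self.remaining -= k
        return ids
```

With a batch size of 1024, the first cache created receives IDs 1 to 1024 and the second receives 1025 to 2048, so both nodes can allocate concurrently without conflicting.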
- FIG. 3 is a flowchart of example process 300 for node 110 A/ 110 B/ 110 C to perform ID allocation in virtualized computing environment 100 .
- Example process 300 may include one or more operations, functions, or actions illustrated by one or more blocks, such as 310 to 340 . The various blocks may be combined into fewer blocks, divided into additional blocks, and/or eliminated depending on the desired implementation.
- Example process 300 may be performed by node 110 A/ 110 B/ 110 C, such as using ID allocation module 120 A/ 120 B/ 120 C, etc.
- node-A 110 A (“first node”) in the following. It should be understood that example process 300 may be similarly performed by node-B 110 B and node-C 110 C (“second node”).
- node-A 110 A retrieves batch of IDs 124 A from shared pool 172 to cache-A 122 A.
- the IDs may be retrieved to service future ID allocation requests from a distributed firewall controller (i.e., ID consumer).
- the IDs may be used for uniquely identifying firewall rules 262 across cluster 102 .
- IDs in batch 124 A may be exclusively allocated by node-A 110 A in a cluster-aware manner, and the same ID is not allocated to different objects by different nodes.
- In response to receiving a request for ID allocation from ID consumer 126A/128A, node-A 110A allocates ID(s) from batch 124A in cache-A 122A to object(s) for unique identification of those object(s) across cluster 102.
- A response that includes the allocated ID(s) is sent to ID consumer 126A/128A.
- ID allocation according to example process 300 may be implemented for identifying any suitable objects across cluster 102 .
- other example objects that require unique identification may include Network Address Translation (NAT) rules, logical switches, logical (distributed) routers, etc.
- Example ID consumers include management components associated with the objects, such as the distributed firewall controller in FIG. 2 , edge device, network gateway, logical switch manager, logical router manager, etc.
- multiple pools may be shared across cluster 102 , such as pool 172 for firewall rules 262 and a separate pool for NAT rules.
- node 110 A/ 110 B/ 110 C may maintain multiple caches to store different batches of IDs retrieved from the respective pools.
- FIG. 4 is a flowchart of example detailed process 400 for node 110 A/ 110 B/ 110 C to perform ID allocation in virtualized computing environment 100 .
- Example process 400 may include one or more operations, functions, or actions illustrated by one or more blocks, such as 410 to 460 . The various blocks may be combined into fewer blocks, divided into additional blocks, and/or eliminated depending on the desired implementation.
- Example process 400 may be performed by node 110 A/ 110 B/ 110 C using any suitable approach, such as ID allocation module 120 A/ 120 B/ 120 C, etc.
- FIG. 4 will be explained with reference to FIG. 5 , which is a schematic diagram illustrating example ID retrievals 500 from pool 172 to cache 122 A/ 122 B/ 122 C associated with node 110 A/ 110 B/ 110 C in virtualized computing environment 100 in FIG. 1 .
- cache 122 A/ 122 B/ 122 C is created for node 110 A/ 110 B/ 110 C with an initial batch of IDs.
- shared pool 172 represents a common pool of IDs shared by node-A 110 A, node-B 110 B and node-C 110 C in cluster 102 .
- shared pool 172 and cache 122 A/ 122 B/ 122 C may be implemented as objects or data structures having any suitable attributes.
- Cache 122A/122B/122C may also be characterized using attributes such as cache.remaining to indicate the quantity of unallocated IDs and cache.next to indicate the next unallocated ID in cache 122A/122B/122C.
- a first batch of IDs (see 510 ) may be retrieved from shared pool 172 to cache-A 122 A using any suitable approach, such as node-A 110 A invoking function allocateFromPool( ) that returns a result in the form of (batchStart, batchSize).
- batchStart represents the first value of the retrieved batch of IDs, and batchSize represents the number of IDs in the batch.
- a second batch of IDs may be retrieved from shared pool 172 to cache-B 122 B created for node-B 110 B by invoking allocateFromPool( ).
- IDs ranging from 1 to 1024 are stored in cache-A 122A; 1025 to 2048 in cache-B 122B; and 2049 to 3072 in cache-C 122C.
- each node 110 A/ 110 B/ 110 C may perform ID allocation from its own local cache 122 A/ 122 B/ 122 C in a distributed manner.
- node-A 110 A receives an ID allocation request from ID consumer 126 A to allocate M IDs, with M representing a requested quantity of IDs to be allocated.
- Two scenarios may arise depending on whether the requested quantity M exceeds cache.remaining; both are discussed further below.
- When cache.remaining is insufficient to meet a request (e.g., M = 1100), node-A 110A does not immediately retrieve the required (M - cache.remaining) IDs from shared pool 172 to meet the allocation request.
- this approach responds to ID consumer 126 A with the K available IDs without having to wait for node-A 110 A to retrieve more IDs.
- This allows ID consumer 126 A to start using the available IDs, as well as reduces the response time of node-A 110 A.
- This also reduces the number of times shared pool 172 is accessed, which in turn reduces the likelihood of conflicting with another concurrent access to shared pool 172 .
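The partial-response behavior above can be sketched as a pure function (an illustrative sketch; the function name and tuple-based cache state are assumptions): when the requested quantity M exceeds cache.remaining, the node responds immediately with the K available IDs rather than blocking on a round trip to the shared pool.

```python
def serve_request(cache_next, cache_remaining, m):
    """Allocate up to m IDs from a local cache described by
    (cache_next, cache_remaining). Returns the allocated IDs plus the
    updated cache state; any shortfall is served only after a fresh
    batch is retrieved from the shared pool."""
    k = min(m, cache_remaining)
    allocated = list(range(cache_next, cache_next + k))
    return allocated, cache_next + k, cache_remaining - k
```

For example, with 1024 IDs remaining and a request for M = 1100, the consumer immediately receives K = 1024 IDs and can start using them while the node refills its cache in the background.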
- any suitable modification may be made to example process 400 , such as by retrieving more IDs from shared pool 172 when cache.remaining is insufficient to meet a request.
- node-B 110 B may respond to requests from ID consumer 126 B/ 128 B by allocating IDs from cache-B 122 B, and node-C 110 C performing allocation from cache-C 122 C. Since cache-A 122 A, cache-B 122 B and cache-C 122 C each contain a range of IDs from shared pool 172 , node-A 110 A, node-B 110 B and node-C 110 C may perform ID allocation independently in a more efficient way compared to having to access shared pool 172 in response to each and every ID allocation request.
- node-A 110 A determines whether the retrieval from shared pool 172 is successful, such as by detecting an exception when attempting to update pool.lastAllocated associated with shared pool 172 .
- The exception (e.g., called “CommitConflictException,” “StaleObjectState” or “ConcurrentUpdate”) indicates that the retrieval is unsuccessful. This occurs when shared pool 172 is concurrently accessed by multiple threads that are all attempting to update pool.lastAllocated; only one thread will succeed while the others fail. As such, the retrieval is successful if the pool.lastAllocated attribute is successfully updated, and unsuccessful if the exception is detected.
- the exception may be caused by multiple threads executing on the same node (e.g., node-A 110 A), or multiple threads executing on different nodes (e.g., node-A 110 A and node-B 110 B).
- The term “thread” may refer generally to a thread of execution. Threads provide a way for a software program to split itself into multiple simultaneously running tasks. For example, node-A 110A may create multiple threads to process multiple allocation requests concurrently, such as 40 requests concurrently in the distributed firewall application in FIG. 2.
- node-A 110A detects an exception (see 540) when it accesses shared pool 172 concurrently with node-B 110B.
- node-B 110 B may have invoked the allocateFromPool( ) function just before node-A 110 A, and successfully retrieved a batch of IDs (see 550 ) from shared pool 172 .
- node-A 110 A finds its invocation of the allocateFromPool( ) function unsuccessful in response to detecting the exception.
- In response, node-A 110A may wait for a random period of time (e.g., thread.sleep(random_time)) before reattempting the retrieval.
- Example process 400 then proceeds to 425 , 430 and 435 .
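The retry-on-conflict behavior above can be sketched as follows. This is a hedged illustration: `ConcurrentUpdateError` stands in for the CommitConflictException/StaleObjectState/ConcurrentUpdate exceptions named in the description, and the back-off interval and attempt limit are assumed values.

```python
import random
import time

class ConcurrentUpdateError(Exception):
    """Raised when another thread updated pool.lastAllocated first."""

def retrieve_batch_with_retry(allocate_from_pool, max_attempts=5):
    """Attempt to retrieve a batch of IDs; on a concurrent-update
    conflict, sleep for a random period (thread.sleep(random_time))
    and retry."""
    for _ in range(max_attempts):
        try:
            return allocate_from_pool()
        except ConcurrentUpdateError:
            time.sleep(random.uniform(0, 0.05))  # randomized back-off
    raise RuntimeError("batch retrieval failed after %d attempts" % max_attempts)
```

Randomizing the back-off reduces the chance that the same competing nodes collide again on their next attempt.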
- node-A 110 A, node-B 110 B and node-C 110 C from cluster 102 may share multiple pools for supporting different applications that require unique ID allocation.
- a retrieval request may specify a particular shared pool 172 , such as in the form of allocateFromPool(poolID).
- a particular batchSize may be specified in a retrieval request, such as allocateFromPool(poolID,batchSize).
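A multi-pool variant of allocateFromPool(poolID, batchSize) can be sketched as below (illustrative only; the dictionary-based pool state is an assumption): each shared pool tracks its own lastAllocated value, so firewall-rule IDs and NAT-rule IDs are drawn from independent ranges.

```python
def allocate_from_pool(pools, pool_id, batch_size=1024):
    """Retrieve a batch from the named pool: `pools` maps each poolID
    to that pool's lastAllocated value. Returns (batchStart, batchSize)."""
    batch_start = pools[pool_id] + 1
    pools[pool_id] += batch_size
    return batch_start, batch_size
```

A node serving both applications would keep one local cache per pool, as noted above.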
- ID allocation may be performed in a lightweight, unmanaged manner that does not necessitate lifecycle management of IDs.
- ID leakage may occur during ID allocation.
- the term “leakage” may refer generally to the loss of IDs before they are consumed or allocated.
- lifecycle management of IDs is performed to manage temporary allocation and subsequent release of IDs. However, this creates additional processing burden for node 110 A/ 110 B/ 110 C and causes unnecessary delay to ID allocation.
- ID leakage may be tolerated to avoid the need for lifecycle management.
- any unconsumed IDs in cache 122 A/ 122 B/ 122 C may be lost once node 110 A/ 110 B/ 110 C restarts or fails.
- If node-A 110A restarts or fails, the remaining 974 IDs will be lost even though the IDs have not been allocated to any ID consumer.
- Instead of attempting to track the unconsumed IDs, node-A 110A simply retrieves a new batch of IDs from shared pool 172 the next time it receives a new ID allocation request.
- ID allocation module 120A should support approximately 300 ID allocations per minute to avoid, or reduce the likelihood of, adversely affecting the performance of node-A 110A.
- shared pool 172 may be implemented as a persistent entity that is common across cluster 102 and replicated on all nodes 110 A- 110 C.
- replication regions may be configured to each store a copy of shared pool 172 , such as a first replicated region for node-A 110 A, a second replicated region for node-B 110 B and a third replicated region for node-C 110 C.
- the regions are analogous to tables in a relational database and manage data in a distributed fashion as name/value pairs. This reduces the latency of data access from shared pool 172 by each node 110 A/ 110 B/ 110 C. Any changes made to shared pool 172 will be persisted across the different replicated regions.
- the above examples can be implemented by hardware (including hardware logic circuitry), software or firmware or a combination thereof.
- the above examples may be implemented by any suitable computing device, computer system, etc.
- the computing device may include processor(s), memory unit(s) and physical NIC(s) that may communicate with each other via a communication bus, etc.
- the computing device may include a non-transitory computer-readable medium having stored thereon instructions or program code that, when executed by the processor, cause the processor to perform processes described herein with reference to FIG. 1 to FIG. 5 .
- computing devices capable of acting as node 110 A/ 110 B/ 110 C may be deployed in virtualized computing environment 100 .
- Special-purpose hardwired circuitry may be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), and others.
- The term “processor” is to be interpreted broadly to include a processing unit, ASIC, logic unit, programmable gate array, etc.
- a virtualized computing instance may represent an addressable data compute node or isolated user space instance.
- any suitable technology may be used to provide isolated user space instances, not just hardware virtualization.
- Other virtualized computing instances may include containers (e.g., running on top of a host operating system without the need for a hypervisor or separate guest operating system, such as Docker containers, or implemented as operating-system-level virtualization), virtual private servers, client computers, etc.
- the virtual machines may also be complete computation environments, containing virtual equivalents of the hardware and system software components of a physical computing system.
- a “computer-readable storage medium”, as the term is used herein, includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant (PDA), mobile device, manufacturing tool, any device with a set of one or more processors, etc.).
- a computer-readable storage medium may include recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk or optical storage media, flash memory devices, etc.).
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Description
Claims (19)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201641021782 | 2016-06-24 | ||
IN201641021782 | 2016-06-24 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170371716A1 US20170371716A1 (en) | 2017-12-28 |
US11822970B2 true US11822970B2 (en) | 2023-11-21 |
Family
ID=60677532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/297,172 Active 2037-10-04 US11822970B2 (en) | 2016-06-24 | 2016-10-19 | Identifier (ID) allocation in a virtualized computing environment |
Country Status (1)
Country | Link |
---|---|
US (1) | US11822970B2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108415828B (en) * | 2018-01-23 | 2021-09-24 | 广州视源电子科技股份有限公司 | Program testing method, apparatus, readable storage medium and computer equipment |
US11082303B2 (en) * | 2019-07-22 | 2021-08-03 | Vmware, Inc. | Remotely hosted management of network virtualization |
CN111865677B (en) * | 2020-07-13 | 2022-11-04 | 苏州浪潮智能科技有限公司 | Device for identifying ID address of server node |
US12050567B2 (en) * | 2021-08-10 | 2024-07-30 | Palantir Technologies Inc. | Framework for live data migration |
US11716396B1 (en) * | 2021-08-27 | 2023-08-01 | Oracle International Corporation | System and method for providing unique identifiers for use with enterprise application environments |
US20230169114A1 (en) * | 2021-11-29 | 2023-06-01 | Sap Se | Ad hoc processing of graph data |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5884322A (en) * | 1994-05-24 | 1999-03-16 | Apple Computer, Inc. | Method and apparatus for creating and assigning unique identifiers for network entities and database items in a networked computer system |
US6457053B1 (en) * | 1998-09-21 | 2002-09-24 | Microsoft Corporation | Multi-master unique identifier allocation |
US6842789B1 (en) * | 1999-10-21 | 2005-01-11 | Sun Microsystems, Inc. | Method and apparatus for assigning unique device identifiers across a distributed computing system |
US7197549B1 (en) * | 2001-06-04 | 2007-03-27 | Cisco Technology, Inc. | On-demand address pools |
US20070276833A1 (en) * | 2006-05-10 | 2007-11-29 | Sybase, Inc. | System and Method for Assignment of Unique Identifiers in a Distributed Environment |
US20090213763A1 (en) * | 2008-02-22 | 2009-08-27 | Dunsmore Richard J | Method and system for dynamic assignment of network addresses in a communications network |
US20100189073A1 (en) * | 2009-01-26 | 2010-07-29 | Xg Technology, Inc. | Method for IP address management in networks using a proxy based approach in mobile IP telephony |
WO2011049553A1 (en) * | 2009-10-20 | 2011-04-28 | Hewlett-Packard Development Company, L.P. | Universally unique semantic identifiers |
US20110238793A1 (en) * | 2010-03-23 | 2011-09-29 | Juniper Networks, Inc. | Managing distributed address pools within network devices |
US8856540B1 (en) * | 2010-12-29 | 2014-10-07 | Amazon Technologies, Inc. | Customized ID generation |
US20140351396A1 (en) * | 2013-05-21 | 2014-11-27 | Vmware, Inc. | Hierarchical Network Managers |
US20160234161A1 (en) * | 2015-02-07 | 2016-08-11 | Vmware, Inc. | Multi-subnet participation for network gateway in a cloud environment |
US9813374B1 (en) * | 2015-06-10 | 2017-11-07 | Amazon Technologies, Inc. | Automated allocation using spare IP addresses pools |
- 2016-10-19: US application US15/297,172 filed (US11822970B2, active)
Non-Patent Citations (1)
Title |
---|
Devulapalli, Ananth; Dalessandro, Dennis; Wyckoff, Pete. "Data Structure Consistency Using Atomic Operations in Storage Devices" (Year: 2008). * |
Also Published As
Publication number | Publication date |
---|---|
US20170371716A1 (en) | 2017-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11500670B2 (en) | Computing service with configurable virtualization control levels and accelerated launches | |
US11822970B2 (en) | Identifier (ID) allocation in a virtualized computing environment | |
US20210247973A1 (en) | Virtualized file server user views | |
US11218364B2 (en) | Network-accessible computing service for micro virtual machines | |
US11340807B2 (en) | Mounting a shared data store of a server cluster on a client cluster for use as a remote data store | |
US8954704B2 (en) | Dynamic network adapter memory resizing and bounding for virtual function translation entry storage | |
CN104115121B | System and method for providing a scalable signaling mechanism for virtual machine (VM) migration in a middleware machine environment | |
US8937940B2 (en) | Optimized virtual function translation entry memory caching | |
US20220100550A1 (en) | Accelerator Loading Method, System, and Apparatus | |
JP2016170669A (en) | Load distribution function deployment method, load distribution function deployment device, and load distribution function deployment program | |
US20210405902A1 (en) | Rule-based provisioning for heterogeneous distributed systems | |
US10237346B2 (en) | Maintaining partition-tolerant distributed metadata | |
WO2020247235A1 (en) | Managed computing resource placement as a service for dedicated hosts | |
US11416267B2 (en) | Dynamic hardware accelerator selection and loading based on acceleration requirements | |
US20240231873A1 (en) | High availability control plane node for container-based clusters | |
US10609139B2 (en) | Coordinator ownership authentication in a distributed system with multiple storage object coordinators | |
US10474394B2 (en) | Persistent reservation emulation in shared virtual storage environments | |
US10620856B2 (en) | Input/output (I/O) fencing with persistent reservation information in shared virtual storage environments | |
US9417900B2 (en) | Method and system for automatic assignment and preservation of network configuration for a virtual machine | |
US11334380B2 (en) | Remote memory in hypervisor | |
US11086779B2 (en) | System and method of a highly concurrent cache replacement algorithm | |
US11683374B2 (en) | Containerized gateways and exports for distributed file systems | |
US20240354136A1 (en) | Scalable volumes for containers in a virtualized environment | |
US10671320B2 (en) | Clustered storage system configured with decoupling of process restart from in-flight command execution | |
CN112668000A (en) | Configuration data processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NICIRA, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUJAR, BHAGYASHREE;AMBARDEKAR, PRASHANT;GAURAV, PRAYAS;AND OTHERS;SIGNING DATES FROM 20160927 TO 20160928;REEL/FRAME:040417/0180 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCV | Information on status: appeal procedure |
Free format text: NOTICE OF APPEAL FILED |
|
STCV | Information on status: appeal procedure |
Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
|
STCV | Information on status: appeal procedure |
Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
|
STCV | Information on status: appeal procedure |
Free format text: BOARD OF APPEALS DECISION RENDERED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |