CN106933654B - Virtual machine starting method based on cache - Google Patents
- Publication number
- CN106933654B (application CN201710159705.1A)
- Authority
- CN
- China
- Prior art keywords
- physical node
- node
- virtual machine
- physical
- mirror image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
When a physical node receives a request to create a virtual machine, the method provided by the invention first checks whether the target image is cached on that node or on a sibling node; if the target image is cached and the node itself or the sibling node can meet the resource requirement, the virtual machine is started directly on that node. The method provided by the invention therefore reduces image transfers to the greatest possible extent. Steps S5 and S6 then plan the transmission path, selecting the data source and the intermediate switches, which optimizes bandwidth utilization across the network; the method therefore effectively improves the transmission speed. In addition, step S8 determines whether the target image should be cached after the virtual machine is started, so that the target image can be cached on a physical node in the cluster where the target node is located, which reduces image transfers for subsequent virtual machine boots. The method provided by the invention therefore effectively accelerates virtual machine startup.
Description
Technical Field
The invention relates to the field of cloud computing, in particular to a virtual machine starting method based on cache.
Background
In recent years, with the development of cloud computing technology, many traditional companies and individual users have moved their computing tasks from traditional servers to cloud computing centers. Through the virtualization technology of cloud computing, the utilization rate of IT resources (including CPU, memory, hard disk and network resources) is greatly improved and costs are reduced.
As an important technology in cloud computing, virtual machine technology has been widely applied to various cloud computing platforms as the carrier of general computing resources. For example, Azure, Amazon EC2, RackSpace and other well-known cloud computing platforms all adopt virtual machine technology to provide efficient and highly scalable computing services for users, and users have generally come to accept this model.
Although virtual machine technology has been widely accepted, the technology itself still presents several technical difficulties:
1) Concurrent use by users: cloud computing users are mainly divided into individual users and enterprise-level users. Enterprise users, because of their large traffic, usually apply for and start multiple virtual machines at one time, so a large number of images need to be transmitted in the data center within the same time period, which instantly increases the load on network bandwidth. The startup time of virtual machines during this period increases greatly, degrading the user experience.
2) Diversity of images: in cloud computing, users typically use a variety of systems, such as Windows, Linux and iOS; the Windows operating system alone is divided into versions such as WinXP, Win7, Win8 and Windows Server, while open-source Linux distributions are even more diverse (CentOS, Fedora, Ubuntu, etc.). This requires the data center network to hold all of these different images.
3) Special architecture of the data center: data centers generally use a special architecture, represented by the fat tree, rather than a conventional multi-way tree structure. Traditional caching solutions cannot make good use of the characteristics of such a topology.
4) Centralized processing mode: in a conventional data center, information is usually processed in a centralized manner; that is, there is a central control node in the network that holds global information (including the distribution of requests, network conditions, resource conditions, etc.). When the data center receives a request, the request is handed to this centralized control node, which decides where the requested virtual machine is instantiated and which resources it obtains. The advantage of a centralized scheme is that global information can be used to make an optimal decision, but its drawback is equally obvious: because the control node must track the global state of the network in real time, every scheduling decision and resource change in the network must be requested from or reported to it. This leads to the following problems:
1) These request and notification messages themselves add extra bandwidth load to the network, making an already congested network even more congested when there are many concurrent requests.
2) Data centers keep growing in scale, so maintaining the network information places ever greater demands on the computing power of the control node itself; if there are too many requests, instantiation delays are likely because the control node cannot process them in time.
3) Since all requests are handled by this single control node, a "single point of failure" is necessarily a problem that has to be considered.
In the virtual machine instantiation process, the target virtual machine image usually has to be obtained from a specific place (a resource pool) and then transmitted over the network to the target node for instantiation. Since a typical system image ranges from several hundred MB to several GB, transmitting multiple images at the same time consumes the bandwidth resources of the whole network, leading to long virtual machine instantiation times and a very poor user experience.
Disclosure of Invention
The invention provides a cache-based virtual machine startup method to overcome the defects of prior-art virtual machine startup methods, in which the virtual machine image must be transmitted between a data source and the target physical node, consuming network bandwidth resources and prolonging virtual machine instantiation time.
To achieve this purpose, the technical solution is as follows:
A cache-based virtual machine startup method comprises the following steps:
S1. A physical node A receives a request to create a virtual machine; if the target image is cached on physical node A and the resources of physical node A meet the requirements of the request, the virtual machine is started locally on physical node A directly from the target image; otherwise, step S2 is executed;
S2. The parent node of physical node A is searched, and those child nodes of the parent node that are in the same cluster as physical node A and have the target image cached are added to a list PM_list;
S3. a) If PM_list is not empty and the resources of a physical node B in PM_list meet the requirements of the request, the virtual machine is started on physical node B using the target image cached on physical node B;
b) If PM_list is empty, a physical node C whose resources can meet the requirements of the request is selected from the child nodes of the parent node of physical node A as the node on which the virtual machine is placed, and step S4 is then executed;
c) If no physical node whose resources can meet the requirements of the request is found in cases a) and b), the request is forwarded to the physical nodes of other clusters for processing;
S4. Starting from physical node C, the image information cached by the child nodes of the parent node is searched recursively to obtain all clusters containing physical nodes on which the target image is cached;
S5. The clusters obtained in step S4 are traversed, and a physical node is selected from each cluster as a data source according to the degree of data-transmission congestion within that cluster;
S6. All layers of the fat tree are iterated over, and among the switches at the same position in each layer, the switch with the smallest workload is selected to construct a transmission path from the data source to physical node C; the target image is then transmitted from the data source to physical node C over this path;
S7. Physical node C starts the virtual machine locally using the target image; it then judges whether physical node C has enough space to store the target image; if so, the target image is cached directly, otherwise step S8 is executed;
S8. The occurrence frequency of each kind of image cached on the physical nodes in the cluster where physical node C is located is calculated; if the occurrence frequency of the target image is higher than that of the other images, the target image is not cached; otherwise, the image with the highest occurrence frequency is deleted and the target image is cached on a physical node in the cluster where physical node C is located.
In the above solution, when a physical node receives a request to create a virtual machine, it first checks whether the target image is cached on itself or on a sibling node; if the target image is cached and the node itself or the sibling node can meet the resource requirement, the virtual machine is started directly on that node. The method provided by the invention therefore reduces image transfers to the greatest possible extent. Steps S5 and S6 then plan the transmission path, selecting the data source and the intermediate switches, which optimizes bandwidth utilization across the network; the method therefore effectively improves the transmission speed. In addition, step S8 determines whether the target image should be cached after the virtual machine is started, so that the target image can be cached on a physical node in the cluster where the target node is located, which reduces image transfers for subsequent virtual machine boots. The method provided by the invention therefore effectively accelerates virtual machine startup.
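For readability, the placement decision in steps S1-S3 can be summarised in the following minimal Python sketch. It is an illustrative sketch only, not part of the claimed method: the `Node` structure and the attribute names (`cached_images`, `free_resources`, `cluster`, `parent`, `children`) are assumptions introduced here purely for explanation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Node:
    name: str
    cluster: str
    cached_images: Set[str]
    free_resources: int
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

def place_vm(node_a: Node, target_image: str, required: int) -> str:
    """Sketch of steps S1-S3: prefer nodes that already cache the target image."""
    # S1: start locally if node A caches the image and has enough free resources.
    if target_image in node_a.cached_images and node_a.free_resources >= required:
        return f"start on {node_a.name} (local cache hit)"

    # S2: PM_list = children of A's parent that are in A's cluster and cache the image.
    pm_list = [n for n in node_a.parent.children
               if n.cluster == node_a.cluster and target_image in n.cached_images]

    # S3 a): start on a sibling that caches the image and has enough resources.
    for node_b in pm_list:
        if node_b.free_resources >= required:
            return f"start on {node_b.name} (sibling cache hit)"

    # S3 b): otherwise choose any sibling with enough resources as node C;
    # the image is then fetched via steps S4-S6 and possibly cached via S7-S8.
    for node_c in node_a.parent.children:
        if node_c.free_resources >= required:
            return f"start on {node_c.name} after fetching the image (steps S4-S8)"

    # S3 c): no suitable node in this cluster; forward the request to another cluster.
    return "forward the request to another cluster"
```

In this sketch the branch that returns "after fetching the image" is the point at which steps S4-S8 (source selection, path construction and caching) would be carried out.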
Preferably, saying that the resources of a physical node can meet the requirements of the request means that the node satisfies the following condition: the amount of computing resources remaining on the physical node is greater than or equal to the amount of computing resources required by the target image.
Compared with the prior art, the invention has the beneficial effects that:
When a physical node receives a request to create a virtual machine, the method provided by the invention first checks whether the target image is cached on that node or on a sibling node; if the target image is cached and the node itself or the sibling node can meet the resource requirement, the virtual machine is started directly on that node. The method provided by the invention therefore reduces image transfers to the greatest possible extent. In addition, step S8 determines whether the target image should be cached after the virtual machine is started, so that the target image can be cached on a physical node in the cluster where the target node is located, which reduces image transfers for subsequent virtual machine boots. The method provided by the invention therefore effectively accelerates virtual machine startup.
Drawings
FIG. 1 is a schematic diagram of steps S1-S3.
FIG. 2 is a schematic diagram of steps S4-S6.
FIG. 3 is a schematic diagram of steps S7-S8.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent.
the invention is further illustrated below with reference to the figures and examples.
Example 1
As shown in FIG. 1, FIG. 2 and FIG. 3, the method provided by the invention comprises the following steps:
S1. A physical node A receives a request to create a virtual machine; if the target image is cached on physical node A and the resources of physical node A meet the requirements of the request, the virtual machine is started locally on physical node A directly from the target image; otherwise, step S2 is executed;
S2. The parent node of physical node A is searched, and those child nodes of the parent node that are in the same cluster as physical node A and have the target image cached are added to a list PM_list;
S3. a) If PM_list is not empty and the resources of a physical node B in PM_list meet the requirements of the request, the virtual machine is started on physical node B using the target image cached on physical node B;
b) If PM_list is empty, a physical node C whose resources can meet the requirements of the request is selected from the child nodes of the parent node of physical node A as the node on which the virtual machine is placed, and step S4 is then executed;
c) If no physical node whose resources can meet the requirements of the request is found in cases a) and b), the request is forwarded to the physical nodes of other clusters for processing;
S4. Starting from physical node C, the image information cached by the child nodes of the parent node is searched recursively to obtain all clusters containing physical nodes on which the target image is cached;
S5. The clusters obtained in step S4 are traversed, and a physical node is selected from each cluster as a data source according to the degree of data-transmission congestion within that cluster;
S6. All layers of the fat tree are iterated over, and among the switches at the same position in each layer, the switch with the smallest workload is selected to construct a transmission path from the data source to physical node C; the target image is then transmitted from the data source to physical node C over this path (see the path-selection sketch following this list);
S7. Physical node C starts the virtual machine locally using the target image; it then judges whether physical node C has enough space to store the target image; if so, the target image is cached directly, otherwise step S8 is executed;
S8. The occurrence frequency of each kind of image cached on the physical nodes in the cluster where physical node C is located is calculated; if the occurrence frequency of the target image is higher than that of the other images, the target image is not cached; otherwise, the image with the highest occurrence frequency is deleted and the target image is cached on a physical node in the cluster where physical node C is located.
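The following sketch, referenced after step S6 above, illustrates one way steps S5 and S6 could be realised, assuming each cluster reports a congestion metric and each fat-tree layer exposes a workload counter per candidate switch. These metrics, the per-node `load` field, and the layered representation of candidate switches are illustrative assumptions and are not prescribed by the method itself.

```python
from typing import Dict, List

def select_data_source(clusters: List[Dict]) -> Dict:
    """S5: among the clusters caching the target image, pick a node from the
    cluster whose internal data transmission is currently least congested."""
    best_cluster = min(clusters, key=lambda c: c["congestion"])
    # Within that cluster, prefer the least-loaded node holding the image
    # (per-node load is an illustrative refinement, not required by the method).
    return min(best_cluster["nodes_with_image"], key=lambda n: n["load"])

def build_path(fat_tree_layers: List[List[Dict]]) -> List[str]:
    """S6: for every layer of the fat tree, pick the switch with the smallest
    workload among the candidate switches at the same position in that layer."""
    return [min(layer, key=lambda s: s["workload"])["id"] for layer in fat_tree_layers]

# Illustrative use (all figures are invented for the example):
clusters = [
    {"congestion": 0.7, "nodes_with_image": [{"id": "pm12", "load": 0.5}]},
    {"congestion": 0.2, "nodes_with_image": [{"id": "pm31", "load": 0.3},
                                             {"id": "pm32", "load": 0.1}]},
]
layers = [
    [{"id": "edge-1", "workload": 3}, {"id": "edge-2", "workload": 1}],
    [{"id": "agg-1", "workload": 5}, {"id": "agg-2", "workload": 2}],
    [{"id": "core-1", "workload": 4}],
]
source = select_data_source(clusters)   # -> {"id": "pm32", "load": 0.1}
path = build_path(layers)               # -> ["edge-2", "agg-2", "core-1"]
```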
In the above solution, when a physical node receives a request to create a virtual machine, it first checks whether the target image is cached on itself or on a sibling node; if the target image is cached and the node itself or the sibling node can meet the resource requirement, the virtual machine is started directly on that node. The method provided by the invention therefore reduces image transfers to the greatest possible extent. Steps S5 and S6 then plan the transmission path, selecting the data source and the intermediate switches, which optimizes bandwidth utilization across the network; the method therefore effectively improves the transmission speed. In addition, step S8 determines whether the target image should be cached after the virtual machine is started, so that the target image can be cached on a physical node in the cluster where the target node is located, which reduces image transfers for subsequent virtual machine boots. The method provided by the invention therefore effectively accelerates virtual machine startup.
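A minimal sketch of the cache decision in steps S7 and S8 follows, assuming the cluster's caches are represented as a mapping from node name to the set of image identifiers it holds. The tie-breaking rule, the choice of which replica of the most frequent image is evicted, and the node that receives the target image are assumptions made for illustration; the method only specifies that the most frequent image is deleted and that the target image is cached within the cluster.

```python
from collections import Counter
from typing import Dict, Set

def maybe_cache(cluster_caches: Dict[str, Set[str]], node_c: str,
                target_image: str, node_c_has_space: bool) -> None:
    """Sketch of steps S7-S8: decide whether and where to keep the target image."""
    # S7: if node C has enough free space, cache the target image directly.
    if node_c_has_space:
        cluster_caches[node_c].add(target_image)
        return

    # S8: count how often each image is cached across the cluster.
    freq = Counter(img for images in cluster_caches.values() for img in images)
    if not freq:
        return  # nothing cached anywhere and no free space on node C: skip caching

    # If the target image is already the most frequently cached image in the
    # cluster, enough replicas exist and it is not cached again.
    if freq[target_image] >= max(freq.values()):
        return

    # Otherwise evict one replica of the most frequent (most redundant) image
    # from some node in the cluster and cache the target image on that node
    # (which replica is evicted, and which node receives the image, is an assumption).
    most_frequent, _ = freq.most_common(1)[0]
    for node, images in cluster_caches.items():
        if most_frequent in images:
            images.discard(most_frequent)
            images.add(target_image)
            return

# Illustrative use: "win7" is the most replicated image, so one replica is
# replaced by the newly transmitted "ubuntu" image.
caches = {"pmC": {"winxp"}, "pm2": {"win7", "centos"}, "pm3": {"win7"}}
maybe_cache(caches, "pmC", "ubuntu", node_c_has_space=False)
```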
In a specific implementation process, saying that the resources of a physical node can meet the requirements of the request means that the node satisfies the following condition: the amount of computing resources remaining on the physical node is greater than or equal to the amount of computing resources required by the target image.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (2)
1. A cache-based virtual machine startup method, characterized in that the method comprises the following steps:
S1. A physical node A receives a request to create a virtual machine; if the target image is cached on physical node A and the resources of physical node A meet the requirements of the request, the virtual machine is started locally on physical node A directly from the target image; otherwise, step S2 is executed;
S2. The parent node of physical node A is searched, and those child nodes of the parent node that are in the same cluster as physical node A and have the target image cached are added to a list PM_list;
S3. a) If PM_list is not empty and the resources of a physical node B in PM_list meet the requirements of the request, the virtual machine is started on physical node B using the target image cached on physical node B;
b) If PM_list is empty, a physical node C whose resources can meet the requirements of the request is selected from the child nodes of the parent node of physical node A as the node on which the virtual machine is placed, and step S4 is then executed;
c) If no physical node whose resources can meet the requirements of the request is found in cases a) and b), the request is forwarded to the physical nodes of other clusters for processing;
S4. Starting from physical node C, the image information cached by the child nodes of the parent node of physical node C is searched recursively to obtain all clusters containing physical nodes on which the target image is cached;
S5. The clusters obtained in step S4 are traversed, and a physical node is selected from each cluster as a data source according to the degree of data-transmission congestion within that cluster;
S6. All layers of the fat tree are iterated over, and among the switches at the same position in each layer, the switch with the smallest workload is selected to construct a transmission path from the data source to physical node C; the target image is then transmitted from the data source to physical node C over this path;
S7. Physical node C starts the virtual machine locally using the target image; it then judges whether physical node C has enough space to store the target image; if so, the target image is cached directly, otherwise step S8 is executed;
S8. The occurrence frequency of each kind of image cached on the physical nodes in the cluster where physical node C is located is calculated; if the occurrence frequency of the target image is higher than that of the other images, the target image is not cached; otherwise, the image with the highest occurrence frequency is deleted and the target image is cached on a physical node in the cluster where physical node C is located.
2. The cache-based virtual machine startup method according to claim 1, characterized in that: saying that the resources of a physical node can meet the requirements of the request means that the node satisfies the following condition: the amount of computing resources remaining on the physical node is greater than or equal to the amount of computing resources required by the target image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710159705.1A CN106933654B (en) | 2017-03-17 | 2017-03-17 | Virtual machine starting method based on cache |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710159705.1A CN106933654B (en) | 2017-03-17 | 2017-03-17 | Virtual machine starting method based on cache |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106933654A CN106933654A (en) | 2017-07-07 |
CN106933654B true CN106933654B (en) | 2020-08-28 |
Family
ID=59433259
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710159705.1A Active CN106933654B (en) | 2017-03-17 | 2017-03-17 | Virtual machine starting method based on cache |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106933654B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107463402B (en) * | 2017-07-31 | 2018-09-14 | 腾讯科技(深圳)有限公司 | The operation method and device of virtual opetrating system |
CN110704157B (en) * | 2019-09-12 | 2023-06-30 | 深圳市元征科技股份有限公司 | Application starting method, related device and medium |
CN112596825B (en) * | 2020-11-26 | 2022-04-01 | 新华三大数据技术有限公司 | Cloud desktop starting method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104065547A (en) * | 2014-06-23 | 2014-09-24 | 西安电子科技大学昆山创新研究院 | A method for selecting physical hosts inside a computing center |
CN102629941B (en) * | 2012-03-20 | 2014-12-31 | 武汉邮电科学研究院 | Caching method of a virtual machine mirror image in cloud computing system |
WO2015004575A1 (en) * | 2013-07-11 | 2015-01-15 | International Business Machines Corporation | Virtual machine backup |
CN102843426B (en) * | 2012-08-09 | 2015-11-18 | 网宿科技股份有限公司 | Based on Web cache resources shared system and the method for intelligent father node |
CN105718280A (en) * | 2015-06-24 | 2016-06-29 | 乐视云计算有限公司 | Method and management platform for accelerating IO of virtual machine |
CN103440157B (en) * | 2013-06-25 | 2016-12-28 | 百度在线网络技术(北京)有限公司 | A kind of method and apparatus of the template for obtaining virtual machine |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9471373B2 (en) * | 2011-09-24 | 2016-10-18 | Elwha Llc | Entitlement vector for library usage in managing resource allocation and scheduling based on usage and priority |
CN103560967B (en) * | 2013-10-17 | 2016-06-01 | 电子科技大学 | The virtual data center mapping method of a kind of business demand perception |
- 2017-03-17: CN application CN201710159705.1A filed; granted as patent CN106933654B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102629941B (en) * | 2012-03-20 | 2014-12-31 | 武汉邮电科学研究院 | Caching method of a virtual machine mirror image in cloud computing system |
CN102843426B (en) * | 2012-08-09 | 2015-11-18 | 网宿科技股份有限公司 | Based on Web cache resources shared system and the method for intelligent father node |
CN103440157B (en) * | 2013-06-25 | 2016-12-28 | 百度在线网络技术(北京)有限公司 | A kind of method and apparatus of the template for obtaining virtual machine |
WO2015004575A1 (en) * | 2013-07-11 | 2015-01-15 | International Business Machines Corporation | Virtual machine backup |
CN104065547A (en) * | 2014-06-23 | 2014-09-24 | 西安电子科技大学昆山创新研究院 | A method for selecting physical hosts inside a computing center |
CN105718280A (en) * | 2015-06-24 | 2016-06-29 | 乐视云计算有限公司 | Method and management platform for accelerating IO of virtual machine |
Non-Patent Citations (3)
Title |
---|
Multi-incremental virtual machine startup system based on shadow caching; Hu Xiukun; China Master's Theses Full-text Database, Information Science and Technology Series (Monthly); 2014-06-15 (No. 06); I137-16 *
Research on a heuristic P2P resource search algorithm based on fat trees; Ge Xiangyou; Journal of Guangxi University for Nationalities (Natural Science Edition); 2013-08-31; Vol. 19, No. 3; 76-80 *
Research on data center network topology; Wan Haitao; Computer Knowledge and Technology; 2016-07-31; Vol. 12, No. 21; 43-45 *
Also Published As
Publication number | Publication date |
---|---|
CN106933654A (en) | 2017-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11687555B2 (en) | Conditional master election in distributed databases | |
Bunyakitanon et al. | End-to-end performance-based autonomous VNF placement with adopted reinforcement learning | |
US10348810B1 (en) | Scalable distributed computations utilizing multiple distinct clouds | |
CN108737270B (en) | Resource management method and device for server cluster | |
US10243878B2 (en) | Fog computing network resource partitioning | |
US9720724B2 (en) | System and method for assisting virtual machine instantiation and migration | |
KR102273413B1 (en) | Dynamic scheduling of network updates | |
US8965845B2 (en) | Proactive data object replication in named data networks | |
US10366111B1 (en) | Scalable distributed computations utilizing multiple distinct computational frameworks | |
CN111165019B (en) | Controller in access network | |
US9705750B2 (en) | Executing data stream processing applications in dynamic network environments | |
US11314545B2 (en) | Predicting transaction outcome based on artifacts in a transaction processing environment | |
US10983828B2 (en) | Method, apparatus and computer program product for scheduling dedicated processing resources | |
US10776404B2 (en) | Scalable distributed computations utilizing multiple distinct computational frameworks | |
CN111641567B (en) | Dynamic network bandwidth allocation and management based on centralized controller | |
KR20210056655A (en) | Method for selecting predict-based migration candidate and target on cloud edge | |
CN106933654B (en) | Virtual machine starting method based on cache | |
JP6326062B2 (en) | Transparent routing of job submissions between different environments | |
US11550505B1 (en) | Intra-shard parallelization of data stream processing using virtual shards | |
US20220229689A1 (en) | Virtualization platform control device, virtualization platform control method, and virtualization platform control program | |
CN112655185A (en) | Apparatus, method and storage medium for service distribution in software defined network | |
CN115604273A (en) | Method, apparatus and program product for managing computing systems | |
US11323512B2 (en) | Peer to peer infrastructure management architecture | |
US11687269B2 (en) | Determining data copy resources | |
US20240241770A1 (en) | Workload summarization for congestion avoidance in computer servers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||