
CN114116186B - Dynamic scheduling method and device for resources - Google Patents

Dynamic scheduling method and device for resources

Info

Publication number
CN114116186B
Authority
CN
China
Prior art keywords
dimension
service
resource
related information
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010871037.7A
Other languages
Chinese (zh)
Other versions
CN114116186A (en)
Inventor
方艾
徐雄
金铎
袁立宇
张玉忠
梁冰
杨豪杰
谭晓敏
赵华
李长江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN202010871037.7A
Publication of CN114116186A
Application granted
Publication of CN114116186B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a method and a device for dynamic scheduling of resources. The resource dynamic scheduling device collects related information of the service nodes and links experienced by a business processing flow, generates a dynamic view of the resource distribution according to the related information, and processes the dynamic view data with a trained decision model to generate a configuration result, which is then used to perform resource optimization in a first preset dimension. The method and device clarify the association between resource allocation and the services that actually occur, accurately identify user-experience indicators and the corresponding resource bottlenecks, and accurately capture and predict the resource demand of each service, so that cloud resources are used more effectively, user experience is improved, and the delivery of 5G services is effectively guaranteed.

Description

Dynamic scheduling method and device for resources
Technical Field
The disclosure relates to the field of cloud computing, and in particular to a method and a device for dynamically scheduling resources.
Background
In a cloud-deployed service environment, for example a service application system in a 5G-related cloud environment, the relevant cloud resources are dynamically scheduled and allocated while users actually use the system, and edge, terminal, and cloud must coordinate throughout, so accurate resource scheduling is required. The related art manages resources by predicting resource usage and establishing association factors between resources.
Disclosure of Invention
The inventors have found that, in the related art, resource prediction is relatively coarse-grained: it is oriented only toward the business as a whole and does not account for differences such as user distribution, while the process of establishing correlation factors between resources is complex and difficult to update in real time. This leads to the following problems:
1) Resource allocation does not match the services that actually occur: some resources become hot spots and congested, while other resources sit idle.
2) The resource demand of the business cannot be accurately captured or predicted, making accurate capacity planning difficult.
3) Dynamic allocation and deployment of resources is difficult to achieve.
Accordingly, the present disclosure provides a resource scheduling scheme that can effectively achieve optimized, dynamic scheduling of resources.
According to a first aspect of an embodiment of the present disclosure, there is provided a method for dynamically scheduling resources, including: collecting related information of the service nodes and links experienced by a service processing flow; generating a dynamic view of resource distribution according to the related information; and processing the dynamic view data by using a trained decision model to generate a configuration result, and performing resource optimization in a first preset dimension by using the configuration result.
In some embodiments, generating a dynamic view of the resource distribution from the related information comprises: sorting the related information by time to determine the call link that the business process experiences; counting the number of requests of each service node per unit time to determine the load of each service node in a second preset dimension; and forming the dynamic view according to the call link and the load of each service node in the second preset dimension.
In some embodiments, the second preset dimension includes at least one of a business dimension, a user dimension, a geographic dimension, or a time dimension.
In some embodiments, the related information includes at least one of processing time consumption of the node, inter-node jump time consumption, or resource consumption information of the node.
In some embodiments, the first preset dimension includes at least one of a geographic distribution dimension, a time dimension, or an edge cloud collaboration dimension.
In some embodiments, the configuration result includes at least one of a number of service nodes, a central processing unit performance parameter, a memory performance parameter, or a storage space performance parameter.
According to a second aspect of the embodiments of the present disclosure, there is provided a dynamic scheduling apparatus for resources, including: an acquisition module configured to acquire related information of the service nodes and links experienced by the business processing flow; an analysis module configured to generate a dynamic view of the resource distribution from the related information; and a scheduling module configured to process the dynamic view data using a trained decision model to generate a configuration result, and to perform resource optimization in a first preset dimension using the configuration result.
In some embodiments, the analysis module is configured to sort the relevant information in time to determine a call link undergone by the service process, count the number of requests per unit time of each service node to determine a load of each service node in a second preset dimension, and form the dynamic view according to the call link and the load of each service node in the second preset dimension.
In some embodiments, the second preset dimension includes at least one of a business dimension, a user dimension, a geographic dimension, or a time dimension.
In some embodiments, the related information includes at least one of processing time consumption of the node, inter-node jump time consumption, or resource consumption information of the node.
In some embodiments, the first preset dimension includes at least one of a geographic distribution dimension, a time dimension, or an edge cloud collaboration dimension.
In some embodiments, the configuration result includes at least one of a number of service nodes, a central processing unit performance parameter, a memory performance parameter, or a storage space performance parameter.
According to a third aspect of the embodiments of the present disclosure, there is provided a dynamic scheduling apparatus for resources, including: a memory configured to store instructions; a processor coupled to the memory, the processor configured to perform a method according to any of the embodiments described above based on instructions stored in the memory.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium, wherein the computer readable storage medium stores computer instructions which, when executed by a processor, implement a method as referred to in any of the embodiments above.
Other features of the present disclosure and its advantages will become apparent from the following detailed description of exemplary embodiments of the disclosure, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow diagram of a method for dynamic scheduling of resources according to one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a dynamic scheduling apparatus for resources according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a dynamic scheduling apparatus for resources according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram before dynamic scheduling of resources according to one embodiment of the present disclosure;
fig. 5 is a schematic diagram after dynamic scheduling of resources according to one embodiment of the present disclosure.
It should be understood that the dimensions of the various elements shown in the figures are not drawn to scale. Further, the same or similar reference numerals denote the same or similar members.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. The description of the exemplary embodiments is merely illustrative, and is in no way intended to limit the disclosure, its application, or uses. The present disclosure may be embodied in many different forms and is not limited to the embodiments described herein. These embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that: the relative arrangement of parts and steps, the composition of materials, and the numerical values set forth in these examples should be construed as merely illustrative, and not limiting unless specifically stated otherwise.
The use of the terms "comprising" or "including" and the like in this disclosure means that elements preceding the term encompass the elements recited after the term, and does not exclude the possibility of also encompassing other elements.
All terms (including technical or scientific terms) used in this disclosure have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs, unless specifically defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
Fig. 1 is a flow diagram of a method for dynamic scheduling of resources according to one embodiment of the present disclosure. In some embodiments, the following resource dynamic scheduling method steps are performed by the resource dynamic scheduling device.
In step 101, information about service nodes and links experienced by the business process flow is collected.
In some embodiments, the related information includes at least one of processing time consumption of the node, inter-node jump time consumption, or resource consumption information of the node.
For example, call-chain tracing based on the OpenTracing standard, e.g., using Jaeger, can be used to record trace information for the requests processed by each service node (or each functional module) in the system. An ID is typically assigned to each request so that its records can be correlated. The collected information includes the source node of the request, the processing time at each node, the next node that processes the request, the time spent by the request in jumping between nodes, and so on. By correlating these records, the nodes and links experienced by each request, together with the related information, can be determined.
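As a minimal illustration of this correlation step (a sketch under assumed field names, not the patent's implementation), the Python snippet below represents each trace record as a plain dictionary carrying a shared request ID and reconstructs, per request, the ordered node sequence, the per-node processing time, and the inter-node jump time. The fields `trace_id`, `node`, `start_ms`, and `end_ms` are assumptions chosen for the example.

```python
from collections import defaultdict

# Hypothetical trace records as a tracer such as Jaeger might export them:
# one record per (request, node) pair, correlated by a shared trace_id.
records = [
    {"trace_id": "req-1", "node": "A2", "start_ms": 0,  "end_ms": 12},
    {"trace_id": "req-1", "node": "B4", "start_ms": 15, "end_ms": 40},
    {"trace_id": "req-1", "node": "C1", "start_ms": 44, "end_ms": 70},
    {"trace_id": "req-2", "node": "A2", "start_ms": 5,  "end_ms": 20},
]

def correlate(records):
    """Group records by request ID and sort each group by start time,
    yielding the call link (node sequence) plus per-node processing time
    and inter-node jump time for every request."""
    by_request = defaultdict(list)
    for rec in records:
        by_request[rec["trace_id"]].append(rec)

    links = {}
    for trace_id, recs in by_request.items():
        recs.sort(key=lambda r: r["start_ms"])
        hops = []
        for prev, cur in zip(recs, recs[1:]):
            hops.append({
                "from": prev["node"],
                "to": cur["node"],
                "jump_ms": cur["start_ms"] - prev["end_ms"],
            })
        links[trace_id] = {
            "nodes": [r["node"] for r in recs],
            "processing_ms": {r["node"]: r["end_ms"] - r["start_ms"] for r in recs},
            "hops": hops,
        }
    return links

print(correlate(records)["req-1"]["nodes"])  # ['A2', 'B4', 'C1']
```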
In addition, system monitoring can be implemented with solutions such as Prometheus or Zabbix. The collected information mainly includes real-time operating data of the node's or module's CPU (central processing unit), memory, network, disk, and so on.
In step 102, a dynamic view of the resource distribution is generated from the relevant information.
In some embodiments, the call links experienced by the business processes are determined from the obtained related information by correlating the records that share a request ID and ordering them by time. Next, the number of requests per unit time of each service node (or functional module) is counted (e.g., with statistics grouped by service, traffic type, etc.) to determine the load of each service node (or functional module) in a second preset dimension. For example, the second preset dimension includes at least one of a service dimension, a user dimension, a geographic dimension, or a time dimension. The dynamic view is then formed from the call links and the load of each service node in the second preset dimension.
For example, the corresponding dynamic view is generated by forming connections between the service nodes and overlaying the load and service type on them, as in the sketch below.
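A corresponding sketch of the view-building step, again with assumed field names rather than the patent's data model: requests are counted per node within a (service, region, minute) bucket as an example of the second preset dimension, while node-to-node edges accumulate the call-link structure.

```python
from collections import Counter, defaultdict

# Hypothetical per-request summaries derived from correlated traces;
# the fields service, region, minute, and nodes are assumptions for the example.
requests = [
    {"service": "video", "region": "east", "minute": 0, "nodes": ["A2", "B4", "C1"]},
    {"service": "video", "region": "east", "minute": 0, "nodes": ["A2", "B4", "C1"]},
    {"service": "chat",  "region": "west", "minute": 0, "nodes": ["A1", "B3"]},
]

def build_dynamic_view(requests):
    """Count requests per node in each (service, region, minute) bucket and
    collect node-to-node edges, i.e. the load overlaid on the call graph."""
    load = defaultdict(Counter)   # node -> Counter over (service, region, minute)
    edges = Counter()             # (from_node, to_node) -> request count
    for req in requests:
        bucket = (req["service"], req["region"], req["minute"])
        for node in req["nodes"]:
            load[node][bucket] += 1
        for a, b in zip(req["nodes"], req["nodes"][1:]):
            edges[(a, b)] += 1
    return {"load": load, "edges": edges}

view = build_dynamic_view(requests)
print(view["edges"][("A2", "B4")])  # 2 requests traversed this link
```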
In step 103, the dynamic view data is processed using the trained decision model to generate configuration results, and resource optimization is performed in a first preset dimension using the configuration results.
In some embodiments, the first preset dimension includes at least one of a geographic distribution dimension, a time dimension, or an edge cloud collaboration dimension.
In some embodiments, the configuration results include at least one of a number of service nodes, a CPU performance parameter, a memory performance parameter, or a storage space performance parameter.
It should be noted that the decision model is obtained by training a common model, such as a decision tree, a neural network, linear regression, or a combination thereof, on historical data (test data is used when historical data is unavailable). The dynamic view data is input into the decision model, which outputs the index parameters required to generate the configuration file, such as the number of service nodes and the CPU, memory, and storage performance parameters. Before being input into the decision model, the dynamic view data may be converted as needed into variables or feature values that the decision model can process.
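A minimal sketch of this decision step, assuming scikit-learn is available; the feature layout (per-node load and average link latency) and the two output parameters (replica count and CPU cores) are illustrative choices, not the patent's specification.

```python
from sklearn.tree import DecisionTreeRegressor

# Hypothetical historical samples: [node_load_rps, avg_link_latency_ms]
X_history = [[120, 30], [400, 80], [900, 200], [60, 10]]
# Matching historical configurations: [replica_count, cpu_cores]
y_history = [[1, 2], [2, 4], [4, 8], [1, 1]]

# Train a decision tree on the historical (features, configuration) pairs.
model = DecisionTreeRegressor(max_depth=3)
model.fit(X_history, y_history)

# Convert the current dynamic view into the same feature layout and predict.
current_features = [[650, 150]]
replicas, cpu_cores = model.predict(current_features)[0]
print(int(round(replicas)), int(round(cpu_cores)))
```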
In some embodiments, a configuration file, such as YAML-formatted resource configuration and capacity planning rules, may be generated from the output of the decision model to optimally configure the system.
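For instance, such a file could be emitted with PyYAML as sketched below; the keys are illustrative only and do not reflect a schema defined by this disclosure.

```python
import yaml

# Hypothetical configuration result produced by the decision model.
config = {
    "service": "order-api",
    "replicas": 4,
    "resources": {"cpu": "8", "memory": "16Gi", "storage": "200Gi"},
    "placement": {"region": "east", "edge_offload": True},
}

# Write the capacity-planning rules as a YAML configuration file.
with open("capacity-plan.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

print(yaml.safe_dump(config, sort_keys=False))
```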
With the resource dynamic scheduling method provided by this embodiment, the association between resource allocation and the services that actually occur can be clarified, user-experience indicators and their corresponding resource bottlenecks can be accurately identified, and the resource demand of each service can be accurately captured and predicted, so that cloud resources are used more effectively, user experience is improved, and the delivery of 5G services is effectively guaranteed.
Fig. 2 is a schematic diagram of a dynamic scheduling apparatus for resources according to an embodiment of the present disclosure. As shown in fig. 2, the resource dynamic scheduling device comprises an acquisition module 21, an analysis module 22 and a scheduling module 23.
The acquisition module 21 is configured to acquire information about service nodes and links experienced by the business process flow.
In some embodiments, the related information includes at least one of processing time consumption of the node, inter-node jump time consumption, or resource consumption information of the node.
The analysis module 22 is configured to generate a dynamic view of the resource distribution from the relevant information;
in some embodiments, the analysis module determines the call links experienced by the business processes from the obtained related information by correlating the records that share a request ID and ordering them by time. Next, it counts the number of requests per unit time of each service node (or functional module) (e.g., with statistics grouped by service, traffic type, etc.) to determine the load of each service node (or functional module) in a second preset dimension. For example, the second preset dimension includes at least one of a service dimension, a user dimension, a geographic dimension, or a time dimension. The dynamic view is then formed from the call links and the load of each service node in the second preset dimension.
For example, the corresponding dynamic view is generated by forming connections between the service nodes and overlaying the load and service type on them.
The scheduling module 23 is configured to process the dynamic view data using the trained decision model to generate configuration results and to use the configuration results for resource optimization in a first preset dimension.
In some embodiments, the first preset dimension includes at least one of a geographic distribution dimension, a time dimension, or an edge cloud collaboration dimension.
In some embodiments, the configuration result includes at least one of a number of service nodes, a central processing unit performance parameter, a memory performance parameter, or a storage space performance parameter.
With the resource dynamic scheduling device provided by this embodiment, the association between resource allocation and the services that actually occur can be clarified, user-experience indicators and their corresponding resource bottlenecks can be accurately identified, and the resource demand of each service can be accurately captured and predicted, so that cloud resources are used more effectively, user experience is improved, and the delivery of 5G services is effectively guaranteed.
Fig. 3 is a schematic structural view of a dynamic scheduling apparatus for resources according to another embodiment of the present disclosure. As shown in fig. 3, the resource dynamic scheduling apparatus includes a memory 31 and a processor 32.
The memory 31 is used for storing instructions. The processor 32 is coupled to the memory 31. The processor 32 is configured to perform a method as referred to in any of the embodiments of fig. 1 based on the instructions stored by the memory.
As shown in fig. 3, the dynamic resource scheduling device further includes a communication interface 33 for information interaction with other devices. Meanwhile, the dynamic resource scheduling device further comprises a bus 34, and the processor 32, the communication interface 33 and the memory 31 complete communication with each other through the bus 34.
The memory 31 may include high-speed RAM (random access memory) and may further include non-volatile memory (NVM), such as at least one disk storage. The memory 31 may also be a memory array. The memory 31 may also be partitioned, and the blocks may be combined into virtual volumes according to certain rules.
Further, the processor 32 may be a central processing unit, an ASIC (application-specific integrated circuit), or one or more integrated circuits configured to implement the embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium. The computer readable storage medium stores computer instructions that, when executed by a processor, implement a method as referred to in any of the embodiments of fig. 1.
Fig. 4 is a schematic diagram before dynamic scheduling of resources according to one embodiment of the present disclosure. Fig. 5 is a schematic diagram after dynamic scheduling of resources according to one embodiment of the present disclosure.
As shown in fig. 4, by collecting the related information of the service nodes and links traversed by the traffic processing flow, it is determined that nodes A2, B4, and C1 are busy while node B3 is idle. Traffic on some links is slow, e.g., the link between node A2 and node B4 and the link between node B4 and node C1.
The dynamic view data is processed by the trained decision model to generate a configuration result. For example, the configuration result may include the following adjustments (a sketch of such a result in machine-readable form follows the list):
1) Node A3 is added to split the traffic on the original node A2, for example by transferring the traffic of user N from node A2 to node A3.
2) Calls to node B4 are reduced and traffic is split to other nodes. For example, the link between node A4 and node B4 is canceled, the link between node B5 and node B4 is canceled, and the link between node B4 and node C1 is canceled.
3) Idle resources are released, e.g., node B3, which is in an idle state.
4) Node C3 is added to relieve the load on node C1 and node B4.
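One hypothetical way to express the four adjustments above as a machine-readable configuration result; the node names come from the example of Figs. 4 and 5, while the structure itself is an assumption for illustration, not a format defined by this disclosure.

```python
# A sketch of the configuration result as data, so a scheduler
# can apply the adjustments mechanically.
configuration_result = {
    "add_nodes": ["A3", "C3"],                      # absorb load from A2, B4, C1
    "release_nodes": ["B3"],                        # idle resources to reclaim
    "remove_links": [("A4", "B4"), ("B5", "B4"), ("B4", "C1")],
    "move_traffic": [{"user": "N", "from": "A2", "to": "A3"}],
}

for node in configuration_result["add_nodes"]:
    print(f"provisioning node {node}")
for node in configuration_result["release_nodes"]:
    print(f"releasing node {node}")
```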
As shown in fig. 5, nodes A2, B4, and C1, which were busy, return to normal operation; the system no longer includes idle nodes; and the links between the nodes also return to a normal state.
In some embodiments, the optimization results may also be manually adjusted so that the system reaches an optimal state.
In some embodiments, resource requirements may also be predicted by the decision model. For example, the change in resource demand under different conditions, such as a specific time (e.g., a peak period), specific traffic (e.g., a certain traffic burst), or a sudden fault (e.g., a severed line at a machine room), is predicted, and the corresponding deployment is made in advance.
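Continuing the decision-tree sketch shown earlier, scenario-based prediction could look as follows; the scenario names and feature values are hypothetical.

```python
# Feed hypothetical feature vectors (peak hour, burst traffic, a failed site)
# into the trained decision model and pre-compute the configuration each
# scenario would require. `model` is the DecisionTreeRegressor from the
# earlier sketch.
scenarios = {
    "evening_peak":  [[900, 120]],   # [node_load_rps, avg_link_latency_ms]
    "burst_traffic": [[1500, 200]],
    "site_failure":  [[700, 400]],
}

plans = {}
for name, features in scenarios.items():
    replicas, cpu_cores = model.predict(features)[0]
    plans[name] = {"replicas": int(round(replicas)), "cpu_cores": int(round(cpu_cores))}

print(plans)
```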
In some embodiments, the functional modules described above may be implemented as general-purpose processors, programmable logic controllers (Programmable Logic Controller, abbreviated as PLCs), digital signal processors (Digital Signal Processor, abbreviated as DSPs), application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASICs), field programmable gate arrays (Field-Programmable Gate Array, abbreviated as FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or any suitable combination thereof for performing the functions described herein.
Thus, embodiments of the present disclosure have been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be understood by those skilled in the art that the foregoing embodiments may be modified and equivalents substituted for elements thereof without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (8)

1. A dynamic scheduling method for resources comprises the following steps:
collecting related information of service nodes and links experienced by a service processing flow;
generating a dynamic view of resource distribution according to the related information;
processing the dynamic view data by using a trained decision model to generate a configuration result, and performing resource optimization in a first preset dimension by using the configuration result, wherein the first preset dimension comprises at least one of a geographic distribution dimension, a time dimension or an edge cloud collaboration dimension;
wherein generating a dynamic view of the resource distribution according to the related information comprises:
sorting the related information in time to determine a call link that the business process experiences;
counting the number of requests of each service node in unit time to determine the load of each service node in a second preset dimension, wherein the second preset dimension comprises at least one of a service dimension, a user dimension, a geographic dimension or a time dimension;
and forming the dynamic view according to the call link and the load of each service node on the second preset dimension.
2. The method according to claim 1, wherein:
the related information includes at least one of processing time consumption of the nodes, skip time consumption between the nodes, or resource consumption information of the nodes.
3. The method according to claim 1, wherein:
the configuration result includes at least one of a number of service nodes, a central processing unit performance parameter, a memory performance parameter, or a storage space performance parameter.
4. A dynamic scheduling device for resources, comprising:
the acquisition module is configured to acquire the related information of the service node and the link experienced by the business processing flow;
the analysis module is configured to generate a dynamic view of resource distribution according to the related information, order the related information in time to determine a calling link undergone by the service processing, count the number of requests of each service node in unit time to determine the load of each service node in a second preset dimension, wherein the second preset dimension comprises at least one of a service dimension, a user dimension, a geographic dimension or a time dimension, and form the dynamic view according to the calling link and the load of each service node in the second preset dimension;
the scheduling module is configured to process the dynamic view data by utilizing the trained decision model to generate a configuration result, and perform resource optimization in a first preset dimension by utilizing the configuration result, wherein the first preset dimension comprises at least one of a geographic distribution dimension, a time dimension or an edge cloud collaboration dimension.
5. The apparatus of claim 4, wherein:
the related information includes at least one of processing time consumption of the nodes, skip time consumption between the nodes, or resource consumption information of the nodes.
6. The apparatus of claim 4, wherein:
the configuration result includes at least one of a number of service nodes, a central processing unit performance parameter, a memory performance parameter, or a storage space performance parameter.
7. A dynamic scheduling device for resources, comprising:
a memory configured to store instructions;
a processor coupled to the memory, the processor configured to perform the method of any of claims 1-3 based on instructions stored by the memory.
8. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1-3.
CN202010871037.7A 2020-08-26 2020-08-26 Dynamic scheduling method and device for resources Active CN114116186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010871037.7A CN114116186B (en) 2020-08-26 2020-08-26 Dynamic scheduling method and device for resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010871037.7A CN114116186B (en) 2020-08-26 2020-08-26 Dynamic scheduling method and device for resources

Publications (2)

Publication Number Publication Date
CN114116186A (en) 2022-03-01
CN114116186B (en) 2023-11-21

Family

ID=80374394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010871037.7A Active CN114116186B (en) 2020-08-26 2020-08-26 Dynamic scheduling method and device for resources

Country Status (1)

Country Link
CN (1) CN114116186B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104009871A (en) * 2014-06-06 2014-08-27 中国科学院声学研究所 SDN controller implementation method and SDN controller
US9584440B1 (en) * 2015-10-12 2017-02-28 Xirsys Llc Real-time distributed tree
CN109753356A (en) * 2018-12-25 2019-05-14 北京友信科技有限公司 A kind of container resource regulating method, device and computer readable storage medium
CN110301128A (en) * 2017-03-02 2019-10-01 华为技术有限公司 Resource management data center cloud framework based on study

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200021514A1 (en) * 2017-01-31 2020-01-16 The Mode Group High performance software-defined core network


Also Published As

Publication number Publication date
CN114116186A (en) 2022-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant