
CN114595061A - Resource allocation method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN114595061A
Authority
CN
China
Prior art keywords
target
task
resource
processor
amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210217802.2A
Other languages
Chinese (zh)
Inventor
杨萍萍
陈镛先
王爽
丁晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202210217802.2A
Publication of CN114595061A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 — Techniques for rebalancing the load in a distributed system
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/02 — Banking, e.g. interest calculation or account maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a resource allocation method and apparatus, an electronic device, and a computer-readable storage medium, which can be applied to the field of computer technology and the field of financial technology. The resource allocation method includes: acquiring a task list, wherein the task list includes information of a target task, the information of the target task includes a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system includes a plurality of processing nodes; determining the available resource amount of each processing node in the distributed system according to the task list; determining configuration parameters of the target task according to the target resource demand and the available resource amount; and performing resource configuration of the target task according to the configuration parameters.

Description

Resource allocation method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technology and the field of finance, and more particularly, to a resource allocation method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
As computer technology has evolved, more and more users have opted to use distributed systems for data storage and management; such systems may include container clusters and traditional physical clusters.
In implementing the disclosed concept, the inventors found that there are at least the following problems in the related art: in a traditional physical cluster, hardware resources of a deployment architecture are usually much larger than actual requirements, and different tenants share one set of database cluster.
Disclosure of Invention
In view of the above, the present disclosure provides a resource configuration method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a resource configuration method, including:
acquiring a task list, wherein the task list comprises information of a target task, the information of the target task comprises a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system comprises a plurality of processing nodes;
determining the available resource amount of each processing node in the distributed system according to the task list;
determining the configuration parameters of the target task according to the target resource demand and the available resource quantity; and
performing resource configuration of the target task according to the configuration parameters.
According to an embodiment of the present disclosure, the information of the target task further includes a task name, and each task name corresponds to one or more process numbers;
the method further comprises the following steps:
creating a processor control submodule and a memory control submodule according to the task name;
and running the target task on the processor control submodule and the memory control submodule corresponding to the task name according to the process number.
According to an embodiment of the present disclosure, the target resource demand includes a target processor core number, and the available resource amount includes an available processor number and an available processor core number; the determining the configuration parameters of the target task according to the target resource demand amount and the available resource amount includes:
determining the numerical relationship between the target processor core number and the usable processor core number;
in response to the usable processor core number being greater than the target processor core number, determining a target processor and a corresponding target processing node from among the usable processors;
and setting the target processor and the target processing node for processing the target task by using the processor control subsystem.
According to an embodiment of the present disclosure, the available resource amount further includes an available memory amount, and the method further includes:
and setting a preset memory amount of the target processing node for processing the target task by using the memory control subsystem according to the usable memory amount, wherein the preset memory amount is smaller than the usable memory amount.
According to an embodiment of the present disclosure, the target resource demand further includes a target memory footprint, and the method further includes:
updating the target memory occupation amount at preset time intervals;
stopping executing the target task under the condition that the target memory occupation amount is determined to be larger than or equal to the preset memory amount;
and under the condition that the target memory occupation amount is smaller than the preset memory amount, starting to execute the target task.
According to an embodiment of the present disclosure, after the resource allocation of the target task is performed according to the configuration parameter:
releasing corresponding allocated resources in response to determining that the target task has been completed, wherein the allocated resources include the target processor, the target processing node, and the preset memory amount.
According to another aspect of the present disclosure, there is provided a resource configuration apparatus, including: the device comprises an acquisition module, a first determination module, a second determination module and a configuration module.
The acquisition module is configured to acquire a task list, wherein the task list includes information of a target task, the information of the target task includes a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system includes a plurality of processing nodes;
a first determining module, configured to determine, according to the task list, an available resource amount of each processing node in the distributed system;
a second determining module, configured to determine a configuration parameter of the target task according to the target resource demand amount and the available resource amount; and
and the configuration module is configured to perform resource configuration of the target task according to the configuration parameters.
According to another aspect of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory to store one or more instructions that,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method as described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, the configuration parameters of the target task are determined according to the target resource demand in the task list and the available resource amount of each processing node in the distributed system, so as to perform resource configuration of the target task. Through the above technical means, the problems of resource contention and fault propagation among different target tasks, caused by different users sharing one database cluster in the related art, are at least partially solved, thereby achieving the technical effect of improving the overall utilization rate of resources.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically shows a system architecture to which a resource configuration method may be applied according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a resource configuration method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of determining configuration parameters for a target task according to an embodiment of the present disclosure;
FIG. 4 schematically shows a block diagram of a resource configuration apparatus according to an embodiment of the present disclosure; and
fig. 5 schematically shows a block diagram of an electronic device adapted to implement a resource configuration method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
In the technical solution of the present disclosure, the acquisition, storage, application, and the like of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
As computer technology has evolved, more and more users have opted to use distributed systems for data storage and management, which may include container clusters and traditional physical clusters.
In a container cluster, resource isolation at different layers can be performed by using the resource isolation layers provided by Kubernetes itself, including the cluster, namespace, node, pod, container, and the like.
In a traditional physical cluster, different users share one database cluster, which easily produces resource contention, and the workload of an overly active tenant may affect the performance of other tenants in the same cluster. In addition, the hardware resources of a deployment architecture in a traditional physical cluster are usually far greater than the actual service requirements, leaving many idle resources, so resource utilization is low.
In order to at least partially solve the technical problems in the related art, the present disclosure provides a resource allocation method and apparatus, an electronic device, and a computer-readable storage medium, which can be applied to the computer technology field and the financial field. The resource allocation method includes: acquiring a task list, wherein the task list includes information of a target task, the information of the target task includes a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system includes a plurality of processing nodes; determining the available resource amount of each processing node in the distributed system according to the task list; determining configuration parameters of the target task according to the target resource demand and the available resource amount; and performing resource configuration of the target task according to the configuration parameters.
It should be noted that the resource allocation method and apparatus provided by the embodiments of the present disclosure may be used in the computer technology field and the financial field, for example, to perform resource isolation between banking outlets. The resource allocation method and apparatus provided by the embodiments of the present disclosure may also be used in any field other than the computer technology field and the financial field, such as resource allocation in a general distributed system. The application fields of the resource configuration method and apparatus provided by the embodiments of the present disclosure are not limited.
Fig. 1 schematically shows a system architecture to which a resource configuration method may be applied according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a server 104 and a processing node 105. The links between the terminal devices 101, 102, 103 and the server 104, between the server 104 and the processing node 105 may be via wired and/or wireless communication links, etc.
The user may use the terminal devices 101, 102, 103 to interact with the server 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 104 may be a server providing various services, such as a background management server (for example only) providing support for tasks performed by users with the terminal devices 101, 102, 103. The backend management server may analyze and perform other processing on the received data such as the information of the target task, and send a processing result (for example, information or data obtained or generated according to the information of the target task) to the processing node 105, so as to perform resource allocation of the target task.
Processing node 105 may be a logical entity in a distributed system that performs computing work according to a protocol, and may include a process or machine that performs some work. Processing nodes 105 may include stateless nodes that do not need to store their own intermediate state information, such as Nginx (HTTP and reverse proxy web server); the processing nodes 105 may also include stateful nodes whose state and data may be persisted to media such as disks, e.g., MySQL (relational database management system) and the like.
It should be noted that the method for configuring resources provided by the embodiments of the present disclosure may be generally performed by the server 104. Accordingly, the resource configuration apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 104. The method for resource configuration provided by the embodiment of the present disclosure may also be performed by a server or a server cluster that is different from the server 104 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 104. Correspondingly, the resource configuration apparatus provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 104 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 104.
For example, the information of the target task may be originally stored in any one of the terminal apparatuses 101, 102, or 103 (for example, but not limited to, the terminal apparatus 101), or stored on an external storage apparatus and may be imported into the terminal apparatus 101. Then, the terminal device 101 may send the information of the target task to the server or the server cluster, and the server or the server cluster receiving the information of the target task executes the resource configuration method provided by the embodiment of the present disclosure.
It should be understood that the number of terminal devices, servers, and processing nodes in fig. 1 are merely illustrative. There may be any number of terminal devices, servers, and processing nodes, as desired for implementation.
Fig. 2 schematically shows a flow chart of a resource configuration method according to an embodiment of the present disclosure.
As shown in fig. 2, the resource allocation method includes operations S201 to S204.
In operation S201, a task list is obtained, where the task list includes information of a target task, the information of the target task includes a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system includes a plurality of processing nodes.
According to an embodiment of the present disclosure, the target task may include processes that different users individually run in the distributed system.
In operation S202, the amount of resources available to each processing node in the distributed system is determined according to the task list.
According to the embodiments of the present disclosure, the available resource amount may include the available processors, the available memory, the available disk space, and the like.
In operation S203, a configuration parameter of the target task is determined according to the target resource demand amount and the available resource amount.
According to an embodiment of the present disclosure, the configuration parameters may include processor resources, memory resources, network card resources, and the like.
According to the embodiments of the present disclosure, the computing and storage resources of the distributed system generally follow a single-deployment principle, and storage components need to repeatedly read and write files on disk when executing tasks, while distributed transactions usually require low read/write latency to guarantee performance. Therefore, a solid-state drive (SSD) can be deployed separately for I/O-intensive components, physically isolating them from other components at the disk level.
In operation S204, resource allocation of the target task is performed according to the allocation parameters.
According to the embodiments of the present disclosure, the configuration parameters of the target task are determined according to the target resource demand in the task list and the available resource amount of each processing node in the distributed system, so as to perform resource configuration of the target task. Through the above technical means, the problems of resource contention and fault propagation among different target tasks, caused by different users sharing one database cluster in the related art, are at least partially solved, thereby achieving the technical effect of improving the overall utilization rate of resources.
The method of fig. 2 is further described with reference to fig. 3 in conjunction with specific embodiments.
According to an embodiment of the present disclosure, the target task further includes task names, each task name corresponding to one or more process numbers; the resource allocation method further comprises the following steps:
creating a processor control submodule and a memory control submodule according to the task name; and running the target task on the processor control subsystem and the memory control subsystem corresponding to the task name according to the process number.
According to an embodiment of the present disclosure, in control groups (cgroups), a task may correspond to a process.
According to embodiments of the present disclosure, a control group may provide a mechanism to aggregate or partition a group of tasks and their subtasks into a hierarchy with specific functionality, and the control group may be used to limit, control the resources of a group of tasks.
According to an embodiment of the present disclosure, the processor control submodule may be created by:
mkdir /sys/fs/cgroup/cpuset/$task_name
According to an embodiment of the present disclosure, the memory control submodule may be created by:
mkdir /sys/fs/cgroup/memory/$task_name
According to an embodiment of the present disclosure, processes of different users may be run on the processor control submodule by:
echo $process_number > /sys/fs/cgroup/cpuset/$task_name/cgroup.procs
According to embodiments of the present disclosure, the processor control submodule may include a CPU core based restriction, e.g., which cores the target task may use may be restricted.
According to the embodiment of the disclosure, processes of different users can be run on the memory control submodule through the following instructions:
echo $process_number > /sys/fs/cgroup/memory/$task_name/cgroup.procs
According to the embodiment of the disclosure, the memory control submodule can limit the memory usage amount of target tasks of different users.
According to the embodiment of the disclosure, the resource allocation and the resource isolation of the target task can be realized by utilizing the self-contained CGroup mechanism of the Linux kernel.
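The cgroup setup described above can be sketched end to end as follows. This is a minimal sketch, not the disclosure's implementation: a scratch directory stands in for /sys/fs/cgroup so the script runs unprivileged (on a real host the same writes would target the mounted cgroup v1 hierarchy and require root), and `$task_name` and `$pid` are illustrative values.

```shell
# Sketch: create per-task cpuset and memory control groups and attach a process.
# CG_ROOT is a scratch stand-in for /sys/fs/cgroup (assumption for illustration).
CG_ROOT=$(mktemp -d)
task_name=batch_report   # illustrative task name
pid=12345                # illustrative process number

# Create the processor (cpuset) and memory control subdirectories for the task.
mkdir -p "$CG_ROOT/cpuset/$task_name" "$CG_ROOT/memory/$task_name"

# Attach the task's process number to both controllers via cgroup.procs.
echo "$pid" > "$CG_ROOT/cpuset/$task_name/cgroup.procs"
echo "$pid" > "$CG_ROOT/memory/$task_name/cgroup.procs"

cat "$CG_ROOT/cpuset/$task_name/cgroup.procs"
```

On a real system, once a PID is written to `cgroup.procs`, every resource limit subsequently written into that group's interface files applies to the process and its children.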
FIG. 3 schematically illustrates a flowchart of a method of determining configuration parameters for a target task, according to an embodiment of the disclosure.
As shown in fig. 3, the method of determining the configuration parameters of the target task includes operations S301 to S303.
In operation S301, a numerical relationship of a target processor core number and a usable processor core number is determined.
According to the embodiment of the disclosure, the target resource demand includes a target processor core number, and the usable resource amount includes a usable processor number and a usable processor core number.
In operation S302, in response to the usable processor core number being greater than the target processor core number, a target processor and a corresponding target processing node are determined from among the usable processors.
In operation S303, a target processor and a target processing node for processing a target task are set using the processor control subsystem.
According to an embodiment of the present disclosure, the cpuset subsystem may assign a separate target processor and target processing node to a target task in a control group.
According to an embodiment of the present disclosure, a target processor for processing a target task may be set by:
echo $cpu_list > /sys/fs/cgroup/cpuset/$task_name/cpuset.cpus
According to an embodiment of the present disclosure, a target processing node for processing a target task may be set by:
echo $numa_node > /sys/fs/cgroup/cpuset/$task_name/cpuset.mems
According to the embodiments of the present disclosure, for example, if the target processor core number is 3 and the usable processor core number is 2, the processing node corresponding to that processor does not have sufficient resources. If the target processor core number is 3 and the usable processor core number is 4, the processing node corresponding to that processor has sufficient resources, and the target processor and the corresponding target processing node can be determined.
According to the embodiment of the disclosure, the target processors and the corresponding target processing nodes can be respectively allocated to the target tasks of different users by utilizing the processor control subsystem according to the resource demand of the target tasks.
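Operations S301 to S303 can be sketched as the following comparison-and-pin step, under the same scratch-directory assumption as above (`CG_ROOT` stands in for /sys/fs/cgroup; the task name, core counts, CPU list, and NUMA node are illustrative values, not figures from the disclosure):

```shell
# Sketch: compare target vs. usable core counts; pin the task only when
# the node has sufficient resources (usable cores > target cores).
CG_ROOT=$(mktemp -d)
task_name=batch_report
mkdir -p "$CG_ROOT/cpuset/$task_name"

target_cores=3   # target processor core number from the task list
usable_cores=4   # usable processor core number of the candidate node

if [ "$usable_cores" -gt "$target_cores" ]; then
    # Node has sufficient resources: pin illustrative cores 0-2 on NUMA node 0.
    echo "0-2" > "$CG_ROOT/cpuset/$task_name/cpuset.cpus"
    echo "0"   > "$CG_ROOT/cpuset/$task_name/cpuset.mems"
    echo "node has sufficient resources"
else
    echo "node lacks sufficient resources"
fi
```

With `target_cores=3` and `usable_cores=2`, the else branch would be taken and no cpuset files would be written, matching the numeric example above.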
According to the embodiment of the present disclosure, the usable resource amount further includes a usable memory amount, and the resource configuration method further includes:
and setting a preset memory amount of the target processing node for processing the target task by using the memory control subsystem according to the available memory amount, wherein the preset memory amount is smaller than the available memory amount.
According to an embodiment of the present disclosure, the preset amount of memory for processing the target task may be set by:
echo $memory_limit > /sys/fs/cgroup/memory/$task_name/memory.limit_in_bytes
According to the embodiments of the present disclosure, in a distributed system, resource allocation is performed on target tasks by using control groups, so that different users are completely isolated. In terms of data security, data cannot be accessed across users, which realizes the independence of user data and safeguards its security. In terms of resource usage, each user exclusively occupies its own resource configuration, thereby avoiding the problems of resource contention and fault propagation between different tasks.
According to an embodiment of the present disclosure, the target resource demand further includes a target memory footprint, and the method further includes:
updating the target memory occupation amount at preset time intervals; stopping executing the target task under the condition that the target memory occupation amount is determined to be greater than or equal to the preset memory amount; and starting to execute the target task under the condition that the target memory occupied amount is smaller than the preset memory amount.
According to the embodiment of the disclosure, the pause identifier can be set for the target task through the following instructions:
echo 1 > /sys/fs/cgroup/memory/$task_name/memory.oom_control
According to the embodiments of the present disclosure, by setting the pause flag, an out-of-memory (OOM) condition caused by insufficient memory during execution of the target task is prevented.
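The periodic memory check described above can be sketched as follows. This is a sketch under stated assumptions: scratch files stand in for the cgroup interface files, the usage value is simulated rather than read from a live kernel, and the 100 MiB limit is an illustrative figure.

```shell
# Sketch: compare the task's current memory footprint against the preset
# memory amount and decide whether to stop or (re)start the target task.
CG_ROOT=$(mktemp -d)
task_name=batch_report
mkdir -p "$CG_ROOT/memory/$task_name"

echo 104857600 > "$CG_ROOT/memory/$task_name/memory.limit_in_bytes"   # 100 MiB preset
echo 125829120 > "$CG_ROOT/memory/$task_name/memory.usage_in_bytes"   # simulated usage

usage=$(cat "$CG_ROOT/memory/$task_name/memory.usage_in_bytes")
limit=$(cat "$CG_ROOT/memory/$task_name/memory.limit_in_bytes")

if [ "$usage" -ge "$limit" ]; then
    task_state=stopped    # occupation >= preset amount: stop executing the task
else
    task_state=running    # occupation < preset amount: start/continue the task
fi
echo "$task_state"
```

In practice this check would run at the preset time interval (e.g. from a periodic scheduler), re-reading `memory.usage_in_bytes` on each pass.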
According to the embodiment of the present disclosure, after the resource configuration of the target task is performed according to the configuration parameters, the method further includes: releasing corresponding allocated resources in response to determining that the target task has completed execution, where the allocated resources include the target processor, the target processing node, and the preset memory amount.
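The release step can be sketched as removing the finished task's cgroup directory. This assumes cgroup v1 semantics, under which the kernel deletes the control files itself and `rmdir` succeeds once no process remains in the group; the helper name is illustrative:

```python
import os

def release_cgroup(cgroup_root, task_name):
    # Return the task's processor/memory allocation by deleting its
    # (now empty) cgroup directory once the task has finished.
    task_dir = os.path.join(cgroup_root, task_name)
    os.rmdir(task_dir)
    return task_dir
```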
According to the embodiment of the disclosure, the resource consumption in the distributed system can be checked in real time in combination with a monitoring device in the distributed system. When the processor and the memory are found to be incapable of meeting the target resource demand, the resource allocation in the processor control subsystem and the memory control subsystem can be dynamically adjusted so that it meets the resource demand arising during the actual operation of the target task.
Fig. 4 schematically shows a block diagram of a resource configuration apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the resource configuration apparatus 400 includes an obtaining module 401, a first determining module 402, a second determining module 403, and a configuration module 404.
The obtaining module 401 is configured to obtain a task list, where the task list includes information of a target task, the information of the target task includes a process number and a target resource demand, the target task runs on a processing node in a distributed system, and the distributed system includes a plurality of processing nodes.
The first determining module 402 is configured to determine, according to the task list, the available resource amount of each processing node in the distributed system.

The second determining module 403 is configured to determine the configuration parameters of the target task according to the target resource demand and the available resource amount.

The configuration module 404 is configured to perform the resource configuration of the target task according to the configuration parameters.
According to the embodiment of the disclosure, the configuration parameters of the target task are determined according to the target resource demand in the task list and the available resource quantity of each processing node in the distributed system, so as to perform resource configuration of the target task. Through the technical means, the problems of resource competition and fault propagation among different target tasks caused by the fact that different users share one database cluster in the prior art are at least partially solved, and the technical effect of improving the overall utilization rate of resources is further achieved.
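The interaction of the four modules can be sketched as a single selection function. The dictionary field names (`cores`, `memory`) and the one-unit headroom are illustrative assumptions, not the patent's parameters:

```python
def configure_task(task, nodes):
    # Walk the processing nodes' available resource amounts and derive
    # configuration parameters for the first node that can host the task.
    for node, avail in nodes.items():
        if avail["cores"] > task["cores"] and avail["memory"] > task["memory"]:
            return {
                "node": node,
                "cores": task["cores"],
                # the preset memory amount must stay below the node's
                # available memory amount
                "memory_limit": min(task["memory"], avail["memory"] - 1),
            }
    return None  # no node currently satisfies the target resource demand
```

For example, a task demanding 4 cores and 500 units of memory skips a node with only 2 free cores and lands on one with 8 free cores and 1000 free units.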
According to an embodiment of the present disclosure, the information of the target task further includes a task name, each task name corresponding to one or more process numbers.
According to an embodiment of the present disclosure, the resource configuration apparatus 400 further includes a creating module and a running module.
The creating module is configured to create a processor control subsystem and a memory control subsystem according to the task name.

The running module is configured to run the target task on the processor control subsystem and the memory control subsystem corresponding to the task name according to the process number.
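Running the target task inside its control group amounts to writing the process number into the group's membership file. A sketch, assuming the cgroup v1 `tasks` file (cgroup v2 uses `cgroup.procs` instead); the helper name is illustrative:

```python
import os

def attach_process(cgroup_root, controller, task_name, pid):
    # Place the process identified by its process number under the
    # task-name cgroup of the given controller ("cpu", "memory", ...).
    tasks_file = os.path.join(cgroup_root, controller, task_name, "tasks")
    with open(tasks_file, "a") as f:
        f.write(f"{pid}\n")
    return tasks_file
```

Each of the one or more process numbers belonging to the task name would be written this way, once per controller.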
According to the embodiment of the disclosure, the target resource demand includes a target processor core number, and the usable resource amount includes a usable processor number and a usable processor core number.
According to an embodiment of the present disclosure, the second determining module 403 includes a first determining unit, a second determining unit, and a first setting unit.
The first determining unit is configured to determine the numerical relationship between the target processor core number and the usable processor core number.

The second determining unit is configured to determine, in response to the usable processor core number being greater than the target processor core number, a target processor from among the usable processors and a corresponding target processing node.

The first setting unit is configured to set, by using the processor control subsystem, the target processor and the target processing node for processing the target task.
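A sketch of the selection and pinning steps. The free-core bookkeeping and the `cpuset.cpus` write are assumptions about one plausible realization (the cgroup v1 cpuset controller), not the disclosed implementation:

```python
import os

def choose_processor(target_core_number, processors):
    # Pick the first processor whose usable core count is greater than
    # the target processor core number, per the comparison above.
    for name, free_cores in processors.items():
        if len(free_cores) > target_core_number:
            return name, free_cores[:target_core_number]
    return None  # no processor currently satisfies the demand

def pin_task_to_cores(cgroup_root, task_name, core_ids):
    # Restrict the task's cgroup to the chosen processor cores.
    cpus_file = os.path.join(cgroup_root, task_name, "cpuset.cpus")
    with open(cpus_file, "w") as f:
        f.write(",".join(str(c) for c in core_ids))
    return cpus_file
```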
According to the embodiment of the present disclosure, the usable resource amount further includes a usable memory amount.
According to an embodiment of the present disclosure, the second determining module 403 further includes: a second setting unit.
The second setting unit is configured to set, according to the usable memory amount, a preset memory amount of the target processing node for processing the target task by using the memory control subsystem, where the preset memory amount is smaller than the usable memory amount.
According to an embodiment of the present disclosure, the target resource demand further comprises a target memory footprint.
According to an embodiment of the present disclosure, the resource configuration apparatus 400 further includes an updating module, a stopping module, and an executing module.

The updating module is configured to update the target memory occupation amount at preset time intervals.

The stopping module is configured to stop executing the target task when it is determined that the target memory occupation amount is greater than or equal to the preset memory amount.

The executing module is configured to start executing the target task when the target memory occupation amount is smaller than the preset memory amount.
According to an embodiment of the present disclosure, the resource configuration apparatus 400 further includes a releasing module.

The releasing module is configured to release the corresponding allocated resources in response to determining that the target task has completed execution, where the allocated resources include the target processor, the target processing node, and the preset memory amount.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the obtaining module 401, the first determining module 402, the second determining module 403 and the configuring module 404 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the obtaining module 401, the first determining module 402, the second determining module 403, and the configuring module 404 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the obtaining module 401, the first determining module 402, the second determining module 403 and the configuring module 404 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
It should be noted that, the resource allocation apparatus part in the embodiment of the present disclosure corresponds to the resource allocation method part in the embodiment of the present disclosure, and the description of the resource allocation apparatus part specifically refers to the resource allocation method part, which is not described herein again.
Fig. 5 schematically shows a block diagram of an electronic device adapted to implement a resource configuration method according to an embodiment of the present disclosure. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, an electronic device 500 according to an embodiment of the present disclosure includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. The processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 500 may also include an input/output (I/O) interface 505, the input/output (I/O) interface 505 also being connected to the bus 504. The electronic device 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is installed into the storage section 508 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be embodied in the device/apparatus/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 502 and/or the RAM 503 and/or one or more memories other than the ROM 502 and the RAM 503 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure; when the computer program product runs on an electronic device, the program code causes the electronic device to implement the resource configuration method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 501, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted, distributed in the form of a signal on a network medium, downloaded and installed through the communication section 509, and/or installed from the removable medium 511. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high level procedural and/or object oriented programming languages, and/or assembly/machine languages. The programming language includes, but is not limited to, Java, C++, Python, the "C" language, or the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A method of resource allocation, comprising:
the method comprises the steps of obtaining a task list, wherein the task list comprises information of a target task, the information of the target task comprises a process number and a target resource demand, the target task runs on processing nodes in a distributed system, and the distributed system comprises a plurality of processing nodes;
determining the available resource amount of each processing node in the distributed system according to the task list;
determining configuration parameters of the target task according to the target resource demand and the available resource quantity; and
performing resource configuration of the target task according to the configuration parameters.
2. The method of claim 1, wherein the information of the target task further comprises a task name, each task name corresponding to one or more of the process numbers;
the method further comprises the following steps:
creating a processor control subsystem and a memory control subsystem according to the task name;
and running the target task on the processor control subsystem and the memory control subsystem corresponding to the task name according to the process number.
3. The method of claim 2, wherein the target resource demand comprises a target processor core number, and the available resource amount comprises a usable processor number and a usable processor core number; the determining the configuration parameters of the target task according to the target resource demand and the available resource amount comprises:
determining a numerical relationship between the target processor core number and the usable processor core number;
in response to the usable processor core number being greater than the target processor core number, determining a target processor from among the usable processors and a corresponding target processing node; and
setting, by using the processor control subsystem, the target processor and the target processing node for processing the target task.
4. The method of claim 3, wherein the available resource amount further comprises a usable memory amount, the method further comprising:
setting a preset memory amount of the target processing node for processing the target task by using the memory control subsystem according to the usable memory amount, wherein the preset memory amount is smaller than the usable memory amount.
5. The method of claim 4, wherein the target resource demand further comprises a target memory footprint, the method further comprising:
updating the target memory occupation amount at preset time intervals;
stopping executing the target task under the condition that the target memory occupation amount is determined to be greater than or equal to the preset memory amount;
and starting to execute the target task under the condition that the target memory occupation amount is smaller than the preset memory amount.
6. The method of claim 1, further comprising, after the configuring of the resources of the target task according to the configuration parameters:
in response to determining that execution of the target task is completed, releasing corresponding allocated resources, wherein the allocated resources comprise the target processor, the target processing node, and the preset memory amount.
7. A resource configuration apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a task list, the task list comprises information of a target task, the information of the target task comprises a process number and a target resource demand, the target task runs on processing nodes in a distributed system, and the distributed system comprises a plurality of processing nodes;
the first determining module is used for determining the available resource quantity of each processing node in the distributed system according to the task list;
the second determining module is used for determining the configuration parameters of the target task according to the target resource demand and the available resource quantity; and
the configuration module is used for performing resource configuration of the target task according to the configuration parameters.
8. An electronic device, comprising:
one or more processors;
a memory to store one or more instructions that,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
10. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 6 when executed.
CN202210217802.2A 2022-03-07 2022-03-07 Resource allocation method and device, electronic equipment and computer readable storage medium Pending CN114595061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210217802.2A CN114595061A (en) 2022-03-07 2022-03-07 Resource allocation method and device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114595061A true CN114595061A (en) 2022-06-07

Family

ID=81807992




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination