
CN111381950B - Multi-copy-based task scheduling method and system for edge computing environment - Google Patents

Multi-copy-based task scheduling method and system for edge computing environment

Info

Publication number
CN111381950B
CN111381950B
Authority
CN
China
Prior art keywords
task
nodes
tasks
cluster
job
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010147501.8A
Other languages
Chinese (zh)
Other versions
CN111381950A (en)
Inventor
钱柱中
王甜甜
陆桑璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202010147501.8A priority Critical patent/CN111381950B/en
Publication of CN111381950A publication Critical patent/CN111381950A/en
Application granted granted Critical
Publication of CN111381950B publication Critical patent/CN111381950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a multi-copy-based task scheduling method and system for an edge computing environment. The method comprises the following steps: periodically measuring and collecting the execution state of tasks in each edge cluster and establishing a task delay factor distribution probability model for each cluster, the model describing the cluster's resource performance; estimating the number of copies required by each job's executable tasks based on the shortest-remaining-processing-time principle and the fair-sharing principle; based on the current progress of each job and the resource performance of the clusters, allocating the idle nodes of each cluster to the tasks in an iterative manner, up to the estimated copy numbers; and scheduling the tasks assigned to nodes to the corresponding clusters for execution. With the method and system, idle resources in the edge computing environment can be effectively utilized in a real system, and job completion latency is reduced by placing task copies judiciously.

Description

Multi-copy-based task scheduling method and system for edge computing environment
Technical Field
The invention relates to task scheduling and resource allocation in job processing systems in edge computing environments, and in particular to a multi-copy-based task scheduling method and system for reducing the latency of big data processing applications in an edge computing environment.
Background
Today, users interact with cloud data ever more frequently, and cloud service providers are building "edge" clusters at an accelerating pace alongside their backbone data centers. Google has built 15 data centers worldwide, and some 1400 edge servers across 139 countries have been observed serving end users; Alibaba Cloud operates 18 data centers, while its content delivery network (CDN) products cover over 1200 acceleration edge servers spanning multiple operator (ISP) networks on 6 continents. For IT service providers built on the cloud, user data of various kinds is increasingly dispersed across geographically distributed clusters, and these heterogeneous clusters together with cross-ISP network connections form a cloud-edge wide-area storage and computing environment.
Big data processing, machine learning, and similar jobs consist of multiple stages with data dependencies; the tasks within a stage run in parallel and perform similar computations on different data partitions. A job may involve processing and aggregating data from multiple clusters and rely on the wide area network for the necessary data transfers. For example, search engine applications based on keyword advertising must periodically collect advertisement clicks of various kinds within a specified region. Statistics indicate that in 2016 Google handled up to 5.5 billion searches per day, producing at least 1.1 TB of data scattered across its edge clusters. Because of wide area network bandwidth limitations, such cross-domain big data analysis typically leaves the bulk of the raw data in place and instead dispatches parallel tasks to the corresponding edge clusters to process it, so that the computation completes faster.
Jobs based on parallel processing are often held back by a few "slow" tasks. Measurements from a commercial Microsoft cluster show that about half of all jobs have their completion time stretched by at least 34% because of slow tasks. The causes of slow tasks are varied and unavoidable, including intermittent component faults, underlying resource contention, and wide area network congestion; these cause the actual service capacity of a computing node (also called a machine, or a container hosting a task's computing unit) to fluctuate sharply over short periods, so that a task's actual execution time far exceeds expectations and the task becomes slow. Moreover, as the scale of cloud computing infrastructure grows, the phenomenon becomes more prevalent, especially in resource-constrained edge clusters.
The most widely applied treatment for slow tasks today is multi-copy execution, i.e., task replication: multiple copies are executed for a task that has become (or is likely to become) slow, and subsequent computation proceeds from the result of the fastest copy. For big data processing jobs in an edge computing environment, as shown in Fig. 1, one may wish to replicate tasks not only locally but also remotely, using the free resources of other edge clusters, thereby reducing the latency of big data processing applications.
However, cross-domain task scheduling and replication involving multiple clusters faces significant challenges in practice. First, the system environment is highly dynamic and uncertain. Job arrivals may be time-varying and unpredictable, and the system's available resources vary accordingly; replicating tasks with the idle resources of edge clusters must not affect the resource usage of the system's ongoing or upcoming jobs. Instability of node performance also means that a copy's execution time cannot be known until the copy has actually run, especially since edge servers are often capacity-limited and prone to overload; wide area network fluctuations further affect copy execution. Designing online algorithms that continuously adapt to this dynamism and uncertainty is not easy. Second, the system environment is heterogeneous, and the cluster on which a task copy executes affects how much it can improve the job: heterogeneous node performance across edge clusters affects execution, and heterogeneous inter-cluster networks affect data transmission. Third, task replication and scheduling for any large-scale system should not impose excessive overhead, which requires balancing algorithm complexity against the expected job speedup.
In light of these challenges, existing task replication and scheduling methods are not applicable. In existing research, task replication strategies fall mainly into two categories, monitoring-based replication and proactive cloning: the former monitors task execution, collects execution information, and, once an abnormal task is detected, launches a new copy for it to reduce the impact of the anomaly; the latter proactively replicates a task at the start of its execution, based on historical execution information, to reduce the likelihood of it becoming a slow task.
The replication mechanism adopted in MapReduce and other big data processing systems launches, shortly before a job finishes, a copy of every task still executing in the job in order to accelerate its completion; but this mechanism ignores the execution state of individual tasks and easily wastes resources. LATE and its follow-up work therefore carefully estimate the remaining execution time of each task in a big data analysis job and, based on the statistics of the execution rates of all parallel tasks in the same stage, accurately identify delayed tasks and launch speculative copies for them. Mantri is aware of the opportunity cost of using copies and controls the investment in new copies more tightly, launching a new copy for a slow task only if doing so saves both time and total resource consumption compared with the original execution. GRASS further observes that the opportunity cost of task replication falls as the number of tasks remaining in a job decreases, and a lower opportunity cost means a larger benefit from investing in copies; GRASS therefore aims to find the point during job execution at which to increase the copy investment, so as to better accelerate job completion. Hopper allocates resources to jobs based on each job's virtual resource requirement, which exceeds the job's number of remaining tasks; under this model the service rate of a single job can be optimized. When a slow task is found, a new copy may be launched as long as the job's total resource usage does not exceed its allocation.
However, in an edge environment, replication based solely on monitoring has limitations. On the one hand, the cost of monitoring remote task execution across different clusters is too high, and monitoring may fail altogether when a capacity-limited edge cluster fails, so the system cannot detect anomalies in time. On the other hand, abnormal tasks are hard to define: the performance of edge resources varies widely, and the cross-domain data transmission time a task copy needs fluctuates greatly with the location of the copy's resources, so the system can neither identify anomalies accurately nor launch effective task copies in time. Multi-copy scheduling for the edge environment requires a more proactive replication mode to cope with the performance fluctuations caused by heterogeneous resources in a cross-domain environment and with the execution delays caused by cross-domain data transmission.
The other proactive replication strategy, task cloning, decides at the start of a task's execution how many copies to invest, according to the task's characteristics and the system environment, so as to achieve a smaller overall job delay with the fewest copies. Dolly clones the same number of copies for every task of a small job, without significantly increasing cluster utilization, so that the probability of the whole job encountering a slow task stays below a set threshold. SRPTms+c and ESE developed optimization frameworks studying how redundancy can optimize the average job completion time; the models they employ require the distribution of task execution times to be known exactly. This information is meaningful within a single cluster (traces from commercial clusters at Facebook and Bing both show task times following a Pareto distribution), thanks to near-homogeneous compute nodes and abundant local area network bandwidth inside a cluster. In an edge environment, however, task execution with multi-edge cooperation must also account for the latency of cross-domain data transmission, and because inter-cluster bandwidth over the cross-domain network is limited and dynamic, the transmission latency differs depending on which clusters the tasks are placed in. Second, the computing performance offered by different clusters also differs. Therefore, a theoretical model based simply on the distribution of task execution times is not applicable to an edge environment.
Therefore, a new and more efficient task replication and scheduling strategy is needed for unstable, highly dynamic, and heterogeneous edge computing environments.
Disclosure of Invention
The invention aims to: in view of the above challenges, the invention provides a multi-copy-based task scheduling method and system for an edge computing environment that adapts to a highly heterogeneous and highly dynamic cross-domain data analysis system environment, comprehensively considers the execution characteristics of each job, the resource performance of each cluster, and the overall system load, coordinates the number and execution locations of copies available to multiple jobs, performs task replication and scheduling online, and thereby ensures the efficiency of distributed job processing in the edge environment.
The technical scheme is as follows: in order to achieve the above purpose, the present invention adopts the following technical scheme:
In a first aspect, a multi-copy-based task scheduling method for an edge computing environment comprises the following steps:
periodically measuring and collecting the execution state of tasks in each edge cluster, and establishing a task delay factor distribution probability model for the cluster, wherein the model describes the cluster's resource performance;
estimating the number of copies required by each job's executable tasks based on the shortest-remaining-processing-time principle and the fair-sharing principle;
based on the current progress of each job and the resource performance of the clusters, allocating the idle nodes of each cluster to the tasks in an iterative manner, up to the estimated copy numbers;
scheduling the tasks assigned to nodes to the corresponding clusters for execution.
Further, the task delay factor distribution probability model of a cluster is expressed as

T_l^k = s_k · t_l + d_l^k

where j denotes the job number, T_l^k denotes the execution time of task l (of job j) at cluster k, s_k denotes the delay factor that tasks in cluster k may experience, modeled as a random variable, t_l denotes the inherent duration of the task, and d_l^k denotes the transmission delay of acquiring the data.
Further, estimating the number of copies required by each job's executable tasks based on the shortest-remaining-processing-time principle and the fair-sharing principle comprises:
sorting all jobs according to the shortest-remaining-processing-time principle, with fewer remaining tasks meaning higher priority;
selecting, from the sorted result, the top fraction ε of jobs to equally share the computing nodes of the whole system;
and evenly distributing the available computing nodes obtained by a job over the job's currently ready tasks, the number of nodes obtained by a task being its estimated copy number, which serves as an upper bound on the number of copies the task may execute.
Further, the iterative allocation assigns execution nodes for the estimated copies of the tasks; in each round of the iteration each task is assigned at most one node, and each round proceeds as follows:
(1) Order the jobs, placing jobs with fewer tasks remaining in their current execution stage first;
(2) Order the tasks within a job: let r_l(t) denote the nodes already assigned to task l in the rounds before the current one; tasks are sorted from high to low by the expected execution time E[T_l(r_l(t))] obtained from the previous round;
(3) Allocate nodes to the tasks in turn.
Further, the step (3) comprises:
(1) Compute the execution time threshold of task l: let u(t) denote the node utilization of the current system; the execution time threshold of task l is determined by its inherent duration t_l and u(t);
(2) When the number of nodes already allocated to task l does not exceed its estimated copy number and its expected execution time is still above the threshold, allocate one node to the task, namely the node among all currently idle nodes that gives the task the best expected execution time; with K_idle(t) denoting the set of clusters that currently have free nodes, the chosen cluster k* ∈ K_idle(t) is the cluster in which the node assigned to task l is located, and r_l^{k*}(t) is updated accordingly;
(3) When task l has used up its estimated copy number, or its expected execution time has fallen below the set threshold, no further node is allocated to it;
(4) The iteration exits when no task in the current period needs any more nodes.
A multi-copy-based task scheduling system for an edge computing environment comprises a job manager, a resource manager, and a centralized allocation server. The job manager tracks the execution progress of a job and submits its set of ready tasks to the centralized allocation server; the resource manager collects execution information of tasks within its cluster, builds a delay factor distribution model for the cluster, and captures the cluster's resource state; after receiving the job and resource states, the centralized allocation server determines online, for the current scheduling period, the copies available to each task and the locations of the nodes on which the copies run, returns the allocation result to the job manager, and the job manager starts the tasks on the corresponding nodes.
Further, the centralized allocation server allocates the idle nodes of each cluster to the tasks, up to the estimated copy numbers, using the iterative allocation procedure.
Further, the estimated copy numbers are obtained as follows:
sorting all jobs according to the shortest-remaining-processing-time principle, with fewer remaining tasks meaning higher priority;
selecting, from the sorted result, the top fraction ε of jobs to equally share the computing nodes of the whole system;
and evenly distributing the available computing nodes obtained by a job over the job's currently ready tasks, the number of nodes obtained by a task being its estimated copy number, which serves as an upper bound on the number of copies the task may execute.
The beneficial effects are that: the invention first models the task replication and scheduling problem in an edge computing environment and captures the execution characteristics of tasks in that environment; based on this problem model it comprehensively considers the progress of each job, the resource performance of each edge cluster, and the total system load to carry out task replication and scheduling online. The solving algorithm performs well, handles the slow-task problem of cross-domain analysis well in real system execution, and delivers a clear performance improvement over common replication strategies. In a real system, the invention effectively utilizes idle resources in the edge computing environment and reduces job completion latency by placing task copies judiciously.
Drawings
FIG. 1 is a schematic diagram of replication of big data processing job tasks in an edge computing environment;
FIG. 2 is a flow chart of a scheduling method of the present invention;
FIG. 3 is a schematic diagram of a task resource allocation process of the present invention;
FIG. 4 is a flow chart of execution of the present invention directing the replication and scheduling of tasks for parallel processing jobs in a multi-cluster environment.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
In application systems such as big data processing and machine learning, a job progresses in stages; each stage contains a number of tasks, which can be distributed across clusters and executed in parallel, and the delay of even a few tasks can hold up the completion of the current stage and thereby lengthen the execution time of the whole job. Executing multiple copies in parallel can mitigate the effect of slow tasks, but existing task replication strategies designed for individual clusters are not suitable for unstable, highly dynamic, and highly heterogeneous edge computing environments. The invention provides a task scheduling method for the edge computing environment that reduces the response latency of distributed jobs by means of multiple task copies. The task replication and scheduling problem in the edge computing environment is first modeled to capture the execution characteristics of tasks in that environment; based on the problem model, the progress of each job, the resource performance of each edge cluster, and the total system load are considered comprehensively, and task replication and scheduling are performed online, i.e., the number of copies is computed for each job's executable tasks and edge clusters suitable for executing the copies are selected.
Referring to fig. 2, a multi-copy based task scheduling method for an edge-oriented computing environment includes the steps of:
and step S10, periodically measuring and collecting the execution state of the task in each edge cluster, and establishing a task time delay factor distribution probability model of the cluster.
Consider a cross-domain analysis system composed of a set of clusters K, each cluster k ∈ K having a certain number of computing nodes M_k. Batch analysis jobs arrive at the system continuously; each job j consists of a number of tasks L_j, its execution advances in stages, and the data dependencies among tasks are encoded with a directed acyclic graph (DAG). A task l may be scheduled if and only if all the tasks it depends on have completed, at which point the task is considered ready.
The execution time of a task is unpredictable. The invention defines the delay factor that a task in cluster k may experience as a random variable s_k; the execution time of a task l at cluster k is T_l^k = s_k · t_l + d_l^k, i.e., the task's inherent duration t_l multiplied by the delay factor, plus the transmission delay d_l^k of acquiring the data, where the data transmission delay depends on the wide area network bandwidth between the cluster holding the data and the executing cluster k. The delay factors of the clusters are mutually independent. Table 1 shows a delay factor distribution from a Facebook Hadoop cluster.
Table 1. Example delay factor distribution from a Facebook cluster

    s                1    2    3    4    5    6    7    8    9    10    11    12
    Probability (%)  23   14   9    3    8    10   4    14   12   2.1   0.7   0.2
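By way of illustration only (this sketch is not part of the original patent text), the per-cluster delay factor model and the expected execution time of a single copy, E[T_l^k] = E[s_k]·t_l + d_l^k, can be written down in a few lines of Python; the distribution values below are the Table 1 example, and the function names are placeholders chosen for this sketch.

    # Minimal sketch of the per-cluster delay factor probability model.
    # The pmf below is the example distribution of Table 1 (values in percent).
    FACEBOOK_PMF = {1: 23, 2: 14, 3: 9, 4: 3, 5: 8, 6: 10,
                    7: 4, 8: 14, 9: 12, 10: 2.1, 11: 0.7, 12: 0.2}

    def normalize(pmf):
        total = sum(pmf.values())
        return {s: p / total for s, p in pmf.items()}

    def expected_execution_time(pmf, intrinsic_duration, transfer_delay):
        # E[T_l^k] = E[s_k] * t_l + d_l^k for a single copy in cluster k.
        pmf = normalize(pmf)
        expected_factor = sum(s * p for s, p in pmf.items())
        return expected_factor * intrinsic_duration + transfer_delay

    if __name__ == "__main__":
        # A task with a 10 s inherent duration and a 4 s cross-cluster transfer delay.
        print(expected_execution_time(FACEBOOK_PMF, intrinsic_duration=10.0, transfer_delay=4.0))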
The invention concerns an online task scheduler for such a cross-domain analysis system: at the start of each scheduling period t it allocates resources to the jobs currently in the system, such that the scheduling decisions do not violate the dependencies among tasks. Let r_l^k(t) denote the number of copies of task l in cluster k, and let the vector r_l(t) = (r_l^1(t), ..., r_l^K(t)) denote the copy allocation scheme of task l; through the following steps the invention finally determines r_l(t) for each task as well as each task's scheduling time τ_l.
Step S20: estimate the number of copies required by each job's executable tasks based on the shortest-remaining-processing-time principle and the fair-sharing principle.
In brief, this step selects the jobs with the fewest remaining tasks to fairly share the computing nodes of the whole system; the fraction selected is ε (0 < ε < 1). The remaining fraction (1 − ε) of jobs waits for resources in a subsequent scheduling period.
Let n(t) denote the number of jobs currently in the system. The current jobs are sorted according to the SRPT principle, i.e., jobs with fewer remaining tasks have higher priority, and the first ⌈ε·n(t)⌉ jobs are selected to equally share the computing nodes of the whole system, so each selected job is assigned h_j(t) = (Σ_k M_k) / ⌈ε·n(t)⌉ nodes, while h_j(t) = 0 for the remaining, unselected jobs. With m_j(t) denoting the number of computing nodes that job j already occupies at time t, the number of computing nodes available to the job at the current time t is c_j(t) = h_j(t) − m_j(t).
The invention distributes the available nodes fairly over the ready tasks of each job. For job j, let S_j(t) be the job's set of ready tasks at time t and σ_j(t) = |S_j(t)| the number of ready tasks; the number of available nodes per ready task is then ⌊c_j(t)/σ_j(t)⌋, and this value is used as the task's estimated copy number. If a job has more available nodes than ready tasks, each ready task can receive additional nodes on which to run copies. Note that when c_j(t) < σ_j(t), the invention randomly selects c_j(t) ready tasks and gives each of them one node.
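A minimal Python sketch of this estimation step follows (an illustration written for this description, not the patent's own code); where the prose leaves details open, such as the rounding of the fair share and the handling of jobs with no ready tasks, the choices below are assumptions.

    import math
    import random

    def estimate_copy_numbers(jobs, total_nodes, epsilon=0.5):
        # Estimate per-task copy budgets by SRPT ordering plus fair sharing.
        # jobs: dict job_id -> {"remaining": int,      # tasks left in the job
        #                       "occupied": int,       # nodes the job already holds
        #                       "ready": [task ids]}   # currently ready tasks
        # Returns dict (job_id, task_id) -> estimated copy number (upper bound).
        budgets = {}
        if not jobs:
            return budgets
        # SRPT: fewer remaining tasks -> higher priority.
        order = sorted(jobs, key=lambda j: jobs[j]["remaining"])
        selected = order[:math.ceil(epsilon * len(order))]
        share = total_nodes / len(selected)               # equal share of all nodes
        for j in selected:
            info = jobs[j]
            available = max(int(share) - info["occupied"], 0)
            ready = list(info["ready"])
            if not ready:
                continue
            if available >= len(ready):
                per_task = available // len(ready)        # spread evenly over ready tasks
                chosen = ready
            else:
                per_task = 1                              # fewer nodes than ready tasks:
                chosen = random.sample(ready, available)  # give one node to a random subset
            for task in chosen:
                budgets[(j, task)] = per_task
        return budgets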
Step S30: based on the current progress of each job and the resource performance of each cluster, and without exceeding the estimated copy numbers, the idle nodes of each cluster are allocated to tasks to run copies.
Overall, the invention adopts an iterative resource allocation scheme. As shown in Fig. 3, in each round the tasks are ordered and idle nodes are assigned to them in turn to run copies. When the number of nodes allocated to a task reaches its estimated copy number, or the task's expected execution time reaches a set threshold (the threshold is computed from the theoretical model so as to guarantee the algorithm's performance), the task is considered to need no further copies and does not participate in the next round of allocation. The process continues until no task needs any more nodes. Circles of different shades in the figure represent nodes of different clusters.
Iterative allocation is adopted because the marginal benefit of launching additional copies for a task drops sharply as its number of copies grows, a property that can be proved for the invention's multi-copy task rate model. Serving high-priority tasks with fewer copies therefore incurs no significant performance penalty, while the nodes saved can be assigned to low-priority tasks that have no copies yet, yielding a clear improvement in average performance.
Specifically, the task ordering method in each round is as follows:
(a) The jobs are ordered first, with jobs that have fewer tasks remaining in the current stage placed in front. The fewer tasks remain, the more sensitive the stage's completion is to the delay of any one of them, and the better the nodes needed to suppress possible delays.
(b) The tasks within a job are then ordered. In brief, the invention arranges the order of node allocation according to each task's current expected execution time, with long-running tasks given priority, since the completion of a stage ultimately depends on the completion of its last task. The expected execution time of task l under copy deployment r_l(t) is E[T_l(r_l(t))] = E[min_{k,i} T_l^{k,i}], where T_l^{k,i} denotes the execution time of the i-th copy of task l in cluster k and E[·] denotes mathematical expectation. Each round of node allocation may change r_l(t); the invention uses r_l(t) to record the nodes assigned to the task before the current round and updates it continuously during the iteration, and when the iteration ends its value is the final copy deployment scheme. For tasks within the same job, in each round the tasks are ordered from high to low by their E[T_l(r_l(t))] values.
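Assuming, as the model above does, that the fastest copy determines the task's execution time and that cluster delay factors are independent, the expectation E[min_{k,i} T_l^{k,i}] can be evaluated by enumerating the joint delay-factor outcome. The Python fragment below is one straightforward way to do that, written for illustration rather than taken from the patent; treating copies in the same cluster as sharing that cluster's delay-factor draw is an additional assumption of this sketch.

    from itertools import product

    def expected_min_execution_time(copies, pmfs, intrinsic, transfer):
        # E[ min over copies of T_l^{k,i} ] for one task.
        # copies:    dict cluster -> number of copies of the task in that cluster
        # pmfs:      dict cluster -> {delay factor value: probability}
        # intrinsic: inherent duration t_l of the task
        # transfer:  dict cluster -> data transmission delay d_l^k
        # Copies in the same cluster share that cluster's delay factor draw;
        # different clusters draw independently (an assumption of this sketch).
        clusters = [k for k, c in copies.items() if c > 0]
        if not clusters:
            return float("inf")
        expectation = 0.0
        # Enumerate the joint outcome of the (independent) cluster delay factors.
        for outcome in product(*(pmfs[k].items() for k in clusters)):
            prob = 1.0
            fastest = float("inf")
            for k, (s, p) in zip(clusters, outcome):
                prob *= p
                fastest = min(fastest, s * intrinsic + transfer[k])
            expectation += prob * fastest
        return expectation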
Going from high to low expected execution time, nodes are then allocated to the tasks from front to back; each round of node allocation proceeds as follows:
(a) When the number of nodes allocated to task l does not exceed its estimated copy number and its expected execution time is still above the set threshold, one node is allocated to the task, namely the node among all currently idle nodes that gives the task the best expected execution time. Let K_idle(t) denote the set of clusters that currently have free nodes; the cluster k* ∈ K_idle(t) chosen to minimize the task's expected execution time is the cluster in which the node assigned to task l is located. After the node is allocated, the relevant variables are updated, i.e., r_l^{k*}(t) is increased by one and the free-node count of k* is decreased.
(b) When the task runs out of its estimated number of copies, or its execution time expectations have fallen below a set threshold, no nodes are allocated to the task.
The threshold on the expected execution time is set as follows: let u(t) denote the node utilization of the current system; the execution time threshold of task l is then determined by its inherent duration t_l and u(t). The threshold setting is grounded in the invention's theoretical model, which shows that the invention retains a provable performance guarantee as long as every task's expected execution time is brought below its threshold. Therefore, once a task reaches the set threshold, no further nodes are allocated to it, which saves the system's investment in redundant resources.
Step S40: the tasks allocated to nodes are scheduled to the corresponding clusters for execution.
When, at time t, no task needs any more nodes, the iteration exits, and every task that was allocated at least one node during the iteration starts executing in the corresponding cluster, i.e., its scheduling time is set to τ_l = t.
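The following Python sketch strings the ordering rules and stopping conditions of steps S30 and S40 together into the iterative allocation loop. It is an illustration written for this description: the expected-time helper uses a simplified single-draw model, and the threshold form (a utilization-dependent multiple of the inherent duration t_l) is a placeholder assumption, since the exact threshold expression comes from the patent's theoretical model.

    def expected_time(task, placement, models):
        # Simplified expected execution time of a task under its current copy
        # placement: the best (smallest) expected single-copy time among the
        # clusters holding a copy.  (The full model takes E[min] over copies.)
        times = [models[k]["factor"] * task["t"] + task["transfer"][k]
                 for k, n in placement.items() if n > 0]
        return min(times) if times else float("inf")

    def threshold(task, utilization):
        # Placeholder threshold: a utilization-dependent multiple of the inherent
        # duration t_l; the exact form follows the patent's theoretical model.
        return (1.0 + 1.0 / max(1e-9, 1.0 - utilization)) * task["t"]

    def allocate_iteratively(jobs, idle, models, budgets, utilization):
        # jobs:    job_id -> {"remaining": int, "tasks": {task_id: {"t": .., "transfer": {cluster: ..}}}}
        # idle:    cluster -> number of idle nodes
        # budgets: (job_id, task_id) -> estimated copy number (upper bound)
        # Returns (job_id, task_id) -> {cluster: copies assigned}.
        placement = {(j, l): {k: 0 for k in idle}
                     for j, job in jobs.items() for l in job["tasks"]}
        while True:
            assigned = False
            # Jobs with fewer remaining tasks go first (SRPT-style ordering).
            for j in sorted(jobs, key=lambda job_id: jobs[job_id]["remaining"]):
                tasks = jobs[j]["tasks"]
                # Within a job, tasks with longer expected times are served first.
                order = sorted(tasks, key=lambda l: -expected_time(tasks[l], placement[(j, l)], models))
                for l in order:
                    task, place = tasks[l], placement[(j, l)]
                    if sum(place.values()) >= budgets.get((j, l), 0):
                        continue                     # copy budget exhausted
                    if expected_time(task, place, models) <= threshold(task, utilization):
                        continue                     # already below the threshold
                    free = [k for k, n in idle.items() if n > 0]
                    if not free:
                        return placement             # no idle nodes left anywhere
                    # Pick the free cluster that gives the task the best expected time.
                    k_star = min(free, key=lambda k: models[k]["factor"] * task["t"] + task["transfer"][k])
                    place[k_star] += 1
                    idle[k_star] -= 1
                    assigned = True                  # at most one node per task per round
            if not assigned:
                return placement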
The invention has a provable performance guarantee in theory, handles the slow-task problem of cross-domain analysis well in real system execution, and delivers a clear performance improvement over common replication strategies.
The method is suitable for guiding the replication and scheduling of tasks of parallel processing jobs in a multi-cluster environment. In one embodiment, a multi-copy-based task scheduling system for the edge computing environment is provided in which the scheduling method runs. Referring to Fig. 4, the task scheduling system comprises a Job Manager (JM), a Resource Manager (RM), and a centralized allocation server; the core algorithm of the invention runs as a centralized service that computes task replication and scheduling decisions online and exchanges information with the resource manager and job manager of each cluster, which provide the data inputs the core algorithm needs.
As shown in Fig. 4, the cluster Resource Manager (RM) has the highest authority to arbitrate task resources within its cluster, and each submitted job has a Job Manager (JM) that tracks the job's execution progress and applies to the RM for the resources it needs. The general procedure is as follows:
(1) In the prototype system of the invention, a JM does not apply for resources directly to its cluster's RM; instead it submits its set of ready tasks to the centralized allocation server, which coordinates and allocates the resources of multiple clusters.
(2) The invention adds a log collection module, LogCollector, to the RM to gather the execution information of tasks within the cluster; the information is periodically collated and submitted to a resource information module, ResourceProfiler, which builds and updates for each cluster a delay factor distribution model of the kind shown in Table 1, capturing the heterogeneous and dynamic resource performance of the clusters.
(3) After receiving the job and resource states, the centralized allocation server determines online, according to the allocation method above, the copies available to each task in the current scheduling period and the locations of the nodes that run them, and returns the allocation result to the JM, which starts the tasks on the corresponding nodes.
(4) When one of the copies of a task is completed and the result is returned, the JM cancels the execution of the other copies of the task, releasing the resources.
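For a concrete picture of this information flow, the toy Python sketch below mirrors the interaction between the job managers, the resource managers' profiling, and the centralized allocation server; the class and method names are invented for this illustration, and the placement rule inside schedule_period() is a deliberately simplified stand-in for the copy estimation and iterative allocation described above.

    from dataclasses import dataclass

    @dataclass
    class ReadyTask:
        job_id: str
        task_id: str

    @dataclass
    class ClusterState:
        idle_nodes: int
        delay_factor_pmf: dict        # delay factor value -> probability, as in Table 1

    class CentralAllocator:
        # Toy stand-in for the centralized allocation server: JMs submit ready
        # tasks, RMs report cluster state, and schedule_period() returns a plan.
        def __init__(self):
            self.ready = []
            self.clusters = {}

        def submit_ready_tasks(self, tasks):          # called by each job manager
            self.ready.extend(tasks)

        def report_cluster_state(self, name, state):  # called by each resource manager
            self.clusters[name] = state

        def schedule_period(self):
            # Greatly simplified placement: one idle node per ready task, preferring
            # the cluster with the smallest expected delay factor.  The real method
            # uses the copy estimation and iterative allocation described above.
            def expected_factor(state):
                total = sum(state.delay_factor_pmf.values())
                return sum(v * p for v, p in state.delay_factor_pmf.items()) / total
            plan = {}
            for task in self.ready:
                free = [(n, s) for n, s in self.clusters.items() if s.idle_nodes > 0]
                if not free:
                    break
                name, state = min(free, key=lambda ns: expected_factor(ns[1]))
                state.idle_nodes -= 1
                plan[(task.job_id, task.task_id)] = name  # returned to the job manager
            self.ready.clear()
            return plan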
It should be understood that the task scheduling system based on multiple copies in the edge-oriented computing environment in this embodiment may implement all the technical solutions in the foregoing method embodiments, and the functions of each functional module may be specifically implemented according to the methods in the foregoing method embodiments, and the specific implementation process may refer to the relevant descriptions in the foregoing embodiments, which are not repeated herein.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (5)

1. A multi-copy-based task scheduling method for an edge computing environment, characterized by comprising the following steps:
periodically measuring and collecting the execution state of tasks in each edge cluster, and establishing a task time delay factor distribution probability model of the cluster, wherein the model describes the resource performance of the cluster;
estimating the number of copies required by each job's executable tasks based on the shortest-remaining-processing-time principle and the fair-sharing principle, including: sorting all jobs according to the shortest-remaining-processing-time principle, with fewer remaining tasks meaning higher priority; selecting, from the sorted result, the top fraction ε of jobs to equally share the computing nodes of the whole system; and evenly distributing the available computing nodes obtained by a job over the job's currently ready tasks, the number of nodes obtained by a task being its estimated copy number, which serves as an upper bound on the number of copies the task may execute;
based on the current progress of each job and the resource performance of the clusters, adopting an iterative allocation mode to allocate idle nodes of each cluster to each task according to the estimated number of the copies;
scheduling tasks assigned to the nodes to the corresponding clusters for execution;
wherein the iterative allocation assigns execution nodes for the estimated copies of the tasks, each task being assigned at most one node per round of the iteration, and each round is executed according to the following steps:
(1) Ordering the jobs, placing jobs with fewer tasks remaining in their current execution stage first;
(2) Ordering the tasks within a job j: r_l(t) denotes the nodes already assigned to task l in the rounds before the current one, and the tasks are ordered from high to low by the expected execution time E[T_l(r_l(t))] obtained in the previous round, where E[T_l(r_l(t))] denotes the execution time of task l under the copy deployment scheme r_l(t);
(3) Allocating nodes to the tasks in turn.
2. The multi-copy-based task scheduling method for an edge computing environment according to claim 1, wherein the task time delay factor distribution probability model of a cluster is expressed as T_l^k = s_k · t_l + d_l^k, where j denotes the job number, T_l^k denotes the execution time of task l (of job j) at cluster k, s_k denotes the delay factor that tasks in cluster k may experience, modeled as a random variable, t_l denotes the inherent duration of the task, and d_l^k denotes the transmission delay of acquiring the data.
3. The multi-copy-based task scheduling method for an edge computing environment according to claim 1, wherein the step (3) comprises:
(3-1) computing the execution time threshold of task l: let u(t) denote the node utilization of the current system; the execution time threshold of task l is determined by its inherent duration t_l and u(t);
(3-2) when the number of nodes allocated to task l does not exceed its estimated copy number and its expected execution time is still above the threshold, allocating one node to the task, namely the node among all currently idle nodes that gives the task the best expected execution time, where K_idle(t) denotes the set of clusters that currently have free nodes, k* ∈ K_idle(t) is the cluster in which the node assigned to task l is located, T_l^k denotes the execution time of task l at cluster k, and r_l^{k*}(t) is updated accordingly;
(3-3) when a task has used up its estimated copy number, or its expected execution time has fallen below the set threshold, allocating no further node to the task;
(3-4) exiting the iteration when no task needs to be allocated a node within the current scheduling period.
4. A multi-copy-based task scheduling system for an edge computing environment, characterized by comprising a job manager, a resource manager, and a centralized allocation server, wherein the job manager is configured to track the execution progress of a job and submit its set of ready tasks to the centralized allocation server; the resource manager is configured to collect the execution information of tasks within a cluster, establish a delay factor distribution model for each cluster, and capture the resource states of the clusters; after receiving the job and resource states, the centralized allocation server determines online, for the current scheduling period, the copies available to each task and the locations of the nodes on which the copies run, and returns the allocation result to the job manager, which starts the tasks on the corresponding nodes; the centralized allocation server allocates the idle nodes of each cluster to the tasks, up to the estimated copy numbers, using an iterative allocation procedure, the estimated copy numbers being obtained as follows: sorting all jobs according to the shortest-remaining-processing-time principle, with fewer remaining tasks meaning higher priority; selecting, from the sorted result, the top fraction ε of jobs to equally share the computing nodes of the whole system; and evenly distributing the available computing nodes obtained by a job over the job's currently ready tasks, the number of nodes obtained by a task being its estimated copy number, which serves as an upper bound on the number of copies the task may execute;
wherein the iterative allocation assigns execution nodes for the estimated copies of the tasks, each task being assigned at most one node per round of the iteration, and each round is executed according to the following steps:
(1) Ordering the jobs, placing jobs with fewer tasks remaining in their current execution stage first;
(2) Ordering the tasks within a job j: r_l(t) denotes the nodes already assigned to task l in the rounds before the current one, and the tasks are ordered from high to low by the expected execution time E[T_l(r_l(t))] obtained in the previous round, where E[T_l(r_l(t))] denotes the execution time of task l under the copy deployment scheme r_l(t);
(3) Allocating nodes to the tasks in turn.
5. The multi-copy-based task scheduling system for an edge computing environment according to claim 4, wherein the step (3) comprises:
(3-1) computing the execution time threshold of task l: let u(t) denote the node utilization of the current system; the execution time threshold of task l is determined by its inherent duration t_l and u(t);
(3-2) when the number of nodes allocated to task l does not exceed its estimated copy number and its expected execution time is still above the threshold, allocating one node to the task, namely the node among all currently idle nodes that gives the task the best expected execution time, where K_idle(t) denotes the set of clusters that currently have free nodes, k* ∈ K_idle(t) is the cluster in which the node assigned to task l is located, T_l^k denotes the execution time of task l at cluster k, and r_l^{k*}(t) is updated accordingly;
(3-3) when a task has used up its estimated copy number, or its expected execution time has fallen below the set threshold, allocating no further node to the task;
(3-4) exiting the iteration when no task needs to be allocated a node within the current scheduling period.
CN202010147501.8A 2020-03-05 2020-03-05 Multi-copy-based task scheduling method and system for edge computing environment Active CN111381950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010147501.8A CN111381950B (en) 2020-03-05 2020-03-05 Multi-copy-based task scheduling method and system for edge computing environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010147501.8A CN111381950B (en) 2020-03-05 2020-03-05 Multi-copy-based task scheduling method and system for edge computing environment

Publications (2)

Publication Number Publication Date
CN111381950A CN111381950A (en) 2020-07-07
CN111381950B (en) 2023-07-21

Family

ID=71215233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010147501.8A Active CN111381950B (en) 2020-03-05 2020-03-05 Multi-copy-based task scheduling method and system for edge computing environment

Country Status (1)

Country Link
CN (1) CN111381950B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885136B (en) * 2020-07-15 2022-07-26 北京时代凌宇科技股份有限公司 Edge computing gateway cluster operation method and system based on edge cloud cooperation
CN111858051B (en) * 2020-07-20 2023-09-05 国网四川省电力公司电力科学研究院 Real-time dynamic scheduling method, system and medium suitable for edge computing environment
CN111901425B (en) * 2020-07-28 2021-05-28 平安科技(深圳)有限公司 CDN scheduling method and device based on Pareto algorithm, computer equipment and storage medium
CN112272203B (en) * 2020-09-18 2022-06-14 苏州浪潮智能科技有限公司 Cluster service node selection method, system, terminal and storage medium
CN112612601A (en) * 2020-12-07 2021-04-06 苏州大学 Intelligent model training method and system for distributed image recognition
CN113162965B (en) * 2021-01-07 2022-09-20 浙江大学 Low-delay Map and Reduce joint scheduling method for heterogeneous MapReduce cluster
CN113157431B (en) * 2021-02-02 2022-09-20 天津理工大学 Computing task copy distribution method for edge network application environment
CN112929438B (en) * 2021-02-04 2022-09-13 中国工商银行股份有限公司 Business processing method and device of double-site distributed database
CN113268346A (en) * 2021-05-26 2021-08-17 平安科技(深圳)有限公司 Dynamic resource allocation method and allocation device for edge nodes
CN113179331B (en) * 2021-06-11 2022-02-11 苏州大学 Distributed special protection service scheduling method facing mobile edge calculation
CN113485718B (en) * 2021-06-29 2023-11-03 浙大城市学院 Context-aware AIoT application program deployment method in edge cloud cooperative system
CN116010081B (en) * 2022-12-05 2024-04-30 大连理工大学 Real-time system randomization task scheduling method based on-line priority reverse budget analysis
CN115904671B (en) * 2023-02-20 2023-06-27 中国华能集团清洁能源技术研究院有限公司 Task scheduling method, device, equipment and medium in edge computing environment
CN116502834B (en) * 2023-04-10 2024-01-09 九河精微塑胶工业(深圳)有限公司 Workshop intelligent management method and system based on digitization
CN116610082B (en) * 2023-07-18 2023-10-31 安徽思高智能科技有限公司 RPA job workflow redundancy scheduling method and system based on deep reinforcement learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509276B (en) * 2018-03-30 2021-11-30 南京工业大学 Video task dynamic migration method in edge computing environment
CN109324886A (en) * 2018-09-14 2019-02-12 中国人民解放军国防科技大学 cluster resource scheduling method and device
CN109639833B (en) * 2019-01-25 2021-09-07 福建师范大学 Task scheduling method based on wireless metropolitan area network micro-cloud load balancing
CN110601992B (en) * 2019-09-20 2022-08-30 南方电网科学研究院有限责任公司 Data processing method and device of intelligent measurement terminal based on edge calculation

Also Published As

Publication number Publication date
CN111381950A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN111381950B (en) Multi-copy-based task scheduling method and system for edge computing environment
Koole et al. Resource allocation in grid computing
Muthuvelu et al. A dynamic job grouping-based scheduling for deploying applications with fine-grained tasks on global grids
CN104765640B (en) A kind of intelligent Service dispatching method
Chen et al. Adaptive multiple-workflow scheduling with task rearrangement
CN109871270B (en) Scheduling scheme generation method and device
CN108270805B (en) Resource allocation method and device for data processing
JP4912927B2 (en) Task allocation apparatus and task allocation method
Cheng et al. Mitigating the negative impact of preemption on heterogeneous mapreduce workloads
CN109710372B (en) Calculation intensive cloud workflow scheduling method based on owl search algorithm
CN116932201A (en) Multi-resource sharing scheduling method for deep learning training task
Stavrinides et al. Orchestrating bag-of-tasks applications with dynamically spawned tasks in a distributed environment
Rafsanjani et al. A new heuristic approach for scheduling independent tasks on heterogeneous computing systems
CN110413393B (en) Cluster resource management method and device, computer cluster and readable storage medium
Shah et al. Agent based priority heuristic for job scheduling on computational grids
CN109189581B (en) Job scheduling method and device
Dubey et al. QoS driven task scheduling in cloud computing
Natarajan Parallel queue scheduling in dynamic cloud environment using backfilling algorithm
CN111930485B (en) Job scheduling method based on performance expression
Lin et al. Two-tier project and job scheduling for SaaS cloud service providers
Liu et al. Leveraging dependency in scheduling and preemption for high throughput in data-parallel clusters
Ibrahim et al. Improving mapreduce performance with progress and feedback based speculative execution
Wang et al. Geoclone: Online task replication and scheduling for geo-distributed analytics under uncertainties
CN117909061A (en) Model task processing system and resource scheduling method based on GPU hybrid cluster
Gąsior et al. A Sandpile cellular automata-based scheduler and load balancer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant