CN108415763B - Distribution method of edge computing system - Google Patents
Distribution method of edge computing system
- Publication number
- CN108415763B CN201810140988.XA CN201810140988A
- Authority
- CN
- China
- Prior art keywords
- task
- energy consumption
- submodel
- cpu
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 29
- 238000005265 energy consumption Methods 0.000 claims description 94
- 238000004891 communication Methods 0.000 claims description 25
- 238000004364 calculation method Methods 0.000 claims description 21
- 230000008901 benefit Effects 0.000 abstract description 12
- 238000004422 calculation algorithm Methods 0.000 description 18
- 238000012545 processing Methods 0.000 description 10
- 238000005457 optimization Methods 0.000 description 8
- 230000008569 process Effects 0.000 description 7
- 230000008859 change Effects 0.000 description 4
- 238000004088 simulation Methods 0.000 description 4
- 238000012360 testing method Methods 0.000 description 3
- 238000007405 data analysis Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000004134 energy conservation Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000003306 harvesting Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000012887 quadratic function Methods 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 238000013468 resource allocation Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000026676 system process Effects 0.000 description 1
- 230000009897 systematic effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
- G06F9/4862—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate
- G06F9/4875—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate with migration policy, e.g. auction, contract negotiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4893—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues taking into account power or heat criteria
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0283—Price estimation or determination
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Development Economics (AREA)
- General Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Supply And Distribution Of Alternating Current (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a distribution method for an edge computing system, which comprises the following steps: S1, a computing terminal receives task information of a task to be distributed sent by a scheduler; S2, the computing terminal calculates the cost of executing the task according to a preset quotation model, determines a quotation, and applies to the scheduler for the task according to the quotation; and S3, the scheduler receives the quotations of all computing terminals and distributes the tasks to the computing terminals through a preset task distribution model so that the sum of the quotations over all tasks is optimal. The invention has the advantages of protecting the privacy of the computing terminals, accommodating the randomness of task arrivals, guaranteeing the truthfulness of the ROA scheme, and ensuring an optimal total system reward.
Description
Technical Field
The invention relates to the technical field of computer networks, in particular to a distribution method of an edge computing system.
Background
The advent of the Internet of Things (IoT) era will generate a great deal of raw data, from which big data analysis can extract highly valuable information. However, the computation required to analyze these data greatly exceeds the computing power of IoT devices. Therefore, we wish to extend cloud computing and services to the location where the data are generated, i.e., the edge of the network.
To address this issue, the compute-intensive analytics tasks can be migrated to mobile smart devices (MDs) in the vicinity of the IoT devices. These MDs, interconnected through short-range communication technologies, form a new architectural element called the ad-hoc cloud, which merges mobile computing with cloud computing. In general, edge computing provides nearby intelligent services and can meet the low-response-delay requirements of Internet of Things applications.
A number of existing techniques focus on offloading computation tasks to the network edge. Hohai University invented a power-consumption-optimized load task migration algorithm for edge computing environments (patent number: CN201710043453.6), and Beijing University of Technology invented an indoor wireless positioning method based on edge computing and a Bayesian posterior probability model (patent number: CN201610426115.6). There has also been considerable research on the bidding and auctioning of computing tasks for MDs, as described by A. L. Jin, W. Song, and W. Zhuang, "Auction-based resource allocation for sharing cloudlets in mobile cloud computing," IEEE Trans. Emerg. Topics Comput., to appear, DOI: 10.1109/TETC.2015.2487865, where the goal of the auction is to maximize the overall value of the MDs.
The prior art either uses centralized optimization methods, which require MDs to send private information to a third-party system or to other MDs, or defines static models that assume the arrival of computing tasks is known in advance; neither approach can protect the privacy of the MDs while accommodating the randomness of task arrivals.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides a distribution method for an edge computing system which can protect the privacy of the computing terminals, accommodate the randomness of task arrivals, and guarantee the truthfulness of the ROA auction scheme, thereby ensuring an optimal total reward sum for the system.
In order to solve the technical problems, the technical scheme provided by the invention is as follows: a method of allocating an edge computing system, comprising the steps of:
S1, a computing terminal receives task information of a task to be distributed sent by a scheduler;
S2, the computing terminal calculates the cost of executing the task according to a preset quotation model, determines a quotation, and applies to the scheduler for the task according to the quotation;
and S3, the scheduler receives the quotations of all computing terminals and distributes the tasks to the computing terminals through a preset task distribution model so that the sum of the quotations over all tasks is optimal.
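For illustration only, the flow of steps S1 to S3 can be sketched as follows in Python; the names auction_round, quote and allocate are assumptions introduced for this sketch and do not appear in the invention, and the actual quotation model and task distribution model are those given by formulas (1) to (8) below.

```python
from typing import Callable, Dict, List

def auction_round(tasks: List[dict],
                  terminals: List[dict],
                  quote: Callable[[dict, dict], float],
                  allocate: Callable[[List[List[float]]], Dict[int, int]]
                  ) -> Dict[int, int]:
    """Illustrative sketch of one auction round (steps S1-S3).

    quote(terminal, task) stands in for the quotation model of step S2 and
    allocate(bid_matrix) for the task distribution model of step S3; both
    are supplied by the caller in this sketch.
    """
    # S1: every computing terminal sees the broadcast task information.
    # S2: each terminal prices each task locally; only the resulting
    #     quotation is sent back to the scheduler, which protects privacy.
    bid_matrix = [[quote(term, task) for task in tasks] for term in terminals]
    # S3: the scheduler chooses winners so that the sum of the accepted
    #     quotations is optimal (minimal).
    return allocate(bid_matrix)
```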
Further, the task information includes an input data amount, an output data amount, a CPU calculation cycle number and a task reward of the task.
Further, the quotation model comprises a CPU speed sub-model, an energy consumption sub-model and a cost estimation sub-model;
the CPU speed submodel is used for calculating and determining the CPU speed for executing the task according to the CPU calculation cycle number;
the energy consumption submodel is used for calculating the energy consumption for executing the task according to the CPU speed, the input data volume and the output data volume of the task;
and the cost estimation submodel is used for estimating the cost of executing the task according to the CPU speed, the energy consumption and the task reward and determining the task quotation according to the cost.
Further, the CPU speed determined by the CPU speed submodel is the lowest CPU speed that meets the task execution requirement.
Further, the CPU speed sub-model satisfies the following formula (1):
r*_{n,k}(t) = argmin { f(r_{n,k}(t)) : r_{n,k}(t) ∈ R_{n,k}(t) }   (1)
In formula (1), r*_{n,k}(t) is the CPU speed determined by the calculation; R_{n,k}(t) is the set of CPU speeds at which the computing terminal can complete the task within one time slot, expressed as R_{n,k}(t) = { r ∈ r_n : r ≥ φ_k(t)/τ }; φ_k(t)/τ is the predetermined speed limit value; α_n and β_n are the predetermined system parameters of the computing terminal CPU; and f(·) is the preset objective function given in formula (11).
Further, the energy consumption submodel comprises a communication energy consumption submodel and an execution energy consumption submodel;
the communication energy consumption submodel is used for calculating communication energy consumption required for communication for executing the task according to the input data volume and the output data volume;
the execution energy consumption submodel is used for determining execution energy consumption required by executing the task according to the calculation period number and the speed of the CPU;
and the energy consumption submodel calculates the energy consumption for executing the task according to the communication energy consumption and the execution energy consumption.
Further, the communication energy consumption submodel satisfies the following formula (2):
d_{n,k}(t) = C_{n,rx} I_k(t) + C_{n,tx} O_k(t)   (2)
In formula (2), d_{n,k}(t) is the calculated communication energy consumption, C_{n,rx} is the energy consumption of the computing terminal for receiving a unit of data, I_k(t) is the input data amount of the task, C_{n,tx} is the energy consumption of the computing terminal for transmitting a unit of data, and O_k(t) is the output data amount of the task executed by the computing terminal;
the execution energy consumption submodel satisfies the following formula (3):
z_{n,k}(t) = c_n(t) l_{n,k}(t)   (3)
In formula (3), z_{n,k}(t) is the calculated execution energy consumption, c_n(t) is the energy consumption per unit time of the CPU of the computing terminal, l_{n,k}(t) = φ_k(t)/r_n(t) is the time required to execute the task, φ_k(t) is the number of CPU calculation cycles, and r_n(t) is the CPU speed;
the energy consumption for executing the task is calculated as shown in formula (4):
P_{n,k}(t) = z_{n,k}(t) + d_{n,k}(t)   (4)
In formula (4), P_{n,k}(t) is the energy consumption, z_{n,k}(t) is the execution energy consumption, and d_{n,k}(t) is the communication energy consumption.
Further, the cost estimation submodel satisfies the following formula (5):
v_{n,k}(t) = P_{n,k}(t)(E_n(t) - Λ_n) - V w_k(t)   (5)
In formula (5), v_{n,k}(t) is the calculated cost, P_{n,k}(t) is the energy consumption, E_n(t) is the energy queue, Λ_n is the energy buffer, V is a preset weight value, and w_k(t) is the preset reward of the task.
Further, the task allocation model preset in step S3 is shown in formula (6):
J*(t) = argmin_{J(t)} Σ_{n=1}^{N} Σ_{k∈K(t)} J_{n,k}(t) b_{n,k}(t)   (6)
In formula (6), J(t) is the set of winner indicators J_{n,k}(t); J_{n,k}(t) = 1 indicates that task k is allocated to terminal n for execution, and 0 otherwise; b_{n,k}(t) is the quotation of terminal n for task k; K(t) is the set of all tasks in time slot t; and N is the number of computing terminals.
Further, a reward distribution step S4 is included: after the scheduler distributes the task to the computing terminal, a corresponding reward is given to the computing terminal. The reward is determined according to a preset reward distribution model, shown as formula (7).
In formula (7), p_{n,k}(t) is the reward assigned to computing terminal n after task k is allocated to it; one term is the minimum value determined by formula (6) when terminal n does not participate, the other term is the minimum value determined by formula (6) when neither terminal n nor task k participates in scheduling, and K(t) is the set of all tasks in time slot t.
Compared with the prior art, the invention has the following advantages: each computing terminal computes the cost of completing a task according to the task information to be distributed, determines a quotation according to this cost, and applies to the scheduler for the task; the scheduler, on the basis of the quotations of all computing terminals for each task, selects a globally optimal scheme for task distribution. The task distribution scheme therefore ensures that the reward sum of the system is optimal while accommodating the randomness of task arrivals and guaranteeing the truthfulness of the ROA scheme; moreover, each computing terminal only needs to upload its own quotation to the scheduler, so the privacy of the computing terminal can be effectively protected.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
FIG. 2 is a comparison of system rewards for three different algorithms over time in an embodiment of the invention.
FIG. 3 is a graph of time-averaged MD benefits and time-averaged system rewards in accordance with an embodiment of the invention.
FIG. 4 is a diagram illustrating the sum of system rewards from a simulation test according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
As shown in fig. 1, the allocation method of the edge computing system of the present embodiment includes the steps of: s1, a computing terminal receives task information of a task to be distributed sent by a scheduler;
S2, the computing terminal calculates the cost of executing the task according to a preset quotation model, determines a quotation, and applies to the scheduler for the task according to the quotation; and S3, the scheduler receives the quotations of all computing terminals and distributes the tasks to the computing terminals through a preset task distribution model so that the sum of the quotations over all tasks is optimal.
In this embodiment, the task information includes an input data amount, an output data amount, a CPU calculation cycle number, and a task reward of the task. The quotation model comprises a CPU speed submodel, an energy consumption submodel and a cost estimation submodel; the CPU speed submodel is used for calculating and determining the CPU speed for executing the task according to the CPU calculation cycle number; the energy consumption submodel is used for calculating the energy consumption for executing the task according to the CPU speed, the input data volume and the output data volume of the task; and the cost estimation submodel is used for estimating the cost of executing the task according to the CPU speed, the energy consumption and the task reward and determining the task quotation according to the cost.
In this embodiment, an edge system having one scheduler and N computing terminals (MDs) is taken as an example, with the N computing terminals denoted MD_n, n ∈ {1, 2, …, N}. The edge system runs in time slots of a preset unit size, the time slots being denoted t ∈ {1, 2, …, T}. Each computing terminal has an energy module, such as a solar panel or a piezoelectric device, which converts ambient energy into electrical energy stored in a rechargeable battery for later use. The scheduler accumulates computing tasks from the IoT devices and broadcasts task information to the nearby MD_n. Each MD_n calculates the cost of processing the tasks and sends its bids to the scheduler; the scheduler decides the winner of each task and then sends the task information to the winning MD_n. A computing task is delay sensitive: its deadline cannot exceed the length of one time slot, and if the task is not processed within the current slot, it may be discarded or processed by another computing unit. In this embodiment, the scheduler acts as the auctioneer of a task and the computing terminals act as bidders: each computing terminal computes the cost of processing a task and bids for the task to the scheduler, and the scheduler selects the winning computing terminal according to the bids and pays that terminal a reward for executing the task.
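Purely as an illustrative aid and not as part of the claimed embodiment, the task information and terminal state described above could be held in the following Python structures; the field names are assumptions chosen to mirror the symbols used in the formulas below.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    """A delay-sensitive computing task broadcast by the scheduler."""
    input_bits: float    # I_k(t), input data amount
    output_bits: float   # O_k(t), output data amount
    cpu_cycles: float    # phi_k(t), required CPU calculation cycles
    reward: float        # w_k(t), preset reward of the task

@dataclass
class Terminal:
    """A computing terminal MD_n with an energy-harvesting battery."""
    speeds: List[float]  # r_n, available discrete CPU speeds (cycles/s)
    alpha: float         # alpha_n, CPU system parameter
    beta: float          # beta_n, CPU system parameter
    c_rx: float          # C_{n,rx}, energy to receive one unit of data
    c_tx: float          # C_{n,tx}, energy to transmit one unit of data
    buffer_cap: float    # Lambda_n, energy buffer (battery) capacity
    energy: float        # E_n(t), current energy queue length
```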
In this embodiment, batch- and stream-based computational tasks are considered, following the MapReduce and Google Dataflow programming models. In the t-th time slot, the available computation tasks form a set denoted K(t). Considering that MDs are ubiquitous, it is assumed that the number of MDs is greater than the number of computing tasks, i.e., N > K(t). The winner indicator is denoted J_{n,k}(t): if MD_n wins task k in time slot t, its value is 1; otherwise it is 0. In one time slot, each MD can win at most one task, and each task is computed by exactly one MD, i.e., formula (8) is satisfied:
Σ_{k∈K(t)} J_{n,k}(t) ≤ 1 for every terminal n, and Σ_{n=1}^{N} J_{n,k}(t) = 1 for every task k ∈ K(t)   (8)
In formula (8), J_{n,k}(t) is the winner indicator, K(t) is the set of all tasks in time slot t, and N is the number of computing terminals.
In this embodiment, by using its dynamic voltage and frequency scaling (DVFS) capability, the computing terminal MD_n can adjust its CPU speed to one of the available discrete rates in the set r_n = {r_{n,1}, r_{n,2}, …, r_{n,max}}; that is, the CPU speed of computing terminal n at time slot t satisfies formula (9):
r_n(t) ∈ r_n = {r_{n,1}, r_{n,2}, …, r_{n,max}}   (9)
In formula (9), r_n(t) is the CPU speed of computing terminal n in time slot t, and r_n is the set of available CPU speeds of the computing terminal.
If the number of CPU calculation cycles required to process task k is denoted φ_k(t), then the time to compute task k can be expressed as l_{n,k}(t) = φ_k(t)/r_n(t), and this time cannot exceed the length of one time slot; that is, the following formula (10) is satisfied:
l_{n,k}(t) ≤ τ   (10)
In formula (10), l_{n,k}(t) is the CPU time required for terminal n to process task k, and τ is the size of the unit time slot. Thus, the set of CPU speeds at which computing terminal n can complete task k within one time slot can be determined; it is denoted R_{n,k}(t) and satisfies R_{n,k}(t) = { r ∈ r_n : r ≥ φ_k(t)/τ }, where φ_k(t)/τ is the minimum CPU speed required to complete the execution of task k within one time slot.
In this embodiment, the CPU speed determined by the CPU speed sub-model is the lowest CPU speed that meets the task execution requirement. The CPU speed submodel satisfies the following formula (1):
r*_{n,k}(t) = argmin { f(r_{n,k}(t)) : r_{n,k}(t) ∈ R_{n,k}(t) }   (1)
In formula (1), r*_{n,k}(t) is the CPU speed determined by the calculation; R_{n,k}(t) is the set of CPU speeds at which the computing terminal can complete the task within one time slot, expressed as R_{n,k}(t) = { r ∈ r_n : r ≥ φ_k(t)/τ }; φ_k(t)/τ is the predetermined speed limit value; and α_n and β_n are the predetermined system parameters of the computing terminal CPU.
In the present embodiment, an objective function f(r_{n,k}(t)) is set as shown in formula (11).
In formula (11), f(r_{n,k}(t)) is the objective function, Λ_n is the size of the energy buffer of the computing terminal, E_n(t) is the size of the energy queue of the computing terminal, α_n and β_n are the predetermined system parameters of the computing terminal CPU, φ_k(t) is the number of CPU calculation cycles required to process task k, and r_n(t) is the CPU speed of the computing terminal.
In this embodiment, the energy buffer refers to the maximum capacity of the computing terminal battery, the energy queue refers to the current remaining charge of the computing terminal battery, and the remaining charge plus the charge that can still be replenished equals the maximum capacity.
Setting the derivative of formula (11) with respect to the CPU speed to 0 gives an unconstrained optimal speed, denoted r̃_{n,k}(t). If r̃_{n,k}(t) is itself an available CPU speed, the best solution to the CPU speed control problem is r*_{n,k}(t) = r̃_{n,k}(t); otherwise, r*_{n,k}(t) falls into the following cases:
1. If r̃_{n,k}(t) lies below the feasible set R_{n,k}(t), the energy consumption of the computing terminal increases monotonically over R_{n,k}(t), so r*_{n,k}(t) is the lowest speed in R_{n,k}(t).
2. If r̃_{n,k}(t) lies within the range of R_{n,k}(t), between two adjacent available speeds r_{n,i} and r_{n,i+1}, the convex optimization result of the CPU speed control problem is r_{n,i} or r_{n,i+1}, and therefore r*_{n,k}(t) is whichever of the two yields the smaller value of formula (11).
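Continuing the illustrative Terminal/Task sketch above, the CPU speed selection of formula (1) can be reproduced by enumerating the small discrete feasible set R_{n,k}(t) and taking the speed that minimizes the preset objective function; passing the objective in as a callable keeps the sketch independent of the exact form of formula (11).

```python
def cpu_speed(term, task, objective, tau=60.0):
    """Formula (1), as a sketch: pick the feasible discrete CPU speed that
    minimizes the caller-supplied objective function of formula (11).
    Feasibility means finishing phi_k(t) cycles within one slot of length
    tau, i.e. r >= phi_k(t) / tau (formulas (9) and (10))."""
    feasible = [r for r in term.speeds if r >= task.cpu_cycles / tau]
    if not feasible:
        return None                      # the task cannot meet its deadline
    # Brute-force argmin over R_{n,k}(t); with only a handful of discrete
    # speeds this reproduces the case analysis above.
    return min(feasible, key=objective)
```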
In this embodiment, the energy consumption submodel includes a communication energy consumption submodel and an execution energy consumption submodel; the communication energy consumption submodel is used for calculating communication energy consumption required for communication for executing the task according to the input data volume and the output data volume; the execution energy consumption submodel is used for determining execution energy consumption required by executing the task according to the calculation period number and the speed of the CPU; and the energy consumption submodel calculates the energy consumption for executing the task according to the communication energy consumption and the execution energy consumption.
In this embodiment, the communication energy consumption submodel satisfies the following formula (2):
d_{n,k}(t) = C_{n,rx} I_k(t) + C_{n,tx} O_k(t)   (2)
In formula (2), d_{n,k}(t) is the calculated communication energy consumption, C_{n,rx} is the energy consumption of the computing terminal for receiving a unit of data, I_k(t) is the input data amount of the task, C_{n,tx} is the energy consumption of the computing terminal for transmitting a unit of data, and O_k(t) is the output data amount of the task executed by the computing terminal. The execution energy consumption submodel satisfies the following formula (3):
z_{n,k}(t) = c_n(t) l_{n,k}(t)   (3)
In formula (3), z_{n,k}(t) is the calculated execution energy consumption, c_n(t) is the energy consumption per unit time of the CPU of the computing terminal, l_{n,k}(t) = φ_k(t)/r_n(t) is the time required to execute the task, φ_k(t) is the number of CPU calculation cycles, and r_n(t) is the CPU speed. The energy consumption for executing the task is calculated as shown in formula (4):
P_{n,k}(t) = z_{n,k}(t) + d_{n,k}(t)   (4)
In formula (4), P_{n,k}(t) is the energy consumption, z_{n,k}(t) is the execution energy consumption, and d_{n,k}(t) is the communication energy consumption.
In the present embodiment, the energy consumption per unit processing time of the CPU of computing terminal n, c_n(t), satisfies formula (12):
c_n(t) = α_n r_n(t)^3 + β_n   (12)
In formula (12), α_n and β_n are the predetermined system parameters of the computing terminal CPU, and r_n(t) is the CPU speed of computing terminal n in time slot t.
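As a non-limiting sketch that reuses the Terminal/Task structures introduced earlier, formulas (2), (3), (4) and (12) can be written out directly:

```python
def communication_energy(term, task):
    """Formula (2): d_{n,k}(t) = C_{n,rx} I_k(t) + C_{n,tx} O_k(t)."""
    return term.c_rx * task.input_bits + term.c_tx * task.output_bits

def execution_energy(term, task, r):
    """Formulas (3) and (12): z_{n,k}(t) = c_n(t) * l_{n,k}(t), with
    c_n(t) = alpha_n * r^3 + beta_n and l_{n,k}(t) = phi_k(t) / r."""
    return (term.alpha * r ** 3 + term.beta) * task.cpu_cycles / r

def total_energy(term, task, r):
    """Formula (4): P_{n,k}(t) = z_{n,k}(t) + d_{n,k}(t)."""
    return execution_energy(term, task, r) + communication_energy(term, task)
```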
In this embodiment, if MD_n wins task k in slot t, its benefit is determined by the payment it receives and the cost of processing the task. Using p_{n,k}(t) and v_{n,k}(t) to denote the payment received and the cost, respectively, the benefit of MD_n can be expressed as the difference between the two, as shown in formula (13):
U_n(t) = Σ_{k∈K(t)} J_{n,k}(t) ( p_{n,k}(t) - v_{n,k}(t) )   (13)
In formula (13), U_n(t) is the calculated benefit of MD_n, p_{n,k}(t) is the payment the computing terminal receives for completing task k, v_{n,k}(t) is the cost for the computing terminal to complete task k, and J_{n,k}(t) is the winner indicator.
In this embodiment, the total system reward depends on the accumulated rewards of all completed tasks, which can be expressed as shown in formula (14):
R(t) = Σ_{n=1}^{N} Σ_{k∈K(t)} J_{n,k}(t) w_k(t)   (14)
In formula (14), R(t) is the system reward sum, J_{n,k}(t) is the winner indicator, and w_k(t) is the reward of task k.
The decision variables for the system reward sum, comprising the CPU speeds and the task winner indicators, are represented by {r(t), J(t)}. A Sum-of-Rewards Optimization (SRO) problem is then defined with the goal of maximizing the long-term reward sum of the edge system, which can be expressed as shown in formula (15):
maximize  lim_{T→∞} (1/T) Σ_{t=1}^{T} E[R(t)]   (15)
In formula (15), R(t) is the system reward sum and E[·] denotes the mathematical expectation.
In dealing with the SRO problem, on the one hand, maintaining the long-term average reward sum depends on both the current and future assignment of tasks; on the other hand, obtaining the maximum net reward sum creates a conflict: a centralized method would expose the privacy of each MD_n, while a decentralized approach may lead each MD_n to bid in a way that maximizes its own benefit rather than the system reward sum.
In the present embodiment, the cost estimation submodel satisfies the following formula (5):
v_{n,k}(t) = P_{n,k}(t)(E_n(t) - Λ_n) - V w_k(t)   (5)
In formula (5), v_{n,k}(t) is the calculated cost, P_{n,k}(t) is the energy consumption, E_n(t) is the energy queue, Λ_n is the energy buffer, V is a preset weight value, and w_k(t) is the preset reward of the task.
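Continuing the same sketch, the cost of formula (5) follows directly from the helpers above; taking the quotation b_{n,k}(t) equal to this cost is an assumption of the sketch, since the embodiment only states that the quotation is determined from the cost.

```python
def task_cost(term, task, r, V):
    """Formula (5): v_{n,k}(t) = P_{n,k}(t) * (E_n(t) - Lambda_n) - V * w_k(t)."""
    P = total_energy(term, task, r)
    return P * (term.energy - term.buffer_cap) - V * task.reward

def bid(term, task, V, objective, tau=60.0):
    """Sketch of step S2: pick the CPU speed via formula (1), then quote the
    resulting cost (assumption of this sketch: quotation == cost)."""
    r = cpu_speed(term, task, objective, tau)
    return None if r is None else task_cost(term, task, r, V)
```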
In this embodiment, the computing terminal collects energy from surrounding energy sources, such as solar radiation and human motion, to charge a battery of finite capacity. The size of the energy buffer of computing terminal MD_n is denoted Λ_n, and the length of its energy queue is denoted E_n(t). The lengths of the energy queues of all computing terminals then form a set E(t) = {E_1(t), E_2(t), …, E_N(t)}. A random process η_n(t) is defined to denote the maximum energy that computing terminal MD_n can obtain within time slot t; η_n(t) has an upper limit η_max. Within one time slot η_n(t) remains the same, but it may change across time slots. Computing terminals collect as much energy from the environment as possible to charge their batteries, but the length of the energy queue cannot exceed the size of the energy buffer. Thus, within one time slot t, the energy collected by MD_n can be expressed as e_n(t) = min(Λ_n - E_n(t), η_n(t)). Taking the energy collected by the computing terminal as input and the energy consumed as output, the stochastic energy queue of the computing terminal evolves as shown in formula (16):
E_n(t+1) = E_n(t) - P_n(t) + e_n(t)   (16)
In formula (16), E_n(t) and E_n(t+1) are the energy queues at time slots t and t+1, respectively, P_n(t) is the energy that can be consumed in the time slot, and e_n(t) is the energy that can be replenished in the time slot. Since the consumed energy cannot exceed the available energy, P_n(t) ≤ E_n(t) must be satisfied.
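A minimal sketch of the energy queue update of formula (16), again using the Terminal structure above; the check P_n(t) ≤ E_n(t) is assumed to have been enforced by the caller.

```python
def update_energy_queue(term, consumed, eta):
    """Formula (16): E_n(t+1) = E_n(t) - P_n(t) + e_n(t), where the harvested
    energy is capped by the remaining buffer space,
    e_n(t) = min(Lambda_n - E_n(t), eta_n(t))."""
    harvested = min(term.buffer_cap - term.energy, eta)
    term.energy = term.energy - consumed + harvested
    return term.energy
```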
In this embodiment, formula (5) is derived by the following procedure. The cost estimation of computational tasks affects the performance of the computing system through the sum of rewards earned by the MD_n, so each MD_n needs to fully evaluate the benefit of processing a task. This embodiment designs the cost function of a computing task through the Lyapunov optimization technique. A Lyapunov function measuring the energy retention of all computing terminals MD_n is defined as shown in formula (17):
L(t) = (1/2) Σ_{n=1}^{N} (Λ_n - E_n(t))^2   (17)
In formula (17), L(t) is the calculated energy retention amount and Λ_n - E_n(t) is the amount of energy that can still be replenished at computing terminal n, where Λ_n is the size of the energy buffer of the computing terminal and E_n(t) is the size of its energy queue.
It follows from formula (17) that a smaller L(t) indicates that the battery stores more energy, i.e., it is closer to the fully charged state. The change of the Lyapunov function across time slots is measured by introducing the Lyapunov drift, as shown in formula (18):
Δ(t) = E[ L(t+1) - L(t) | E(t) ]   (18)
In formula (18), Δ(t) is the calculated Lyapunov drift, E[·|·] denotes the conditional mathematical expectation, L(t) and L(t+1) are the energy retention amounts at time slots t and t+1, respectively, and E(t) is the set of energy queue lengths of the computing terminals.
In this embodiment, to maximize the total reward, R(t) from formula (14) is weighted and combined into Δ(t) to obtain a drift-minus-reward function, as shown in formula (19):
Δ_V(t) = Δ(t) - V R(t)   (19)
In formula (19), Δ(t) is the Lyapunov drift, V is the preset weight, and R(t) is the system reward sum. The weight V represents the proportion of R(t) in formula (19): the larger V is, the larger the proportion of R(t), the more sensitive the system is to the reward obtainable by executing tasks, and the less sensitive it is to the stability of the energy queues. The smaller Δ(t) is, the smaller the change of the energy queues between two adjacent time slots, and the more stable the energy queues. A small -V R(t) means that V R(t) is large, i.e., V is large, or R(t) is large, or both. Therefore, to obtain the maximum system reward sum R(t), Δ_V(t) should be minimized: as long as Δ_V(t) is minimized, R(t) can be maximized while the energy queues remain stable.
In this embodiment, formula (19) is a quadratic function of two variables, and to simplify the calculation the following derivation is performed, first giving formula (20), in which the definition of each parameter is the same as described above. Squaring both sides of formula (16) yields formula (21), in which the definition of each parameter is the same as described above. Substituting formula (21) into formula (20) gives formula (22), in which the definition of each parameter is the same as described above. Substituting formula (22) into formula (19) yields formula (23). In formula (23), P_max is the maximum energy consumption for executing a task, η_max is the upper limit of the random process η_n(t) defined in this embodiment, and the remaining parameters are as defined above.
In this embodiment, the maximum energy consumption P_max for executing a task can be calculated from historical tasks. For example, if a time slot contains four tasks s1, s2, s3 and s4, the information provided by the tasks can be used to calculate the energy consumption of each task, P1, P2, P3 and P4, and the maximum of these is P_max.
Formula (23) is a simplified linear bound; to maximize the reward sum, the right-hand side of inequality (23) must be minimized. Substituting formula (14) into the right-hand side of formula (23) and simplifying yields formula (24), in which the definition of each parameter is the same as described above. It follows that minimizing the right-hand side of inequality (23) amounts to minimizing formula (24); from the linear structure of formula (24), the cost for computing terminal MD_n to process task k is obtained as shown in formula (5). The cost function in formula (5) contains two components: one is energy-dependent, namely P_{n,k}(t)(E_n(t) - Λ_n), and the other is associated with the weighted reward, namely V w_k(t). The cost function of MD_n increases with the energy consumption of processing task k, since an increase in energy consumption reduces the chance that MD_n can process future tasks. Viewing (E_n(t) - Λ_n) as the weight of P_{n,k}(t) and V as the weight of w_k(t), it also follows that the larger V is, the larger R(t), i.e., the larger the system reward sum.
In this embodiment, the task allocation model preset in step S3 is shown in formula (6):
J*(t) = argmin_{J(t)} Σ_{n=1}^{N} Σ_{k∈K(t)} J_{n,k}(t) b_{n,k}(t)   (6)
In formula (6), J(t) is the set of winner indicators J_{n,k}(t); J_{n,k}(t) = 1 indicates that task k is allocated to terminal n for execution, and 0 otherwise; b_{n,k}(t) is the quotation of terminal n for task k; K(t) is the set of all tasks in time slot t; and N is the number of computing terminals. Specifically, methods such as the Hungarian algorithm can be used to solve the allocation defined by the task allocation model. The task allocation model takes formula (8) as its constraint: each MD can win at most one task in one time slot, and each task is computed by exactly one MD.
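Since formula (6) with the constraints of formula (8) is a standard assignment problem, it can be solved, as noted above, with the Hungarian algorithm; the following sketch uses SciPy's linear_sum_assignment for that purpose and assumes a rectangular bid matrix with the terminals as rows and the tasks as columns (N ≥ K(t)).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate(bid_matrix: np.ndarray) -> dict:
    """Formula (6): choose the winner indicators J_{n,k}(t) minimizing the
    sum of accepted quotations, with each terminal winning at most one task
    and each task assigned to exactly one terminal (formula (8)).
    bid_matrix[n, k] is b_{n,k}(t)."""
    rows, cols = linear_sum_assignment(bid_matrix)  # Hungarian algorithm
    return {int(k): int(n) for n, k in zip(rows, cols)}  # task k -> winner n
```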
In this embodiment, the method further includes a reward distribution step S4: after the scheduler distributes a task to a computing terminal, it gives the computing terminal a corresponding reward, which is determined according to a preset reward distribution model shown as formula (7).
In formula (7), p_{n,k}(t) is the reward assigned to computing terminal n after task k is allocated to it; one term is the minimum value determined by formula (6) when terminal n does not participate, the other term is the minimum value determined by formula (6) when neither terminal n nor task k participates in scheduling, and K(t) is the set of all tasks in time slot t.
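The original formula (7) is not reproduced in this text; the sketch below implements one reading of the description above, in which the reward paid to terminal n for task k is the difference between the two minima of formula (6) that the text defines (without terminal n, and without both terminal n and task k). This VCG-style combination is an assumption of the sketch, not a statement of the patented formula.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reward_payment(bid_matrix: np.ndarray, n: int, k: int) -> float:
    """Assumed reading of formula (7): p_{n,k}(t) = Phi_{-n}(t) - Phi_{-n,-k}(t),
    where Phi_{-n}(t) is the minimum of formula (6) without terminal n and
    Phi_{-n,-k}(t) is the minimum without both terminal n and task k."""
    without_n = np.delete(bid_matrix, n, axis=0)
    r, c = linear_sum_assignment(without_n)
    phi_minus_n = without_n[r, c].sum()
    without_n_k = np.delete(without_n, k, axis=1)
    if without_n_k.size == 0:
        return float(phi_minus_n)
    r2, c2 = linear_sum_assignment(without_n_k)
    phi_minus_n_k = without_n_k[r2, c2].sum()
    return float(phi_minus_n - phi_minus_n_k)
```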
In this embodiment, the method of the present invention (the SRO optimization algorithm) is compared with a prior-art energy consumption optimization algorithm and a greedy algorithm through a specific simulation experiment. Ten computing terminals (MDs) are set, all with the same system parameters: α_n = 0.34, β_n = 0.35 (the same as a Google Nexus); the energy consumption for transmitting a unit of data is h(t) × 308.6 × 10^-9 W/bit, where h(t) varies with channel conditions, h(t) ∈ [0.5, 2.0], in order to guarantee a constant transmission rate; the energy consumption for receiving a unit of data is 200 × 10^-9 W/bit; and the adjustable CPU speed set of each MD is r_n = {0.1, 0.2, 0.4, 0.8, 1, 1.44} GHz.
The system processes a series of tasks of the same nature, with input data sizes drawn from {2000 Kb, 4000 Kb, 6000 Kb, 8000 Kb}. All tasks fall into two categories: data processing tasks and decision-making tasks. The computational complexity of the data processing tasks is 1000 cycles/bit and their output data is half of the input data; the computational complexity of the decision tasks is 3000 cycles/bit and their output data size is 500 Kb. Further, the length of each time slot τ is 60 s, and the energy harvesting (EH) rate is 20 mW.
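For reference, the simulation parameters listed above can be collected as follows (the values are quoted verbatim from this embodiment; the dictionary keys are illustrative):

```python
SIMULATION_SETUP = {
    "num_terminals": 10,
    "alpha_n": 0.34,
    "beta_n": 0.35,                        # same CPU parameters as a Google Nexus
    "tx_energy_per_bit": "h(t) * 308.6e-9 W/bit, h(t) in [0.5, 2.0]",
    "rx_energy_per_bit": 200e-9,           # W/bit
    "cpu_speeds_ghz": [0.1, 0.2, 0.4, 0.8, 1.0, 1.44],
    "input_sizes_kb": [2000, 4000, 6000, 8000],
    "data_task_cycles_per_bit": 1000,      # output = half of the input data
    "decision_task_cycles_per_bit": 3000,  # output = 500 Kb
    "slot_length_s": 60,
    "energy_harvesting_rate_mw": 20,
}
```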
The simulation results in fig. 2 show the system rewards obtained by the three algorithms over time. In the initial stage of operation, the system reward rises quickly and then stabilizes over time. The system reward obtained by the SRO algorithm is higher than those obtained by the energy consumption optimization algorithm and the greedy algorithm. This is because the energy optimization algorithm only considers the energy consumption of executing a task, so some tasks with high energy consumption may never be executed, while the greedy algorithm only considers the reward obtainable from the current task, which causes a terminal to use up its energy prematurely; even if a task with a higher reward appears later, the terminal cannot execute it due to the energy limitation. The cost evaluation function of the SRO algorithm, in contrast, jointly considers the currently available battery capacity of the terminal, the energy consumption of executing the task, and the reward obtainable from executing it.
In this embodiment, the SRO algorithm of the present invention is simulated separately for different values of the preset weight V, the number of computing terminals N, and the maximum energy upper limit η_max of the computing terminals, yielding the time-averaged MD benefit and time-averaged system reward curves shown in fig. 3. In fig. 3(a), the ordinate is the time average of the system reward over 2000 time slots; taking the point V = 0.8 on the black line as an example, the ordinate is the average system reward obtained by 5 MD terminals over 2000 time slots with V = 0.8 and N = 5. In fig. 3(b), the ordinate is the time average of the terminal benefit over 2000 time slots; taking the point V = 0.8 on the black line as an example, the ordinate is the time average of the sum of benefits of all MD terminals (N = 10) with V = 0.8 and a maximum energy harvesting (EH) rate of 20. From fig. 3(a) it can be seen that when V is between 0.2 and 2.0, the system reward increases as the number of MDs increases, because a larger number of MDs means more tasks can be executed by the system, and thus more system reward can be obtained. To verify the effectiveness of the SRO algorithm, a very large V (V = 100) was also taken; fig. 3(a) shows that the difference between the theoretically optimal system reward and the system reward obtained by the SRO algorithm (the corresponding curves in fig. 3(a)) is about 1.2% to 2.4%, i.e., the system reward obtained by the SRO algorithm is very close to the theoretical optimum. From fig. 3(b) it can be seen that the faster the energy harvesting rate, the greater the benefit of an MD, since the MD has more energy with which to execute tasks. Furthermore, comparing the two subfigures (N = 10), the system reward is greater than the MD benefit, which means the scheduler can capture part of the intermediate profit between the task producer and the terminals.
In this embodiment, fig. 4 shows the sum of system rewards obtained by simulating the SRO algorithm of the present invention under different preset weights V. The ordinate is the sum of the system rewards of all terminals, and each point represents the sum of the system rewards earned by all MDs in the current time slot. In the initial stage of operation, the system reward remains at a low level because the terminals have limited energy and can perform few or no tasks. After a period of time, the terminals have collected enough energy to perform more tasks and the system reward increases; eventually, the system reward stabilizes. With a larger V, the system reward also reaches a larger value.
The foregoing is merely illustrative of the preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Therefore, any simple modification, equivalent change or improvement made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical scheme of the present invention, should fall within the protection scope of the technical scheme of the present invention.
Claims (6)
1. A method for distributing an edge computing system, comprising the steps of:
S1, a computing terminal receives task information of a task to be distributed sent by a scheduler;
S2, the computing terminal calculates the cost of executing the task according to a preset quotation model, determines a quotation, and applies to the scheduler for the task according to the quotation;
S3, the scheduler receives the quotations of all computing terminals and distributes the tasks to the computing terminals through a preset task distribution model so that the sum of the quotations over all tasks is optimal;
the task information comprises the input data volume, the output data volume, the CPU calculation periodicity and the task reward of the task;
the quotation model comprises a CPU speed submodel, an energy consumption submodel and a cost estimation submodel;
the CPU speed submodel is used for calculating and determining the CPU speed for executing the task according to the CPU calculation cycle number;
the energy consumption submodel is used for calculating the energy consumption for executing the task according to the CPU speed, the input data volume and the output data volume of the task;
the cost estimation submodel is used for estimating the cost of executing the task according to the CPU speed, the energy consumption and the task reward, and determining the task quotation according to the cost;
calculating the determined CPU speed as the lowest CPU speed meeting the task execution requirement through the CPU speed submodel;
the CPU speed submodel satisfies the following formula (1):
r*_{n,k}(t) = argmin { f(r_{n,k}(t)) : r_{n,k}(t) ∈ R_{n,k}(t) }   (1)
in formula (1), r*_{n,k}(t) is the CPU speed determined by the calculation, R_{n,k}(t) is the set of CPU speeds at which the computing terminal can complete the task within one time slot, expressed as R_{n,k}(t) = { r ∈ r_n : r ≥ φ_k(t)/τ }, φ_k(t)/τ is the predetermined speed limit value, α_n and β_n are predetermined system parameters of the computing terminal CPU, and f(r_{n,k}(t)) is a preset objective function.
2. The method of allocation of an edge computing system of claim 1, wherein: the energy consumption submodel comprises a communication energy consumption submodel and an execution energy consumption submodel;
the communication energy consumption submodel is used for calculating communication energy consumption required for communication for executing the task according to the input data volume and the output data volume;
the execution energy consumption submodel is used for determining execution energy consumption required by executing the task according to the calculation period number and the speed of the CPU;
and the energy consumption submodel calculates the energy consumption for executing the task according to the communication energy consumption and the execution energy consumption.
3. The method of allocation of an edge computing system of claim 2, wherein: the communication energy consumption sub-model satisfies the following formula (2):
d_{n,k}(t) = C_{n,rx} I_k(t) + C_{n,tx} O_k(t)   (2)
in formula (2), d_{n,k}(t) is the calculated communication energy consumption, C_{n,rx} is the energy consumption of the computing terminal for receiving a unit of data, I_k(t) is the input data amount of the task, C_{n,tx} is the energy consumption of the computing terminal for transmitting a unit of data, and O_k(t) is the output data amount of the task executed by the computing terminal;
the execution energy consumption submodel satisfies the following formula (3):
z_{n,k}(t) = c_n(t) l_{n,k}(t)   (3)
in formula (3), z_{n,k}(t) is the calculated execution energy consumption, c_n(t) is the energy consumption per unit time of the CPU of the computing terminal, l_{n,k}(t) = φ_k(t)/r_n(t) is the time required to execute the task, φ_k(t) is the number of CPU calculation cycles, and r_n(t) is the CPU speed;
the energy consumption for executing the task is calculated as shown in formula (4):
P_{n,k}(t) = z_{n,k}(t) + d_{n,k}(t)   (4)
in formula (4), P_{n,k}(t) is the energy consumption, z_{n,k}(t) is the execution energy consumption, and d_{n,k}(t) represents the communication energy consumption.
4. The method of allocation of an edge computing system of claim 3, wherein: the cost estimation submodel satisfies the following formula (5):
v_{n,k}(t) = P_{n,k}(t)(E_n(t) - Λ_n) - V w_k(t)   (5)
in formula (5), v_{n,k}(t) is the calculated cost, P_{n,k}(t) is the energy consumption, E_n(t) is the energy queue, Λ_n is the energy buffer, V is a preset weight value, and w_k(t) is the preset reward of the task.
5. The method of allocation of an edge computing system of claim 4, wherein: the task allocation model preset in step S3 is shown in formula (6):
J*(t) = argmin_{J(t)} Σ_{n=1}^{N} Σ_{k∈K(t)} J_{n,k}(t) b_{n,k}(t)   (6)
in formula (6), J(t) is the set of winner indicators J_{n,k}(t); J_{n,k}(t) = 1 indicates that task k is allocated to terminal n for execution, and 0 otherwise; b_{n,k}(t) is the quotation of terminal n for task k; K(t) is the set of all tasks in time slot t; and N is the number of computing terminals.
6. The method of allocation of an edge computing system of claim 5, wherein: the method further comprises a reward distribution step S4: after the scheduler distributes the task to the computing terminal, the computing terminal is given a corresponding reward, the reward being determined according to a preset reward distribution model shown as formula (7);
in formula (7), p_{n,k}(t) is the reward assigned to computing terminal n after task k is allocated to computing terminal n; one term is the minimum value determined by formula (6) when terminal n does not participate, the other term is the minimum value determined by formula (6) when neither terminal n nor task k participates in scheduling, and K(t) is the set of all tasks in time slot t.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810140988.XA CN108415763B (en) | 2018-02-11 | 2018-02-11 | Distribution method of edge computing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810140988.XA CN108415763B (en) | 2018-02-11 | 2018-02-11 | Distribution method of edge computing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108415763A CN108415763A (en) | 2018-08-17 |
CN108415763B true CN108415763B (en) | 2020-12-11 |
Family
ID=63128502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810140988.XA Active CN108415763B (en) | 2018-02-11 | 2018-02-11 | Distribution method of edge computing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108415763B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110913279B (en) * | 2018-09-18 | 2022-11-01 | 中科海微(北京)科技有限公司 | Processing method for augmented reality and augmented reality terminal |
CN109343904B (en) * | 2018-09-28 | 2021-12-10 | 燕山大学 | Lyapunov optimization-based fog calculation dynamic unloading method |
CN109121151B (en) * | 2018-11-01 | 2021-06-11 | 南京邮电大学 | Distributed unloading method under small cell integrated mobile edge calculation |
CN109710336B (en) * | 2019-01-11 | 2021-01-05 | 中南林业科技大学 | Mobile edge computing task scheduling method based on joint energy and delay optimization |
CN109714797B (en) * | 2019-02-18 | 2023-01-31 | 南京邮电大学 | Mobile edge network resource allocation method based on auction theory |
CN110113140B (en) * | 2019-03-08 | 2020-08-11 | 北京邮电大学 | Calculation unloading method in fog calculation wireless network |
CN111443990B (en) * | 2020-03-25 | 2023-04-07 | 中南大学 | Edge calculation task migration simulation system |
CN112929915B (en) * | 2021-02-20 | 2022-08-02 | 中南大学 | Dynamic data unloading method and system for mobile edge calculation |
CN113301158B (en) * | 2021-05-25 | 2022-03-22 | 东北大学 | Resource allocation method based on auction theory under mobile edge computing environment |
CN113703970B (en) * | 2021-08-13 | 2023-06-02 | 北京信息科技大学 | Auction mechanism-based server resource allocation method, device, equipment and medium |
CN113626107B (en) * | 2021-08-20 | 2024-03-26 | 中南大学 | Mobile computing unloading method, system and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101918933A (en) * | 2007-12-05 | 2010-12-15 | 生命力有限公司 | System and method for intelligently allocating client requests to server centers |
CN103853618A (en) * | 2014-03-06 | 2014-06-11 | 南京理工大学 | Resource allocation method with minimized cloud system cost based on expiration date drive |
CN105225016A (en) * | 2015-10-29 | 2016-01-06 | 华东师范大学 | A kind of in the cloud computing system of renewable energy supply based on the energy distributing method of cooperative game |
CN105426241A (en) * | 2015-11-16 | 2016-03-23 | 北京航空航天大学 | Cloud computing data center based unified resource scheduling energy-saving method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120296974A1 (en) * | 1999-04-27 | 2012-11-22 | Joseph Akwo Tabe | Social network for media topics of information relating to the science of positivism |
US6405309B1 (en) * | 1999-06-18 | 2002-06-11 | Phoenix Technologies Ltd. | Method and apparatus for creating and deploying smaller Microsoft Windows applications for automatic configuration of a computing device |
US7730248B2 (en) * | 2007-12-13 | 2010-06-01 | Texas Instruments Incorporated | Interrupt morphing and configuration, circuits, systems and processes |
US9378065B2 (en) * | 2013-03-15 | 2016-06-28 | Advanced Elemental Technologies, Inc. | Purposeful computing |
US10007513B2 (en) * | 2015-08-27 | 2018-06-26 | FogHorn Systems, Inc. | Edge intelligence platform, and internet of things sensor streams system |
CN107317700B (en) * | 2017-06-09 | 2020-06-30 | 湖北理工学院 | Vehicle-mounted edge computing node selection system and method |
CN107295109A (en) * | 2017-08-16 | 2017-10-24 | 重庆邮电大学 | Task unloading and power distribution joint decision method in self-organizing network cloud computing |
-
2018
- 2018-02-11 CN CN201810140988.XA patent/CN108415763B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101918933A (en) * | 2007-12-05 | 2010-12-15 | 生命力有限公司 | System and method for intelligently allocating client requests to server centers |
CN103853618A (en) * | 2014-03-06 | 2014-06-11 | 南京理工大学 | Resource allocation method with minimized cloud system cost based on expiration date drive |
CN105225016A (en) * | 2015-10-29 | 2016-01-06 | 华东师范大学 | A kind of in the cloud computing system of renewable energy supply based on the energy distributing method of cooperative game |
CN105426241A (en) * | 2015-11-16 | 2016-03-23 | 北京航空航天大学 | Cloud computing data center based unified resource scheduling energy-saving method |
Also Published As
Publication number | Publication date |
---|---|
CN108415763A (en) | 2018-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108415763B (en) | Distribution method of edge computing system | |
CN112882815B (en) | Multi-user edge calculation optimization scheduling method based on deep reinforcement learning | |
CN110928654B (en) | Distributed online task unloading scheduling method in edge computing system | |
CN109829332A (en) | A kind of combined calculation discharging method and device based on energy collection technology | |
CN111401744B (en) | Dynamic task unloading method in uncertainty environment in mobile edge calculation | |
CN107911478A (en) | Multi-user based on chemical reaction optimization algorithm calculates discharging method and device | |
CN109981744B (en) | Data distribution method and device, storage medium and electronic equipment | |
CN115629865B (en) | Deep learning inference task scheduling method based on edge calculation | |
WO2021182948A1 (en) | Method and system for allocating charging resources to a plurality of charging stations | |
CN116366576A (en) | Method, device, equipment and medium for scheduling computing power network resources | |
Zhang et al. | A dynamic resource overbooking mechanism in fog computing | |
KR101065436B1 (en) | Stochastic scheduling of a real-time parallel task with uncertain computation amount on mulit-core processors | |
Xu et al. | Basic: Distributed task assignment with auction incentive in uav-enabled crowdsensing system | |
CN111708620B (en) | Task unloading method with charging mechanism | |
Zhou et al. | Where to process: deadline-aware online resource auction in mobile edge computing | |
CN114546660B (en) | Multi-unmanned aerial vehicle cooperation edge computing method | |
CN113555887B (en) | Power grid energy control method and device, electronic equipment and storage medium | |
Rublein et al. | Scalable resource allocation techniques for edge computing systems | |
Zhang et al. | Computational task offloading algorithm based on deep reinforcement learning and multi-task dependency | |
Pang et al. | Eris: An Online Auction for Scheduling Unbiased Distributed Learning Over Edge Networks | |
Liu et al. | Long-term auction for inner-dependent task offloading in blockchain-enabled edge computing | |
CN115344388B (en) | Power real-time simulation calculation task allocation method and device considering communication and calculation force | |
CN118508434B (en) | Resource allocation method, device, equipment and medium | |
Zhong et al. | Multiobjective African Vulture Scheduling Algorithm in Green Mobile Edge Computing | |
CN115297123B (en) | Fog computing task unloading method and system based on global cost minimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |