WO2018050242A1 - Efficient scheduler for short periodic tasks - Google Patents
Efficient scheduler for short periodic tasks
- Publication number
- WO2018050242A1 (PCT/EP2016/072014)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- invocation
- periodic
- entry
- task
- tasks
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4887—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
Definitions
- the present invention in some embodiments thereof, relates to scheduling a plurality of tasks and, more specifically, but not exclusively, to efficiently scheduling a plurality of periodic tasks by maintaining a min-heap structure in which the plurality of periodic tasks are grouped according to their common invocation frequencies.
- scheduling is a method for assigning resources and/or means for executing and completing a work.
- the work may be virtual computation elements such as, for example, threads, processes, data flows and/or the like that are scheduled onto hardware resources such as, for example, processors, network links, dedicated hardware and/or the like.
- Scheduling is a fundamental building block for any software implementation on any hardware platform and an intrinsic part of the execution model of the computing system. Scheduling is the element making it possible to have computer multitasking in which the plurality of computation elements share available hardware resources of the hardware platform and are executed concurrently.
- the scheduling activity may be carried out by a scheduler executed on the processing hardware, for example, as a stand-alone software module, as part of an operating system (OS), as a low level pre-OS software module and/or the like.
- the scheduler may also be implemented as a combination of any two or more of the above.
- the scheduler may be implemented using one or more of a plurality of techniques to allow the plurality of computation elements to be effectively computed by the available hardware resources.
- the scheduler may be implemented to achieve one or more computation objectives for the plurality of computation elements executed by the available hardware resources, for example, balancing load, maximizing throughput, minimizing response time, minimizing latency, maximizing fairness and/or the like.
- the scheduler may be implemented to provide an optimal compromise between the conflicting computation objectives.
- the scheduler may be further adapted according to the characteristics of the computing system.
- an apparatus for scheduling periodic tasks comprising: a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency; and a processor adapted to perform a scheduling process at a timeout event by: analyzing a first entry at the beginning of a top list of the plurality of lists to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure, invoking the respective periodic task associated with the first entry, and rearranging the min-heap structure; wherein the scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
- the min-heap structure is a binary min-heap tree structure created to support the scheduling process
- the min-heap structure comprises a root node and at least one child node
- each of the nodes is associated with one of the plurality of lists and the lists are doubly linked lists.
- each of the entries of each of the lists comprises a timing of invocation of the associated periodic task.
- each respective entry associated with a respective task comprises a pointer to a first entry associated with a preceding periodic task and a pointer to a second entry associated with a succeeding periodic task, wherein the respective entry, the first entry and the second entry are included in a respective list of the plurality of lists, wherein the preceding periodic task is scheduled for an earlier invocation time than the respective task and the succeeding periodic task scheduled for a later invocation time than the respective periodic task, and wherein the processor uses the pointers to traverse through the respective list.
- the processor calculates a timing of a following timeout event and sets a system timer to expire at the timing of the following timeout event.
- system timer is a hardware timer.
- the system timer is provided by an operating system executed by the processor.
- upon a scheduling request of a respective periodic task of the plurality of periodic tasks, the processor identifies the invocation frequency of the respective periodic task, creates a new entry for the respective periodic task and links it to the end of a respective list of the plurality of lists according to the identified invocation frequency.
- the processor is further adapted to create a new list in the min-heap structure in case the identified invocation frequency of a scheduled request is different from the existing invocation frequency associated with any of the plurality of lists.
- the processor is adapted to remove an entry of the plurality of entries that is associated with the respective periodic task.
- the processor is further adapted to reschedule the respective periodic task by adding a new entry associated with the respective periodic task in case the respective periodic task requests rescheduling, the new entry is added at the end of a respective list of the plurality of lists.
- a method of scheduling periodic tasks comprising: identifying, at a timeout event, one of a plurality of periodic tasks that is scheduled for invocation at a current time by analyzing a first entry at the beginning of a top list of a plurality of lists of a min-heap structure to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure; invoking the respective periodic task associated with the first entry; and rearranging the min-heap structure; wherein the identifying, invoking and rearranging are repeated until no periodic tasks scheduled for invocation at the current time are identified.
- the min-heap structure is a binary min-heap tree structure created to support the scheduling process, the min-heap structure comprises a root node and at least one child node, each of the nodes is associated with one of the plurality of lists and the lists are doubly linked lists.
- each of the entries of each of the lists comprises a timing of invocation of the associated periodic task.
- each respective entry associated with a respective task comprises a pointer to a first entry associated with a preceding periodic task and a pointer to a second entry associated with a succeeding periodic task, wherein the respective entry, the first entry and the second entry are included in a respective list of the plurality of lists, wherein the preceding periodic task is scheduled for an earlier invocation time than the respective task and the succeeding periodic task is scheduled for a later invocation time than the respective periodic task, and wherein the processor uses the pointers to traverse through the respective list.
- system timer is a hardware timer.
- system timer is provided by an operating system executed by the processor.
- upon a scheduling request of a respective periodic task of the plurality of periodic tasks, the processor identifies the invocation frequency of the respective periodic task, creates a new entry for the respective periodic task and links it to the end of a respective list of the plurality of lists according to the identified invocation frequency.
- the processor is further adapted to create a new list in the min-heap structure in case the identified invocation frequency of a scheduled request is different from existing invocation frequency associated with any of the plurality of lists.
- the processor is adapted to remove an entry of the plurality of entries that is associated with the respective periodic task.
- the processor is further adapted to reschedule the respective periodic task by adding a new entry associated with the respective periodic task in case the respective periodic task requests rescheduling, the new entry is added at the end of a respective list of the plurality of lists.
- a system for scheduling periodic tasks comprising a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency and a processor adapted to perform a scheduling process at a timeout event by:
- the top list is located at a root node of the min-heap structure.
- scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
- a method of scheduling periodic tasks comprising:
- identifying, at a timeout event, one of a plurality of periodic tasks that is scheduled for invocation at a current time by analyzing a first entry at the beginning of a top list of a plurality of lists of a min-heap structure to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure; invoking the respective periodic task associated with the first entry; and rearranging the min-heap structure.
- identifying, invoking and rearranging are repeated until no periodic tasks scheduled for invocation at the current time are identified.
- a computer program having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- FIG. 1 is a flowchart of an exemplary process for scheduling periodic tasks, according to some embodiments of the present invention
- FIG. 2 is a schematic illustration of an exemplary system for scheduling periodic tasks, according to some embodiments of the present invention
- FIG. 3 is a schematic illustration of an exemplary network system embodiment employing the periodic tasks scheduling, according to some embodiments of the present invention
- FIG. 4 is a schematic illustration of an exemplary min-heap structure used for scheduling periodic tasks, according to some embodiments of the present invention
- FIG. 5 is a graph presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment in a first range, according to some embodiments of the present invention.
- FIG. 6 is a graph presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment in a second and a third range, according to some embodiments of the present invention.
- the present invention in some embodiments thereof, relates to scheduling a plurality of tasks and, more specifically, but not exclusively, to efficiently scheduling a plurality of periodic tasks by maintaining a min-heap structure in which the plurality of periodic tasks are grouped according to their common invocation frequencies.
- the present invention presents apparatuses, systems and methods for scheduling tasks, in particular periodic tasks by utilizing a scheduler that uses a min-heap structure for representing one or more double-link lists (DLLs) arranged to map a plurality of periodic tasks grouped together according to an invocation frequency of the tasks.
- the plurality of periodic tasks is grouped in DLL(s) such that a subset of the plurality of periodic tasks grouped in each of the lists share a common invocation frequency.
- Each of the DLLs comprises one or more entries each associated with one of the periodic tasks.
- Each of the entries stores an absolute invocation time of the associated periodic task and the entries in each respective list are ordered according to the absolute invocation time of the associated periodic task.
- each entry holds a pointer to a preceding entry associated with a periodic task having an earlier invocation time and a pointer to a succeeding entry associated with a periodic task having a later invocation time.
- the scheduler dynamically rearranges the min-heap structure to maintain the entries in each of the DLL(s) time sorted according to the absolute invocation time of their associated task.
- the DLL architecture allows the scheduler to traverse rapidly and efficiently through the time-sorted min-heap structure (using the forward and/or backward pointers available in each of the entries) to search for task(s) to be invoked, to add tasks, to remove tasks and/or to rearrange the min-heap structure.
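To make this layout concrete, the following C sketch shows one possible (purely hypothetical) set of data structures matching the description above: one entry per periodic task, one doubly linked list per invocation frequency, and a binary min-heap of list heads keyed by the earliest absolute invocation time. None of the names or fields are taken from the patent; it is a minimal sketch, assuming a nanosecond time base.

```c
#include <stddef.h>
#include <stdint.h>

typedef void (*task_fn)(void *arg);          /* the body of a periodic task        */

/* One entry per scheduled periodic task (hypothetical layout). */
struct sched_entry {
    struct sched_entry *prev;                /* earlier invocation time, same list */
    struct sched_entry *next;                /* later invocation time, same list   */
    uint64_t invoke_at_ns;                   /* absolute invocation time           */
    uint64_t period_ns;                      /* 1 / invocation frequency           */
    task_fn  fn;
    void    *arg;
};

/* One doubly linked list per distinct invocation frequency. */
struct freq_list {
    uint64_t period_ns;                      /* period shared by all entries       */
    struct sched_entry *head;                /* earliest pending entry             */
    struct sched_entry *tail;                /* latest pending entry               */
};

/* The scheduler: an array-backed binary min-heap of frequency lists, ordered
 * by the absolute invocation time of each list's head entry. */
struct scheduler {
    struct freq_list **heap;                 /* heap[0] is the root list           */
    size_t nlists;                           /* number of distinct frequencies N_F */
    size_t capacity;
    int timerfd;                             /* the single system timer            */
};
```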
- the scheduler schedules timeout events using one or more timers, for example, a hardware timer, an operating system timer and/or the like.
- the scheduler schedules the timeout events according to the earliest future absolute invocation time identified in the entries in the DLL(s).
- the scheduler is invoked and analyzes the absolute invocation times stored in the entries in the DLL(s) to identify one or more of the periodic tasks that need to be invoked at the current time and invokes them.
- the scheduler then rearranges (heapifies) the min-heap structure DLLs and/or entries and repeats the analysis to determine whether one or more additional tasks need to be invoked.
- the scheduler may reinsert the entry(s) associated with the invoked task(s) at the end of the respective DLL.
- the scheduler may rearrange a root node and/or one or more of child nodes of the min-heap structure according to the absolute invocation time of the associated tasks.
- the scheduler also sorts the entries in the DLL according to the absolute invocation time of their associated tasks and updates the respective pointers of the entries accordingly. During each timeout event, the analysis and invocation process is repeated until no more tasks need to be invoked at the current time; the scheduler then schedules a new timeout event, i.e. the following timeout event.
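A minimal sketch of that timeout-event loop, using the hypothetical structures introduced earlier; now_ns(), heapify_down() and arm_timer() are assumed helpers (the latter two are sketched further below), and error handling is omitted.

```c
uint64_t now_ns(void);                                 /* current absolute time      */
void heapify_down(struct scheduler *s, size_t i);      /* restore min-heap order     */
void arm_timer(struct scheduler *s, uint64_t when_ns); /* set the single timer       */

void on_timeout(struct scheduler *s)
{
    for (;;) {
        struct freq_list *top = s->heap[0];            /* root: earliest head entry  */
        struct sched_entry *e = top->head;
        if (e == NULL || e->invoke_at_ns > now_ns())
            break;                                     /* nothing more is due now    */

        e->fn(e->arg);                                 /* invoke the periodic task   */

        /* Unlink the invoked entry from the head of its list. */
        top->head = e->next;
        if (top->head) top->head->prev = NULL; else top->tail = NULL;

        /* Reinsert it one period later at the tail; all entries of the list
         * share the same period, so the tail is the sorted position: O(1). */
        e->invoke_at_ns += e->period_ns;
        e->prev = top->tail;
        e->next = NULL;
        if (top->tail) top->tail->next = e; else top->head = e;
        top->tail = e;

        heapify_down(s, 0);                            /* O(log N_F): key may grow   */
    }
    if (s->nlists > 0 && s->heap[0]->head != NULL)
        arm_timer(s, s->heap[0]->head->invoke_at_ns);  /* schedule next timeout      */
}
```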
- the scheduler may receive a plurality of requests from new and/or existing tasks and may create, add and/or remove entry(s) and/or DLL(s) in the min-heap structure. Following the requests, the scheduler rearranges the min-heap structure to maintain the time order of the entries in each of the DLLs and/or of the min-heap structure nodes.
- the scheduler presented herein may significantly improve utilization of the computing system compared to currently existing scheduling methodologies and/or implementations.
- the existing scheduling implementations may utilize one or more timers, hardware timers and/or OS provided timers for scheduling the plurality of tasks.
- the timers generate a timeout event for every invocation of one of the tasks and may therefore present a significant overhead, particularly for systems having a high number of task invocations thus exhausting computing resources of the system.
- the presented scheduler allows for high scalability as it relies on analyzing the min-heap structure to identify a plurality of tasks that need to be invoked at the current time.
- While configuring the timer to trigger the next timeout event according to the absolute invocation time of the next task (earliest future invocation), once the timeout event is triggered, the scheduler analyzes the min-heap structure to identify additional tasks that may need to be invoked at the current time. As time elapses during the timeout event, the invocation time of additional tasks may become due; the process of rescheduling additional timeout event(s) may therefore be avoided and the respective tasks may be invoked immediately. Analyzing the min-heap structure to identify the plurality of tasks that may need to be invoked may be of particular advantage when the tasks are short and periodic, and possibly have common invocation frequencies. This may significantly reduce the number of invocations of the scheduler, thus reducing the overhead of the scheduler itself.
- the scheduler may present significant improvement in particular for high invocation rate systems where the invocations may rapidly take place one after the other. Moreover, only a single timer may be required to manage the scheduling when using the presented scheduler. This may reduce the hardware and/or OS resources required for scheduling the tasks.
- the scheduler may provide a finer granularity compared to the timers used by the existing scheduling implementations, specifically a timer wheel mechanism, since the present scheduler may dynamically configure the hardware timer to the granularity required by the next task scheduled for invocation, thus avoiding the fixed granularity applied by the timer wheel.
- While some existing scheduling implementations, for example the timer wheel, present improved performance, especially for high invocation numbers (a high number of applications), they present drawbacks that may be overcome by the presented scheduler.
- the timer wheel employs a time slots implementation in which the tasks are executed within their assigned time slots. This results in the timer wheel being constantly active even when no tasks' invocation is scheduled and/or when a low invocation number is required. The overhead exhibited by the timer wheel is therefore significant, especially for low invocation number scenarios.
- the scheduler presented herein is invoked only when task invocation is required and is suspended at any other time. Therefore, the scheduler presents a very low overhead regardless of the number of invocations (number of tasks). Moreover, the scheduler may present significant performance improvement over the timer wheel in scenarios where the tasks are periodic tasks and even more so when the tasks are periodic and short. As opposed to the timer wheel that may hold a single ordered list for listing the scheduled tasks, the presented scheduler relies on the min-heap structure comprising the DLLs that are arranged to group together tasks having a common invocation frequency. This may allow the presented scheduler to traverse more rapidly and efficiently through the min-heap structure to search for the next task(s) to be invoked and/or to rearrange the min-heap to maintain time order.
- the advantage of the invocation frequency arranged DLLs is even more evident since the scheduler dynamically arranges the DLLs such that the periodic tasks that have the common invocation frequency are already ordered one after the other, making the search and invocation highly efficient for the scheduler. Therefore, arranging the tasks in the min-heap tree structure may significantly reduce the seek time, thus reducing computing resources and/or computation time.
- the reduced overhead, the fast search and the efficient min-heap structure rearrangement may significantly reduce the computing resources required by the scheduler, which in turn may translate to reduced costs and/or computation time.
- the present invention may be an apparatus, a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- a network for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer such as the user equipment (UE), as a standalone software package, partly on the user's computer and partly on a remote computer such as the network apparatus or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- LAN local area network
- WAN wide area network
- Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- FPGA field-programmable gate arrays
- PLA programmable logic arrays
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- a process 100 for scheduling periodic tasks utilizes a scheduler that uses a min-heap structure comprising a root node and one or more child nodes each associated with a list of a plurality of periodic tasks grouped together according to their invocation frequency.
- the plurality of periodic tasks is grouped in the lists such that a subset of the plurality of periodic tasks grouped in each of the lists share a common invocation frequency.
- Each of the lists is a DLL comprising one or more entries each associated with one of the periodic tasks.
- Each of the entries stores an absolute invocation time of the associated periodic task and the entries in each respective list are ordered according to the absolute invocation time of the associated periodic task.
- each entry holds a pointer to a preceding entry associated with a periodic task having an earlier invocation time and a pointer to a succeeding entry associated with a periodic task having a later invocation time.
- the scheduler schedules a timeout event using one or more timers, for example, a hardware timer, an operating system timer and/or the like, typically a single timer. The scheduler schedules the timeout event according to the earliest future absolute invocation time identified in the entries in the DLL(s).
- the scheduler is invoked and analyzes the absolute invocation times stored in the entries in the DLL(s) to identify one or more of the periodic tasks that need to be invoked at the current time and invokes them.
- the scheduler traverses the list(s) using the forward pointer available in each of the entries to continue scanning the list, invoking the tasks until it finds a task that is not scheduled for invocation at the current time but rather at a future time. Since the DLLs are arranged in the min-heap according to the absolute invocation time of the first associated task, the scheduler may traverse rapidly through the min-heap structure.
- the scheduler then rearranges (heapifies) the min-heap structure lists and/or entries and repeats the analysis to determine whether one or more additional tasks need to be invoked.
- the scheduler may rearrange the root node and one or more of the child nodes according to the absolute invocation time of the associated tasks with respect to the current absolute time.
- the scheduler may also update the order of one or more of the entries and update accordingly the respective pointers of the one or more entries.
- the analysis and invocation process is repeated until no more tasks need to be invoked at the current time and the scheduler then schedules a new timeout event.
- the scheduler may receive a plurality of scheduling requests from the plurality of periodic tasks and may create, add and/or remove entries and/or lists in the min-heap structure according to the invocation time(s) of the requesting tasks.
- FIG. 2 is a schematic illustration of an exemplary system for scheduling periodic tasks, according to some embodiments of the present invention.
- An exemplary system 200, for example, a computer, a server, a processing node, a network apparatus and/or any device comprising a processor(s) 202, may execute a scheduler 230 employing the process 100 for scheduling an execution of a plurality of tasks 220, in particular periodic tasks.
- the scheduler 230 and the tasks 220 are software modules each comprising a plurality of program instructions executed by the processor(s) 202 from one or more volatile and/or non-volatile memories, for example, a cache, a random access memory (RAM), a read only memory (ROM), a Flash array, a disk drive and/or the like.
- the processor(s) 202 hosts an OS 210, for example, Windows, Linux, Android and/or the like such that the tasks 220 are executed in the OS 210 environment.
- the scheduler 230 and the tasks 220 may be executed in an OS less environment where the processor(s) 202 executes the scheduler 230 to schedule the invocation of the tasks 220.
- the processor(s) 202 hosts a plurality of instances of the OS 210 where each OS 210 instance executes a scheduler such as the scheduler 230 for scheduling the invocation of the tasks 220.
- the scheduler 230 may further be a pre-OS software module in the scenario of multiple OS 210 instances such that the scheduler 230 manages the invocation of the OS 210 instances and/or the tasks 220 executed in the environment of each of the OS 210 instances for sharing of the processor(s) 202 computation resources.
- the processor(s) 202 may include one or more processing cores.
- the processor(s) 202, homogenous or heterogeneous, may be further arranged for parallel processing, as clusters and/or as one or more multi-core processor(s).
- the processor(s) 202 may also include one or more distributed processing nodes, for example, a cluster of processing nodes, a network processing node and/or the like each comprising one or more processors and/or processing cores.
- the system 200 is employed for a network environment comprising a plurality of end devices (clients) that communicate over the network through one or more access devices connecting to one or more core devices constituting the network infrastructure.
- An exemplary network 300 in particular but not limited to, a cellular network comprises a plurality of user equipment (UE) devices 302 communicating over the network through one or more access devices 304 that connect to a core device 306 that may be a part of a backbone infrastructure of the network 300.
- UE user equipment
- Each of the UEs 302 may be, for example, a cellular device, a mobile phone, a laptop, a processing node, an end-point and/or the like.
- the access devices 304 may include for example, an aggregation device, a gateway, an access point and/or the like.
- Each of the UEs 302 maintains one or more sessions with the respective access device 304 that in turn maintains one or more sessions with the core device 306.
- the access device 304 having a processor such as the processor 202 may therefore need to manage a number of sessions in magnitude of O(10) for serving the UEs 302.
- the core device 306 also having a processor such as the processor 202 may further need to manage a number of sessions in magnitude of O(10000).
- the sessions (that may be considered as the tasks 220) may typically be periodic in nature, for example, Operations, Administration, and Management (OAM) messages, control messages, maintenance messages, monitoring messages and/or the like as the UEs 302 and/or the access devices 304 may be monitored at the same rate.
- the access device(s) 304 and/or the core device 306 may execute a scheduler such as the scheduler 230 in order to manage efficiently and/or effectively the invocation and execution of the tasks 220 (sessions).
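As a purely illustrative usage, an access device 304 could register all of its per-UE monitoring sessions with the same period, so that they all fall into a single frequency list of the min-heap; sched_add() is a hypothetical registration call sketched later in this description, and all values and names are made up.

```c
void sched_add(struct scheduler *s, task_fn fn, void *arg, uint64_t period_ns);
void oam_poll_session(void *session);                  /* assumed session task body  */

void register_oam_sessions(struct scheduler *s, void **sessions, size_t n)
{
    for (size_t i = 0; i < n; i++)                     /* e.g. thousands of sessions */
        sched_add(s, oam_poll_session, sessions[i],
                  1000000000ull /* 1 s period, i.e. f = 1 Hz (illustrative) */);
}
```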
- the process part 100 starts with the scheduler 230 creating a min-heap structure.
- the min-heap structure is a tree structure comprising a root node and one or more child nodes each representing a respective DLL.
- the min-heap structure includes only the root node with no child nodes.
- the scheduler 230 receives scheduling requests from the plurality of tasks 220 and creates one or more DLLs each grouping a subset of the plurality of tasks 220 that share a common invocation frequency.
- Each entry in each of the DLL(s) is associated with a task of the plurality of tasks 220 and stores an absolute invocation time of the associated task 220.
- each entry in the DLL comprises two pointers. The first pointer points to a preceding entry associated with a task 220 due for invocation at an earlier absolute time with respect to the invocation time of the task 220 associated with the current entry.
- the second pointer points to a succeeding entry associated with a task 220 due for invocation at a later absolute time with respect to the invocation time of the task 220 associated with the current entry.
- the scheduler 230 sorts the entries in each of the DLLs according to the absolute invocation time of the associated tasks 220.
- the scheduler 230 then associates each of the DLLs with one of the nodes in the min-heap structure.
- the scheduler 230 arranges the min-heap structure such that the DLL comprising a task 220 that is due for the earliest future absolute invocation time among the plurality of tasks 220 is associated with the root node.
- the scheduler 230 may create a single DLL and associate it with the root node.
- the scheduler 230 rearranges (heapifies) the min-heap structure to sort the entries in each of the DLL(s) according to the absolute invocation time of the associated tasks 220.
- rearranging the min-heap structure (heapifying) may be limited by O(log N_F) operations of the processor(s) 202, where N_F is the number of DLLs, i.e. the number of different invocation frequencies identified for the plurality of tasks 220.
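One possible realization of that O(log N_F) heapify step over the hypothetical layout sketched earlier: nodes are compared by the absolute invocation time of their list's head entry, an empty list counts as "infinitely late", and the root is sifted down after its key changes.

```c
#include <stdint.h>

static uint64_t list_key(const struct freq_list *l)
{
    return l->head ? l->head->invoke_at_ns : UINT64_MAX;    /* empty list sorts last   */
}

void heapify_down(struct scheduler *s, size_t i)
{
    for (;;) {
        size_t left = 2 * i + 1, right = 2 * i + 2, smallest = i;
        if (left  < s->nlists && list_key(s->heap[left])  < list_key(s->heap[smallest]))
            smallest = left;
        if (right < s->nlists && list_key(s->heap[right]) < list_key(s->heap[smallest]))
            smallest = right;
        if (smallest == i)
            return;                                          /* heap order restored     */
        struct freq_list *tmp = s->heap[i];                  /* swap with smaller child */
        s->heap[i] = s->heap[smallest];
        s->heap[smallest] = tmp;
        i = smallest;                                        /* continue one level down */
    }
}
```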
- An exemplary min-heap structure 400 includes a root node 402A and a plurality of child nodes 402B through 402Y.
- Each of the nodes is associated with a DLL 404, for example, the root node 402A is associated with a DLL 404A, the node 402B is associated with a DLL 404B and the node 402Y is associated with a DLL 404Y.
- Each DLL includes a plurality of entries 406 associated with a subset of the tasks 220 that share a common invocation frequency F.
- the DLL 404A includes entries 406A1 associated with a task 220A1, 406A2 associated with a task 220A2, 406A3 associated with a task 220A3 through 406Ak associated with a task 220Ak.
- the tasks 220A1 through 220Ak have a common invocation frequency F1.
- the DLL 404B includes entries 406B1 associated with a task 220B1, 406B2 associated with a task 220B2, 406B3 associated with a task 220B3 through 406Bm associated with a task 220Bm.
- the tasks 220B1 through 220Bm have a common invocation frequency
- the DLL 404Y includes entries 406Y1 associated with a task 220Y1, 406Y2 associated with a task 220Y2, 406Y3 associated with a task 220Y3 through 406Yn associated with a task 220Yn.
- the tasks 220Y1 through 220Yn have a common invocation frequency Fy.
- Each entry 406 is doubly linked (through memory location pointers) to a preceding entry 406 and a succeeding entry. After the scheduler 230 rearranges the min-heap, the entries 406 in each of the DLLs 404 are sorted according to the absolute invocation time of the associated tasks 220.
- the entry 406A1 is associated with the task 220A1 having an absolute invocation time TA1, for example, 14:50:26.
- the next entry 406A2 is associated with the task 220A2 having an absolute invocation time TA2 where TA2 > TA1, for example, TA2 is 14:50:27.
- the next entry 406A3 is associated with the task 220A3 having an absolute invocation time TA3 where TA3 > TA2, for example, TA3 is 14:50:27.5.
- the last entry 406Ak is associated with the task 220Ak having an absolute invocation time TAk where TAk > TA3, for example, TAk is 14:50:28. Since all the tasks have the same frequency F1, it is assured that TAk < TA1 + 1/F1.
- the entry 406B1 is associated with the task 220B1 having an absolute invocation time TB1, for example, 14:50:24.
- the next entry 406B2 is associated with the task 220B2 having an absolute invocation time TB2 where TB2 > TB1, for example, TB2 is 14:50:32.
- the next entry 406B3 is associated with the task 220B3 having an absolute invocation time TB3 where TB3 > TB2, for example, TB3 is 14:50:37.
- the last entry 406Bm is associated with the task 220Bm having an absolute invocation time TBm where TBm > TB3, for example, TBm is 14:50:41.
- the entry 406Y1 is associated with the task 220Y1 having an absolute invocation time TY1, for example, 14:51:10.
- the next entry 406Y2 is associated with the task 220Y2 having an absolute invocation time TY2 where TY2 > TY1, for example, TY2 is 14:51:25.
- the next entry 406Y3 is associated with the task 220Y3 having an absolute invocation time TY3 where TY3 > TY2, for example, TY3 is 14:52:10.
- the last entry 406Yn is associated with the task 220Yn having an absolute invocation time TYn where TYn > TY3, for example, TYn is 14:52:25. Since all the tasks have the same frequency Fy, it is assured that TYn < TY1 + 1/Fy.
- the scheduler 230 updates each of the nodes with the absolute invocation time of the next (earliest) task 220 of the corresponding DLL. For example, the node 402A is updated to store TA1, the node 402B is updated to store TB1 and the node 402Y is updated to store TY1.
- the scheduler 230 associates the root node 402A with the DLL 404A comprising the task 220 that is due for the earliest future absolute invocation time among the plurality of tasks 220.
- the process 100 continues with the scheduler 230 calculating the time of the following (next) timeout event and configuring one or more timers to expire and trigger a timeout event.
- the timer(s) may include, for example, a hardware timer, an OS provided timer, a service provided timer and/or the like.
- the scheduler 230 calculates the timing of the next timeout event according to the absolute invocation time of the next (earliest) task 220 that needs to be invoked.
- the scheduler 230 may traverse rapidly through the min-heap such as the min-heap structure 400 in order to search for the task(s) 220 that needs to be invoked next.
- the scheduler may access the entry storing the earliest future absolute invocation time in a number of operations limited by O(1) operations of the processor(s) 202, as the min-heap structure 400 is time sorted.
- the first entry in the DLL associated with the root node of the min-heap structure 400 is therefore associated with the task 220 that needs to be invoked next.
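One way the single system timer could be armed is via the Linux timerfd interface mentioned in the evaluation section below; the sketch assumes the descriptor was created with timerfd_create(CLOCK_MONOTONIC, 0) and that absolute invocation times are CLOCK_MONOTONIC nanoseconds. A hardware or other OS timer could equally be substituted.

```c
#include <sys/timerfd.h>
#include <time.h>

void arm_timer(struct scheduler *s, uint64_t when_ns)
{
    struct itimerspec its = {0};                     /* zeroed it_interval: one-shot */
    its.it_value.tv_sec  = (time_t)(when_ns / 1000000000ull);
    its.it_value.tv_nsec = (long)(when_ns % 1000000000ull);

    /* Absolute expiry; the next timeout is recomputed after each event, so no
     * periodic interval is programmed into the timer itself. */
    timerfd_settime(s->timerfd, TFD_TIMER_ABSTIME, &its, NULL);
}
```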
- the scheduler 230 may be suspended (sleep), thus the scheduler 230 does not consume any processing resources of the processor(s) 202 until the timeout event.
- the scheduler 230 may handle the request and rearrange the min-heap structure 400 accordingly.
- the scheduler 230 determines the invocation frequency of the requesting task 220 and adds the requesting task 220 at the end of a corresponding DLL comprising the subset of tasks 220 having the same invocation frequency as the requesting task 220.
- the scheduler 230 may create a new DLL and add the requesting task 220 to the newly created DLL.
- the scheduler then associates the new DLL with a node of the min-heap structure.
- the scheduler 230 may need to rearrange the min-heap structure to sort the DLL(s) according to the tasks 220 absolute invocation times. Adding a new entry 406 to the min-heap structure 400 may be limited by O(log N_F) operations of the processor(s) 202 in the worst case since that is the maximum time for rearranging the min-heap structure 400.
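A sketch of how such a scheduling request could be handled with the hypothetical structures above: find (or create) the list matching the task's frequency, append the new entry at the tail, and re-heapify only when the list's head entry, i.e. its heap key, has changed. heapify_up() is assumed to mirror the heapify_down() shown earlier, and a real implementation would likely keep a frequency-to-list lookup instead of the linear scan.

```c
#include <stdlib.h>

uint64_t now_ns(void);                              /* current absolute time         */
void heapify_up(struct scheduler *s, size_t i);     /* assumed, mirrors heapify_down */

void sched_add(struct scheduler *s, task_fn fn, void *arg, uint64_t period_ns)
{
    struct sched_entry *e = calloc(1, sizeof(*e));
    e->fn = fn;
    e->arg = arg;
    e->period_ns = period_ns;
    e->invoke_at_ns = now_ns() + period_ns;         /* first invocation, one period out */

    /* Find the list that groups tasks of this invocation frequency. */
    size_t idx = s->nlists;
    for (size_t i = 0; i < s->nlists; i++)
        if (s->heap[i]->period_ns == period_ns) { idx = i; break; }

    if (idx == s->nlists) {                         /* unknown frequency: new list      */
        struct freq_list *nl = calloc(1, sizeof(*nl));
        nl->period_ns = period_ns;
        s->heap[s->nlists++] = nl;                  /* capacity checks omitted          */
    }

    struct freq_list *l = s->heap[idx];
    int was_empty = (l->head == NULL);

    /* Append at the tail: with a common period, the newest entry always has
     * the latest invocation time of its list, so the list stays time sorted. */
    e->prev = l->tail;
    if (l->tail) l->tail->next = e; else l->head = e;
    l->tail = e;

    if (was_empty)                                  /* the list's heap key changed      */
        heapify_up(s, idx);                         /* O(log N_F) in the worst case     */
}
```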
- Removing a task 220 from the scheduler 230 may involve several steps. First the scheduler 230 removes the entry 406 associated with the removed task 220 from the DLL it belongs to. This operation may be limited by O(1) operations of the processor(s) 202. In case the entry associated with the removed task 220 is the first entry in the DLL, the scheduler 230 re-orders the min-heap structure 400. The re-ordering operation may be limited by O(log N_F) operations of the processor(s) 202. In case the entry associated with the removed task 220 is the only entry in the DLL, the scheduler removes the entire DLL from the min-heap structure 400 and re-orders the min-heap structure 400.
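The corresponding removal path, following the steps just described: the doubly linked pointers give an O(1) unlink, and the heap is re-ordered only when the removed entry was the head of its list (here an emptied list simply keeps the "infinitely late" key used by list_key(), although it could equally be dropped from the heap as the text describes). list_index() is a hypothetical helper returning the heap position of a list.

```c
#include <stdlib.h>

size_t list_index(const struct scheduler *s, const struct freq_list *l);    /* assumed */

void sched_remove(struct scheduler *s, struct freq_list *l, struct sched_entry *e)
{
    int was_head = (l->head == e);

    /* O(1) unlink thanks to the prev/next pointers of the doubly linked list. */
    if (e->prev) e->prev->next = e->next; else l->head = e->next;
    if (e->next) e->next->prev = e->prev; else l->tail = e->prev;
    free(e);

    if (was_head)                       /* the list's earliest invocation time changed */
        heapify_down(s, list_index(s, l));          /* O(log N_F) in the worst case    */
}
```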
- the re-ordering operation of the min-heap structure 400 may be limited by O(log N_F) operations of the processor(s) 202. After the timer(s) trigger the timeout event, the scheduler executes steps 110, 112, 114 and 116.
- the scheduler 230 analyzes the first entry such as, for example, the entry 406A1 to determine whether the task 220A1 needs to be invoked by comparing the absolute invocation time such as the absolute invocation time TA1 with the current time.
- the scheduler 230 invokes the task 220A1.
- the search by the scheduler 230 through the min-heap structure 400 and the invocation of the task 220A1 may be limited by O(1) operations of the processor(s) 202.
- the scheduler 230 rearranges the min-heap structure 400 according to the absolute invocation times of the remaining tasks 220. As presented above, rearranging the min-heap structure 400 (heapifying) may be limited by O(log N_F) operations of the processor(s) 202.
- the scheduler 230 removes the entry 406, for example, 406A1 associated with the invoked task 220, for example, the task 220A1.
- the scheduler 230 reinserts the entry 406 such as the entry 406A1 (associated with the invoked task 220A1) at the end of the DLL 404A.
- reinserting the entry 406 at the end of the DLL 404 may be limited by O(1) operations of the processor(s) 202 since the min-heap structure 400 is time sorted.
- the scheduler 230 analyzes the next entries 406, for example, the entry 406A2 to determine whether the task 220A2 needs to be invoked.
- the scheduler 230 may also analyze the first entry 406 of one or more of the child nodes 402B and/or 402Y such as the entries 406B1 and/or 406Y1 to determine whether the task 220B 1 and/or the task 220Y1 need to be invoked. This is done since time elapses while the timeout event is in progress.
- in case the invocation time of another task 220 is due at the current time, it may be significantly more efficient to invoke the pending tasks 220 that need to be invoked at the current time before the scheduler 230 returns from the timeout event, instead of scheduling a new timeout event.
- steps 110 through 116 may be limited by O(log N_F) operations of the processor(s) 202 in the worst case. This is because the steps 110, 112 and 116 may be limited by O(1) operations of the processor(s) 202 and the step 114 may be limited by O(log N_F) operations of the processor(s) 202.
- FIG. 5 and FIG. 6, are graphs presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment, according to some embodiments of the present invention.
- Graphs 500, 600 and 602 present performance of a scheduler such as the scheduler 230 compared to three other scheduling implementations: a timerfd Linux system call, a Linux posix OS timer and a timer wheel scheduler.
- the timerfd Linux system call and the Linux posix timer are pure Linux timers while the timer wheel scheduler is implemented by an open-source timer wheel with some improvements.
- the total number of invocations in the execution environment is N * f, i.e. N * f is the number of invocations per time unit, for example, a second.
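- For example, under purely hypothetical values of N = 10,000 scheduled tasks 220 each invoked f = 10 times per second, the execution environment must handle N * f = 100,000 invocations per second.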
- the sessions are typically short and are periodic as the UEs 302 are served continuously and periodically.
- the sessions for serving the UEs 302 are typically also short in terms of time duration.
- the performance is measured in terms of utilization percentage (CPU Usage %) of the processor(s) 202 computing resources with respect to number of invoked tasks per second, i.e. a total number of invocations N * f per second.
- the efficiency of the mechanisms was measured using the pidstat utility (report statistics for Linux tasks) for a set of N * f values.
- the measurements were conducted on a Linux system hosted on an Intel Xeon 2.6 GHz processor such as the processor(s) 202, with the scheduler executing on one core of the Xeon processor.
- the timer wheel scheduler was configured to have a resolution of 10 μs.
- the graph 500 presents the performance comparison for an N * f range of 0 to 1,500,000 total invocations in a logarithmic scale
- the graph 600 presents the same data as in graph 500 in a linear scale
- the graph 602 presents the performance comparison for a lower portion of the graph 600 where N * f is in a range of 0 to 50,000.
- a graph curve 502 represents the performance of scheduling using the timerfd timer
- a graph curve 504 represents the performance of scheduling using the Linux posix timer
- a graph curve 506 represents the performance of scheduling using the timer wheel
- a graph curve 508 represents the performance of scheduling using the scheduler 230. It should be noted that the measurements may have some standard deviations, but they show correctly the general trend.
- the utilization of the computing resources of the processor(s) 202 increases as the total number of invocations N * f increases since the timerfd timer and the posix timer are invoked for every invocation of each of the tasks 220.
- the timerfd timer and the posix timer based scheduling utilizes computing resources of the processor(s) 202 linearly with respect to the total number of invocations N * f with a significantly high linear function coefficient. Therefore at high invocation rate values (high number of tasks 220), scheduling using the timerfd timer and the posix timer may present a significant overhead that may exhaust computing resources of the processor(s) 202.
- Scheduling the tasks 220 using the timer wheel presents a substantially uniform utilization of the computing resources of the processor(s) 202 across the range of the number of invocations N * f. Even though the processor(s) 202 computing resources utilization is increased linearly with the increase in the number of the scheduled tasks 220, the increase is very moderate (a small linear function coefficient). However, due to the time slot architecture of the timer wheel, the same computing resources utilization is presented even for a low number of invocations N * f. This means that even if no tasks 220 or a small number of invocations of the tasks 220 is scheduled, the timer wheel still consumes considerable computing resources of the processor(s) 202.
- the timer wheel overhead wastes computing resources of the processor(s) 202 even if no and/or few tasks 220 are scheduled and invoked.
- the overhead of the timer wheel is an absolute value (independent of the implementation) and may depend on the resolution of the timer wheel, in the presented case 10 μs, which is equivalent to 100,000 system calls per second. Scheduling the tasks 220 using the scheduler 230 presents significantly improved efficiency compared to both the Linux timers (timerfd and posix), as evident at 510, as well as compared to the timer wheel, as shown by 512.
- the scheduler 230 performs very similarly to the Linux timers (timerfd and posix), while for high invocation rates it performs slightly better than the timer wheel.
- at high values of N * f, the performance of the timer wheel and the proposed scheduler is similar and may be governed mainly by the number of system calls. It should be noted that in the tested implementation, the timer wheel also skips the actual call to the select function (to resume itself to operation) if the next timer has already expired and simply invokes the due task(s) 220.
- the enhanced performance of the scheduler 230 stems in part from the fact that the scheduler 230 is executed following the timeout event and is therefore a little late with respect to the task(s) 220 that need to be invoked at the current time. Since the tasks 220 are periodic and typically have a common invocation frequency and possibly approximately similar invocation times, multiple tasks 220 may be invoked during the same timeout event. As a result, while executing the steps of the timeout event (such as the steps 110, 112, 114 and 116) the scheduler may invoke multiple tasks 220 scheduled for invocation at approximately the same time as the invocation time of the task 220 that triggered the timeout event.
- the timer wheel provides an API as shown in a pseudo-code excerpt.
- the scheduler 230 may be easily ported into the Linux environment to support the API provided by any of the above exemplary schedulers, the Linux timers (epoll and posix) as well as the timer wheel, by providing an API as shown in a pseudo-code excerpt.
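The pseudo-code excerpts referenced above are not reproduced in this text. Purely as an illustration of the kind of interface such schedulers commonly expose, and which the scheduler 230 could mirror for drop-in compatibility, a hypothetical API might look as follows; none of these names are taken from the patent or from the Linux timer APIs.

```c
#include <stdint.h>

typedef void (*timer_cb)(void *arg);
typedef struct timer_handle timer_handle;            /* opaque per-timer handle      */

/* Schedule cb(arg) to be invoked every period_ns nanoseconds. */
timer_handle *timer_schedule(uint64_t period_ns, timer_cb cb, void *arg);

/* Re-arm an existing periodic timer with a new period. */
int timer_reschedule(timer_handle *t, uint64_t period_ns);

/* Cancel a previously scheduled periodic timer. */
void timer_cancel(timer_handle *t);
```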
- the scheduler 230 may be ported into the Linux environment to take precedence as the default scheduler. Since the API provided by the scheduler 230 is fully compliant with the API of the previously used schedulers, the code of the tasks 220 should not be altered thus no impact is inflicted on the tasks 220.
- the descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
- It is expected that during the life of a patent maturing from this application many relevant scheduling implementations will be developed and the scope of the term scheduling implementation is intended to include all such new technologies a priori.
- The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".
- Consisting essentially of means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
- the range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A system for scheduling periodic tasks comprising a memory and a processor. The memory stores a min-heap structure comprising a plurality of lists, each comprising a plurality of entries, each entry being associated with one of a plurality of periodic tasks. Each of the plurality of lists orders a subset of the periodic tasks that share a common invocation frequency. The processor is adapted to perform a scheduling process at a timeout event by analyzing a first entry at the beginning of a top list of the plurality of lists (the top list is located at a root node of the min-heap structure) to identify a periodic task scheduled for invocation at a current time, invoking the respective periodic task associated with the first entry and rearranging the min-heap structure. The scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
Description
EFFICIENT SCHEDULER FOR SHORT PERIODIC TASKS
BACKGROUND The present invention, in some embodiments thereof, relates to scheduling a plurality of tasks and, more specifically, but not exclusively, to efficiently scheduling a plurality of periodic tasks by maintaining a min-heap structure in which the plurality of periodic tasks are grouped according to their common invocation frequencies.
In computing environments, scheduling is a method for assigning resources and/or means for executing and completing a work. The work may be virtual computation elements such as, for example, threads, processes, data flows and/or the like that are scheduled onto hardware resources such as, for example, processors, network links, dedicated hardware and/or the like. Scheduling is a fundamental building block for any software implementation on any hardware platform and an intrinsic part of the execution model of the computing system. Scheduling is the element making it possible to have computer multitasking in which the plurality of computation elements share available hardware resources of the hardware platform and are executed concurrently.
The scheduling activity may be carried out by a scheduler executed on the processing hardware, for example, as a stand-alone software module, as part of an operating system (OS), as a low level pre-OS software module and/or the like. The scheduler may also be implemented as a combination of any two or more of the above.
The scheduler may be implemented using one or more of a plurality of techniques to allow the plurality of computation elements to be effectively computed by the available hardware resources. The scheduler may be implemented to achieve one or more computation objectives for the plurality of computation elements executed by the available hardware resources, for example, balancing load, maximizing throughput, minimizing response time, minimizing latency, maximizing fairness and/or the like. As some of the computation objectives may conflict, for example, throughput vs. latency,
the scheduler may be implemented to provide an optimal compromise between the conflicting computation objectives. The scheduler may be further adapted according to the characteristics of the computing system.
SUMMARY
According to a first aspect, an apparatus for scheduling periodic tasks is provided, comprising: a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency; and a processor adapted to perform a scheduling process at a timeout event by: analyzing a first entry at the beginning of a top list of the plurality of lists to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure, invoking the respective periodic task associated with the first entry, and rearranging the min-heap structure; wherein the scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
In a first possible implementation according to the first aspect, the min-heap structure is a binary min-heap tree structure created to support the scheduling process, the min-heap structure comprises a root node and at least one child node, each of the nodes is associated with one of the plurality of lists and the lists are doubly linked lists.
In a second possible implementation form according to the first aspect as such or according to the first implementation form of the first aspect, each of the entries of each of the lists comprises a timing of invocation of the associated periodic task.
In a third possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, each respective entry associated with a respective task comprises a pointer to a first entry associated with a preceding periodic task and a pointer to a second entry associated with a succeeding periodic task, wherein the respective entry, the first entry and the second entry are included in a respective list of the plurality of lists, wherein the
preceding periodic task is scheduled for an earlier invocation time than the respective task and the succeeding periodic task is scheduled for a later invocation time than the respective periodic task, and wherein the processor uses the pointers to traverse through the respective list. In a fourth possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, during the timeout event the processor calculates a timing of a following timeout event and sets a system timer to expire at the timing of the following timeout event.
In a fifth possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the system timer is a hardware timer.
In a sixth possible implementation form according to the fifth implementation form of the first aspect, the system timer is provided by an operating system executed by the processor. In a seventh possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, upon a scheduling request of a respective periodic task of the plurality of periodic tasks, the processor identifies the invocation frequency of the respective periodic task, creates a new entry for the respective periodic task and links it to the end of a respective list of the plurality of lists according to the identified invocation frequency.
In an eighth possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the processor is further adapted to create a new list in the min-heap structure in case the identified invocation frequency of a scheduling request is different from an existing invocation frequency associated with any of the plurality of lists.
In a ninth possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the processor is adapted to remove an entry of the plurality of entries that is associated with the respective periodic task.
In a tenth possible implementation form according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the processor is further adapted to reschedule the respective periodic task by adding a new entry associated with the respective periodic task in case the respective periodic task requests rescheduling, the new entry is added at the end of a respective list of the plurality of lists.
According to a second aspect a method of scheduling periodic tasks is provided, comprising: identifying, at a timeout event, one of a plurality of periodic tasks that is scheduled for invocation at a current time by analyzing a first entry at the beginning of a top list of a plurality of lists of a min-heap structure to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure; invoking the respective periodic task associated with the first entry; and rearranging the min-heap structure; wherein the identifying, invoking and rearranging are repeated until no periodic tasks scheduled for invocation at the current time are identified.
In a first possible implementation according to the second aspect, the min-heap structure is a binary min-heap tree structure created to support the scheduling process, the min-heap structure comprises a root node and at least one child node, each of the nodes is associated with one of the plurality of lists and the lists are doubly linked lists. In a second possible implementation form according to the second aspect as such or according to the first implementation form of the second aspect, each of the entries of each of the lists comprises a timing of invocation of the associated periodic task.
In a third possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, each respective entry associated with a respective task comprises a pointer to a first entry associated with a preceding periodic task and a pointer to a second entry associated with a succeeding periodic task, wherein the respective entry, the first entry and the second entry are included in a respective list of the plurality of lists, wherein the preceding periodic task is scheduled for an earlier invocation time than the respective task and the succeeding periodic task is scheduled for a later invocation time than the respective periodic task, and wherein the processor uses the pointers to traverse through the respective list.
In a fourth possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, during the timeout event the processor calculates a timing of a following timeout event and sets a system timer to expire at the timing of the following timeout event.
In a fifth possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the system timer is a hardware timer.
In a sixth possible implementation form according to the fifth implementation form of the second aspect, the system timer is provided by an operating system executed by the processor.
In a seventh possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, upon a scheduling request of a respective periodic task of the plurality of periodic tasks, the processor identifies the invocation frequency of the respective periodic task, creates a new entry for the respective periodic task and links it to the end of a respective list of the plurality of lists according to the identified invocation frequency.

In an eighth possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the processor is further adapted to create a new list in the min-heap structure in case the identified invocation frequency of a scheduling request is different from an existing invocation frequency associated with any of the plurality of lists.

In a ninth possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the processor is adapted to remove an entry of the plurality of entries that is associated with the respective periodic task.
In a tenth possible implementation form according to the second aspect as such or according to any of the preceding implementation forms of the second aspect, the processor is further adapted to reschedule the respective periodic task by adding a new entry associated with the respective periodic task in case the respective periodic task requests rescheduling, the new entry is added at the end of a respective list of the plurality of lists.
According to an aspect of some embodiments of the present invention there is provided a system for scheduling periodic tasks, comprising a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency and a processor adapted to perform a scheduling process at a timeout event by:
Analyzing a first entry at the beginning of a top list of the plurality of lists to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure.
Invoking the respective periodic task associated with the first entry.
Rearranging the min-heap structure.
Wherein the scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
According to another aspect of some embodiments of the present invention there is provided a method of scheduling periodic tasks, comprising:
Identifying, at a timeout event, one of a plurality of periodic tasks that is scheduled for invocation at a current time by analyzing a first entry at the beginning of a top list of a plurality of lists of a min-heap structure to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure.
Invoking the respective periodic task associated with the first entry.
Rearranging the min-heap structure.
Wherein the identifying, invoking and rearranging are repeated until no periodic tasks scheduled for invocation at the current time are identified. According to another aspect of the present invention there is provided a computer program having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1 is a flowchart of an exemplary process for scheduling periodic tasks, according to some embodiments of the present invention;

FIG. 2 is a schematic illustration of an exemplary system for scheduling periodic tasks, according to some embodiments of the present invention;
FIG. 3 is a schematic illustration of an exemplary network system embodiment employing the periodic tasks scheduling, according to some embodiments of the present invention;
FIG. 4 is a schematic illustration of an exemplary min-heap structure used for scheduling periodic tasks, according to some embodiments of the present invention;
FIG. 5 is a graph presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment in a first range, according to some embodiments of the present invention; and
FIG. 6 is a graph presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment in a second and a third range, according to some embodiments of the present invention.
DETAILED DESCRIPTION
The present invention, in some embodiments thereof, relates to scheduling a plurality of tasks and, more specifically, but not exclusively, to efficiently scheduling a plurality of periodic tasks by maintaining a min-heap structure in which the plurality of periodic tasks are grouped according to their common invocation frequencies.
The present invention presents apparatuses, systems and methods for scheduling tasks, in particular periodic tasks by utilizing a scheduler that uses a min-heap structure for representing one or more double-link lists (DLLs) arranged to map a plurality of periodic tasks grouped together according to an invocation frequency of the tasks. The plurality of periodic tasks is grouped in DLL(s) such that a subset of the plurality of periodic tasks grouped in each of the lists share a common invocation frequency. Each of the DLLs comprises one or more entries each associated with one of the periodic tasks. Each of the entries stores an absolute invocation time of the associated periodic task and the entries in each respective list are ordered according to the absolute invocation time of the associated periodic task. In accordance with the DLL architecture, each entry holds a pointer to a preceding entry associated with a periodic task having an earlier invocation time and a pointer to a succeeding entry associated with a periodic task having a later invocation time. The scheduler dynamically rearranges the min-heap structure to maintain the entries in each of the DLL(s) time sorted according to the absolute
invocation time of their associated task. The DLL architecture allows the scheduler to traverse rapidly and efficiently through the time sorted min-heap structure (using the forward and/or backward pointers available in each of the entries) to search for task(s) to be invoked, to add tasks, to remove tasks and/or to rearrange the min-heap structure. The scheduler schedules timeout events using one or more timers, for example, a hardware timer, an operating system timer and/or the like. The scheduler schedules the timeout events according to the earliest future absolute invocation time identified in the entries in the DLL(s). During each timeout event, the scheduler is invoked and analyzes the absolute invocation times stored in the entries in the DLL(s) to identify one or more of the periodic tasks that need to be invoked at the current time and invokes them. The scheduler then rearranges (heapifies) the min-heap structure DLLs and/or entries and repeats the analysis to determine whether one or more additional tasks need to be invoked. In case the invoked task(s) is a periodic task, the scheduler may reinsert the entry(s) associated with the invoked task(s) at the end of the respective DLL. As part of the min-heap structure rearrangement, the scheduler may rearrange a root node and/or one or more of the child nodes of the min-heap structure according to the absolute invocation time of the associated tasks. The scheduler also sorts the entries in the DLL according to the absolute invocation time of their associated tasks and updates accordingly the respective pointers of the entries. During each timeout event, the analysis and invocation process is repeated until no more tasks need to be invoked at the current time and the scheduler then schedules a new timeout event, i.e. for the following timeout event. The scheduler may receive a plurality of requests from new and/or existing tasks and may create, add and/or remove entry(s) and/or DLL(s) in the min-heap structure. Following the requests, the scheduler rearranges the min-heap structure to maintain the time order of the entries in each of the DLLs and/or of the min-heap structure nodes.
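For concreteness, the following minimal C sketch illustrates one possible in-memory layout of the described arrangement; the type and field names (sched_entry, freq_list, sched_heap and so on) are illustrative assumptions of this sketch and are not taken from the embodiments. Each periodic task is represented by one doubly linked entry, all entries sharing an invocation frequency are chained into one list, and the lists are organized in an array-backed binary min-heap keyed by the earliest absolute invocation time in each list.

#include <stddef.h>
#include <stdint.h>

/* One entry per periodic task; entries sharing an invocation
 * frequency are chained into a doubly linked list (DLL).        */
typedef struct sched_entry {
    uint64_t            invoke_at;  /* absolute invocation time, e.g. in ns   */
    void              (*callback)(void *arg);
    void               *arg;
    struct sched_entry *prev;       /* entry with an earlier invocation time  */
    struct sched_entry *next;       /* entry with a later invocation time     */
} sched_entry;

/* One DLL per distinct invocation frequency; the entries stay time
 * sorted because expired entries are re-appended at the tail.      */
typedef struct freq_list {
    uint64_t     period;            /* common period of all entries in the list */
    sched_entry *head;              /* earliest pending invocation              */
    sched_entry *tail;              /* latest pending invocation                */
} freq_list;

/* Array-backed binary min-heap over the lists, keyed by the invocation
 * time of each list's head entry, so the root list holds the task that
 * is due next.                                                          */
typedef struct sched_heap {
    freq_list **lists;              /* heap of pointers to the lists */
    size_t      count;              /* number of lists, i.e. NF      */
    size_t      capacity;
} sched_heap;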
The scheduler presented herein may significantly improve utilization of the computing system compared to currently existing scheduling methodologies and/or implementations.
The existing scheduling implementations, for example, a Linux posix system call, may utilize one or more timers, hardware timers and/or OS provided timers for scheduling
the plurality of tasks. However, typically the timers generate a timeout event for every invocation of one of the tasks and may therefore present a significant overhead, particularly for systems having a high number of task invocations thus exhausting computing resources of the system. The presented scheduler on the other hand allows for high scalability as it relies on analyzing the min-heap structure to identify a plurality of tasks that need to be invoked at the current time. While configuring the timer to trigger the next timeout event according to the absolute invocation time of the next task (earliest future invocation), once the timeout event is triggered, the scheduler analyzes the min-heap structure to identify additional tasks that may need to be invoked at the current time. As time elapses during the timeout event, the invocation time of additional tasks may be due and therefore the process of rescheduling additional timeout event(s) may be avoided and the respective tasks may be invoked immediately. Analyzing the min-heap structure to identify the plurality of tasks that may need to be invoked may be of particular advantage when the tasks are short and periodic, and possibly having common invocation frequencies. This may significantly reduce the number of invocations of the scheduler thus reducing the overhead of the scheduler itself. The scheduler may present significant improvement in particular for high invocation rate systems where the invocations may rapidly take place one after the other. Moreover, only a single timer may be required to manage the scheduling when using the presented scheduler. This may reduce the hardware and/or OS resources required for scheduling the tasks.
Furthermore, the scheduler may provide a finer granularity compared to the timers used by the existing scheduling implementations, specifically the timer wheel mechanism. The present scheduler may dynamically configure the hardware timer to the granularity required by the next task scheduled for invocation, thus avoiding the fixed granularity applied by the timer wheel.
While some existing scheduling implementations, for example, the timer wheel, present improved performance especially for high invocation numbers (high number of applications) they present drawbacks that may be overcome by the presented scheduler.
The timer wheel employs a time slots implementation in which the tasks are executed within their assigned time slots. This results in the timer wheel being constantly active even when no tasks' invocation is scheduled and/or when a low invocation number is required. The overhead exhibited by the timer wheel is therefore significant, especially for low invocation number scenarios.
The scheduler presented herein is invoked only when task invocation is required and is suspended at any other time. Therefore, the scheduler presents a very low overhead irrespective of the number of invocations (number of tasks). Moreover, the scheduler may present significant performance improvement over the timer wheel in scenarios where the tasks are periodic tasks and even more so when the tasks are periodic and short. As opposed to the timer wheel that may hold a single ordered list for listing the scheduled tasks, the presented scheduler relies on the min-heap structure comprising the DLLs that are arranged to group together tasks having a common invocation frequency. This may allow the presented scheduler to traverse more rapidly and efficiently through the min-heap structure to search for the next task(s) to be invoked and/or to rearrange the min-heap to maintain its time order. In scenarios where the tasks are periodic and share common invocation frequencies, the advantage of the invocation frequency arranged DLLs is even more evident since the scheduler dynamically arranges the DLLs such that the periodic tasks that have the common invocation frequency are already ordered one after the other, making the search and invocation highly efficient for the scheduler. Therefore, arranging the tasks in the min-heap tree structure may significantly reduce the seek time, thus reducing computing resources and/or computation time.
Naturally, the reduced overhead, the fast search and the efficient min-heap structure rearrangement may significantly reduce the computing resources required by the scheduler, which in turn may translate to reduced costs and/or computation time.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The
invention is capable of other embodiments or of being practiced or carried out in various ways.
The present invention may be an apparatus, a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer such as the user equipment (UE), as a standalone software package, partly on the user's computer and partly on a remote computer such as the network apparatus or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Reference is now made to FIG. 1, which is a flowchart of an exemplary process for scheduling periodic tasks, according to some embodiments of the present invention. A process 100 for scheduling periodic tasks utilizes a scheduler that uses a min-heap structure comprising a root node and one or more child nodes each associated with a list of a plurality of periodic tasks grouped together according to their invocation frequency. The plurality of periodic tasks is grouped in the lists such that a subset of the plurality of periodic tasks grouped in each of the lists share a common invocation frequency. Each of the lists is a DLL comprising one or more entries each associated with one of the periodic tasks. Each of the entries stores an absolute invocation time of the associated periodic task and the entries in each respective list are ordered according to the absolute invocation time of the associated periodic task. In accordance with the DLL architecture,
each entry holds a pointer to a preceding entry associated with a periodic task having an earlier invocation time and a pointer to a succeeding entry associated with a periodic task having a later invocation time. The scheduler schedules a timeout event using one or more timers, for example, a hardware timer, an operating system timer and/or the like, typically a single timer. The scheduler schedules the timeout event according to the earliest future absolute invocation time identified in the entries in the DLL(s). During the timeout event, the scheduler is invoked and analyzes the absolute invocation times stored in the entries in the DLL(s) to identify one or more of the periodic tasks that need to be invoked at the current time and invokes them. The scheduler traverses the list(s) using the forward pointer available in each of the entries to continue scanning the list, invoking the tasks until it finds a task that is not scheduled for invocation at the current time but rather at a future time. Since the DLLs are arranged in the min-heap according to the absolute invocation time of the first associated task, the scheduler may traverse rapidly through the min-heap structure. The scheduler then rearranges (heapifies) the min-heap structure lists and/or entries and repeats the analysis to determine whether one or more additional tasks need to be invoked. During the min-heap structure rearrangement, the scheduler may rearrange the root node and one or more of the child nodes according to the absolute invocation time of the associated tasks with respect to the current absolute time. As part of the min-heap structure rearrangement, the scheduler may also update the order of one or more of the entries and update accordingly the respective pointers of the one or more entries. The analysis and invocation process is repeated until no more tasks need to be invoked at the current time and the scheduler then schedules a new timeout event. The scheduler may receive a plurality of scheduling requests from the plurality of periodic tasks and may create, add and/or remove entries and/or lists in the min-heap structure according to the invocation time(s) of the requesting tasks.
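A possible shape of this per-timeout-event loop (steps 110 through 116 described below) is sketched here in C, reusing the illustrative sched_heap, freq_list and sched_entry types from the earlier sketch; the helper names now_ns, pop_head, append_tail, heap_sift_down and arm_timer are assumptions of this sketch rather than functions defined by the embodiments.

/* Executed once per timeout event: invoke every task whose absolute
 * invocation time has already passed, re-append each invoked periodic
 * entry at the tail of its list, re-heapify, and finally re-arm the
 * single system timer for the next earliest invocation.              */
static void on_timeout(sched_heap *h)
{
    for (;;) {
        freq_list   *top = h->lists[0];          /* list at the root node   */
        sched_entry *e   = top->head;            /* earliest pending entry  */

        if (e == NULL || e->invoke_at > now_ns())
            break;                               /* nothing more is due     */

        pop_head(top);                           /* O(1) unlink of the head */
        e->callback(e->arg);                     /* invoke the task         */

        e->invoke_at += top->period;             /* schedule the next cycle */
        append_tail(top, e);                     /* O(1) reinsert at tail   */

        heap_sift_down(h, 0);                    /* O(log NF) rearrangement */
    }

    if (h->count > 0 && h->lists[0]->head != NULL)
        arm_timer(h->lists[0]->head->invoke_at); /* next timeout event      */
}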
Reference is also made to FIG. 2, which is a schematic illustration of an exemplary system for scheduling periodic tasks, according to some embodiments of the present invention. An exemplary system 200, for example, a computer, a server, a processing node, a network apparatus and/or any device comprising a processor(s) 202 may execute a scheduler 230 employing the process 100 for scheduling an execution of a plurality of tasks 220, in particular periodic tasks. The scheduler 230 and the tasks 220 are software
modules each comprising a plurality of program instructions executed by the processor(s) 202 from one or more volatile and/or non-volatile memories, for example, a cache, a random access memory (RAM), a read only memory (ROM), a Flash array, a disk drive and/or the like. Typically, the processor(s) 202 hosts an OS 210, for example, Windows, Linux, Android and/or the like such that the tasks 220 are executed in the OS 210 environment. However, the scheduler 230 and the tasks 220 may be executed in an OS-less environment where the processor(s) 202 executes the scheduler 230 to schedule the invocation of the tasks 220. In some embodiments, the processor(s) 202 hosts a plurality of instances of the OS 210 where each OS 210 instance executes a scheduler such as the scheduler 230 for scheduling the invocation of the tasks 220. The scheduler 230 may further be a pre-OS software module in the scenario of multiple OS 210 instances such that the scheduler 230 manages the invocation of the OS 210 instances and/or the tasks 220 executed in the environment of each of the OS 210 instances for sharing of the processor(s) 202 computation resources.
The processor(s) 202 may include one or more processing cores. The processor(s) 202, homogenous or heterogeneous, may be further arranged for parallel processing, as clusters and/or as one or more multi core processor(s). The processor(s) 202 may also include one or more distributed processing nodes, for example, a cluster of processing nodes, a network processing node and/or the like each comprising one or more processors and/or processing cores.
In some embodiments of the present invention, the system 200 is employed for a network environment comprising a plurality of end devices (clients) that communicate over the network through one or more access devices connecting to one or more core devices constituting the network infrastructure.
Reference is now made to FIG. 3, which is a schematic illustration of an exemplary network system embodiment employing the periodic tasks scheduling, according to some embodiments of the present invention. An exemplary network 300, in particular but not limited to, a cellular network comprises a plurality of user equipment (UE) devices 302 communicating over the network through one or more access devices 304
that connect to a core device 306 that may be a part of a backbone infrastructure of the network 300. Each of the UEs 302 may be, for example, a cellular device, a mobile phone, a laptop, a processing node, an end-point and/or the like. The access devices 304 may include, for example, an aggregation device, a gateway, an access point and/or the like. Each of the UEs 302 maintains one or more sessions with the respective access device 304 that in turn maintains one or more sessions with the core device 306. The access device 304 having a processor such as the processor 202 may therefore need to manage a number of sessions in a magnitude of O(10) for serving the UEs 302. The core device 306 also having a processor such as the processor 202 may further need to manage a number of sessions in a magnitude of O(10,000). The sessions (that may be considered as the tasks 220) may typically be periodic in nature, for example, Operations, Administration, and Management (OAM) messages, control messages, maintenance messages, monitoring messages and/or the like as the UEs 302 and/or the access devices 304 may be monitored at the same rate. The access device(s) 304 and/or the core device 306 may execute a scheduler such as the scheduler 230 in order to manage efficiently and/or effectively the invocation and execution of the tasks 220 (sessions).
Reference is made once again to FIG. 1.
As shown at 102, the process part 100 starts with the scheduler 230 creating a min-heap structure. The min-heap structure is a tree structure comprising a root node and one or more child nodes each representing a respective DLL. Optionally, for one or more execution environments, deployments and/or scenarios, the min-heap structure includes only the root node with no child nodes.
As shown at 104, the scheduler 230 receives scheduling requests from the plurality of tasks 220 and creates one or more DLLs each grouping a subset of the plurality of tasks 220 that share a common invocation frequency. Each entry in each of the DLL(s) is associated with a task of the plurality of tasks 220 and stores an absolute invocation time of the associated task 220. In accordance with the DLL architecture, each entry in the DLL comprises two pointers. The first pointer points to a preceding entry associated with a task 220 due for invocation at an earlier absolute time with respect to the invocation time of the task 220 associated with the current entry. The second pointer
points to a succeeding entry associated with a task 220 due for invocation at a later absolute time with respect to the invocation time of the task 220 associated with the current entry. The scheduler 230 sorts the entries in each of the DLLs according to the absolute invocation time of the associated tasks 220. The scheduler 230 then associates each of the DLLs with one of the nodes in the min-heap structure. The scheduler 230 arranges the min-heap structure such that the DLL comprising a task 220 that is due for the earliest future absolute invocation time among the plurality of tasks 220 is associated with the root node. As discussed before, there may be execution environments, deployments and/or scenarios for which the min-heap structure includes no child nodes. For example, in case the plurality of tasks and/or a subset thereof share only one common invocation frequency. In such a case, the scheduler 230 may create a single DLL and associate it with the root node.
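One way to handle such a scheduling request, again under the assumed types of the earlier sketch, is shown below; find_list_by_period, heap_add_list and heap_restore are hypothetical helpers standing in for the list lookup, list creation and heap rearrangement described in the text. Appending at the tail is O(1), and restoring the heap order is at most O(log NF).

/* Append a new entry at the tail of the list matching its period;
 * create a new list (and heap node) if no list with that period
 * exists yet, then restore the min-heap order.                    */
static void schedule_entry(sched_heap *h, sched_entry *e, uint64_t period)
{
    freq_list *l = find_list_by_period(h, period);  /* hypothetical lookup  */
    if (l == NULL)
        l = heap_add_list(h, period);               /* hypothetical helper  */

    e->next = NULL;
    e->prev = l->tail;
    if (l->tail != NULL)
        l->tail->next = e;
    else
        l->head = e;                                /* list was empty       */
    l->tail = e;

    heap_restore(h, l);                             /* hypothetical: O(log NF) */
}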
As shown at 106, the scheduler 230 rearranges (heapifies) the min-heap structure to sort the entries in each of the DLL(s) according to the absolute invocation time of the associated tasks 220. Inherently, due to the structure of the min-heap structure, rearranging the min-heap structure (heapifying) may be limited by O(log NF) operations of the processor(s) 202 where NF is the number of DLLs, i.e. the number of different invocation frequencies identified for the plurality of tasks 220.
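A possible sift-down step for this rearrangement is sketched below, with the heap key of each node taken from the absolute invocation time of the head entry of its list, so the operation visits at most O(log NF) nodes; the types are the illustrative ones from the earlier sketch, not structures defined by the embodiments.

/* Heap key of a node: invocation time of the earliest entry in its list. */
static uint64_t list_key(const freq_list *l)
{
    return (l->head != NULL) ? l->head->invoke_at : UINT64_MAX;
}

/* Restore the min-heap property downwards from position i. */
static void heap_sift_down(sched_heap *h, size_t i)
{
    for (;;) {
        size_t left = 2 * i + 1, right = 2 * i + 2, smallest = i;

        if (left < h->count && list_key(h->lists[left]) < list_key(h->lists[smallest]))
            smallest = left;
        if (right < h->count && list_key(h->lists[right]) < list_key(h->lists[smallest]))
            smallest = right;
        if (smallest == i)
            break;

        freq_list *tmp     = h->lists[i];
        h->lists[i]        = h->lists[smallest];
        h->lists[smallest] = tmp;
        i = smallest;
    }
}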
Reference is now made to FIG. 4, which is a schematic illustration of an exemplary min-heap structure used for scheduling periodic tasks, according to some embodiments of the present invention. An exemplary min-heap structure 400 includes a root node 402A and a plurality of child nodes 402B through 402Y. Each of the nodes is associated with a DLL 404, for example, the root node 402A is associated with a DLL 404A, the node 402B is associated with a DLL 404B and the node 402Y is associated with a DLL 404Y. Each DLL includes a plurality of entries 406 associated with a subset of the tasks 220 that share a common invocation frequency F.
The DLL 404A includes entries 406A1 associated with a task 220A1, 406A2 associated with a task 220A2, 406A3 associated with a task 220A3 through 406Ak associated with a task 220Ak. The tasks 220A1 through 220Ak have a common invocation frequency F1, for example, 1/(5 seconds).

The DLL 404B includes entries 406B1 associated with a task 220B1, 406B2 associated with a task 220B2, 406B3 associated with a task 220B3 through 406Bm associated with a task 220Bm. The tasks 220B1 through 220Bm have a common invocation frequency F2, for example, 1/(20 seconds).

The DLL 404Y includes entries 406Y1 associated with a task 220Y1, 406Y2 associated with a task 220Y2, 406Y3 associated with a task 220Y3 through 406Yn associated with a task 220Yn. The tasks 220Y1 through 220Yn have a common invocation frequency Fy, for example, 1/(120 seconds).
Each entry 406 is doubly linked (through memory location pointers) to a preceding entry 406 and a succeeding entry. After the scheduler 230 rearranges the min-heap, the entries 406 in each of the DLLs 404 are sorted according to the absolute invocation time of the associated tasks 220.
In the DLL 404A, for example, the entry 406A1 is associated with the task 220A1 having an absolute invocation time TA1, for example, 14:50:26. The next entry 406A2 is associated with the task 220A2 having an absolute invocation time TA2 where TA2 > TA1, for example, TA2 is 14:50:27. The next entry 406A3 is associated with the task 220A3 having an absolute invocation time TA3 where TA3 > TA2, for example, TA3 is 14:50:27.5. The last entry 406Ak is associated with the task 220Ak having an absolute invocation time TAk where TAk > TA3, for example, TAk is 14:50:28. Since all the tasks have the same frequency F1, it is assured that TAk < TA1 + 1/F1.
In the DLL 404B, for example, the entry 406B1 is associated with the task 220B1 having an absolute invocation time TB1, for example, 14:50:24. The next entry 406B2 is associated with the task 220B2 having an absolute invocation time TB2 where TB2 > TB1, for example, TB2 is 14:50:32. The next entry 406B3 is associated with the task 220B3 having an absolute invocation time TB3 where TB3 > TB2, for example, TB3 is 14:50:37. The last entry 406Bm is associated with the task 220Bm having an absolute invocation time TBm where TBm > TB3, for example, TBm is 14:50:41. Since all the tasks have the same frequency F2, it is assured that TBm < TB1 + 1/F2.
In the DLL 404Y, for example, the entry 406Y1 is associated with the task 220Y1 having an absolute invocation time TY1, for example, 14:51:10. The next entry 406Y2 is associated with the task 220Y2 having an absolute invocation time TY2 where TY2 > TY1, for example, TY2 is 14:51:25. The next entry 406Y3 is associated with the task 220Y3 having an absolute invocation time TY3 where TY3 > TY2, for example, TY3 is 14:52:10. The last entry 406Yn is associated with the task 220Yn having an absolute invocation time TYn where TYn > TY3, for example, TYn is 14:52:25. Since all the tasks have the same frequency Fy, it is assured that TYn < TY1 + 1/Fy.
The scheduler 230 updates each of the nodes with the absolute invocation time of the next (earliest) task 220 of the corresponding DLL. For example, the node 402A is updated to store TA1, the node 402B is updated to store TB1 and the node 402Y is updated to store TY1.
As described before, while rearranging the min-heap structure 400, the scheduler 230 associates the root node 402A with the DLL 404A comprising the task 220 that is due for the earliest future absolute invocation time among the plurality of tasks 220.
Reference is made once again to FIG. 1.
As shown at 108, the scheduler 230 calculates the time of the following (next) timeout event and configures one or more timers to expire and trigger a timeout event at that time. The timer(s) may include, for example, a hardware timer, an OS provided timer, a service provided timer and/or the like. The scheduler 230 calculates the timing of the next timeout event according to the absolute invocation time of the next (earliest) task 220 that needs to be invoked.
Due to the time sorted DLL implementation, the scheduler 230 may traverse rapidly through the min-heap such as the min-heap structure 400 in order to search for the task(s) 220 that need to be invoked next. Using the min-heap structure 400, the scheduler may access the entry storing the earliest future absolute invocation time with a number of operations limited by O(1) operations of the processor(s) 202 as the min-heap structure 400 is time sorted. The first entry in the DLL associated with the root node of the min-heap structure 400 is therefore associated with the task 220 that needs to be invoked next.
After the scheduler 230 configures the timer(s) to trigger the next timeout event, the scheduler 230 may be suspended (sleep), thus the scheduler 230 does not consume any processing resources of the processor(s) 202 until the timeout event.
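On Linux, one option for the single system timer is a timerfd armed with the absolute time of the next invocation; the scheduler then blocks on the file descriptor and consumes no processing resources until the timeout event. The self-contained sketch below uses a fixed two-second offset as a stand-in for the earliest future absolute invocation time taken from the root node, and omits most error handling.

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/timerfd.h>

int main(void)
{
    /* A single timer for the whole scheduler, on the monotonic clock. */
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (tfd < 0) { perror("timerfd_create"); return 1; }

    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    /* Absolute expiry time; "now + 2 s" stands in for the earliest
     * future absolute invocation time found in the min-heap.        */
    struct itimerspec its = {0};
    its.it_value.tv_sec  = now.tv_sec + 2;
    its.it_value.tv_nsec = now.tv_nsec;

    if (timerfd_settime(tfd, TFD_TIMER_ABSTIME, &its, NULL) < 0) {
        perror("timerfd_settime");
        return 1;
    }

    /* The scheduler sleeps here; read() returns at the timeout event. */
    uint64_t expirations = 0;
    read(tfd, &expirations, sizeof(expirations));
    printf("timeout event, %llu expiration(s)\n",
           (unsigned long long)expirations);

    close(tfd);
    return 0;
}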
However, in case one or more requests are received from new and/or existing tasks 220 for scheduling the task 220, removing the task 220 and/or quitting the task 220, the scheduler 230 may handle the request and rearrange the min-heap structure 400 accordingly. Upon receiving a new scheduling request, the scheduler 230 determines the invocation frequency of the requesting task 220 and adds the requesting task 220 at the end of a corresponding DLL comprising the subset of tasks 220 having the same invocation frequency as the requesting task 220. In case no DLL exists that groups a subset of tasks 220 having the same invocation frequency as the requesting task 220, the scheduler 230 may create a new DLL and add the requesting task 220 to the newly created DLL. The scheduler then associates the new DLL with a node of the min-heap structure. Following an addition of one or more requesting tasks 220, the scheduler 230 may need to rearrange the min-heap structure to sort the DLL(s) according to the absolute invocation times of the tasks 220. Adding a new entry 406 to the min-heap structure 400 may be limited by O(log NF) operations of the processor(s) 202 in the worst case since that is the maximum time for rearranging the min-heap structure 400.
Removing a task 220 from the scheduler 230 may involve several steps. First, the scheduler 230 removes the entry 406 associated with the removed task 220 from the DLL it belongs to. This operation may be limited by O(1) operations of the processor(s) 202. In case the entry associated with the removed task 220 is the first entry in the DLL, the scheduler 230 re-orders the min-heap structure 400. The re-ordering operation may be limited by O(log NF) operations of the processor(s) 202. In case the entry associated with the removed task 220 is the only entry in the DLL, the scheduler removes the entire DLL from the min-heap structure 400 and re-orders the min-heap structure 400. Again, the re-ordering operation of the min-heap structure 400 may be limited by O(log NF) operations of the processor(s) 202.
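A sketch of this removal path, using the illustrative types introduced earlier, is given below; heap_remove_list and heap_restore are hypothetical helpers standing in for the O(log NF) re-ordering of the min-heap structure described above, while the unlinking itself stays O(1).

/* Remove the entry associated with a removed task from its list.
 * Unlinking is O(1); the heap is re-ordered only when the removed
 * entry was the head of its list or the list becomes empty.       */
static void remove_entry(sched_heap *h, freq_list *l, sched_entry *e)
{
    int was_head = (l->head == e);

    if (e->prev != NULL) e->prev->next = e->next; else l->head = e->next;
    if (e->next != NULL) e->next->prev = e->prev; else l->tail = e->prev;
    e->prev = e->next = NULL;

    if (l->head == NULL)
        heap_remove_list(h, l);   /* hypothetical: drop the empty list and
                                     re-order the heap, O(log NF)          */
    else if (was_head)
        heap_restore(h, l);       /* hypothetical: head key changed,
                                     re-order the heap, O(log NF)          */
}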
After the timer(s) trigger the timeout event, the scheduler executes steps 110, 112, 114 and 116.
As shown at 110, the scheduler 230 analyzes the first entry such as, for example, the entry 406A1 to determine whether the task 220A1 needs to be invoked by comparing the absolute invocation time such as the absolute invocation time TA1 with the current time.
As shown at 112, in case the task 220A1 needs to be invoked, the scheduler 230 invokes the task 220A1.
Again, due to the time sorted DLL implementation, the search of the scheduler 230 through the min-heap structure 400 and the invocation of the task 220A1 may be limited by O(1) operations of the processor(s) 202.
As shown at 114, the scheduler 230 rearranges the min-heap structure 400 according to the absolute invocation times of the remaining tasks 220. As presented above, rearranging the min-heap structure 400 (heapifying) may be limited by O(log NF) operations of the processor(s) 202.
As part of rearranging the min-heap structure 400, the scheduler 230 removes the entry 406, for example, 406A1 associated with the invoked task 220, for example, the task 220A1. In case the invoked task 220, for example, the task 220A1 is a periodic task, the scheduler 230 reinserts the entry 406 such as the entry 406A1 (associated with the invoked task 220A1) at the end of the DLL 404A. Naturally, reinserting the entry 406 at the end of the DLL 404 may be limited by O(1) operations of the processor(s) 202 since the min-heap structure 400 is time sorted.
As shown at 116, which is a decision point, the scheduler 230 analyzes the next entries 406, for example, the entry 406A2 to determine whether the task 220A2 needs to be invoked. The scheduler 230 may also analyze the first entry 406 of one or more of the child nodes 402B and/or 402Y such as the entries 406B1 and/or 406Y1 to determine whether the task 220B1 and/or the task 220Y1 need to be invoked. This is done since time elapses while the timeout event is in progress. It may therefore be possible that the invocation time of another task 220 is due at the current time and it may be significantly
more efficient to invoke the pending tasks 220 that need to be invoked at the current time before the scheduler 230 returns from the timeout event instead of scheduling a new timeout event.
The overall processing time of each timeout event (steps 110 through 116) may be limited by O(log NF) operations of the processor(s) 202 in the worst case. This is since the steps 110, 112 and 116 may be limited by O(1) operations of the processor(s) 202 and the step 114 may be limited by O(log NF) operations of the processor(s) 202.
However, for a small number of invocation frequencies F, i.e. a small NF that translates to a small number of DLLs, the practical processing time may be limited by O(1) since for a small NF the expression O(log NF) ≈ O(1).
Experiments were conducted to validate the performance of the scheduler 230 compared to existing scheduling algorithms and/or implementations.
Reference is now made to FIG. 5 and FIG. 6, which are graphs presenting a performance comparison between scheduling techniques for an exemplary periodic tasks execution environment, according to some embodiments of the present invention. Graphs 500, 600 and 602 present performance of a scheduler such as the scheduler 230 compared to 3 other scheduling implementations, a timerfd Linux system call, a Linux posix OS timer and a timer wheel scheduler. The timerfd Linux system call and the Linux posix timer are pure Linux timers while the timer wheel scheduler is implemented by an open-source timer wheel with some improvements.
The experiments were conducted for a network system such as the network system 300 having N UEs such as the UE 302 served with sessions (the sessions correspond to tasks such as the tasks 220) at frequency f. The total number of invocations in the execution environment is N * f, i.e., N * f is the number of invocations per time unit, for example, a second. The sessions are typically short and periodic as the UEs 302 are served continuously and periodically. The sessions for serving the UEs 302 are typically also short in terms of time duration.
The performance is measured in terms of utilization percentage (CPU Usage %) of the processor(s) 202 computing resources with respect to the number of invoked tasks per second, i.e. a total number of invocations N * f per second. The efficiency of the mechanisms was measured using the pidstat (Report statistics for Linux tasks) function for a set of N * f values. The measurements were conducted on a Linux hosted on an Intel Xeon 2.6GHz processor such as the processor(s) 202 with the scheduler executing on one core of the Xeon 2.6GHz processor. The timer wheel scheduler was configured to have a resolution of 10μs.
The graph 500 presents the performance comparison for an N * f range of 0 to 1,500,000 total invocations in a logarithmic scale, the graph 600 presents the same data as in graph 500 in a linear scale and the graph 602 presents the performance comparison for a lower portion of the graph 600 where N * f is in a range of 0 to 50,000.
A graph curve 502 represents the performance of scheduling using the timerfd timer, a graph curve 504 represents the performance of scheduling using the Linux posix timer, a graph curve 506 represents the performance of scheduling using the timer wheel and a graph curve 508 represents the performance of scheduling using the scheduler 230. It should be noted that the measurements may have some standard deviations, but they show correctly the general trend.
As evident from the graphs 500, 600 and 602, when scheduling the tasks 220 using the timerfd timer and the posix timer, the utilization of the computing resources of the processor(s) 202 increases as the total number of invocations N * f increases since the timerfd timer and the posix timer are invoked for every invocation of each of the tasks 220. As seen in the graphs 600 and 602, the timerfd timer and the posix timer based scheduling utilizes computing resources of the processor(s) 202 linearly with respect to the total number of invocations N * f, with a significantly high linear function coefficient. Therefore, at high invocation rate values (a high number of tasks 220), scheduling using the timerfd timer and the posix timer may present a significant overhead that may exhaust the computing resources of the processor(s) 202.
Scheduling the tasks 220 using the timer wheel presents a substantially uniform utilization of the computing resources of the processor(s) 202 across the range of the number of invocations N * f. Even though the processor(s) 202 computing resources utilization is increased linearly with the increase in the number of the scheduled tasks
220, the increase is very moderate (a small linear function coefficient). However, due to the time slot architecture of the timer wheel, the same computing resources utilization is presented even for a low number of invocations N * f. This means that even if no tasks 220 or a small number of invocations of the tasks 220 is scheduled, the timer wheel still consumes considerable computing resources of the processor(s) 202. The timer wheel overhead wastes computing resources of the processor(s) 202 even if no and/or few tasks 220 are scheduled and invoked. The overhead of the timer wheel is an absolute value (independent of implementation) and may depend on the resolution of the timer wheel, in the presented case 10μs, which is equivalent to 100,000 system calls per second. Scheduling the tasks 220 using the scheduler 230 presents significantly improved efficiency compared to both the Linux timers (timerfd and posix) as evident at 510 as well as compared to the timer wheel as shown by 512. For the low invocation rates N * f, the scheduler 230 performs very similarly to the Linux timers (timerfd and posix) while for the high invocation rates the scheduler 230 performs slightly better than the timer wheel. For a large enough invocation rate value N * f, the performance of the timer wheel and the proposed scheduler is similar and may be governed mainly by the number of system calls. It should be noted that in the tested implementation, the timer wheel also skipped the actual call to the select function (to resume itself to operation) if the next timer had already expired and simply invoked the due task(s) 220. The enhanced performance of the scheduler 230 stems in part from the fact that the scheduler 230 is executed following the timeout event and is therefore a little late with respect to the task(s) 220 that need to be invoked at the current time. Since the tasks 220 are periodic and typically have a common invocation frequency and possibly approximately similar invocation times, multiple tasks 220 may be invoked during the same timeout event. As a result, while executing the steps of the timeout event (such as the steps 110, 112, 114 and 116) the scheduler may invoke multiple tasks 220 scheduled for invocation at approximately the same time as the invocation time of the task 220 that triggered the timeout event. This may significantly reduce the number of times the scheduler 230 calls the sleep/select system calls (to resume and suspend itself) with respect to the number of invocations N * f.
Applying the scheduler 230 in the computing environment is straightforward. The mechanisms and application programming interface (API) used for task scheduling are practically similar across most popular computing platforms, specifically in OS environments. For example, the Linux timers (timerfd and posix) provide an API as shown in pseudocode excerpt 1 below.
Pseudo-code excerpt 1:

timer_create(callback)
timer_settime(time)

where callback relates to the task requesting scheduling and time is the required invocation time.
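For reference, a minimal self-contained use of the POSIX timer API abbreviated by the excerpt might look as follows (link with -lrt on older glibc versions); note that this style arms one kernel timer per task, which is exactly the per-invocation overhead the presented scheduler avoids. The callback body and the chosen 1/(5 seconds) frequency are arbitrary examples of this sketch.

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void on_tick(union sigval sv)
{
    (void)sv;
    printf("periodic task invoked\n");
}

int main(void)
{
    timer_t tid;
    struct sigevent sev = {0};
    sev.sigev_notify          = SIGEV_THREAD;   /* run callback in a thread */
    sev.sigev_notify_function = on_tick;

    if (timer_create(CLOCK_MONOTONIC, &sev, &tid) != 0) {
        perror("timer_create");
        return 1;
    }

    /* First expiry after 1 s, then every 5 s, i.e. a 1/(5 seconds)
     * invocation frequency.                                         */
    struct itimerspec its = {0};
    its.it_value.tv_sec    = 1;
    its.it_interval.tv_sec = 5;
    if (timer_settime(tid, 0, &its, NULL) != 0) {
        perror("timer_settime");
        return 1;
    }

    sleep(12);                                  /* allow a few invocations */
    timer_delete(tid);
    return 0;
}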
As another example, the timer wheel provides an API as shown in pseudo-code excerpt 2 below.
Pseudo-code excerpt 2:

StartTimer(time, callback)
The scheduler 230 may be easily ported into the Linux environment to support the API as provided by any of the above exemplary schedulers, the Linux timers (timerfd and posix) as well as the timer wheel, by providing an API as shown in pseudo-code excerpt 3 below.

Pseudo-code excerpt 3:

StartTimer(time, callback)
timer_create(callback)
timer_settime(time)
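A thin compatibility shim could route both API flavors to a single insertion routine of the scheduler 230; in the sketch below, sched230_add and the my_ prefixed wrappers are purely illustrative names (the wrappers are prefixed to avoid clashing with the real POSIX functions), not an API defined by the embodiments.

#include <stdint.h>

/* Illustrative entry point of the presented scheduler: register a
 * callback for an absolute invocation time (period of 0 = one-shot). */
extern void sched230_add(uint64_t abs_time_ns, uint64_t period_ns,
                         void (*callback)(void *arg), void *arg);

/* Timer-wheel style API mapped onto the scheduler. */
static void my_StartTimer(uint64_t abs_time_ns, void (*callback)(void *arg))
{
    sched230_add(abs_time_ns, 0, callback, NULL);
}

/* Linux-timer style API mapped onto the scheduler: the create call
 * remembers the callback, the settime call actually schedules it.   */
static void (*pending_cb)(void *arg);

static void my_timer_create(void (*callback)(void *arg))
{
    pending_cb = callback;
}

static void my_timer_settime(uint64_t abs_time_ns)
{
    sched230_add(abs_time_ns, 0, pending_cb, NULL);
}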
The scheduler 230 may be ported into the Linux environment to take precedence as the default scheduler. Since the API provided by the scheduler 230 is fully compliant with the API of the previously used schedulers, the code of the tasks 220 does not need to be altered, thus no impact is inflicted on the tasks 220. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
It is expected that during the life of a patent maturing from this application many relevant scheduling implementations will be developed and the scope of the term scheduling implementations is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of and "consisting essentially of. The phrase "consisting essentially of means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be
construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicate number and a second indicate number and "ranging/ranges from" a first indicate number "to" a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals there between.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims
1. An apparatus for scheduling periodic tasks, comprising: a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency; and a processor adapted to perform a scheduling process at a timeout event by: analyzing a first entry at the beginning of a top list of the plurality of lists to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure, invoking the respective periodic task associated with the first entry, and rearranging the min-heap structure; wherein the scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
2. The apparatus according to claim 1, wherein the min-heap structure is a binary min-heap tree structure created to support the scheduling process, the min-heap structure comprises a root node and at least one child node, each of the nodes is associated with one of the plurality of lists and the lists are doubly linked lists.
3. The apparatus according to any of the previous claims, wherein each of the
entries of each of the lists comprises a timing of invocation of the associated periodic task.
4. The apparatus according to any of the previous claims, wherein each respective entry associated with a respective task comprises a pointer to a first entry associated with a preceding periodic task and a pointer to a second entry associated with a succeeding periodic task, wherein the respective entry, the first entry and the second entry are included in a respective list of the plurality of lists,
wherein the preceding periodic task is scheduled for an earlier invocation time than the respective task and the succeeding periodic task scheduled for a later invocation time than the respective periodic task, and wherein the processor uses the pointers to traverse through the respective list.
5. The apparatus according to any of the previous claims, wherein during the timeout event the processor calculates a timing of a following timeout event and sets a system timer to expire at the timing of the following timeout event.
6. The apparatus according to claim 5, wherein the system timer is a hardware timer.
7. The apparatus according to claim 5, wherein the system timer is provided by an operating system executed by the processor.
8. The apparatus according to any of the previous claims, wherein upon a scheduling request of a respective periodic task of the plurality of periodic tasks, the processor identifies the invocation frequency of the respective periodic task, creates a new entry for the respective periodic task and links it to the end of a respective list of the plurality of lists according to the identified invocation frequency.
9. The apparatus according to any of the previous claims, wherein the processor is further adapted to create a new list in the min-heap structure in case the identified invocation frequency of a scheduling request is different from any existing invocation frequency associated with the plurality of lists.
10. The apparatus according to any of the previous claims, wherein the processor is adapted to remove an entry of the plurality of entries that is associated with the respective periodic task.
11. The apparatus according to any of the previous claims, wherein the processor is further adapted to reschedule the respective periodic task by adding a new entry associated with the respective periodic task in case the respective periodic task requests rescheduling, the new entry is added at the end of a respective list of the plurality of lists.
12. A method of scheduling periodic tasks, comprising: identifying, at a timeout event, one of a plurality of periodic tasks that is scheduled for invocation at a current time by analyzing a first entry at the beginning of a top list of a plurality of lists of a min-heap structure, the top list is located at a root node of the min-heap structure; invoking the respective periodic task associated with the first entry; and rearranging the min-heap structure; wherein the identifying, invoking and rearranging are repeated until no periodic tasks scheduled for invocation at the current time are identified.
13. A system for scheduling periodic tasks, comprising: a memory for storing a min-heap structure comprising a plurality of lists, each list comprising a plurality of entries, each entry is associated with one of a plurality of periodic tasks, each of the plurality of lists orders a subset of the plurality of periodic tasks that share a common invocation frequency; and a processor adapted to perform a scheduling process at a timeout event by: analyzing a first entry at the beginning of a top list of the plurality of lists to identify one of the plurality of periodic tasks scheduled for invocation at a current time, the top list is located at a root node of the min-heap structure, invoking the respective periodic task associated with the first entry, and rearranging the min-heap structure; wherein the scheduling process is repeated until no periodic tasks scheduled for invocation at the current time are identified.
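By way of illustration only, a minimal sketch of the scheduling process recited in claims 1 to 13 is given below, assuming Python as the implementation language and positive invocation periods. The class name PeriodicScheduler, the method names schedule, unschedule and on_timeout, the use of Python's heapq module for the min-heap and of collections.deque standing in for the doubly linked lists of claims 2 and 4 are all illustrative assumptions and not part of the claimed subject matter.

```python
import heapq
import time
from collections import deque


class PeriodicScheduler:
    """Illustrative sketch only: tasks sharing an invocation frequency
    (expressed here as a period in seconds) are kept in one FIFO list,
    and a min-heap orders the lists by the due time of their head entry,
    so the root node always holds the list whose first entry is due next."""

    def __init__(self):
        self._lists = {}   # period -> deque of (due_time, task) entries
        self._heap = []    # heap of (due time of the list's head entry, period)

    def schedule(self, task, period, now=None):
        """Create a new entry for the task and link it to the end of the list
        for its period; a new list (and heap node) is created when the period
        has not been seen before (periods are assumed to be positive)."""
        now = time.monotonic() if now is None else now
        if period not in self._lists:
            self._lists[period] = deque()
            heapq.heappush(self._heap, (now + period, period))
        self._lists[period].append((now + period, task))

    def unschedule(self, task, period):
        """Remove the entry associated with the task from its list."""
        lst = self._lists.get(period)
        if lst:
            for entry in list(lst):
                if entry[1] is task:
                    lst.remove(entry)
                    break

    def _top(self):
        """Return (due_time, period) for the root node, discarding emptied
        lists and refreshing heap keys made stale by removed entries."""
        while self._heap:
            due, period = self._heap[0]
            lst = self._lists.get(period)
            if not lst:                          # list emptied: drop its node
                heapq.heappop(self._heap)
                self._lists.pop(period, None)
                continue
            if lst[0][0] != due:                 # stale key: use real head due time
                heapq.heapreplace(self._heap, (lst[0][0], period))
                continue
            return due, period
        return None

    def on_timeout(self, now=None):
        """Timeout event: repeatedly invoke the task of the first entry of the
        top list while that entry is due, re-link each invoked task at the end
        of its own list and rearrange the min-heap; return the timing of the
        following timeout event (or None if nothing is scheduled)."""
        now = time.monotonic() if now is None else now
        top = self._top()
        while top is not None and top[0] <= now:
            due, period = top
            _, task = self._lists[period].popleft()           # first entry of top list
            task()                                            # invoke the periodic task
            self._lists[period].append((due + period, task))  # reschedule at list end
            heapq.heapreplace(self._heap, (self._lists[period][0][0], period))
            top = self._top()
        return top[0] if top is not None else None
```

A driver arming the system timer of claims 5 to 7 could, for example, sleep until the returned timing of the following timeout event; the short loop below stands in for a hardware or operating-system timer and is likewise only a sketch.

```python
# Sketch of a driver loop standing in for the system timer.
sched = PeriodicScheduler()
sched.schedule(lambda: print("fast task"), period=0.1)
sched.schedule(lambda: print("slow task"), period=0.5)
for _ in range(10):
    next_due = sched.on_timeout()
    time.sleep(max(0.0, next_due - time.monotonic()))
```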
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2016/072014 WO2018050242A1 (en) | 2016-09-16 | 2016-09-16 | Efficient scheduler for short periodic tasks |
CN201680089358.3A CN109716298A (en) | 2016-09-16 | 2016-09-16 | Effective scheduler for short cycle task |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2016/072014 WO2018050242A1 (en) | 2016-09-16 | 2016-09-16 | Efficient scheduler for short periodic tasks |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018050242A1 (en) | 2018-03-22 |
Family
ID=56936427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/072014 WO2018050242A1 (en) | 2016-09-16 | 2016-09-16 | Efficient scheduler for short periodic tasks |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109716298A (en) |
WO (1) | WO2018050242A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110865877B (en) * | 2019-10-14 | 2024-04-19 | 平安银行股份有限公司 | Task request response method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7533383B2 (en) * | 2003-11-12 | 2009-05-12 | International Business Machines Corporation | Method, system, and apparatus for scheduling pattern based web services |
FR2873830B1 (en) * | 2004-07-30 | 2008-02-22 | Commissariat Energie Atomique | TASK PROCESSING ORDERING METHOD AND DEVICE FOR CARRYING OUT THE METHOD |
JP4074296B2 (en) * | 2005-03-25 | 2008-04-09 | 株式会社東芝 | Schedulability determination method, real-time system, and program |
CN103377078B (en) * | 2012-04-11 | 2017-04-12 | 广州地铁集团有限公司 | Real-time task scheduling method and system for vehicular ATP |
- 2016-09-16 WO PCT/EP2016/072014 patent/WO2018050242A1/en active Application Filing
- 2016-09-16 CN CN201680089358.3A patent/CN109716298A/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007117540A2 (en) * | 2006-04-03 | 2007-10-18 | Secure64 Software Corporation | System and method for time-out management |
US20090183157A1 (en) * | 2008-01-10 | 2009-07-16 | Microsoft Corporation | Aggregating recurrent schedules to optimize resource consumption |
US20090307519A1 (en) * | 2008-06-04 | 2009-12-10 | Edward Craig Hyatt | Power saving scheduler for timed events |
US20150347186A1 (en) * | 2014-05-29 | 2015-12-03 | Netapp, Inc. | Method and system for scheduling repetitive tasks in o(1) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3805924A1 (en) * | 2019-10-11 | 2021-04-14 | Unify Patente GmbH & Co. KG | Method of scheduling an additional new processing task to be executed by a processor, scheduler, and central processing unit |
US11334386B2 (en) | 2019-10-11 | 2022-05-17 | Unify Patente Gmbh & Co. Kg | Method of scheduling an additional new processing task to be executed by a processor, scheduler, and central processing unit |
CN111143053A (en) * | 2019-11-15 | 2020-05-12 | 杭州涂鸦信息技术有限公司 | Scheduling method of timing task, server and storage device |
WO2022252026A1 (en) * | 2021-05-31 | 2022-12-08 | Boe Technology Group Co., Ltd. | Computer-implemented method in system comprising one or more processors for executing periodic tasks, system comprising one or more processors for executing periodic tasks, and computer-program product |
Also Published As
Publication number | Publication date |
---|---|
CN109716298A (en) | 2019-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11010193B2 (en) | Efficient queue management for cluster scheduling | |
US8689226B2 (en) | Assigning resources to processing stages of a processing subsystem | |
Pietri et al. | Mapping virtual machines onto physical machines in cloud computing: A survey | |
Peng et al. | R-storm: Resource-aware scheduling in storm | |
US9477521B2 (en) | Method and system for scheduling repetitive tasks in O(1) | |
Bini et al. | Resource management on multicore systems: The ACTORS approach | |
US20190294469A1 (en) | Techniques to dynamically partition tasks | |
US8943353B2 (en) | Assigning nodes to jobs based on reliability factors | |
US20170017521A1 (en) | Dynamically adaptive, resource aware system and method for scheduling | |
US20160054781A1 (en) | Methods and Apparatus to Manage Jobs that can and Cannot be Suspended When there is a Change in Power Allocation to a Distributed Computer System | |
US9639403B2 (en) | Receive-side scaling in a computer system using sub-queues assigned to processing cores | |
WO2018050242A1 (en) | Efficient scheduler for short periodic tasks | |
Pakize | A comprehensive view of Hadoop MapReduce scheduling algorithms | |
He et al. | Scheduling parallel tasks onto opportunistically available cloud resources | |
Lai et al. | Sol: Fast distributed computation over slow networks | |
Sun et al. | Scheduling parallel tasks under multiple resources: List scheduling vs. pack scheduling | |
US20200379804A1 (en) | Multi-level scheduling | |
Jonathan et al. | Awan: Locality-aware resource manager for geo-distributed data-intensive applications | |
Garikipati et al. | Rt-opex: Flexible scheduling for cloud-ran processing | |
US20130139176A1 (en) | Scheduling for real-time and quality of service support on multicore systems | |
Li et al. | Aryl: An elastic cluster scheduler for deep learning | |
Armstrong et al. | Scheduling many-task workloads on supercomputers: Dealing with trailing tasks | |
CN109002381B (en) | Process communication monitoring method, electronic device and computer readable storage medium | |
Hassan et al. | Efficient resource scheduling for big data processing in cloud platform | |
Nicodemus et al. | Managing vertical memory elasticity in containers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16766319; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 16766319; Country of ref document: EP; Kind code of ref document: A1 |