CN115167996A - Scheduling method and device, chip, electronic equipment and storage medium - Google Patents
- Publication number
- CN115167996A (application CN202210724490.4A / CN202210724490A)
- Authority
- CN
- China
- Prior art keywords
- message
- interrupt
- queue
- processed
- message queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
- G06F9/4825—Interrupt from clock, e.g. time of day
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application provides a scheduling method, a scheduling device, a chip, an electronic device and a non-volatile computer-readable storage medium. The method comprises the following steps: when a message to be processed is received, sending the message to be processed to the corresponding target service message queue, so that the interrupt controller generates an interrupt signal of the interrupt source corresponding to the target service message queue; and determining a processing policy for the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the service message queue currently being processed. Because a message is distributed to the service message queue corresponding to an interrupt source, the message is bound to an interrupt signal, interrupt triggering is performed by the system's own interrupt controller, and task scheduling is carried out on the basis of interrupt priority. The system scheduling performance can therefore be improved without adding an extra hardware scheduler, which saves hardware cost and power consumption while keeping the task-scheduling overhead and the interrupt-response latency low.
Description
Technical Field
The present application relates to the field of task scheduling technologies, and in particular, to a scheduling method, a scheduling apparatus, a chip, an electronic device, and a non-volatile computer-readable storage medium.
Background
In a real-time operating system implemented in software, the operations related to task scheduling include operating-system clock-tick maintenance, task polling, ready-list operation, task scheduling and interrupt response. A scheduling function based on hardware multithreading is implemented in hardware and can generate mutually independent instruction streams simultaneously; compared with a software real-time operating system, it removes the ready-list operation, task polling and task scheduling, so hardware task scheduling is more efficient. However, each additional hardware thread requires its own dedicated register set, which increases cost and has a negative effect on the power consumption of the system.
Disclosure of Invention
The embodiment of the application provides a scheduling method, a scheduling device, a chip, an electronic device and a non-volatile computer readable storage medium.
The embodiment of the application provides a scheduling method. The scheduling method comprises the following steps: under the condition that a message to be processed is received, the message to be processed is sent to a corresponding target service message queue, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determining a processing strategy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue to be processed.
The embodiment of the application provides a scheduling device. The scheduling device comprises a sending module and a determining module. The sending module is used for sending the message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that the interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; the determining module is configured to determine a processing policy for the message to be processed according to a first interrupt priority of an interrupt source corresponding to the interrupt signal and a second interrupt priority of an interrupt source corresponding to the service message queue currently being processed.
The embodiment of the application provides a chip, which is connected with an interrupt controller of an electronic device, and is used for sending a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that the interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determining a processing strategy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue.
The embodiment of the application provides electronic equipment. The electronic equipment comprises a processor, wherein the processor is used for sending a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determining a processing strategy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue.
The present embodiments provide a non-transitory computer-readable storage medium having a computer program stored thereon. The computer program, when executed by a processor, implements a scheduling method. The scheduling method comprises the following steps: under the condition that a message to be processed is received, the message to be processed is sent to a corresponding target service message queue, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determining a processing strategy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue to be processed.
In the scheduling method, the scheduling device, the chip, the electronic device and the non-volatile computer-readable storage medium, each service message queue corresponds to one task, and executing a task means processing the one or more messages it contains. By distributing a message to the service message queue corresponding to an interrupt source, the message is bound to an interrupt signal; interrupts are triggered by the system's own interrupt controller, and task scheduling is carried out on the basis of interrupt priority. The system scheduling performance can therefore be improved without adding an extra hardware scheduler, which saves hardware cost and power consumption; compared with the scheduler of a conventional real-time operating system implemented in software, the task-scheduling overhead is lower and the interrupt-response latency is lower.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a scheduling method according to some embodiments of the present application;
FIG. 2 is a schematic flow chart diagram of a scheduling method according to some embodiments of the present application;
FIG. 3 is a schematic flow chart diagram of a scheduling method according to some embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of a scheduling method according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a scheduling method of certain embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of a scheduling method of some embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of a scheduling method according to some embodiments of the present application;
FIG. 8 is a schematic illustration of a scheduling method of some embodiments of the present application;
FIG. 9 is a block diagram of a scheduling apparatus according to some embodiments of the present application;
FIG. 10 is a schematic plan view of an electronic device of some embodiments of the present application; and
FIG. 11 is a schematic diagram of the interaction of a non-volatile computer readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
The terms appearing in the present application are explained first below:
message: the message is a section of service logic, and can complete the specific operation of the system by processing the message, wherein the message comprises description information and a message body, the description information is used for analyzing and processing the message, and the message body contains the actual content of the message.
The control process in a complex system consists of a set of tasks, the running of which is done by a processor and a scheduler of tasks. This system can be a complex communication system, an industrial control system or a central control system of a car. Generally, a control process comprises the execution of tasks having different functions, which are responsible for the processing of messages relating to the transactional operation of the system, that is to say the processing of each task, i.e. representing the processing of one or more messages relating to that task.
Service message queue: the service message queue corresponds to a system task, for example, one service message queue corresponds to one task, and the service message queue is used for storing messages and sequentially processing the messages by sending the messages of the service message queue to the corresponding message processing entities, thereby completing the task.
A message processing entity: a processor may have one or more message processing entities for processing objects of messages in the processor, and a plurality of message processing entities may process a plurality of messages simultaneously.
An interrupt controller: the interrupt signal of the interrupt source corresponding to the service message queue can be triggered after the message is sent to the service message queue, the interrupt signal is routed to the corresponding processor, and the processor determines whether to interrupt the currently processed service message queue according to the interrupt priority of the interrupt source corresponding to the interrupt signal so as to process the service message queue corresponding to the interrupt signal.
Interrupt priority: in order for the system to respond and handle all interrupts that occur in a timely manner, the system classifies the interrupt sources into several levels, called interrupt priority, based on the importance and urgency of the interrupt event that caused it.
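To make the relationship between these terms concrete, the following C sketch models the core entities as plain data structures. It is an illustrative assumption only: all type names, field names and sizes (msg_t, svc_msg_queue_t, QUEUE_DEPTH, and so on) are invented for this sketch and are not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

#define QUEUE_DEPTH 16          /* illustrative queue capacity */

/* A message: description information plus a pointer to the message body. */
typedef struct {
    uint32_t msg_id;            /* identifies the processing function */
    void    *body;              /* actual content; may be NULL if empty */
    size_t   body_len;
} msg_t;

/* A service message queue corresponds to one system task. It is bound to
 * one or more interrupt sources (all sharing one interrupt priority) and
 * to the message processing entity that drains it. */
typedef struct {
    msg_t    slots[QUEUE_DEPTH];
    uint32_t head, tail;
    uint32_t irq_source;        /* an interrupt source bound to this queue */
    uint8_t  irq_priority;      /* interrupt priority of that source */
    uint8_t  entity_id;         /* message processing entity serving it */
} svc_msg_queue_t;
```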
Referring to fig. 1, a scheduling method according to an embodiment of the present disclosure includes:
Step 011: when a message to be processed is received, send the message to be processed to the corresponding target service message queue, so that the interrupt controller generates an interrupt signal at the interrupt source corresponding to the target service message queue.
Specifically, various tasks are executed while the system runs; during the execution of each task the system continually sends messages, and a specific task is completed by processing these messages one by one. When a message to be processed is received, it is sent to the target service message queue corresponding to the task to which the message belongs; that is, every message has a corresponding target service message queue determined by the task it belongs to.
After the message to be processed has been sent to the target service message queue, an interrupt signal is generated at the corresponding interrupt source in the interrupt controller. In other words, service message queues and interrupt sources have a defined correspondence: each service message queue can correspond to at least one interrupt source, so different messages in the same service message queue may map to different interrupt sources, and the service message queue is bound and associated with its interrupt sources.
The priority of a service message queue is determined by the interrupt priority of the interrupt sources bound to it, and the one or more interrupt sources bound to the same queue all share the same interrupt priority. When a service message queue is processed, its messages can therefore be handled in queue order, and the situation in which messages belonging to interrupt sources with different interrupt priorities sit in the same queue, which could cause a message with a low interrupt priority to be processed preferentially, is avoided.
After the message to be processed has been sent to the target service message queue, the interrupt source corresponding to that message, among the interrupt sources bound to the queue, can be triggered to generate an interrupt signal. Alternatively, because the interrupt sources bound to a service message queue share the same interrupt priority and task scheduling is based on interrupt priority, it is enough to trigger any one of the interrupt sources bound to the target queue. For example, the interrupt signal can be generated directly by operating the interrupt-pending status register of the interrupt source in the interrupt controller, so that triggering does not depend on an external interrupt signal.
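As a sketch of this step, the code below enqueues a pending message and then raises the interrupt by writing the interrupt controller's set-pending register from software, so no external signal is needed. The register address, bit layout and all names here are assumptions made for illustration; a real interrupt controller has its own register map (the Arm NVIC, for instance, exposes set-pending registers for exactly this purpose).

```c
#include <stdint.h>

/* Hypothetical memory-mapped "set pending" register of the interrupt
 * controller; the address and bit layout are placeholders. */
#define IRQ_SET_PENDING_REG ((volatile uint32_t *)0x40001000u)

typedef struct { uint32_t msg_id; void *body; } msg_t;

typedef struct {
    msg_t    slots[16];
    uint32_t head, tail;
    uint32_t irq_source;        /* any interrupt source bound to this queue */
} svc_msg_queue_t;

/* Step 011: deliver a pending message to its target service message queue
 * and trigger the bound interrupt source from software. */
static int send_message(svc_msg_queue_t *q, msg_t m)
{
    uint32_t next = (q->tail + 1u) % 16u;
    if (next == q->head)
        return -1;                          /* queue full */
    q->slots[q->tail] = m;
    q->tail = next;

    /* Setting the pending bit makes the interrupt controller raise the
     * interrupt signal of the queue's interrupt source. */
    *IRQ_SET_PENDING_REG = (1u << (q->irq_source & 31u));
    return 0;
}
```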
Step 012: determine the processing policy for the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the service message queue currently being processed.
After the interrupt signal has been generated, task scheduling can be carried out by means of the interrupt mechanism of the system's interrupt controller. The processing policy for the message to be processed is determined from the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the service message queue currently being processed: either the processing of the current service message queue is interrupted so that the target service message queue can be processed, or the current service message queue continues to be processed without interruption.
Referring to fig. 2, step 012 may optionally include:
step 0121: under the condition that the first interrupt priority is greater than the second interrupt priority, suspending the message of the current service message queue and processing the message of the target service message queue;
step 0122: in the event that the first interrupt priority is less than or equal to the second interrupt priority, processing of messages of the currently serving message queue continues.
Specifically, when the processing policy for the message to be processed is determined, the first interrupt priority corresponding to the message to be processed is compared with the second interrupt priority corresponding to the current service message queue. If the first interrupt priority is greater than the second interrupt priority, the message to be processed needs to be handled preferentially, so the current service message queue is suspended and the target service message queue holding the message to be processed is processed. Suspending the current service message queue means pausing its processing and saving its context information, so that processing of the current service message queue can be resumed after the target service message queue has been processed.
The context information of the current service message queue can be saved by a register push operation, i.e. the relevant context of the processor that is handling the current service message queue is pushed onto the stack. Referring to Table 1, in the present application only the processor registers are pushed; the context of the whole operating system does not need to be pushed, which reduces the push overhead.
Table 1 interrupt process processor overhead
The present application reduces the interrupt response latency by (1 - (3+8+15)/(3+8+15+40)) × 100% = 66%, i.e. the time spent pushing the operating-system context is removed; it reduces the interrupt processing time by (1 - (3+8+15)/(3+8+15+40+300)) × 100% = 90%, i.e. the time needed for pushing and popping the operating-system context and for the interrupt return is removed; and it reduces the processor overhead needed to schedule a ready task by (1 - (3+8+15)/300) × 100% = 86%. Furthermore, when several messages to be processed exist in a service message queue at the same time, the processor does not accumulate this hardware overhead linearly: the overhead incurred by the service message queue for the interrupt event is paid only once, i.e. the messages pending in the queue are pushed and popped a single time, and the interrupt is exited only after all messages in the queue have been processed. The higher the frequency of message interaction between the control tasks in the system, the more hardware overhead is saved.
Taking hardware multithreading as an example: to improve the performance of real-time switching between threads, an independent set of processor-specific registers must be provided for each hardware thread to store its context state and data at run time. Compared with a single hardware thread, this removes the software overhead of pushing registers when threads are switched, but it duplicates an independent set of dedicated processor registers and increases the complexity of the processor design. Because the scheduler function in the present application is designed on top of the existing hardware interrupt-controller technology, no extra dedicated hardware logic needs to be added to improve the system scheduling performance, a large amount of processor resources is saved, and the area and power consumption of the chip are therefore reduced. The solution provided by the application weighs the overhead that dedicated hardware resources and a software scheduler each impose on the system, and achieves a better balance between performance and the utilization of the system's software and hardware resources.
When the first interrupt priority is less than or equal to the second interrupt priority, the message to be processed does not need preferential handling and can be processed only after the messages of the current service message queue have been dealt with. In this case the message to be processed waits in the target service message queue and the messages of the current service message queue continue to be processed, so that task scheduling is achieved through the interrupt priorities associated with the service message queues.
Referring to fig. 3, step 012 may optionally include:
step 0123: and determining a processing strategy of the message to be processed according to the first interrupt priority, the second interrupt priority and whether the interrupt source corresponding to the current service message queue supports preemption.
Specifically, the parts of the system that protect mutually exclusive resources or perform atomic operations must not be interrupted while the corresponding task or flow is executing, so the relevant attributes of the different interrupt sources need to be defined in advance. Referring to Table 2, each interrupt source has the following attributes:
- Interrupt priority
- Target processor
- Whether preemption is supported
- Destination service message queue
- Destination message processing entity

TABLE 2: Interrupt source attributes
The interrupt priority represents the priority of the interrupt source when an interrupt occurs. The target processor indicates which processor of the electronic device handles the messages corresponding to the interrupt source. Whether preemption is supported indicates whether the service message queue corresponding to the interrupt source may be preempted by other service message queues while it is being processed. The destination service message queue is the identification information of the service message queue bound to the interrupt source; each service message queue has unique identification information. The destination message processing entity is the identification information of the message processing entity used to process the service message queue corresponding to the interrupt source; each message processing entity likewise has unique identification information.
Therefore, before the first and second interrupt priorities are compared, it can first be determined whether the interrupt source corresponding to the current service message queue (or, more specifically, to the message currently being processed) supports preemption. If preemption is supported, the first interrupt priority is compared with the second: when the first interrupt priority is greater than the second, the messages of the current service message queue are suspended and the messages of the target service message queue are processed. When the first interrupt priority is less than or equal to the second interrupt priority, or the interrupt source corresponding to the current service message queue (or to the message currently being processed) forbids preemption, the messages of the current service message queue continue to be processed. In this way, once the attribute indicating whether an interrupt source supports preemption is cleared, a higher-priority service message queue (i.e. one whose bound interrupt source has a higher interrupt priority) is forbidden from preempting the service message queue currently being processed, which protects resource consistency in the system.
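The decision just described can be sketched as an attribute record per interrupt source, mirroring Table 2, plus a function that decides whether the newly raised interrupt preempts the service message queue currently being served. The structure layout, the convention that a larger number means a higher priority, and the function names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-interrupt-source attributes, mirroring Table 2. */
typedef struct {
    uint8_t  priority;          /* interrupt priority (larger = higher here) */
    uint8_t  target_processor;  /* processor that handles its messages */
    bool     preemptible;       /* whether preemption is supported */
    uint16_t dest_queue_id;     /* destination service message queue */
    uint16_t dest_entity_id;    /* destination message processing entity */
} irq_source_attr_t;

/* Step 0123: decide whether to suspend the current service message queue.
 * 'incoming' belongs to the interrupt that was just raised (first priority),
 * 'current' to the queue currently being processed (second priority). */
static bool should_preempt(const irq_source_attr_t *incoming,
                           const irq_source_attr_t *current)
{
    if (!current->preemptible)
        return false;           /* protects exclusive resources / atomic ops */
    return incoming->priority > current->priority;
}
```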
Referring to fig. 4, before sending the pending message to the target service message queue, the scheduling method further includes:
step 010: acquiring a message descriptor corresponding to a message to be processed from an idle message queue, wherein the message descriptor comprises a message identifier and a message body address;
step 011: sending the message to be processed to the corresponding target service message queue, including:
step 0111: and sending the message to be processed with the message descriptor to a corresponding target service message queue.
Specifically, before the system distributes a message, the message must first be applied for, i.e. corresponding resources (such as memory) must be allocated for its specific information; the message application flow mainly applies for the message descriptor and the message body. The message descriptor contains the description information that summarizes the message and is used to parse and process it correctly. The message body stores the actual content of the message, which may include the specific data transferred between tasks in the control system.
In the application, a free message queue is designed for storing message descriptors and message bodies.
Referring to fig. 5, optionally, the idle message queue may include a message descriptor queue and a message body queue, the message descriptor queue is used for storing message descriptors (such as message descriptor N (N is an integer and is greater than 3) in fig. 5), and the message body queue is used for storing message bodies (such as message body M (M is an integer and is greater than 3) in fig. 5).
When a message is applied for, a message descriptor can be requested from the message descriptor queue (for example, the descriptor at the head of the message descriptor queue is popped). An entry of the message descriptor queue is in fact a memory address at which description information can be stored, so applying for a message descriptor amounts to writing the description information that summarizes the message into the memory address corresponding to that descriptor, for example descriptor address N corresponding to message descriptor N.
If the message body of the message is empty, no message body needs to be applied for; the message descriptor is obtained by directly encapsulating the descriptor address, and the message to be processed, carrying its message descriptor, can be sent to the corresponding target service message queue for processing.
If the message body contains specific data, a message body must also be applied for, from the message body queue, for example by popping the entry at the head of the message body queue. An entry of the message body queue is likewise a memory address at which the specific data of the body can be stored, so applying for a message body amounts to writing the specific data of the message into the memory address corresponding to that body, for example message body address M corresponding to message body M.
After the descriptor address and the message body address have been obtained, they are encapsulated to form the message descriptor, and the message to be processed, carrying its message descriptor, can then be sent to the corresponding target service message queue so that the message processing entity can handle it.
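A minimal sketch of this application flow, under the assumption that the two free queues are simple ring buffers of pre-allocated addresses: pop a descriptor slot, pop a body slot only when there is body content, fill both, and return the encapsulated descriptor. The descriptor fields follow Table 3 below; every name and size here is illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <string.h>

typedef struct {                /* message descriptor (fields as in Table 3) */
    uint32_t msg_id;
    uint32_t size;
    uint32_t type;
    uint16_t src_queue;
    uint16_t dst_queue;
    void    *body_addr;         /* NULL when the message body is empty */
} msg_desc_t;

typedef struct {                /* free queue of pre-allocated addresses */
    void    *addr[32];
    uint32_t head;
    uint32_t count;
} free_queue_t;

static void *free_queue_pop(free_queue_t *q)
{
    if (q->count == 0u)
        return NULL;            /* pool exhausted */
    void *p = q->addr[q->head];
    q->head = (q->head + 1u) % 32u;
    q->count--;
    return p;
}

/* Apply for a message: take a descriptor slot and, only if there is content,
 * a body slot from the free message queue, then encapsulate both. */
static msg_desc_t *alloc_message(free_queue_t *desc_q, free_queue_t *body_q,
                                 uint32_t msg_id, uint16_t dst_queue,
                                 const void *content, uint32_t len)
{
    msg_desc_t *d = (msg_desc_t *)free_queue_pop(desc_q);
    if (d == NULL)
        return NULL;
    d->msg_id    = msg_id;
    d->size      = len;
    d->type      = 0u;
    d->src_queue = 0u;
    d->dst_queue = dst_queue;
    d->body_addr = NULL;
    if (content != NULL && len > 0u) {      /* message body is not empty */
        d->body_addr = free_queue_pop(body_q);
        if (d->body_addr != NULL)
            memcpy(d->body_addr, content, len);
    }
    return d;
}
```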
The specific content of the message descriptor is shown in Table 3:
- Message identification
- Message size
- Message type
- Message source address
- Message destination address
- Message body address

TABLE 3: Message descriptor content
The fields of the message descriptor help the message processing entity parse and process the message. For example, the message identification is used by the message processing entity to look up the processing function for the specific data stored at the message body address; the message size and message type are basic information about the message; and the message source address and message destination address can point to different service message queues, which enables communication between different service message queues.
In addition, the address of the message body in memory (the message body address in Table 3) is kept in the message descriptor, so copying and transmitting the message body inside the system can be supported flexibly. For example, when a message needs to be copied and sent to several different service message queues for processing, the system can apply for several message descriptors that differ only in their message destination address while sharing the same message body address; this avoids copying the specific data of the message body several times for the same content and reduces the resulting performance loss in a real-time system.
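A brief sketch of the copy-free multicast just described: two descriptors that differ only in their destination address while sharing one body address. The type and function names continue the assumptions of the earlier sketches.

```c
#include <stdint.h>

typedef struct {                /* descriptor fields as in Table 3 */
    uint32_t msg_id, size, type;
    uint16_t src_queue, dst_queue;
    void    *body_addr;
} msg_desc_t;

/* Duplicate a descriptor for a second destination queue: only the message
 * destination address changes; the body address is shared, so the body
 * data itself is never copied. */
static void clone_for_destination(const msg_desc_t *orig, msg_desc_t *copy,
                                  uint16_t new_dst_queue)
{
    *copy = *orig;              /* same id, size, type, source, body address */
    copy->dst_queue = new_dst_queue;
}
```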
Optionally, after the system starts, the idle message queue of the present application can initialize the address of every message descriptor in the message descriptor queue and the address of every message body in the message body queue. In other words, the descriptor addresses and message body addresses are allocated their resources in advance, so no memory has to be allocated each time a message is applied for, which improves the efficiency of message application and message sending.
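A sketch of this start-up initialization, assuming statically reserved storage: every descriptor slot and body slot is pushed into its free queue once, so applying for a message later never allocates memory. Pool sizes and names are illustrative.

```c
#include <stdint.h>

#define N_DESC    32
#define N_BODY    32
#define BODY_SIZE 128           /* illustrative message body size in bytes */

typedef struct {
    void    *addr[32];
    uint32_t head;
    uint32_t count;
} free_queue_t;

static uint8_t desc_pool[N_DESC][24];        /* room for one descriptor each */
static uint8_t body_pool[N_BODY][BODY_SIZE]; /* room for one body each */

static void free_queue_push(free_queue_t *q, void *p)
{
    q->addr[(q->head + q->count) % 32u] = p;
    q->count++;
}

/* Called once after system start, before any message is applied for. */
static void init_free_message_queue(free_queue_t *desc_q, free_queue_t *body_q)
{
    for (int i = 0; i < N_DESC; i++)
        free_queue_push(desc_q, desc_pool[i]);
    for (int i = 0; i < N_BODY; i++)
        free_queue_push(body_q, body_pool[i]);
}
```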
Optionally, the idle message queue of the present application is stored in a first queue register of the electronic device, and the service message queues are stored in second queue registers; there is a plurality of second queue registers, and each second queue register stores one or more service message queues.
Referring to fig. 6, optionally, step 0121: processing messages of a target service message queue, comprising:
step 01211: and sending the message to be processed with the message descriptor to the corresponding message processing entity so that the message processing entity obtains a preset processing function corresponding to the message identifier and message content corresponding to the message body address, and processing the message content according to the preset processing function.
Specifically, after the message to be processed, carrying its message descriptor, has been sent to the target service message queue, the interrupt controller generates an interrupt signal of the interrupt source corresponding to the target service message queue. When the first interrupt priority of the interrupt source corresponding to the interrupt signal is greater than the second interrupt priority of the interrupt source corresponding to the service message queue currently being processed, the messages of the current service message queue may be suspended and the messages of the target service message queue may be processed.
During processing, the message processing entity corresponding to the target service message queue is determined from the interrupt source attributes, and the message to be processed, with its message descriptor, is sent to that message processing entity. To process the message, the message processing entity first reads the message identifier from the message descriptor (located via the descriptor address) and obtains the preset processing function bound to that identifier; it then reads the message body address from the descriptor and fetches the specific data content of the message body; finally it processes that data with the preset processing function, completing the handling of the message to be processed.
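The dispatch described above can be sketched as a table lookup from message identifier to handler function, after which the handler is run on the data at the message body address. The registration table and the handler names are placeholders invented for this sketch.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t msg_id;
    uint32_t size;
    void    *body_addr;
} msg_desc_t;

typedef void (*msg_handler_t)(const void *body, uint32_t size);

/* Example handlers; in a real system each would implement a piece of
 * service logic. */
static void handle_config(const void *body, uint32_t size) { (void)body; (void)size; }
static void handle_status(const void *body, uint32_t size) { (void)body; (void)size; }

/* Table binding message identifiers to preset processing functions,
 * filled in at initialization. */
typedef struct { uint32_t msg_id; msg_handler_t fn; } handler_entry_t;

static const handler_entry_t handler_table[] = {
    { 0x0001u, handle_config },
    { 0x0002u, handle_status },
};

/* Step 01211: look up the preset processing function bound to the
 * descriptor's message identifier and run it on the message body. */
static int process_message(const msg_desc_t *d)
{
    for (size_t i = 0; i < sizeof handler_table / sizeof handler_table[0]; i++) {
        if (handler_table[i].msg_id == d->msg_id) {
            handler_table[i].fn(d->body_addr, d->size);
            return 0;
        }
    }
    return -1;                  /* no handler bound to this identifier */
}
```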
As mentioned above, the message descriptor may include a message source address and a message destination address, which respectively identify the source service message queue that sends a message and the destination service message queue that receives it, thereby making communication between different service message queues possible. Therefore, while the message processing entity is processing a message body with the preset processing function, if it needs to communicate with a destination service message queue it sends a message to that queue, so that the interrupt controller generates an interrupt signal of the interrupt source corresponding to the destination service message queue and interrupt handling is carried out; communication between service message queues is achieved in this way.
It will be appreciated that the tasks corresponding to different service message queues are not isolated: executing one task may require other tasks to be executed first. For example, when message 1 of the current service message queue (i.e. the source service message queue) is being processed, another, destination service message queue may have to process message 2 first. The current service message queue can therefore apply for message 2 and send it to the destination service message queue, which triggers an interrupt so that message 2 is processed first; after message 2 has been handled, the interrupt is exited and processing of message 1 resumes.
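A small usage-style sketch of this interaction, with stubbed helpers standing in for the send path shown earlier: while handling message 1, the handler applies for message 2 and sends it to the destination queue, which raises a nested, higher-priority interrupt; message 2 is processed, the interrupt returns, and the rest of message 1's handler then runs. All names are assumptions.

```c
#include <stdint.h>

typedef struct { uint32_t msg_id; void *body; uint32_t size; } msg_t;

/* Stub standing in for the earlier send sketch: it would enqueue the
 * message on the destination queue and set the interrupt-pending bit. */
static void send_to_queue(uint16_t dst_queue_id, msg_t m)
{
    (void)dst_queue_id; (void)m;
}

/* Handler for "message 1" in the source service message queue. */
static void handle_message_1(const void *body, uint32_t size)
{
    (void)body; (void)size;

    /* Ask the destination queue to process "message 2" first; because its
     * interrupt priority is higher, the send triggers a nested interrupt
     * that is handled before the code below continues. */
    msg_t msg2 = { .msg_id = 0x0002u, .body = 0, .size = 0u };
    send_to_queue(/*dst_queue_id=*/3u, msg2);

    /* ... remaining processing of message 1 continues here ... */
}
```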
Referring to fig. 7, optionally, step 0121: processing messages of a target service message queue, comprising:
step 01212: and releasing the message to be processed with the message descriptor to the free message queue under the condition that the message to be processed is processed.
Specifically, once the message to be processed has been completely processed, its message descriptor should no longer occupy system resources; for example, the message descriptor needs to be released from the target service message queue back to the free message queue. Releasing the message descriptor to the free message queue may specifically mean deleting both the information stored at the descriptor address and the information stored at the message body address in order to vacate the memory, and returning the descriptor to the free message queue, for example by pushing the descriptor address onto the tail of the message descriptor queue and the message body address onto the tail of the message body queue. In this way the free message queue supports the application, allocation and recycling of messages.
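A sketch of this release step, continuing the assumed names of the earlier pool sketches: the descriptor and body storage is cleared and both addresses are pushed back onto the tails of their free queues, ready for the next application.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t msg_id, size, type;
    uint16_t src_queue, dst_queue;
    void    *body_addr;
} msg_desc_t;

typedef struct {
    void    *addr[32];
    uint32_t head;
    uint32_t count;
} free_queue_t;

static void free_queue_push(free_queue_t *q, void *p)
{
    q->addr[(q->head + q->count) % 32u] = p;    /* append at the tail */
    q->count++;
}

/* Step 01212: return a fully processed message to the free message queue. */
static void release_message(msg_desc_t *d, uint32_t body_capacity,
                            free_queue_t *desc_q, free_queue_t *body_q)
{
    if (d->body_addr != NULL) {
        memset(d->body_addr, 0, body_capacity); /* vacate the body memory */
        free_queue_push(body_q, d->body_addr);  /* tail of the body queue */
    }
    memset(d, 0, sizeof *d);                    /* clear the descriptor */
    free_queue_push(desc_q, d);                 /* tail of the descriptor queue */
}
```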
Optionally, when no message to be processed has been received, the messages of the service message queues are sent, in order of the interrupt priority of the interrupt source corresponding to each queue, to the corresponding message processing entities for processing; each message processing entity corresponds to at least one service message queue.
It will be appreciated that the number of message processing entities in a processor is limited, whereas the number of service message queues depends on how the system's interrupt priorities are configured; in the present application the number of service message queues is generally much larger than the number of message processing entities, so one message processing entity may correspond to several service message queues. As shown in fig. 8, service message queues 1 to Z are arranged from low to high interrupt priority; service message queues 1 and 2 correspond to message processing entity 1, and service message queues 3 to Z (Z being an integer greater than 3) correspond to message processing entity 2.
Take the case in which message processing entity 1 processes service message queues 1 and 2 as an example. While no message to be processed has been received, the messages of service message queues 2 and 1 are sent to message processing entity 1 in order of interrupt priority from high to low, so that entity 1 processes the messages of service message queue 2 before those of service message queue 1. When a message to be processed is received and the interrupt is triggered, it is judged whether the interrupt priority of the target service message queue corresponding to that message is greater than the interrupt priority of the service message queue currently being processed. For example, if the current service message queue is service message queue 1 and the target service message queue is service message queue 2, message processing entity 1 suspends service message queue 1, because the interrupt priority of service message queue 2 is greater than that of service message queue 1, and processes the target service message queue.
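A sketch of this idle-time behaviour: when no new message has arrived, one message processing entity drains the queues assigned to it strictly from the highest bound interrupt priority downward, so in the example above queue 2 is emptied before queue 1. Structure and names are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define N_QUEUES 2              /* queues served by this processing entity */

typedef struct { uint32_t msg_id; } msg_t;

typedef struct {
    msg_t    slots[16];
    uint32_t head, tail;
    uint8_t  irq_priority;      /* priority of the bound interrupt source */
} svc_msg_queue_t;

static bool queue_pop(svc_msg_queue_t *q, msg_t *out)
{
    if (q->head == q->tail)
        return false;           /* queue is empty */
    *out = q->slots[q->head];
    q->head = (q->head + 1u) % 16u;
    return true;
}

static void process(const msg_t *m) { (void)m; /* run the preset handler */ }

/* Serve the assigned queues in descending interrupt priority while no new
 * message (and hence no interrupt) arrives. */
static void serve_when_idle(svc_msg_queue_t *queues[N_QUEUES])
{
    for (;;) {
        svc_msg_queue_t *best = NULL;
        for (int i = 0; i < N_QUEUES; i++) {
            svc_msg_queue_t *q = queues[i];
            if (q->head == q->tail)
                continue;       /* nothing pending in this queue */
            if (best == NULL || q->irq_priority > best->irq_priority)
                best = q;
        }
        if (best == NULL)
            break;              /* all assigned queues are empty */
        msg_t m;
        if (queue_pop(best, &m))
            process(&m);
    }
}
```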
Compared with the existing real-time preemptible multitasking of a software operating system and with hardware multithreading, the scheduling method of the embodiment of the application uses the system's interrupt controller to perform task scheduling, which greatly reduces the software and hardware overhead consumed by the system, simplifies the system design and improves the reliability of the system.
The scheduler design in the application supports priority preemption and communication between service message queues; preemption can be forbidden to protect resource consistency; and scheduling functions such as the application, allocation and release of message resources are provided. In addition, because the messages pending in a service message queue are pushed and popped only once, the proportion of this overhead in the operation of the system falls as the frequency of message interaction between the control tasks in the system rises.
In order to better implement the scheduling method according to the embodiment of the present application, the embodiment of the present application further provides a scheduling apparatus 10. Referring to fig. 9, the scheduling device 10 may include:
a sending module 11, configured to send a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that the target service message queue generates an interrupt signal at a corresponding interrupt source in the interrupt controller;
the determining module 12 is configured to determine a processing policy of the message to be processed according to a first interrupt priority of an interrupt source corresponding to the interrupt signal and a second interrupt priority of an interrupt source corresponding to the currently processed currently serviced message queue.
The determining module 12 is specifically configured to:
under the condition that the first interrupt priority is higher than the second interrupt priority, suspending the messages of the current service message queue and processing the messages of the target service message queue;
continuing to process messages of the current serving message queue if the first interrupt priority is less than or equal to the second interrupt priority.
The determining module 12 is specifically further configured to:
and determining the processing strategy of the message to be processed according to the first interrupt priority, the second interrupt priority and whether the interrupt source corresponding to the current service message queue supports preemption.
The determining module 12 is further specifically configured to:
under the condition that the first interrupt priority is greater than the second interrupt priority and an interrupt source corresponding to the current service message queue supports preemption, suspending the message of the current service message queue and processing the message of the target service message queue;
and under the condition that the first interrupt priority is less than or equal to the second interrupt priority or the interrupt source corresponding to the current service message queue forbids preemption, continuing to process the message of the current service message queue.
The determining module 12 is further specifically configured to perform stack pushing processing on the current service message queue.
The scheduling device 10 further includes:
an obtaining module 13, configured to obtain a message descriptor corresponding to the to-be-processed message from an idle message queue;
the sending module 11 is specifically configured to send the to-be-processed message with the message descriptor to a corresponding target service message queue.
The obtaining module 13 is specifically configured to:
acquiring the address of the message descriptor from the message descriptor queue to store the description information;
under the condition that the content of the message body is not empty, obtaining a message body address from the message body queue to store the message body;
encapsulating the address of the message descriptor and the message body address to generate the message descriptor.
The scheduling device 10 further includes:
an initializing module 14, configured to initialize an address of each message descriptor in the message descriptor queue and an address of each message body in the message body queue.
The determining module 12 is further specifically configured to:
and sending the message to be processed with the message descriptor to the corresponding message processing entity so that the message processing entity obtains a preset processing function corresponding to the message identifier and a message body corresponding to the message body address, and processing the message body according to the preset processing function.
The scheduling device 10 further includes:
a receiving module 15, configured to receive, when the message processing entity processes the message body according to the preset processing function, a message sent to a destination service message queue by the message processing entity, so that the interrupt controller generates an interrupt signal of an interrupt source corresponding to the destination service message queue.
The scheduling device 10 further includes:
a releasing module 16, configured to release the to-be-processed message with the message descriptor to the idle message queue when the to-be-processed message is processed.
The scheduling device 10 further includes:
and a processing module 17, configured to, when the message to be processed is not received, sequentially send the messages of the service message queues to corresponding message processing entities for processing according to the interrupt priority of the interrupt source corresponding to each service message queue, where the message processing entities correspond to at least one service message queue.
The modules in the scheduling apparatus 10 may be implemented in whole or in part by software, hardware, and a combination thereof. The modules may be embedded in hardware or independent of a processor in the computer device, or may be stored in a memory in the computer device in software, so that the processor can call and execute operations corresponding to the modules.
Referring to fig. 10, the chip 40 according to the embodiment of the present disclosure is disposed in the electronic device 100 and connected to the interrupt controller 50 of the electronic device 100. The chip 40 is configured to execute the scheduling method according to any of the above embodiments, and for brevity, the description is omitted here.
Referring to fig. 10 again, the electronic device 100 of the present embodiment includes a processor 30. The processor 30 is configured to execute the scheduling method according to any of the above embodiments, and for brevity, the description is omitted here.
Among other things, the electronic device 100 may be a mobile phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a video game device, a portable terminal (e.g., a notebook computer), or a larger-sized device (e.g., a desktop computer or a television).
Referring to fig. 11, the present embodiment further provides a computer-readable storage medium 300, on which a computer program 310 is stored, and steps of the scheduling method according to any of the above embodiments are implemented when the computer program 310 is executed by the processor 30, which is not described herein again for brevity.
It will be appreciated that the computer program 310 comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, and so on. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (16)
1. A method of scheduling, comprising:
under the condition that a message to be processed is received, the message to be processed is sent to a corresponding target service message queue, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue;
and determining a processing strategy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue.
2. The scheduling method according to claim 1, wherein the determining the processing policy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue comprises:
under the condition that the first interrupt priority is greater than the second interrupt priority, suspending the messages of the current service message queue and processing the messages of the target service message queue;
continuing to process messages of the current service message queue if the first interrupt priority is less than or equal to the second interrupt priority.
3. The scheduling method according to claim 1, wherein the determining the processing policy of the message to be processed according to the first interrupt priority of the interrupt source corresponding to the interrupt signal and the second interrupt priority of the interrupt source corresponding to the current service message queue further comprises:
and determining the processing strategy of the message to be processed according to the first interrupt priority, the second interrupt priority and whether the interrupt source corresponding to the current service message queue supports preemption.
4. The scheduling method according to claim 3, wherein the determining the processing policy of the message to be processed according to the first interrupt priority, the second interrupt priority and whether the interrupt source corresponding to the current service message queue supports preemption comprises:
under the condition that the first interrupt priority is greater than the second interrupt priority and the interrupt source corresponding to the current service message queue supports preemption, suspending the message of the current service message queue and processing the message of the target service message queue;
and under the condition that the first interrupt priority is less than or equal to the second interrupt priority or the interrupt source corresponding to the current service message queue forbids preemption, continuing to process the message of the current service message queue.
5. The scheduling method according to claim 2 or 4, wherein the suspending the message of the current service message queue comprises:
and performing stack pushing processing on the current service message queue.
6. The scheduling method according to claim 2 or 4, further comprising, before the message to be processed is sent to the target service message queue:
acquiring a message descriptor corresponding to the message to be processed from an idle message queue;
the sending the to-be-processed message to a corresponding target service message queue includes:
and sending the message to be processed with the message descriptor to a corresponding target service message queue.
7. The scheduling method according to claim 6, wherein the to-be-processed message includes description information and a message body, the idle message queue includes a message descriptor queue and a message body queue, and the obtaining the message descriptor corresponding to the to-be-processed message from the idle message queue includes:
acquiring an address of the message descriptor from the message descriptor queue to store the description information;
under the condition that the content of the message body is not empty, acquiring a message body address from the message body queue to store the message body; and
encapsulating the address of the message descriptor and the message body address to generate the message descriptor.
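The allocation step in claims 6 and 7 resembles taking buffers from two free lists and wrapping their addresses in a descriptor. The sketch below is an assumption-laden illustration: the allocator functions and struct fields are invented names, not APIs from the application.

```c
#include <stddef.h>

/* Hypothetical descriptor combining the two addresses named in claim 7. */
struct msg_descriptor {
    void *desc_addr;   /* holds the description information                   */
    void *body_addr;   /* holds the message body, NULL when the body is empty */
};

/* Invented allocators backed by the free descriptor / body queues. */
void *free_desc_queue_get(void);
void *free_body_queue_get(void);

static int acquire_descriptor(struct msg_descriptor *out,
                              const void *body, size_t body_len)
{
    out->desc_addr = free_desc_queue_get();
    if (out->desc_addr == NULL)
        return -1;                                   /* no free descriptor   */
    out->body_addr = (body && body_len > 0) ? free_body_queue_get() : NULL;
    return 0;                                        /* descriptor assembled */
}
```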
8. The scheduling method of claim 6, further comprising:
initializing an address of each message descriptor in the message descriptor queue and an address of each message body in the message body queue.
9. The scheduling method according to claim 6, wherein the message descriptor comprises a message identifier and a message body address, and wherein the processing the messages of the target service message queue comprises:
sending the message to be processed, together with the message descriptor, to a corresponding message processing entity, so that the message processing entity obtains a preset processing function corresponding to the message identifier and a message body corresponding to the message body address, and processes the message body according to the preset processing function.
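Claim 9 amounts to a lookup from message identifier to a preset handler. One possible shape of that table is sketched below; the table size, names and dispatch loop are illustrative assumptions.

```c
/* Hypothetical identifier-to-handler mapping for claim 9. */
typedef void (*msg_handler_t)(void *body);

struct handler_entry {
    unsigned      msg_id;     /* message identifier carried in the descriptor */
    msg_handler_t handler;    /* preset processing function for that id       */
};

static struct handler_entry handler_table[32];
static unsigned handler_count;

static void dispatch_message(unsigned msg_id, void *body_addr)
{
    for (unsigned i = 0; i < handler_count; i++) {
        if (handler_table[i].msg_id == msg_id) {
            handler_table[i].handler(body_addr);   /* process the message body */
            return;
        }
    }
    /* Unknown identifier: how this is reported is left to the system. */
}
```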
10. The scheduling method of claim 9, further comprising:
under the condition that the message processing entity has processed the message body according to the preset processing function, receiving a message sent by the message processing entity to a destination service message queue, so that the interrupt controller generates an interrupt signal of an interrupt source corresponding to the destination service message queue.
11. The scheduling method according to claim 6, wherein the processing the messages of the target service message queue further comprises:
under the condition that processing of the message to be processed is completed, releasing the message to be processed, together with the message descriptor, back to the free message queue.
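A matching release step for claim 11, returning the buffers obtained earlier to the free queues; as before, all function and field names are hypothetical, and the descriptor struct is repeated so the sketch stands alone.

```c
/* Hypothetical release path for claim 11 (counterpart of acquire_descriptor). */
struct msg_descriptor {
    void *desc_addr;
    void *body_addr;
};

void free_desc_queue_put(void *desc_addr);
void free_body_queue_put(void *body_addr);

static void release_descriptor(struct msg_descriptor *d)
{
    if (d->body_addr != NULL)
        free_body_queue_put(d->body_addr);   /* body buffer back to the free body queue     */
    free_desc_queue_put(d->desc_addr);       /* descriptor back to the free descriptor queue */
    d->desc_addr = NULL;
    d->body_addr = NULL;
}
```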
12. The scheduling method of claim 1, further comprising:
under the condition that no message to be processed is received, sending the messages of the service message queues in sequence to the corresponding message processing entities for processing, according to the interrupt priority of the interrupt source corresponding to each service message queue, wherein each message processing entity corresponds to at least one service message queue.
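Claim 12 describes an idle-time pass over the queues in interrupt-priority order. One way to picture it, assuming the queue table is kept sorted from highest to lowest priority (all names invented):

```c
/* Hypothetical idle-time pass for claim 12. */
struct msg_queue;                                        /* opaque queue handle */

struct service_queue {
    unsigned          irq_priority;                      /* priority of its interrupt source */
    struct msg_queue *queue;
    void            (*entity_process)(struct msg_queue *q);  /* owning processing entity */
};

/* queues[] is assumed to be sorted by irq_priority, highest first. */
static void service_idle_pass(struct service_queue *queues, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        queues[i].entity_process(queues[i].queue);
}
```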
13. A scheduling apparatus, the apparatus comprising:
a sending module, configured to send a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and
a determining module, configured to determine a processing policy of the message to be processed according to a first interrupt priority of the interrupt source corresponding to the interrupt signal and a second interrupt priority of the interrupt source corresponding to the current service message queue.
14. A chip, connected to an interrupt controller of an electronic device, wherein the chip is configured to: send a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that the interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determine a processing policy of the message to be processed according to a first interrupt priority of the interrupt source corresponding to the interrupt signal and a second interrupt priority of the interrupt source corresponding to the current service message queue.
15. An electronic device, comprising a processor, wherein the processor is configured to: send a message to be processed to a corresponding target service message queue under the condition that the message to be processed is received, so that an interrupt controller generates an interrupt signal of an interrupt source corresponding to the target service message queue; and determine a processing policy of the message to be processed according to a first interrupt priority of the interrupt source corresponding to the interrupt signal and a second interrupt priority of the interrupt source corresponding to the current service message queue.
16. A non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, implements the scheduling method according to any one of claims 1 to 12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210724490.4A CN115167996A (en) | 2022-06-23 | 2022-06-23 | Scheduling method and device, chip, electronic equipment and storage medium |
PCT/CN2022/141062 WO2023246042A1 (en) | 2022-06-23 | 2022-12-22 | Scheduling method and apparatus, chip, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210724490.4A CN115167996A (en) | 2022-06-23 | 2022-06-23 | Scheduling method and device, chip, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115167996A (en) | 2022-10-11 |
Family
ID=83488279
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210724490.4A (CN115167996A, pending) | Scheduling method and device, chip, electronic equipment and storage medium | 2022-06-23 | 2022-06-23 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115167996A (en) |
WO (1) | WO2023246042A1 (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7448036B2 (en) * | 2002-05-02 | 2008-11-04 | International Business Machines Corporation | System and method for thread scheduling with weak preemption policy |
US20050015768A1 (en) * | 2002-12-31 | 2005-01-20 | Moore Mark Justin | System and method for providing hardware-assisted task scheduling |
CN104915254A (en) * | 2014-12-31 | 2015-09-16 | 杰瑞石油天然气工程有限公司 | Embedded system multi-task scheduling method and system |
CN111475312B (en) * | 2019-09-12 | 2021-05-18 | 北京东土科技股份有限公司 | Message driving method and device based on real-time operating system |
CN114579285B (en) * | 2022-04-29 | 2022-09-06 | 武汉深之度科技有限公司 | Task running system and method and computing device |
CN115237556A (en) * | 2022-06-23 | 2022-10-25 | 哲库科技(北京)有限公司 | Scheduling method and device, chip, electronic equipment and storage medium |
CN115167996A (en) * | 2022-06-23 | 2022-10-11 | 哲库科技(北京)有限公司 | Scheduling method and device, chip, electronic equipment and storage medium |
- 2022-06-23: CN application CN202210724490.4A (publication CN115167996A), status: active, Pending
- 2022-12-22: WO application PCT/CN2022/141062 (publication WO2023246042A1), status: unknown
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023246042A1 (en) * | 2022-06-23 | 2023-12-28 | 哲库科技(北京)有限公司 | Scheduling method and apparatus, chip, electronic device, and storage medium |
CN115426209A (en) * | 2022-11-07 | 2022-12-02 | 湖南三湘银行股份有限公司 | High-reliability message queue broadcast control method based on message processing |
CN115426209B (en) * | 2022-11-07 | 2023-02-10 | 湖南三湘银行股份有限公司 | High-reliability message queue broadcast control method based on message processing |
WO2024109624A1 (en) * | 2022-11-23 | 2024-05-30 | 华为技术有限公司 | Data processing method and computer device |
CN115981811A (en) * | 2022-12-19 | 2023-04-18 | 杭州新迪数字工程系统有限公司 | Task scheduling method, system, electronic equipment and storage medium |
CN115981811B (en) * | 2022-12-19 | 2024-03-15 | 上海新迪数字技术有限公司 | Task scheduling method, system, electronic device and storage medium |
CN117675720A (en) * | 2024-01-31 | 2024-03-08 | 井芯微电子技术(天津)有限公司 | Message transmission method and device, electronic equipment and storage medium |
CN117675720B (en) * | 2024-01-31 | 2024-05-31 | 井芯微电子技术(天津)有限公司 | Message transmission method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023246042A1 (en) | 2023-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115167996A (en) | Scheduling method and device, chip, electronic equipment and storage medium | |
US9792051B2 (en) | System and method of application aware efficient IO scheduler | |
CN115237556A (en) | Scheduling method and device, chip, electronic equipment and storage medium | |
CN109697122B (en) | Task processing method, device and computer storage medium | |
KR100628492B1 (en) | Method and system for performing real-time operation | |
US8478926B1 (en) | Co-processing acceleration method, apparatus, and system | |
US8963933B2 (en) | Method for urgency-based preemption of a process | |
CN109564528B (en) | System and method for computing resource allocation in distributed computing | |
CN110489213A (en) | A kind of task processing method and processing unit, computer system | |
CN112783659B (en) | Resource allocation method and device, computer equipment and storage medium | |
US20230229495A1 (en) | Task scheduling method and apparatus | |
KR100791296B1 (en) | Apparatus and method for providing cooperative scheduling on multi-core system | |
KR102338849B1 (en) | Method and system for providing stack memory management in real-time operating systems | |
CN112491426B (en) | Service assembly communication architecture and task scheduling and data interaction method facing multi-core DSP | |
CN111240813A (en) | DMA scheduling method, device and computer readable storage medium | |
CN114579285B (en) | Task running system and method and computing device | |
CN112925616A (en) | Task allocation method and device, storage medium and electronic equipment | |
CN116724294A (en) | Task allocation method and device | |
CN117472570A (en) | Method, apparatus, electronic device and medium for scheduling accelerator resources | |
CN113407357A (en) | Method and device for inter-process data movement | |
CN116048756A (en) | Queue scheduling method and device and related equipment | |
US8869171B2 (en) | Low-latency communications | |
CN114911538A (en) | Starting method of running system and computing equipment | |
CN113296957A (en) | Method and device for dynamically allocating network-on-chip bandwidth | |
US20240184624A1 (en) | Method and system for sequencing artificial intelligence (ai) jobs for execution at ai accelerators |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |