
WO2024001411A1 - Multi-thread scheduling method and device - Google Patents

Multi-thread scheduling method and device

Info

Publication number
WO2024001411A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
state
message
linked list
target
Prior art date
Application number
PCT/CN2023/087477
Other languages
English (en)
Chinese (zh)
Inventor
沈洋
徐金林
牛新伟
韩建辉
李铮
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2024001411A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Definitions

  • The embodiments of the present application relate to the technical field of network processor cores, and specifically to a multi-thread scheduling method and device.
  • Network processors, as the core component of data forwarding in the field of digital communication, are used in various communication tasks such as packet processing, protocol analysis, route lookup, voice/data aggregation, and firewalls.
  • Embodiments of the present application provide a multi-thread scheduling method and device to at least solve the problem in the related art that it cannot be ensured that packets entering the micro-engine first are forwarded first, in the order in which the packets enter the micro-engine.
  • According to one embodiment, a multi-thread scheduling method is provided, including: after each packet enters the processor core, storing the thread number carried by each packet, in order, into the thread management linked list corresponding to the thread group to which the packet belongs, and establishing a mapping relationship between the thread number and a node of the thread management linked list; and, according to the mapping relationship and the state of the thread state machine corresponding to each thread, scheduling the target thread in the executable state from the thread group in the order in which packets entered the processor core, and inputting the target thread into the pipeline corresponding to the target thread.
  • According to another embodiment, a multi-thread scheduling device is provided, including: an establishment module configured to store the thread number carried by each packet, in order, into the thread management linked list corresponding to the thread group to which the packet belongs after each packet enters the processor core, and to establish a mapping relationship between the thread number and a node of the thread management linked list; and a scheduling module configured to schedule, according to the mapping relationship and the state of the thread state machine corresponding to each thread, the target thread in the executable state from the thread group in the order in which packets entered the processor core, and to input the target thread into the pipeline corresponding to the target thread.
  • According to another embodiment, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
  • According to another embodiment, an electronic device is also provided, including a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • Through this application, the thread scheduling method is optimized by introducing a thread management linked list, ensuring that packets entering the processor core first are scheduled and executed first. This solves the problem in the related art that it cannot be guaranteed that packets entering the kernel first are forwarded first in the order in which they entered, thereby reducing packet execution delay.
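  • As an illustration of the bookkeeping described above, the following C sketch models a thread group's management linked list as a fixed array of nodes plus a validity bitmap, with earlier packets occupying lower-indexed nodes. This is a minimal sketch, not the patent's implementation; the names (thread_group_t, THREADS_PER_GROUP, enqueue_thread) and the 10-thread group size are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define THREADS_PER_GROUP 10  /* assumed group size, matching Figure 7 */

typedef struct {
    uint8_t  node[THREADS_PER_GROUP]; /* thread numbers in packet-arrival order */
    uint16_t valid_bitmap;            /* bit i set => node[i] holds a thread number */
} thread_group_t;

/* Record a newly admitted packet's thread number at the first free node,
 * so earlier packets always sit at lower-indexed nodes. */
static bool enqueue_thread(thread_group_t *g, uint8_t thread_no)
{
    for (int i = 0; i < THREADS_PER_GROUP; i++) {
        if (!(g->valid_bitmap & (1u << i))) {
            g->node[i] = thread_no;
            g->valid_bitmap |= (1u << i);
            return true;
        }
    }
    return false; /* group full: the packet must wait for a free thread */
}
```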
  • Figure 1 is a hardware structure block diagram of a computer terminal running the multi-thread scheduling method according to the embodiment of the present application;
  • Figure 2 is a flow chart of a multi-thread scheduling method according to an embodiment of the present application.
  • Figure 3 is a structural block diagram of a multi-thread scheduling device according to an embodiment of the present application.
  • Figure 4 is a structural block diagram of a multi-thread scheduling device according to another embodiment of the present application.
  • Figure 5 is a structural block diagram of a multi-thread scheduling device according to yet another embodiment of the present application.
  • Figure 6 is a schematic structural diagram of a coarse-grained multi-thread scheduling device according to an embodiment of the present application.
  • Figure 7 is a schematic diagram corresponding to threads and thread management linked lists according to an embodiment of the present application.
  • Figure 8 is a schematic diagram of thread state switching according to an embodiment of the present application.
  • Figure 9 is a flowchart for executing coarse-grained multi-thread scheduling according to an embodiment of the present application.
  • FIG. 1 is a hardware structure block diagram of a computer terminal running the multi-thread scheduling method according to the embodiment of the present application.
  • The computer terminal may include one or more processors 102 (only one is shown in Figure 1; the processor 102 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field programmable gate array (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input/output device 108.
  • The structure shown in Figure 1 is only illustrative and does not limit the structure of the above computer terminal; for example, the computer terminal may also include more or fewer components than shown in Figure 1, or have a different configuration.
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the multi-thread scheduling method in the embodiment of the present application.
  • The processor 102 runs the computer program stored in the memory 104, thereby executing the corresponding functional applications, that is, implementing the above method.
  • Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • The memory 104 may further include memory located remotely relative to the processor 102, and these remote memories may be connected to the computer terminal through a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include a wireless network provided by a communication provider of the computer terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC for short), which can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet wirelessly.
  • FIG. 2 is a flow chart of the multi-thread scheduling method according to the embodiment of the present application. As shown in Figure 2, the process includes the following steps:
  • Step S202: after each packet enters the processor core, store the thread number carried by each packet, in order, into the thread management linked list corresponding to the thread group to which the packet belongs, and establish a mapping relationship between the thread number and a node of the thread management linked list;
  • Step S204: according to the mapping relationship and the state of the thread state machine corresponding to each thread, schedule the target thread in the executable state from the thread group in the order in which packets entered the processor core, and input the target thread into the pipeline corresponding to the target thread.
  • In this embodiment, the method further includes: assigning a thread number to each packet entering the processor core, and dividing all threads into thread groups whose number corresponds to the number of pipelines.
  • each thread group corresponds to a thread management linked list, and the number of nodes in each thread management linked list is the same as the number of threads included in each thread group.
  • In step S202 of this embodiment, the mapping relationship between the nodes of the thread management linked list and the thread numbers is represented by a bitmap.
  • Step S204 of this embodiment includes: calculating the launch request corresponding to each node according to the value of the bitmap and the readiness status of each thread in the thread group, and performing priority scheduling on the threads with launch requests, so that the thread that entered the processor core first and is in the ready state is authorized, transitions to the executable state, and is scheduled as the target thread; and obtaining the instruction corresponding to the target thread, and inputting the target thread into the pipeline corresponding to the target thread to execute the instruction.
  • In this embodiment, after the instruction has been executed, the method further includes: scheduling the packet out of the processor core and releasing the thread corresponding to the packet; and clearing the thread number of the packet from the node of the thread management linked list and moving the other thread numbers stored in the nodes of the thread management linked list forward by one node in sequence.
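  • The release step just described (clear the matching node, then move the thread numbers behind it forward by one node) could be sketched as follows, reusing the illustrative thread_group_t above; release_thread is a hypothetical name, not from the patent.

```c
/* Release a finished thread: clear its node and shift every later entry
 * one node forward, keeping arrival order compact at the front. */
static void release_thread(thread_group_t *g, uint8_t thread_no)
{
    int last = -1;
    for (int i = 0; i < THREADS_PER_GROUP; i++)
        if (g->valid_bitmap & (1u << i))
            last = i;                         /* index of the last occupied node */

    for (int i = 0; i <= last; i++) {
        if (g->node[i] == thread_no) {
            for (int j = i; j < last; j++)
                g->node[j] = g->node[j + 1];  /* shift later entries forward */
            g->valid_bitmap &= ~(1u << last); /* the last node is now empty */
            return;
        }
    }
}
```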
  • each pipeline corresponds to a main control state machine
  • each thread corresponds to a thread state machine
  • each pipeline transitions between two states: idle and authorized
  • each thread transitions among four states: idle, ready, executable, and waiting
  • In this embodiment, each pipeline transitions between the two states of idle and authorized, and each thread transitions among the four states of idle, ready, executable, and waiting, as follows: when the main control state machine is in the idle state, new packets are allowed to enter the processor core, and after a new packet enters the processor core the corresponding thread is in the idle state; the thread number of the packet is stored in a node of the thread management linked list, the instruction associated with the thread is fetched from the instruction storage module, and the thread transitions from the idle state to the ready state; in the authorized state of the main control state machine, a thread in the ready state is authorized, and the authorized thread transitions from the ready state to the executable state; after a thread in the executable state has executed its corresponding instructions, it transitions from the executable state to the idle state; when a thread in the executable state must wait for data, a table lookup, or re-fetched instructions during execution, it transitions from the executable state to the waiting state; after the wait for data ends, the table lookup result is returned, or the re-fetched instruction is returned, the thread in the waiting state transitions back to the ready state; and after the thread number of a thread that has completed its instructions is released, the main control state machine enters the idle state.
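  • The four thread states and the transitions just listed can be captured as a small transition function. This is a hedged sketch of the state diagram only; the enum and event names are illustrative, and the patent specifies the transitions, not this encoding.

```c
/* Thread state machine: idle -> ready -> executable -> idle, with
 * executable <-> waiting around long-latency events. */
typedef enum { T_IDLE, T_RDY, T_EXE, T_WAIT } thread_state_t;

typedef enum {
    EV_FETCH_ISSUED,   /* thread number stored, instruction fetch sent  */
    EV_GRANTED,        /* thread authorized by the main control machine */
    EV_DONE,           /* packet-send instruction finished              */
    EV_STALL,          /* waiting for data, table lookup, or re-fetch   */
    EV_STALL_RESOLVED  /* awaited data or instructions have returned    */
} thread_event_t;

static thread_state_t thread_step(thread_state_t s, thread_event_t ev)
{
    switch (s) {
    case T_IDLE: return ev == EV_FETCH_ISSUED   ? T_RDY  : s;
    case T_RDY:  return ev == EV_GRANTED        ? T_EXE  : s;
    case T_EXE:  return ev == EV_DONE           ? T_IDLE
                      : ev == EV_STALL          ? T_WAIT : s;
    case T_WAIT: return ev == EV_STALL_RESOLVED ? T_RDY  : s;
    }
    return s;
}
```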
  • In summary, the thread scheduling method is optimized by introducing a thread management linked list, ensuring that the packets entering the processor core first are scheduled and executed first. This solves the problem in the related art that it cannot be guaranteed that packets entering the micro-engine first are forwarded first, thereby reducing packet execution delay.
  • The technical solution may be embodied in the form of a computer software product stored in a storage medium (such as read-only memory/random access memory (ROM/RAM), a magnetic disk, or an optical disk), which includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, network device, or the like) to execute the methods described in the various embodiments of the present application.
  • module may be a combination of software and/or hardware that implements a predetermined function.
  • Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
  • Figure 3 is a structural block diagram of a multi-thread scheduling device according to an embodiment of the present application. As shown in Figure 3, the device includes: an establishment module 10 and a scheduling module 20.
  • The establishment module 10 is configured to store the thread number carried by each packet into the thread management linked list corresponding to the thread group to which the packet belongs after each packet enters the processor core, and to establish the mapping relationship between the thread number and the nodes of the thread management linked list;
  • The scheduling module 20 is configured to schedule, according to the mapping relationship and the state of the thread state machine corresponding to each thread, the target thread in the executable state from the thread group in the order in which packets entered the processor core, and to input the target thread into the pipeline corresponding to the target thread.
  • Figure 4 is a structural block diagram of a multi-thread scheduling device according to another embodiment of the present application. As shown in Figure 4, in addition to all the modules shown in Figure 3, the device also includes:
  • the allocation module 30 is configured to allocate a thread number to each message entering the processor core, and divide all threads into thread groups corresponding to the number of pipelines.
  • each thread group corresponds to a thread management linked list
  • the number of nodes in each thread management linked list is the same as the number of threads included in each thread group.
  • Figure 5 is a structural block diagram of a multi-thread scheduling device according to yet another embodiment of the present application. As shown in Figure 5, in addition to all the modules shown in Figure 4, the device also includes:
  • The release module 40 is configured to schedule the packet whose instructions corresponding to the target thread have been executed out of the processor core, release the thread corresponding to the packet, clear the thread number of the packet from the node of the thread management linked list, and move the other thread numbers stored in the nodes of the thread management linked list forward by one node in sequence.
  • each pipeline corresponds to a main control state machine
  • each thread corresponds to a thread state machine
  • each pipeline transitions between two states (idle and authorized), and each thread transitions among four states (idle, ready, executable, and waiting).
  • each of the above modules can be implemented through software or hardware.
  • This can be implemented in the following manner, but is not limited thereto: the above modules are all located in the same processor; or the above modules are distributed among different processors in any combination.
  • Embodiments of the present application also provide a computer-readable storage medium that stores a computer program, wherein the computer program is configured to execute the steps in any of the above method embodiments when running.
  • The computer-readable storage medium may include, but is not limited to: a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store computer programs.
  • An embodiment of the present application also provides an electronic device, including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • In an exemplary embodiment, when a micro-engine receives a new packet, it first allocates a thread number to the new packet.
  • In the related art, packet priorities are not distinguished; the traditional least recently used (LRU) scheduling algorithm can give the most frequently used threads the highest priority, but when the number of threads is large it cannot guarantee that the packets entering the kernel first are executed and forwarded first.
  • FIG. 6 is a schematic structural diagram of a coarse-grained multi-thread scheduling device according to an embodiment of the present application.
  • the multi-thread scheduling system includes: a thread scheduling module 11, an instruction storage module 12 and a completion scheduling module 13.
  • The thread scheduling module 11 is configured to allocate thread numbers to new packets, divide all threads into thread groups corresponding to the number of pipelines, and, for each thread group, schedule the ready executable threads in the order in which packets entered, fetch the instructions corresponding to the thread from the instruction storage module, and launch the thread into the pipeline corresponding to the thread group for execution; after execution is completed, the completion scheduling module 13 is notified. In this implementation, the thread scheduling module 11 functionally includes the functions of the establishment module 10, the scheduling module 20, and the allocation module 30 of the above embodiments.
  • The instruction storage module 12 is configured to store the instructions used for thread execution, and includes an instruction level-2 cache and an instruction level-1 cache;
  • The completion scheduling module 13 is configured to receive the packet-execution-completed signal sent by the thread scheduling module 11, schedule the corresponding packet out of the kernel, and release the thread number information; in this implementation, the completion scheduling module 13 is functionally equivalent to the release module 40 of the above embodiments.
  • In this embodiment, a thread management linked list is introduced to manage the thread number information corresponding to the packet entry order, and the earliest-arrived ready executable thread is scheduled from each thread group and launched into the corresponding pipeline for execution; each thread group corresponds to one thread management linked list, and the number of linked list nodes equals the number of threads contained in each thread group;
  • Figure 7 is a schematic diagram of the correspondence between threads and thread management linked lists according to an embodiment of the present application. As shown in Figure 7, 20 threads are divided into 2 thread groups, and each thread group contains 10 threads.
  • The corresponding thread management linked list has 10 nodes, node0 to node9, which are used to store the thread numbers assigned to incoming packets.
  • As packets enter, the thread number information they carry is stored into nodes node0 to node9 from left to right. There is a layer of mapping between the thread management linked list nodes and the thread numbers, which can be maintained through a bitmap; each thread management linked list has 10 bitmap values corresponding to its nodes. According to the bitmap values and the readiness status (rdy) of each thread in the group, the launch request corresponding to each node is calculated, and the threads with launch requests participate in strict-priority scheduling (SP), so that the ready executable thread whose packet entered first is authorized and launched into the pipeline corresponding to the thread group.
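  • The grant computation described here (combine the node bitmap with each thread's rdy status, then apply strict priority in packet-entry order) might look like the following sketch, reusing the illustrative thread_group_t above; grant_thread and the per-thread rdy bitmap encoding are assumptions, with thread numbers taken as group-local indices.

```c
/* Strict-priority (SP) grant: walk the nodes in arrival order and grant
 * the first occupied node whose thread is ready. rdy has one bit per
 * group-local thread number. Returns the thread number, or -1. */
static int grant_thread(const thread_group_t *g, uint16_t rdy)
{
    for (int i = 0; i < THREADS_PER_GROUP; i++) {
        if (!(g->valid_bitmap & (1u << i)))
            continue;                 /* empty node: no launch request */
        uint8_t t = g->node[i];
        if (rdy & (1u << t))
            return t;                 /* earliest-arrived ready thread */
    }
    return -1;                        /* nothing ready to launch */
}
```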
  • After execution completes, packets are scheduled out of the kernel and the corresponding threads are released: the corresponding thread number information is looked up in the thread management linked list, the thread number information of the matching node is cleared, and the thread number information stored in all nodes to the right of the matching node is shifted one node to the left.
  • In this embodiment, the above two thread groups correspond to two pipelines respectively; the number of threads in each thread group may be 10 or any other number; and the pipeline may be divided into five stages or another number of stages (e.g., seven).
  • the thread scheduling module 11 can also control the state transition of each thread.
  • Each pipeline corresponds to a main control state machine, and each thread corresponds to a thread state machine.
  • The specific transitions include the following steps:
  • the main control state machine when the main control state machine is in the idle (IDLE) state, it means that new packets are allowed to enter.
  • After a new packet enters the kernel, the corresponding thread is first in the idle state, and transitions to the ready (rdy) state once its thread number is stored in the thread management linked list and its instruction fetch has been issued;
  • The authorized thread transitions from the rdy state to the running (exe) state; only one thread in each thread group can be authorized at a time;
  • In this way, the two thread groups can each schedule the executable thread whose packet entered first from their respective groups and launch it into the corresponding pipeline (pipeline 0 or pipeline 1) for execution;
  • After the packet-send instruction is executed, the corresponding thread transitions from the exe state to the idle state, the packet is scheduled out of the kernel, the thread number information of the matching node in the thread management linked list is looked up and deleted, and the corresponding thread number is released;
  • When a thread in the exe state encounters, during instruction execution, a data dependency, a table lookup, or an instruction re-fetch requiring a long wait, it transitions to the wait state, and GRANT authorizes, among the remaining threads in the rdy state, the one whose packet entered first to enter the exe state. After the waiting thread's data wait ends, the table lookup result is returned, or the re-fetched instruction is returned, it transitions from the wait state back to the rdy state; since the thread management linked list preserves the packet entry order, once the thread currently in the exe state transitions to another state, the thread that moved from wait to rdy can still receive priority scheduling, until it completes the packet-send instruction and transitions from the exe state to the idle state;
  • After releasing the corresponding thread number, the main control state machine enters the IDLE state.
  • Figure 9 is a flowchart of coarse-grained multi-thread scheduling according to an embodiment of the present application. As shown in Figure 9, when a new packet enters, it is first determined whether any thread in the thread group is in the IDLE state. If not, the new packet must wait until an idle thread is available for allocation; if so, a thread i is selected from the idle threads and assigned to the incoming packet, and an instruction fetch request is sent to the instruction storage module. Thread i then transitions from the IDLE state to the rdy state. The threads in the rdy state within the same thread group undergo SP scheduling (GRANT), so that the thread whose packet entered first obtains the grant GRANTi; after being granted, thread i transitions to the exe state. When thread i in the exe state encounters an instruction with a data dependency, a table lookup whose returned data is depended on, or an instruction re-fetch requiring a long wait, it transitions to the wait state; after the data wait ends, the table lookup data is returned, or the re-fetched instruction is returned, thread i transitions from the wait state back to the rdy state. Because SP scheduling follows the packet entry order, once the thread currently in the exe state transitions to another state, thread i can still receive priority scheduling until it completes the packet-send instruction, transitions to the idle state, and is released.
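  • Tying the flowchart together, one scheduling tick might be sketched as below, reusing the illustrative types and helpers from the earlier sketches. As simplifying assumptions, thread numbers are group-local, fetch latency is ignored (a newly admitted thread becomes ready immediately), and schedule_tick is a hypothetical name.

```c
/* One tick of the Figure 9 flow: admit a new packet if an idle thread
 * exists, then grant the earliest-arrived ready thread, honoring the
 * rule that only one thread per group may be authorized at a time. */
static void schedule_tick(thread_group_t *g, thread_state_t st[],
                          uint16_t *rdy, bool new_packet)
{
    if (new_packet) {
        for (int t = 0; t < THREADS_PER_GROUP; t++) {
            if (st[t] == T_IDLE) {
                enqueue_thread(g, (uint8_t)t); /* record arrival order */
                /* an instruction fetch request would be issued here */
                st[t] = T_RDY;                 /* idle -> rdy */
                *rdy |= (1u << t);
                break;
            }
        } /* no idle thread: the packet waits outside the core */
    }

    for (int k = 0; k < THREADS_PER_GROUP; k++)
        if (st[k] == T_EXE)
            return;            /* pipeline already held by this group */

    int t = grant_thread(g, *rdy);             /* SP grant (GRANT) */
    if (t >= 0) {
        st[t] = T_EXE;                         /* rdy -> exe: launch */
        *rdy &= ~(1u << (unsigned)t);
    }
}
```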
  • the main control state machine is in the IDLE state, indicating that new packets are allowed to enter.
  • After a new packet enters the kernel, the corresponding thread is first in the idle state; an instruction fetch request is sent to the instruction storage module, and, according to the assigned thread number, the correspondence shown in Figure 7 is maintained by storing the thread number information in the corresponding node of the thread management linked list. The corresponding thread then transitions from the idle state to the rdy state. The threads in the rdy state within the same thread group undergo SP scheduling (GRANT) in the packet-entry order obtained from the thread management linked list node mapping, so that the thread whose packet entered first is authorized.
  • The authorized thread transitions from the rdy state to the exe state; only one thread in each thread group can be authorized at a time. In this way, the two thread groups can each schedule the executable thread whose packet entered first from their respective groups and launch it into the corresponding pipeline (pipeline 0 or pipeline 1) for execution.
  • After the thread in the exe state executes the packet-send instruction, the corresponding thread transitions from the exe state to the idle state, the packet is scheduled out of the kernel, the thread number information of the matching node in the thread management linked list is looked up and deleted, and the corresponding thread number is released.
  • When the thread in the exe state encounters, during instruction execution, an instruction with a data dependency, a table lookup whose returned data is depended on, or an instruction re-fetch requiring a long wait, the corresponding thread transitions from the exe state to the wait state, and GRANT authorizes, among the remaining threads in the rdy state, the one whose packet entered first to enter the exe state, until the thread that previously entered the wait state completes its data wait, its table lookup data is returned, or its re-fetched instruction is returned, whereupon it transitions from the wait state back to the rdy state. Because the thread management linked list preserves the packet entry order, once the thread currently in the exe state transitions to another state, the thread that moved from the wait state to the rdy state can still receive priority scheduling until it completes the packet-send instruction; the thread then transitions from the exe state to the idle state, and the main control state machine enters the IDLE state.
  • In summary, the coarse-grained multi-thread scheduling method ensures that the packets entering the kernel first are scheduled first, and switches to other threads only when a costly stall occurs (such as an instruction re-fetch or a table lookup), greatly reducing the chance of slowing down any packet's execution and reducing packet execution delay.
  • Obviously, those skilled in the art should understand that the above modules or steps of the present application can be implemented using general-purpose computing devices; they can be concentrated on a single computing device or distributed across a network composed of multiple computing devices; and they may be implemented in program code executable by a computing device, so that they may be stored in a storage device for execution by the computing device, and in some cases the steps may be executed in a sequence different from that shown or described herein. Alternatively, they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. As such, the application is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application relate to a multi-thread scheduling method and device. The method comprises: after each packet enters a processor core, sequentially storing the thread number carried by each packet into a thread management linked list corresponding to the thread group to which the packet belongs, and establishing a mapping relationship between the thread number and a node of the thread management linked list; and, according to the mapping relationship and the state of a thread state machine corresponding to each thread, scheduling a target thread in an executable state from the thread group in the order in which packets entered the processor core, and inputting the target thread into a pipeline corresponding to the target thread.
PCT/CN2023/087477 2022-06-27 2023-04-11 Multi-thread scheduling method and device WO2024001411A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210738293.8 2022-06-27
CN202210738293.8A CN117331655A (zh) 2022-06-27 2022-06-27 Multi-thread scheduling method and device

Publications (1)

Publication Number Publication Date
WO2024001411A1 (fr)

Family

ID=89294062

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087477 WO2024001411A1 (fr) 2022-06-27 2023-04-11 Multi-thread scheduling method and device

Country Status (2)

Country Link
CN (1) CN117331655A (fr)
WO (1) WO2024001411A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118069071B (zh) * 2024-04-19 2024-08-13 苏州元脑智能科技有限公司 Resource access control method and apparatus, computer device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1582428A (zh) * 2001-11-07 2005-02-16 国际商业机器公司 Method and apparatus for scheduling tasks in a non-uniform memory access computer system
CN104901901A (zh) * 2014-03-07 2015-09-09 深圳市中兴微电子技术有限公司 Micro-engine and packet processing method thereof
US20150347192A1 (en) * 2014-05-29 2015-12-03 Apple Inc. Method and system for scheduling threads for execution
CN109257280A (zh) * 2017-07-14 2019-01-22 深圳市中兴微电子技术有限公司 Micro-engine and packet processing method thereof


Also Published As

Publication number Publication date
CN117331655A (zh) 2024-01-02


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23829601

Country of ref document: EP

Kind code of ref document: A1