
CN111190712A - Task scheduling method, device, equipment and medium - Google Patents

Task scheduling method, device, equipment and medium

Info

Publication number
CN111190712A
Authority
CN
China
Prior art keywords
task
prediction
allocated
type
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911355725.1A
Other languages
Chinese (zh)
Inventor
单亚峰
王征
颜亚军
李超明
翁黄硕羽
李新阳
王少康
陈宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Infervision Technology Co Ltd
Infervision Co Ltd
Original Assignee
Infervision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Co Ltd filed Critical Infervision Co Ltd
Priority to CN201911355725.1A priority Critical patent/CN111190712A/en
Publication of CN111190712A publication Critical patent/CN111190712A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention discloses a task scheduling method, device, equipment and medium, wherein the method comprises the following steps: when a task scheduling instruction is triggered, acquiring task information of the prediction tasks to be allocated and processor information of the graphics processor to be allocated; selecting a prediction task to be allocated as a target prediction task according to the task information and the processor information; and acquiring a target task type corresponding to the target prediction task, and taking the prediction tasks to be allocated that correspond to the target task type as the target execution tasks of the graphics processor to be allocated. According to the task scheduling method provided by the embodiment of the invention, tasks of the same type are allocated to each graphics processor according to the task state when graphics processor resources are limited, so that multiple different model predictions are realized under limited graphics processor hardware resources, the number of times the graphics processors are started and stopped is reduced, and the hardware utilization rate of the graphics processors is improved.

Description

Task scheduling method, device, equipment and medium
Technical Field
The embodiment of the invention relates to the field of data processing, and in particular to a task scheduling method, device, equipment and medium.
Background
With the continuous progress of Graphics Processing Unit (GPU) technology, the GPU has become one of the most important acceleration components in computing systems. At present, learning prediction tasks based on deep learning technology, for example, neural network model prediction tasks for disease analysis, generally realize hardware acceleration on a GPU. However, GPUs are expensive and their resources are limited: a single GPU device carries from several GB to several tens of GB of memory, while a common deep learning model occupies from tens of MB to hundreds of MB of memory. How to realize multiple different model predictions under such limited GPU hardware resources is a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides a task scheduling method, device, equipment and medium, which are used to realize multiple different model predictions under limited GPU hardware resources and to improve the hardware utilization rate of the GPU.
In a first aspect, an embodiment of the present invention provides a task scheduling method, including:
when the task scheduling instruction is triggered, acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor;
selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and acquiring a target task type corresponding to the target prediction task, and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
In a second aspect, an embodiment of the present invention further provides a task scheduling apparatus, including:
the task information acquisition module is used for acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor when a task scheduling instruction is triggered;
the target task determination module is used for selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and the prediction task allocation module is used for acquiring a target task type corresponding to the target prediction task and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the task scheduling method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the task scheduling method provided in any embodiment of the present invention.
In the embodiment of the invention, when the task scheduling instruction is triggered, the task information of the prediction tasks to be allocated and the processor information of the graphics processor to be allocated are acquired; a prediction task to be allocated is selected as the target prediction task according to the task information and the processor information; and the target task type corresponding to the target prediction task is acquired, and the prediction tasks to be allocated that correspond to the target task type are taken as the target execution tasks of the graphics processor to be allocated. In this way, tasks of the same type are allocated to each GPU according to the task state when GPU resources are limited, so that multiple different model predictions are realized under limited GPU hardware resources, the number of times the GPUs are started and stopped is reduced, and the hardware utilization rate of the GPUs is improved.
Drawings
Fig. 1 is a flowchart of a task scheduling method according to an embodiment of the present invention;
fig. 2a is a flowchart of a task scheduling method according to a second embodiment of the present invention;
FIG. 2b is a flowchart of a task scheduling algorithm according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a task scheduling device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a task scheduling method according to an embodiment of the present invention. This embodiment is applicable to scheduling a plurality of deep learning prediction tasks so that they can be reasonably executed by a limited number of GPUs. The method may be performed by a task scheduling apparatus, which may be implemented in software and/or hardware and may, for example, be configured in a computer device. As shown in fig. 1, the method includes:
s110, when the task scheduling instruction is triggered, acquiring task information of the to-be-allocated prediction task and processor information of the to-be-allocated graphics processor.
In this embodiment, when the task scheduling instruction is triggered, the tasks to be predicted are scheduled, and the same type of prediction tasks are allocated to the same graphics processor, so that the tasks to be predicted are reasonably allocated. Optionally, when the task scheduling instruction is triggered, task information of the current to-be-allocated prediction task and processor information of the to-be-allocated graphics processor are obtained, so as to perform task scheduling according to the task information and the processor information. The task information may be information such as a task type of the prediction task to be allocated (e.g., a task prediction model corresponding to the prediction task to be allocated), a task resource requirement of the prediction task to be allocated, and a task execution time of the prediction task to be allocated. The processor information of the graphics processor to be allocated may include hardware resources of the graphics processor to be allocated, such as the number of the graphics processors to be allocated.
In an embodiment of the present invention, the task scheduling instruction is triggered, and includes: when a set task scheduling period is reached, the task scheduling instruction is triggered; and/or when a newly-built task to be predicted exists in the unassigned task type, triggering the task scheduling instruction; and/or when the allocated task type does not have a task to be predicted, the task scheduling instruction is triggered.
In general, not all task prediction models have tasks to predict at all times; in extreme cases, one task type may receive very few prediction requests during a day while another receives a great many. For example, there may be only tens of CT image prediction requests in a day while most requests are DR prediction requests. In order to make the distribution of GPUs more reasonable and improve the hardware use efficiency of the GPUs, a task scheduling period can be preset according to the prediction pattern of the prediction tasks, and when the set task scheduling period is reached, a task scheduling instruction is triggered to reschedule the current tasks to be predicted. For example, the task scheduling period may be set to 24 hours, so that the current tasks to be predicted are rescheduled every 24 hours. In addition, when a newly created task to be predicted exists in an unassigned task type, a task scheduling instruction is triggered, so that the newly created task can be allocated to a GPU and predicted in time. And when an allocated task type has no task to be predicted left, a task scheduling instruction is triggered, so that when a GPU changes from the task execution state to the idle state it is reallocated tasks to be predicted, improving the hardware use efficiency of the GPU.
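As a concrete illustration only, the three trigger conditions can be folded into a single check that a scheduler loop runs repeatedly. The following Python sketch is an assumption about how such a check might look; the helper `pending_count` and the constant `SCHEDULING_PERIOD` are hypothetical names chosen here, not terms from the patent:

```python
import time

SCHEDULING_PERIOD = 24 * 60 * 60  # hypothetical 24-hour scheduling period, in seconds

def should_trigger_scheduling(last_run, unassigned_types, assigned_types, pending_count):
    """Return True when any of the three trigger conditions described above holds.

    unassigned_types / assigned_types: task types without / with GPUs assigned.
    pending_count: maps a task type to its number of unexecuted prediction tasks.
    """
    # Condition 1: the set task scheduling period has been reached.
    if time.time() - last_run >= SCHEDULING_PERIOD:
        return True
    # Condition 2: a newly created task to be predicted exists in an unassigned type.
    if any(pending_count[t] > 0 for t in unassigned_types):
        return True
    # Condition 3: an assigned task type has run out of tasks, leaving its GPU idle.
    if any(pending_count[t] == 0 for t in assigned_types):
        return True
    return False
```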
S120, selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information.
In this embodiment, after the task information of the prediction tasks to be allocated and the processor information of the graphics processor to be allocated are obtained, one of the prediction tasks to be allocated is selected as the target prediction task of the graphics processor to be allocated according to that task information and processor information.
Optionally, for each prediction task to be allocated, whether the task can be executed is determined according to the task information of the task and the processor information of the graphics processor to be allocated. An executable prediction task may be selected at random from the prediction tasks to be allocated and used as the target prediction task; alternatively, whether each prediction task can be executed may be judged one by one in a set order, and the first executable prediction task taken as the target prediction task. The set order is not limited here: the prediction tasks may be ordered by their task execution time, by their task creation time, or by their task resource requirements. Ordering by task execution time lets prediction tasks with short execution times be executed first; ordering by task creation time lets prediction tasks created earliest be executed first; and ordering by task resource requirement lets prediction tasks with low resource demands be executed first.
In an embodiment of the present invention, the task information includes the task creation time of the prediction tasks to be allocated and the task resource requirements of the prediction tasks to be allocated, the processor information includes the hardware resources of the graphics processor to be allocated, and the selecting a prediction task to be allocated as the target prediction task according to the task information and the processor information includes: sorting the prediction tasks to be allocated in ascending order of task creation time; and performing resource matching on the prediction tasks in that order, taking the first prediction task whose task resource demand is not higher than the hardware resources as the target prediction task.
In one embodiment, in order to ensure that the task created first is predicted as early as possible under the condition that GPU resources are limited, the prediction tasks to be allocated may be sorted by task creation time and resource-matched one by one in that order; whether each task can be executed is judged, and the first executable prediction task is taken as the target prediction task. Specifically, the step of judging whether a prediction task to be allocated can be executed may be: judging whether the task resource demand of the prediction task is not higher than the hardware resources of the graphics processor to be allocated. If the task resource demand is not higher than the hardware resources, the prediction task can be executed; if the task resource demand is higher than the hardware resources, the prediction task cannot be executed. For example, if the task resource requirement of a prediction task to be allocated is 1 GPU and the hardware resources of the graphics processor to be allocated are 2 GPUs, it is determined that the task resource demand is not higher than the hardware resources, and the prediction task can be executed.
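For illustration only, the ascending sort by creation time followed by first-fit resource matching might be sketched as follows. The field names `create_time` and `gpus_needed` are assumptions of this sketch, not terms defined by the patent:

```python
def select_target_task(pending_tasks, free_gpus):
    """First-fit selection: the earliest-created task whose GPU demand fits.

    pending_tasks: list of dicts with 'create_time' and 'gpus_needed' keys.
    free_gpus: number of graphics processors currently available for allocation.
    Returns the target prediction task, or None when no pending task fits.
    """
    # Ascending sort by task creation time, so the earliest task is tried first.
    for task in sorted(pending_tasks, key=lambda t: t["create_time"]):
        # Resource matching: demand must not be higher than the hardware resources.
        if task["gpus_needed"] <= free_gpus:
            return task
    return None
```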
S130, acquiring a target task type corresponding to the target prediction task, and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
Generally, the task resource requirements of prediction tasks of the same task type are the same. In order to reduce the number of times the GPU is started and stopped, the target task type corresponding to the target prediction task may be used as the task type to be executed by the to-be-allocated graphics processor, and the to-be-allocated prediction tasks corresponding to the target task type may be used as the target execution tasks of the to-be-allocated graphics processor.
Optionally, if the hardware resources of the graphics processor to be allocated are higher than the task resource demand of the prediction task to be allocated, graphics processors whose hardware resources equal that task resource demand are allocated to the prediction task, the remaining graphics processors to be allocated are then rescheduled, and other prediction tasks to be allocated are reallocated to them, until every graphics processor to be allocated has been assigned a corresponding prediction task, or until all prediction tasks to be allocated have been distributed to corresponding graphics processors. For example, if the graphics processors to be allocated are 2 GPUs and the task resource requirement of the prediction task to be allocated is 1 GPU, 1 of the 2 GPUs is used as the task execution GPU of the task type corresponding to that prediction task, and other prediction tasks to be allocated are reallocated to the remaining 1 GPU.
The process of allocating the other prediction tasks to the remaining graphics processors may be: sorting the other prediction tasks to be allocated in ascending order of task creation time, performing resource matching on them in that order, and taking the first of them whose task resource demand is not higher than the hardware resources as the remaining target prediction task of the remaining graphics processor; then acquiring the remaining target task type corresponding to that task, and taking the prediction tasks to be allocated of that type as the remaining target execution tasks of the remaining graphics processor. The ascending order may be obtained by re-sorting the other prediction tasks by creation time, or by deleting the already-allocated prediction tasks from the front of the earlier ordering. A loop of this kind is sketched below; for more specific technical details, reference may be made to the description above, which is not repeated here.
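The reallocation described in the last two paragraphs can be pictured as repeatedly picking a target task, reserving GPUs for its whole task type, and rescheduling the rest. A minimal sketch under the same assumptions, reusing the `select_target_task` helper from the previous sketch and an assumed `type` field:

```python
def schedule_round(pending_tasks, free_gpus):
    """Group pending prediction tasks onto GPUs, one task type at a time.

    Returns a plan mapping task type -> GPUs reserved for it, plus the
    number of GPUs still unallocated when the loop stops.
    """
    plan = {}
    tasks = list(pending_tasks)
    while tasks and free_gpus > 0:
        target = select_target_task(tasks, free_gpus)
        if target is None:  # nothing that remains fits the remaining GPUs
            break
        task_type = target["type"]
        # All pending tasks of the target type become execution tasks of this
        # GPU group, so they leave the pool before the next iteration.
        plan[task_type] = target["gpus_needed"]
        free_gpus -= target["gpus_needed"]
        tasks = [t for t in tasks if t["type"] != task_type]
    return plan, free_gpus
```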
The embodiment of the invention acquires the task information of the predicted task to be allocated and the processor information of the graphics processor to be allocated when the task scheduling instruction is triggered; selecting a to-be-distributed prediction task as a target prediction task according to the task information and the processor information; the method comprises the steps of obtaining a target task type corresponding to a target prediction task, taking a to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to a to-be-allocated graphics processor, and allocating the same type of to-be-predicted task to each GPU under the condition that GPU resources are limited according to task states, so that under the condition of limited GPU hardware resources, multiple different model predictions are realized, the starting and stopping times of the GPUs are reduced, and the hardware utilization rate of the GPUs is improved.
On the basis of the scheme, the method further comprises the following steps: when no prediction task to be allocated exists, judging whether an idle graphics processor exists; if an idle graphics processor exists, obtaining the model prediction time corresponding to each task type, and sorting the task types in reverse order of model prediction time; and allocating idle prediction tasks to the idle graphics processor according to the type sorting order.
In this embodiment, in order to ensure that all tasks to be predicted are completed as nearly synchronously as possible when graphics processor hardware resources are sufficient, after all tasks to be predicted have been allocated to corresponding graphics processors, tasks belonging to models with longer execution times are additionally allocated to idle graphics processors, so as to shorten the prediction time of those task types. Specifically, when no prediction task to be allocated remains and an idle graphics processor exists, the model prediction time required by each task type to finish executing its current tasks is obtained, and the task types are sorted in reverse order of model prediction time. Whether the tasks to be predicted of each task type can be executed by the idle graphics processor is judged in that order; the first executable task type is taken as the idle task type of the idle graphics processor, and the tasks to be predicted corresponding to that type are taken as the idle prediction tasks of the idle graphics processor.
In an embodiment of the present invention, the allocating idle prediction tasks to the idle graphics processor according to the type sorting order includes: performing resource matching on the task types in the type sorting order, and taking the first task type whose task resource demand is not higher than the hardware resources of the idle graphics processor as the idle task type corresponding to the idle graphics processor; and taking the tasks to be predicted corresponding to the idle task type as the idle prediction tasks corresponding to the idle graphics processor.
Specifically, the step of judging whether the tasks to be predicted of a task type can be executed by the idle graphics processor may be: judging whether the task resource demand of the tasks to be predicted of that type is not higher than the hardware resources of the idle graphics processor. If the task resource demand is not higher than the hardware resources of the idle graphics processor, the tasks of that type can be executed by the idle graphics processor; if the task resource demand is higher than the hardware resources of the idle graphics processor, the tasks cannot be executed by it. For example, if the task resource requirement of the tasks to be predicted of a task type is 2 GPUs and the hardware resources of the idle graphics processor are 2 GPUs, it is determined that the task resource demand is not higher than the hardware resources of the idle graphics processor, and the tasks of that type can be executed by the idle graphics processor.
On the basis of the scheme, if the hardware resources of the idle graphics processor are higher than the task resource demand of the tasks to be predicted of a task type, idle graphics processors whose hardware resources equal that task resource demand are allocated to the tasks of that type, the remaining idle graphics processors are then rescheduled, and tasks to be predicted of other task types are reallocated to them, until every idle graphics processor has been allocated the tasks of a corresponding task type. For example, if the idle graphics processors are 2 GPUs and the task resource requirement of the task type is 1 GPU, 1 of the idle GPUs is used as the task execution GPU of the task type, and tasks to be predicted are reallocated to the remaining 1 idle GPU.
The reallocating of tasks to be predicted to the remaining idle graphics processors may be: calculating the model prediction time corresponding to each task type, sorting the task types in reverse order of model prediction time, performing resource matching on the task types in that order, and taking the first task type whose task resource demand is not higher than the hardware resources of the remaining idle graphics processor as the remaining idle task type of that processor; the tasks to be predicted of that type are then taken as the idle prediction tasks of the remaining idle graphics processor. For more specific technical details, reference may be made to the description above, which is not repeated here.
In an embodiment of the present invention, the obtaining of the model prediction time corresponding to each task type includes: for each task type, calculating the model prediction time corresponding to the task type according to the number of tasks to be predicted of the task type, the number of instances of the task type, and the single task prediction time of the task type. Optionally, for each task type, the number of tasks to be predicted, the number of instances, and the single task prediction time of the task type are obtained, and the model prediction time required to finish executing the tasks to be predicted of that type is calculated from this information. The number of instances of a task type is the number of graphics processors executing its tasks to be predicted, and the single task prediction time is the time required to execute one task to be predicted. For example, the model prediction time of a task type may be calculated as T = (m × t)/n, where T is the model prediction time of the task type, m is the number of tasks to be predicted of the task type, t is the single task prediction time, and n is the number of instances.
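Expressed as code, the formula and the reverse ordering it feeds might look like the following sketch; the field names `pending`, `instances` and `single_time` are assumptions of this sketch:

```python
def model_prediction_time(pending, instances, single_time):
    """T = (m * t) / n: m pending tasks, t seconds per task, n running instances."""
    return pending * single_time / instances

def order_types_for_idle_gpus(task_types):
    """Reverse-order task types by remaining model prediction time.

    task_types: dicts with 'pending', 'single_time', and 'instances' (>= 1 at
    this stage, since every type already holds at least one GPU instance).
    """
    return sorted(
        task_types,
        key=lambda t: model_prediction_time(t["pending"], t["instances"], t["single_time"]),
        reverse=True,  # the type that would finish last is served first
    )
```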
On the basis of the scheme, the method further comprises: when feedback information corresponding to a prediction task is not received within a set time, marking the prediction task as timed out and restarting the graphics processor corresponding to the task. In this embodiment, in order to ensure that a graphics processor is not blocked by a single task, a timeout processing mechanism may be set for the graphics processor: when a task has run for the set time without feedback information (such as task state information) being returned by the graphics processor, the timeout processing mechanism is triggered, the task state is marked as timed out, the graphics processor is restarted, and the next prediction task is executed after the restart. This prevents a graphics processor from being jammed by a single task and improves its processing efficiency.
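A watchdog of this kind could be sketched as follows. The 600-second limit and the `restart_gpu` callable are placeholders, since the patent leaves the set time and the restart interface open:

```python
import time

TASK_TIMEOUT = 600  # hypothetical set time, in seconds; tune per deployment

def check_timeouts(running_tasks, restart_gpu):
    """Mark overdue tasks as timed out and restart their graphics processor.

    running_tasks: dicts with 'start_time', 'feedback_received', 'state', 'gpu_id'.
    restart_gpu: callable that restarts the given GPU instance so it can take
    the next prediction task.
    """
    now = time.time()
    for task in running_tasks:
        if not task["feedback_received"] and now - task["start_time"] > TASK_TIMEOUT:
            task["state"] = "timeout"    # set the timeout mark
            restart_gpu(task["gpu_id"])  # unblock the instance for the next task
```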
Example two
Fig. 2a is a flowchart of a task scheduling method according to a second embodiment of the present invention. The present embodiment provides a preferred implementation on the basis of the above embodiments. As shown in fig. 2a, the method includes:
s210, GPU hardware resources of each server in the GPU cluster are obtained.
S220, obtaining relevant information of the model prediction tasks.
Specifically, all model categories to be predicted and the GPU hardware resources each requires are obtained. In this embodiment, a model category may be understood as a task type. Illustratively, the model categories may include business model predictions such as a lung nodule prediction model, a fracture prediction model, a cerebral hemorrhage prediction model, a cerebral ischemia prediction model, a mammary gland prediction model, a bone age prediction model, and the like.
S230, acquiring relevant information of the tasks to be predicted at the current moment.
Specifically, the relevant information of each prediction-type task waiting for prediction at the current moment is acquired: the task type, the number of tasks, the maximum prediction time of a single task, and the earliest creation time among the unpredicted tasks of the type.
S240, executing the task scheduling algorithm and applying the new scheduling scheme.
The above information is integrated, and a new model scheduling scheme is calculated and applied according to the task scheduling algorithm.
Fig. 2b is a flowchart of a task scheduling algorithm according to a second embodiment of the present invention. As shown in fig. 2b, the task scheduling algorithm may include:
s2401, creating time ascending sorting according to the model unpredicted tasks earliest.
S2402, setting the number of all model instances to be 0.
S2403, judging whether a model with unpredicted tasks remains in the to-be-predicted model list.
S2404, judging whether the remaining GPUs meet the model requirement.
All models to be predicted are traversed in a loop; if the remaining GPU resources meet the GPU resource requirement of a single instance of a model, the number of instances of that model is set to 1, until the traversal is finished or no GPU resources remain.
S2405, setting the number of the model instances to be 1.
A model meeting the conditions is selected according to the remaining GPU resources, and its number of instances is set to 1.
S2406, judging whether any GPUs remain.
After all models to be predicted have been distributed to corresponding GPUs, whether any GPUs remain is judged. If GPU resources still remain, scheduling continues until none remain; if no GPU resources are left, scheduling stops and a new scheduling scheme is generated.
S2407, calculating the remaining prediction time of each model to be predicted, and sorting the models to be predicted in reverse order of remaining prediction time.
If remaining GPU resources exist, the remaining prediction time of each model is calculated according to the model prediction time, the number of remaining tasks and the current number of instances, and the models are sorted in reverse order of remaining prediction time.
S2408, judging whether the remaining GPUs meet the requirement of a certain model.
Whether the remaining GPUs meet the requirement of some model is judged in the reverse-sorted order.
S2409, adding 1 to the number of corresponding model instances.
If the remaining GPUs meet a certain model's requirement, the number of instances of the corresponding model is increased by 1, and S2406 is executed again to determine whether GPUs remain, until none remain.
S2410, generating a new scheduling scheme.
If the remaining GPUs do not meet the requirement of any model, scheduling stops and a new scheduling scheme is generated.
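Putting steps S2401 through S2410 together, one possible end-to-end sketch of the scheduling algorithm is shown below. It assumes each model is described by the earliest creation time of its unpredicted tasks, its per-instance GPU demand, its pending task count, and its single-task prediction time; these field names, and the tie-breaking implicit in `max`, are this sketch's own choices rather than details fixed by the patent:

```python
def schedule(models, total_gpus):
    """One pass of the Fig. 2b scheduling algorithm; returns {model name: instances}."""
    # S2401: ascending sort by earliest creation time of unpredicted tasks.
    models = sorted(models, key=lambda m: m["earliest_create"])
    # S2402: every model starts with zero instances.
    instances = {m["name"]: 0 for m in models}
    remaining = total_gpus

    # S2403-S2405: give each model with unpredicted tasks one instance,
    # earliest-created first, while GPU resources allow it.
    for m in models:
        if m["pending"] > 0 and m["gpus_per_instance"] <= remaining:
            instances[m["name"]] = 1
            remaining -= m["gpus_per_instance"]

    # S2406-S2409: while GPUs remain, add an instance to the running model
    # with the longest remaining prediction time T = (pending * single_time) / n.
    while remaining > 0:
        candidates = [m for m in models
                      if instances[m["name"]] > 0
                      and m["gpus_per_instance"] <= remaining]
        if not candidates:
            break  # no model's requirement fits the leftover GPUs
        slowest = max(candidates,
                      key=lambda m: m["pending"] * m["single_time"] / instances[m["name"]])
        instances[slowest["name"]] += 1
        remaining -= slowest["gpus_per_instance"]

    # S2410: the new scheduling scheme.
    return instances
```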
S250, after running for the fixed period, entering a new scheduling cycle.
In this embodiment, besides periodic scheduling, scheduling can also be started when a task is created, or when a certain type of prediction task has no new tasks. In addition, when a scheduling scheme is applied, scheduling can be started only after the currently executing prediction task finishes, so that the current prediction task does not fail because of the scheduling. A timeout mechanism can also be added to each task: if a task has been picked up by the corresponding instance and no task completion mark is returned after it has run for a certain time, the task is marked as timed out and the instance is restarted, which prevents the instance from being stuck on a single task. It should be noted that the GPU does not need to be started and stopped on every pass; it greedily keeps running as long as possible, until new scheduling occurs or all tasks of its type have been predicted.
On the whole, the embodiment of the invention periodically schedules different models according to the task state, so that the prediction requirements of multiple different models are met under limited GPU resources. Through periodic scheduling, GPU hardware resources run different models as scheduled, improving the overall GPU resource utilization rate. The scheduling strategy ensures that, under limited GPU resources, the task created first is predicted as early as possible, and that, under sufficient GPU resources, all current prediction tasks are completed as nearly synchronously as possible. Because tasks are not fixed to specific hardware, single-point hardware failures are avoided; and since the design takes a compatible multi-machine scheme into account, hardware resources can be expanded very conveniently.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a task scheduling apparatus according to a third embodiment of the present invention. The task scheduling apparatus may be implemented in software and/or hardware and may, for example, be configured in a computer device. As shown in fig. 3, the apparatus includes a task information obtaining module 310, a target task determining module 320, and a prediction task allocation module 330, wherein:
a task information obtaining module 310, configured to obtain task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor when a task scheduling instruction is triggered;
the target task determination module 320 is configured to select a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
the prediction task allocation module 330 is configured to obtain a target task type corresponding to the target prediction task, and use the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
In the embodiment of the invention, when a task scheduling instruction is triggered, the task information acquisition module acquires the task information of the prediction tasks to be allocated and the processor information of the graphics processor to be allocated; the target task determination module selects a prediction task to be allocated as the target prediction task according to the task information and the processor information; and the prediction task allocation module acquires the target task type corresponding to the target prediction task and takes the prediction tasks to be allocated that correspond to the target task type as the target execution tasks of the graphics processor to be allocated. Tasks of the same type are thereby allocated to each GPU according to the task state when GPU resources are limited, so that multiple different model predictions are realized under limited GPU hardware resources, the number of times the GPUs are started and stopped is reduced, and the hardware utilization rate of the GPUs is improved.
Optionally, on the basis of the foregoing scheme, the task information includes task creation time of the to-be-allocated prediction task and task resource requirements of the to-be-allocated prediction task, the processor information includes hardware resources of the to-be-allocated graphics processor, and the target task determining module 320 is specifically configured to:
performing ascending sorting on the to-be-allocated prediction tasks according to the task creation time;
and performing resource matching on the to-be-allocated prediction tasks in the task sorting order, and taking the first to-be-allocated prediction task whose task resource demand is not higher than the hardware resources as the target prediction task.
Optionally, on the basis of the foregoing scheme, the apparatus further includes an idle processor task allocation module, configured to:
when the prediction task to be allocated does not exist, judging whether an idle graphics processor exists or not;
if an idle graphics processor exists, obtaining the model prediction time corresponding to each task type, and sorting the task types in reverse order of the model prediction time;
and allocating idle prediction tasks to the idle graphics processors according to the type sorting sequence.
Optionally, on the basis of the above scheme, the idle processor task allocation module is specifically configured to:
and aiming at each task type, calculating model prediction time corresponding to the task type according to the number of tasks to be predicted corresponding to the task type, the number of instances of the task type and single task prediction time of the task type.
Optionally, on the basis of the above scheme, the idle processor task allocation module is specifically configured to:
performing resource matching on the task types according to the type sorting order, and taking the first task type whose task resource demand is not higher than the hardware resources of the idle graphics processor as the idle task type corresponding to the idle graphics processor;
and taking the task to be predicted corresponding to the idle task type as the idle prediction task corresponding to the idle graphics processor.
Optionally, on the basis of the foregoing scheme, the task information obtaining module 310 is specifically configured to:
when a set task scheduling period is reached, the task scheduling instruction is triggered;
and/or when a newly-built task to be predicted exists in the unassigned task type, triggering the task scheduling instruction;
and/or when the allocated task type does not have a task to be predicted, the task scheduling instruction is triggered.
Optionally, on the basis of the foregoing scheme, the apparatus further includes a timeout processing module, configured to:
and when the feedback information corresponding to the prediction task is not received within the set time, setting the prediction task as a timeout mark, and restarting the graphics processor corresponding to the task to be predicted. .
The task scheduling device provided by the embodiment of the invention can execute the task scheduling method provided by any embodiment, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 412 suitable for use in implementing embodiments of the present invention. The computer device 412 shown in FIG. 4 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 412 is in the form of a general purpose computing device. Components of computer device 412 may include, but are not limited to: one or more processors 416, a system memory 428, and a bus 418 that couples the various system components (including the system memory 428 and the processors 416).
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 412 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)430 and/or cache memory 432. The computer device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The computer device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the computer device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, computer device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) through network adapter 420. As shown, network adapter 420 communicates with the other modules of computer device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 executes various functional applications and data processing by executing programs stored in the system memory 428, for example, to implement a task scheduling method provided by the embodiment of the present invention, the method includes:
when the task scheduling instruction is triggered, acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor;
selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and acquiring a target task type corresponding to the target prediction task, and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
Of course, those skilled in the art can understand that the processor can also implement the technical solution of the task scheduling method provided by any embodiment of the present invention.
EXAMPLE five
The fifth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the task scheduling method provided by the embodiments of the present invention, where the method includes:
when the task scheduling instruction is triggered, acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor;
selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and acquiring a target task type corresponding to the target prediction task, and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the task scheduling method provided by any embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for task scheduling, comprising:
when the task scheduling instruction is triggered, acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor;
selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and acquiring a target task type corresponding to the target prediction task, and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
2. The method according to claim 1, wherein the task information includes task creation time of the predicted task to be allocated and task resource requirements of the predicted task to be allocated, the processor information includes hardware resources of the graphics processor to be allocated, and the selecting a predicted task to be allocated as a target predicted task according to the task information and the processor information includes:
performing ascending sorting on the to-be-allocated prediction tasks according to the task creation time;
and performing resource matching on the to-be-allocated prediction tasks in the task sorting order, and taking the first to-be-allocated prediction task whose task resource demand is not higher than the hardware resources as the target prediction task.
3. The method of claim 1, further comprising:
when the prediction task to be allocated does not exist, judging whether an idle graphics processor exists or not;
if an idle graphics processor exists, obtaining the model prediction time corresponding to each task type, and sorting the task types in reverse order of the model prediction time;
and allocating idle prediction tasks to the idle graphics processors according to the type sorting sequence.
4. The method according to claim 3, wherein the obtaining of the model prediction time corresponding to each task type comprises:
and aiming at each task type, calculating model prediction time corresponding to the task type according to the number of tasks to be predicted corresponding to the task type, the number of instances of the task type and single task prediction time of the task type.
5. The method of claim 3, wherein the allocating idle prediction tasks to the idle graphics processor according to the type ordering comprises:
performing resource matching on the task types according to the type sorting order, and taking the first task type whose task resource demand is not higher than the hardware resources of the idle graphics processor as the idle task type corresponding to the idle graphics processor;
and taking the task to be predicted corresponding to the idle task type as the idle prediction task corresponding to the idle graphics processor.
6. The method of claim 1, wherein the task scheduling instruction is triggered, comprising:
when a set task scheduling period is reached, the task scheduling instruction is triggered;
and/or when a newly-built task to be predicted exists in the unassigned task type, triggering the task scheduling instruction;
and/or when the allocated task type does not have a task to be predicted, the task scheduling instruction is triggered.
7. The method of claim 1, further comprising:
and when the feedback information corresponding to the prediction task is not received within the set time, setting the prediction task as a timeout mark, and restarting the graphics processor corresponding to the task to be predicted. .
8. A task scheduling apparatus, comprising:
the task information acquisition module is used for acquiring task information of a to-be-allocated prediction task and processor information of a to-be-allocated graphics processor when a task scheduling instruction is triggered;
the target task determination module is used for selecting a to-be-allocated prediction task as a target prediction task according to the task information and the processor information;
and the prediction task allocation module is used for acquiring a target task type corresponding to the target prediction task and taking the to-be-allocated prediction task corresponding to the target task type as a target execution task corresponding to the to-be-allocated graphics processor.
9. A computer device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the task scheduling method according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the task scheduling method according to any one of claims 1-7.
CN201911355725.1A 2019-12-25 2019-12-25 Task scheduling method, device, equipment and medium Pending CN111190712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911355725.1A CN111190712A (en) 2019-12-25 2019-12-25 Task scheduling method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN111190712A true CN111190712A (en) 2020-05-22

Family

ID=70705831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911355725.1A Pending CN111190712A (en) 2019-12-25 2019-12-25 Task scheduling method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111190712A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074281A (en) * 2016-04-28 2018-12-21 华为技术有限公司 The distribution method and device of graphic processor task
CN108268318A (en) * 2016-12-30 2018-07-10 华为技术有限公司 A kind of method and apparatus of distributed system task distribution
CN108965364A (en) * 2017-05-22 2018-12-07 杭州海康威视数字技术股份有限公司 Resource allocation method, apparatus and system
CN107247629A (en) * 2017-07-04 2017-10-13 北京百度网讯科技有限公司 Cloud computing system and cloud computing method and device for controlling server
CN109936604A (en) * 2017-12-18 2019-06-25 北京图森未来科技有限公司 A kind of resource regulating method, device and system
CN110494848A (en) * 2018-03-28 2019-11-22 深圳市大疆创新科技有限公司 Task processing method, equipment and machine readable storage medium
CN110162398A (en) * 2019-04-11 2019-08-23 平安科技(深圳)有限公司 A kind of dispatching method, device and the terminal device of diseases analysis model
CN110597626A (en) * 2019-08-23 2019-12-20 第四范式(北京)技术有限公司 Method, device and system for allocating resources and tasks in distributed system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Min, Li Yong: "Software-Defined 5G Networks: Exploring Key Technologies for Intelligent-Service-Oriented 5G Mobile Networks", Huazhong University of Science and Technology Press, pages: 0058 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111984385A (en) * 2020-08-25 2020-11-24 广联达科技股份有限公司 Task scheduling method and task scheduling device based on decorative BIM model
CN112162856A (en) * 2020-09-23 2021-01-01 武汉联影医疗科技有限公司 GPU virtual resource allocation method and device, computer equipment and storage medium
CN113238848A (en) * 2021-05-27 2021-08-10 上海商汤科技开发有限公司 Task scheduling method and device, computer equipment and storage medium
WO2022247105A1 (en) * 2021-05-27 2022-12-01 上海商汤科技开发有限公司 Task scheduling method and apparatus, computer device and storage medium
CN113742059A (en) * 2021-07-15 2021-12-03 上海朋熙半导体有限公司 Task allocation method and device, computer equipment and storage medium
CN113742059B (en) * 2021-07-15 2024-03-29 上海朋熙半导体有限公司 Task allocation method, device, computer equipment and storage medium
TWI782845B (en) * 2022-01-04 2022-11-01 國立高雄大學 Configuration setting prediction system and method for general-purpose graphics processor core functions
CN114356534A (en) * 2022-03-16 2022-04-15 苏州云途半导体有限公司 Processing unit task scheduling method and device
CN115220921A (en) * 2022-09-19 2022-10-21 浙江大华技术股份有限公司 Resource scheduling method, image processor, image pickup device, and medium
CN115220921B (en) * 2022-09-19 2023-01-03 浙江大华技术股份有限公司 Resource scheduling method, image processor, image pickup device, and medium
WO2024082692A1 (en) * 2022-10-21 2024-04-25 华为技术有限公司 Task execution method and heterogeneous server

Similar Documents

Publication Title
CN111190712A (en) Task scheduling method, device, equipment and medium
US11720408B2 (en) Method and system for assigning a virtual machine in virtual GPU enabled systems
CN109117260B (en) Task scheduling method, device, equipment and medium
CN109034396B (en) Method and apparatus for processing deep learning jobs in a distributed cluster
CN107766148B (en) Heterogeneous cluster and task processing method and device
US9277003B2 (en) Automated cloud workload management in a map-reduce environment
US8434085B2 (en) Scalable scheduling of tasks in heterogeneous systems
CN110389816B (en) Method, apparatus and computer readable medium for resource scheduling
CN112667376A (en) Task scheduling processing method and device, computer equipment and storage medium
US10866832B2 (en) Workflow scheduling system, workflow scheduling method, and electronic apparatus
US11816509B2 (en) Workload placement for virtual GPU enabled systems
US11010195B2 (en) K-tier architecture scheduling
CN111176818B (en) Distributed prediction method, device, system, electronic equipment and storage medium
US9423957B2 (en) Adaptive system provisioning
CN113886089B (en) Task processing method, device, system, equipment and medium
US10635492B2 (en) Leveraging shared work to enhance job performance across analytics platforms
CN110287022A (en) A kind of scheduling node selection method, device, storage medium and server
CN111679911A (en) Management method, device, equipment and medium for GPU (graphics processing Unit) card in cloud environment
CN115185697A (en) Cluster resource scheduling method, system, equipment and storage medium based on kubernets
CN114327894A (en) Resource allocation method, device, electronic equipment and storage medium
CA2631255A1 (en) Scalable scheduling of tasks in heterogeneous systems
EP3343370A1 (en) Method of processing opencl kernel and computing device therefor
CN110908791B (en) Scheduling method, scheduling device and scheduling system
CN114661475A (en) Distributed resource scheduling method and device for machine learning
CN110659312B (en) Data processing method, device, equipment and computer storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room B401, 4 / F, building 1, No. 12, shangdixin Road, Haidian District, Beijing 100085
Applicant after: Tuxiang Medical Technology Co.,Ltd.
Address before: Room B401, 4 / F, building 1, No. 12, shangdixin Road, Haidian District, Beijing 100085
Applicant before: INFERVISION
RJ01 Rejection of invention patent application after publication
Application publication date: 20200522