CN117724823A - Task execution method of multi-model workflow description based on declarative semantics - Google Patents
Task execution method of multi-model workflow description based on declarative semantics
Info
- Publication number
- CN117724823A (application CN202410175197.6A)
- Authority
- CN
- China
- Prior art keywords
- task
- target
- model
- executing
- description
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The specification discloses a task execution method for a multi-model workflow description based on declarative semantics. In the method, a target task composed of a plurality of subtasks is determined, together with the target models required for executing the target task, wherein different target models are used for executing different subtasks; a declarative semantic description of the target task and model information of each target model are acquired; each target model is split to obtain a plurality of task units, wherein different task units are used for executing different task contents under a subtask; the resource requirements of each task unit when executing its task content are determined, and resource allocation information for each task unit is determined according to the resource requirements and the declarative semantic description; and each task content is executed by the corresponding task unit according to the resource allocation information to obtain an execution result of the target task.
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a method for performing tasks in a multi-model workflow description based on declarative semantics.
Background
Declarative grammar describes a problem as a series of declarations or rules. The developer only needs to define the goals to be achieved, and the computer then calculates and achieves those goals. With such a grammar, the developer does not need to attend to the underlying implementation: once the goals and rules are defined, the computer can automatically compute and realize them. When executing various intelligent computing tasks, using declarative grammar can greatly optimize the workflow and reduce unnecessary workload.
Today, as the application of AI-based generative models matures, more and more computing tasks in intelligent computing are completed collaboratively by multiple AI models. However, managing and executing such multi-model tasks is often very complex, and using a single node for computation also creates a performance bottleneck. Such tasks therefore need to be decomposed, with task planning, resource allocation, and collaborative execution performed effectively on the decomposed tasks. At present, when existing methods use declarative grammar to describe a computing task, they can only generate general workflow descriptions for general tasks, which hardly reflect the dynamics and complexity of AI models in executing computing tasks.
Therefore, how to better use declarative grammar to describe computing tasks that are performed collaboratively by multiple AI-based models is a pressing problem.
Disclosure of Invention
The present specification provides a task execution method based on declarative semantics for multi-model workflow descriptions that at least partially addresses the above-described problems with the prior art.
The technical scheme adopted in the specification is as follows:
The specification provides a task execution method of a multi-model workflow description based on declarative semantics, comprising:
determining a target task formed by a plurality of subtasks and each target model required for executing the target task, wherein different target models are used for executing different subtasks;
acquiring declarative semantic descriptions of the target tasks and model information of the target models;
splitting each target model to obtain a plurality of task units, wherein different task units are used for executing different task contents under subtasks;
determining resource requirements of each task unit when executing the task content, and determining resource allocation information of each task unit according to the resource requirements and the declarative semantic description;
and executing each task content with each task unit according to the resource allocation information to obtain an execution result of the target task.
Optionally, acquiring the declarative semantic description of the target task specifically includes:
acquiring a task ontology description of the target task, a subtask description of each subtask, a task runtime description of each subtask, and a task peripheral description of each subtask as the declarative semantic description of the target task.
Optionally, the task ontology description at least includes an execution sequence of each subtask, an execution condition of each subtask, and a dependency relationship between the subtasks.
Optionally, the subtask description of each subtask includes at least a task name, a task identification, dependencies, task inputs, task outputs, task parameters, decomposability (whether the task can be further split), and task-supported resources.
Optionally, the task runtime description of each subtask includes at least a task identification, a task computation amount, and a memory access amount.
Optionally, the task peripheral description of each subtask includes at least a task hierarchy, the resources required by the task, and a task scheduling policy.
Optionally, obtaining the model information of the target model specifically includes:
And obtaining a model source code, an auxiliary tool code, model parameters and input data of the target model.
Optionally, splitting each target model to obtain a plurality of task units, which specifically includes:
for each target model, determining the coupling degree between operators required by the target model when the subtasks are executed according to the topological structure of the target model;
and splitting the target model according to the coupling degree to obtain each task unit of the target model.
Optionally, determining the resource requirement of each task unit when executing the task content specifically includes:
and determining the resource requirement of each task unit when executing the task content according to the task content executed by the task unit and the resources required by the task.
Optionally, the resource allocation information at least includes resource availability evaluation, task priority, parallel execution information, and resource allocation policy.
Optionally, executing each task content with each task unit specifically includes:
determining program codes corresponding to the task contents according to the declarative semantic description;
and executing the program code corresponding to each task content with the corresponding task unit.
The present specification provides a task execution device for a multi-model workflow description based on declarative semantics, the device comprising:
the determining module is used for determining a target task formed by a plurality of subtasks and each target model required for executing the target task, wherein different target models are used for executing different subtasks;
the acquisition module is used for acquiring the declarative semantic description of the target task and the model information of the target model;
the splitting module is used for splitting each target model to obtain a plurality of task units, wherein different task units are used for executing different task contents under subtasks;
the distribution module is used for determining the resource requirement of each task unit when executing the task content and determining the resource distribution information of each task unit according to the resource requirement and the declarative semantic description;
and the execution module is used for executing each task content with each task unit according to the resource allocation information to obtain the execution result of the target task.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the task execution method of the above described declarative semantic based multi-model workflow description.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the task execution method of the declarative semantic based multi-model workflow description described above when executing the program.
At least one of the above technical solutions adopted in this specification can achieve the following beneficial effects:
In the task execution method of the multi-model workflow description based on declarative semantics provided by the specification, a target task composed of a plurality of subtasks is determined, together with each target model required for executing the target task, wherein different target models are used for executing different subtasks; a declarative semantic description of the target task and model information of each target model are acquired; each target model is split to obtain a plurality of task units, wherein different task units are used for executing different task contents under a subtask; the resource requirements of each task unit when executing its task content are determined, and resource allocation information for each task unit is determined according to the resource requirements and the declarative semantic description; and each task content is executed by the corresponding task unit according to the resource allocation information to obtain an execution result of the target task.
When the task execution method of the multi-model workflow description based on declarative semantics provided by the specification is used to execute complex intelligent computing tasks that rely on multi-model collaboration, the declarative semantic description and the model information of the target models can be acquired after the target task and the target models for executing it are determined; each target model is split into finer-grained task units, and resource allocation information is determined according to the resource requirements of each task unit; finally, each task unit executes its task content according to the resource allocation information to obtain the execution result of the target task. The method allows a user to declaratively define multiple model tasks; the system accepts the declarative task descriptions and automatically decomposes, schedules, and executes the tasks according to the dependencies and constraints among them, describing the relationship between tasks and models at a high level of abstraction. This provides a flexible and extensible way to manage and execute multi-model tasks through a workflow, so that users can efficiently utilize computing resources, meet performance requirements, and improve task efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate exemplary embodiments of the present specification and, together with their description, serve to explain the specification; they are not intended to limit it unduly. In the drawings:
FIG. 1 is a flow diagram of a method for task execution based on declarative semantics multi-model workflow description in the present specification;
FIG. 2 is a schematic diagram of a task execution device of a declarative semantics-based multi-model workflow description provided herein;
FIG. 3 is a schematic diagram of the electronic device corresponding to FIG. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and the corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments obtained by one of ordinary skill in the art without creative effort, based on the embodiments herein, fall within the scope of protection of the present specification.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
FIG. 1 is a flow chart of the task execution method of the multi-model workflow description based on declarative semantics in the present specification, which specifically includes the following steps:
s100: a target task consisting of a plurality of subtasks is determined, and target models required for executing the target task are determined, wherein different target models are used for executing different subtasks.
All steps in the task execution method of the multi-model workflow description based on declarative semantics provided in the present specification can be implemented by any electronic device having a computing function, such as a terminal or a server.
The task execution method of the declarative semantic-based multi-model workflow description provided by the specification is mainly used for executing complex tasks which are dependent on multi-model collaboration completion. Based on this, in this step, the target task to be completed may be determined first. In complex cases where the target task relies on multi-model collaboration, the target task may typically be composed of several subtasks. In the case of determining each subtask that needs to be completed, the required target model may be determined accordingly. Typically, different object models may be used to perform different subtasks.
For example, suppose a user needs to describe the number of specified objects present in an image. This task may consist of two subtasks and accordingly requires two different target models to complete: the first subtask may be generating a language description of the image, which can be accomplished using a large language model; the second subtask may be counting the number of specified objects in the image, which can be accomplished using a convolutional neural network model.
S102: and acquiring declarative semantic descriptions of the target tasks and model information of the target models.
After determining the target task and the target model required for executing the target task in step S100, the declarative semantic description of the target task and the model information of each target model may be further determined in this step.
Unlike a conventional general description, in the method, when acquiring the declarative semantic description of the target task, the task ontology description of the target task, the subtask description of each subtask, the task runtime description of each subtask, and the task peripheral description of each subtask may be acquired together as the declarative semantic description of the target task.
In the method, a complex target task relying on multiple models is described from coarse granularity to fine granularity through the declarative semantic description, specifically along four dimensions: task ontology description, subtask description, task runtime description, and task peripheral description.
The task ontology description may include at least the execution order of the subtasks, the execution conditions of the subtasks, and the dependency relationships between them. It may be used to describe the dependencies between subtasks under a target task so as to specify the execution order and conditions of the subtasks. Depending on the execution order, there may be parallel, serial, and other structures between subtasks. In abstract terms, subtasks communicate through data flows; if subtasks are taken as nodes and data flows as edges, the subtasks in the task ontology description can be abstracted into a directed graph structure. The multi-model declarative semantics define the dependencies between subtasks as:
(1) Serial: expressed as <dependent task name A> && <dependent task name B>;
(2) Parallel: expressed as <dependent task name A> || <dependent task name B>;
(3) Loop: expressed as always@(condition) { <dependent task name A> };
(4) Branch judgment: expressed as if (condition) <dependent task name A> else <dependent task name B>.
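As an illustration, the serial and parallel forms above can be abstracted into the directed graph of subtasks the ontology description implies. The sketch below is not taken from the patent; the class and method names (`TaskGraph`, `serial`, `parallel`) and the subtask names are hypothetical:

```python
from collections import defaultdict

class TaskGraph:
    """Hypothetical container: subtasks are nodes, data flows are edges."""
    def __init__(self):
        self.edges = defaultdict(set)  # task -> tasks that must run after it

    def serial(self, a, b):
        # <A> && <B>: B runs after A completes, so add edge A -> B
        self.edges[a].add(b)

    def parallel(self, a, b):
        # <A> || <B>: no edge; the two subtasks may run concurrently
        self.edges[a]
        self.edges[b]  # just register both nodes

    def successors(self, task):
        return sorted(self.edges[task])

g = TaskGraph()
g.serial("caption", "count")     # caption && count
g.parallel("caption", "resize")  # caption || resize
print(g.successors("caption"))   # -> ['count']
```

Loops and branch judgments would add conditional edges on top of this basic structure.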
The subtask description of each subtask may include at least a task name, a task identification, dependencies, task inputs, task outputs, task parameters, decomposability, and task-supported resources. Each subtask can be decomposed further, splitting the subtask into multiple task contents at a finer granularity. Similarly, there are data flows between task contents, which can also be represented abstractly as a directed graph structure. In other words, task content has the same properties as a subtask, only at a finer granularity, recursively expanding subtasks along another dimension. A task content can be seen as the result of splitting a larger subtask into several smaller subtasks; a subtask can be seen as a larger task content consisting of several smaller task contents. A subtask may be described as follows:
{
    'task': 'Module_0',  # task name
    'id': '0-0-0',  # unique task identifier; the 0-0-0 format characterizes the subdivision granularity of the task
    'dep': [-1],  # task dependencies
    'input_desc': {'loc': ['/home/data/'], 'dim': [[3, 224, 224]], 'unit': ['float32']},  # task input description: data location, dimensions, and unit
    'output_desc': {'loc': ['<resource-0-1>'], 'dim': [[1, 128, 128]], 'unit': ['float32']},  # task output description
    'args': {'module_specific': ['value']},  # task parameters
    'atomic': True/False,  # boolean indicating whether the task can be further subdivided
    'supported': ['resource_id_0', 'resource_id_1', 'resource_id_2']  # resources supported by the task
}
The above is one set of descriptions for a piece of task content into which a subtask can be split; each subtask may have multiple such sets, each describing a different task content under that subtask. Of course, since subtasks share the same attributes as task contents and differ only in granularity, a subtask itself can also be described as task content in the above manner. It should be noted that all specific values in the subtask description example above are given merely to make one possible embodiment easier to understand, and may be set according to specific requirements in practical applications.
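A minimal sketch of checking such a subtask description for completeness, as the field supplementation stage described later would do. The required-field set follows the example above; the helper name and validation logic are assumptions:

```python
# Field names taken from the subtask description example above; the helper is illustrative.
REQUIRED_FIELDS = {"task", "id", "dep", "input_desc", "output_desc",
                   "args", "atomic", "supported"}

def missing_fields(desc: dict) -> list:
    """Return the required fields absent from a subtask description."""
    return sorted(REQUIRED_FIELDS - desc.keys())

desc = {
    "task": "Module_0",
    "id": "0-0-0",
    "dep": [-1],
    "input_desc": {"loc": ["/home/data/"], "dim": [[3, 224, 224]], "unit": ["float32"]},
    "output_desc": {"loc": ["<resource-0-1>"], "dim": [[1, 128, 128]], "unit": ["float32"]},
    "args": {"module_specific": ["value"]},
    "atomic": True,
    "supported": ["resource_id_0", "resource_id_1", "resource_id_2"],
}
print(missing_fields(desc))             # -> []
print(missing_fields({"id": "0-0-0"}))  # lists the seven other required fields
```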
The task runtime description of each subtask may include at least a task identification, a task computation amount, and a memory access amount. In the method, target tasks, subtasks, and task contents can be regarded as concepts with the same attributes at different granularities: the target task is the coarsest and the task content the finest; subtasks are obtained by splitting the target task, and task contents by splitting subtasks. It should be noted that a task content at the finest split granularity must be executable on its own, otherwise the split cannot be performed. The format of the task identification can be understood as the hierarchy "target task - subtask - task content". The task runtime description mainly covers the performance parameters of a target task, subtask, or task content during execution; it expands the task's form at runtime and includes, for example, the computation amount of a convolution layer, the memory access amount, parallelism, and data transmission volume. A task may be described at runtime as follows:
{
    'id': '0-0-0',  # task identification
    'arithmetical_intensity': 10 FLOPs,  # computation amount; for a convolution layer it is computed as M^2 · K^2 · C_in · C_out
    'memory_intensity': 200 Byte,  # memory access amount: the sum of the memory occupied by the weight parameters of each model layer and by the feature map output by each layer
}
In the computation-amount formula, M represents the size of the convolution kernel, i.e., its height and width; K represents the number of channels per convolution kernel, i.e., the kernel depth; C_in represents the number of channels of the input data, i.e., the depth of the input image or of the previous layer's output; C_out represents the number of channels of the output data, i.e., the number of neurons in the convolution layer. The meaning of the formula is that, for one convolution layer, the computation amount depends on the size of the convolution kernel, the number of channels per kernel, the number of channels of the input data, and the number of channels of the output data. The computation is performed by sliding the convolution kernel over the input data and taking a dot product and sum at each position. This computation amount is usually expressed in FLOPs, i.e., the number of floating-point operations performed.
The memory access amount of a model is the sum of the memory occupied by the weight parameters of each model layer and by the feature map output by each layer while the model processes the target task/subtask/task content. As before, the specific data in the above description are not unique and may be set according to actual requirements.
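For illustration, the computation-amount formula and the per-layer memory sum above can be evaluated as follows. The helper names are assumptions, the formula is taken literally from the description with the symbols as defined there, and the byte counts are toy values:

```python
def conv_flops(m, k, c_in, c_out):
    # Literal evaluation of the formula above: M^2 * K^2 * C_in * C_out
    return m ** 2 * k ** 2 * c_in * c_out

def model_memory_bytes(layer_weight_bytes, layer_feature_bytes):
    # Memory access amount: sum over layers of weight-parameter memory
    # plus output-feature-map memory, as described above.
    return sum(layer_weight_bytes) + sum(layer_feature_bytes)

print(conv_flops(2, 3, 4, 5))                 # -> 720 (4 * 9 * 4 * 5)
print(model_memory_bytes([120, 80], [60, 40]))  # -> 300
```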
The task peripheral description of each subtask may include at least a task hierarchy, the resources required by the task, and a task scheduling policy. The task peripheral description defines other task-related feature quantities and parameters, and allows the user to customize the allocation hierarchy. For example, an allocation hierarchy may specify that the task content with subtask id 0 is executed on a CPU device and the task content with subtask id 1 on a GPU device; that is, one task peripheral description expresses whether different hierarchies of a task may be placed on different devices. The task peripheral description may be written as follows:
{
    'id': '0-0-0',  # task identification
    'device_id': 'gpu-0',  # resource type required by the task
    'method': 'dynamic-drl'  # scheduling policy required by the task
},
{
    'id': '0-1-0',
    'device_id': 'cpu-0',
    'method': 'dynamic-dp'
}
It should be noted that, in addition to the above, the user may also include other definitions in the task peripheral description according to specific requirements, which is not particularly limited in this specification.
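As a usage sketch, the entries above can be treated as a lookup table from task identification to device and scheduling policy. The helper function is an illustrative assumption, not part of the patent:

```python
# Entries mirror the task peripheral description examples above.
perimeter = [
    {"id": "0-0-0", "device_id": "gpu-0", "method": "dynamic-drl"},
    {"id": "0-1-0", "device_id": "cpu-0", "method": "dynamic-dp"},
]

def placement_for(task_id, entries):
    """Return (device, scheduling policy) for a task id, or None if undeclared."""
    for entry in entries:
        if entry["id"] == task_id:
            return entry["device_id"], entry["method"]
    return None

print(placement_for("0-1-0", perimeter))  # -> ('cpu-0', 'dynamic-dp')
```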
Additionally, besides describing tasks, the declarative semantic description may describe the state of communication between computing resources. To represent resources more generally, the system can model the time consumption of point-to-point communication with a series of parameters in a communication performance model; the cost of point-to-point communication can further be generalized to collective communication. The following parameters may be used to describe the state of a clustered communication network:
{
    'source_id': 'zjnode-5',  # a device id here describes inter-host communication; a resource id describes intra-host communication
    'target_id': 'zjnode-6',
    'L': 84ms,  # delay (latency)
    'pklr': < 1%,  # packet loss rate
    'jitter': < 20ms,  # jitter
    'overlap':  # overhead
    'gpm':  # message transmission interval
    'gpb':  # byte injection interval
    'p':  # total number of processes
}
Here, delay is the time required for a packet to travel from sender to receiver over the network, typically measured via the Round-Trip Time (RTT) of the data. Jitter reflects the variation in delay between individual packets: even though packets are sent at a uniform rate, they may arrive with different delays. Overhead refers to the time a processor spends sending or receiving a message, including preparing the message, queuing it in the send queue, and signaling the NIC. The message transmission interval (gpm) is the minimum time the network card requires between injecting two adjacent packets into the link; its reciprocal reflects the network bandwidth, and a processor can send L/gpm messages before blocking on the network. The byte injection interval (gpb) is the minimum interval between two bytes injected into the network; 1/gpb represents the network bandwidth for long messages.
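These parameters resemble those of LogP/LogGP-style communication performance models. A hedged sketch of a point-to-point cost estimate built from them follows; the formula is the standard LogGP estimate, assumed here rather than stated in the patent, and the numeric values other than the 84 ms latency are illustrative:

```python
def p2p_time_ms(n_bytes, latency, overhead, gpb):
    """Estimated transfer time (ms) for one message of n_bytes:
    send overhead + latency + per-byte injection gap for the remaining bytes
    + receive overhead. The per-message gap (gpm) governs spacing between
    consecutive messages and is not modeled for a single message."""
    return overhead + latency + (n_bytes - 1) * gpb + overhead

# latency = 84 ms from the example above; overhead and gpb are assumed values.
print(p2p_time_ms(1024, latency=84.0, overhead=1.0, gpb=0.01))  # about 96.23 ms
```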
In the method, at least the model source code, auxiliary tool code, model parameters, and input data of the target model can be acquired when acquiring the model information of the target model. In practical applications, each piece of model information can be packaged into a separate file and stored under a preset path for the application. The structure of the preset folder may be presented as follows:
├── In_Model_Seg/
│   ├── model_1/
│   │   ├── source_code/
│   │   │   ├── model.py  # model source code file
│   │   │   ├── easy_handle.py  # auxiliary tool code file
│   │   ├── model_weight.pth  # model parameter file
├── Input_Data/
│   ├── dataset_1/
│   │   ├── data_files/  # input data files
│   │   │   ├── input_1.jpg
│   │   │   ├── input_2.jpg
├── Model_Description/
│   ├── model_1_desc.json  # semantic description file
In the above structure, the three outermost folders are In_Model_Seg, Input_Data, and Model_Description. The In_Model_Seg folder contains a model_1 folder; the model_1 folder contains a source_code folder and the model parameter file model_weight.pth; the source_code folder contains the model source code file and the auxiliary tool code file. The Input_Data folder contains a dataset_1 folder, which contains a data_files folder holding the input data, in this embodiment the image files input_1.jpg and input_2.jpg. The semantic description file model_1_desc.json is stored under the Model_Description folder and contains the declarative semantic description of the target task introduced in this step.
Through the above method, the declarative semantic description of the target task and the model information of the target model can be obtained.
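A small sketch of materializing the preset folder layout and reading the semantic description file back. The paths follow the structure shown above; the description content written here is a toy placeholder:

```python
import json
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Create the three outermost folders and their substructure.
(root / "In_Model_Seg" / "model_1" / "source_code").mkdir(parents=True)
(root / "Input_Data" / "dataset_1" / "data_files").mkdir(parents=True)
(root / "Model_Description").mkdir()

# Store a (toy) declarative semantic description and load it back.
desc_path = root / "Model_Description" / "model_1_desc.json"
desc_path.write_text(json.dumps({"task": "Module_0", "id": "0-0-0"}))
desc = json.loads(desc_path.read_text())
print(desc["id"])  # -> 0-0-0
```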
Additionally, after the declarative semantic description is acquired, a certain amount of semantic parsing can be performed on it to ensure that its content is accurate. Specifically, the semantic parsing may include at least field supplementation, grammar checking, semantic checking, intermediate code generation, and object code generation.
Field supplementation checks the declarative semantic description to ensure that all necessary fields are provided, automatically populating missing fields if necessary; this helps reduce user errors and ensures the integrity of the task. Grammar checking verifies the grammatical correctness of the declarative semantics, ensuring they conform to the grammar rules and conventions; this typically involves using a parser to check for syntax errors. Semantic checking ensures that the declarative semantic descriptions are semantically reasonable and consistent, for example, checking whether the dependencies between tasks are valid. Intermediate code generation converts each task and node into intermediate code, splices the intermediate codes in a specific order, and generates and optimizes the intermediate code. Object code generation produces, from the front-end language, code that the back-end system can understand and execute, generating an executable object program.
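The stages above can be chained as a simple pipeline. The sketch below implements only two of them (field supplementation and a semantic check) with assumed logic; the function names, default values, and error handling are illustrative, not from the patent:

```python
def field_supplement(desc):
    # Populate a missing optional field with a default (assumed behavior).
    desc.setdefault("args", {})
    return desc

def semantic_check(desc):
    # Example validity rule: a task must not list itself among its dependencies.
    if desc["id"] in desc.get("dep", []):
        raise ValueError("task depends on itself")
    return desc

def parse(desc):
    for stage in (field_supplement, semantic_check):
        desc = stage(desc)
    return desc

print(parse({"id": "0-0-0", "dep": [-1]})["args"])  # -> {}
```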
S104: splitting each target model to obtain a plurality of task units, wherein different task units are used for executing different task contents under subtasks.
In this step, the target model is split according to the task contents divided in the declarative semantic description, and the structure of the target model is divided into smaller task units that are easier to manage. As explained in step S100, different target models are used for executing different subtasks; similarly, the different task units obtained by dividing the target model corresponding to one subtask are used for executing different task contents under that subtask.
When dividing, the corresponding target model can be split into task units directly according to the task contents into which the subtasks are divided in the declarative semantic description. Preferably, a more effective division can also be performed according to the topological structure of the model. Specifically, for each target model, the coupling degree between the operators required by the target model when executing the subtasks is determined according to the topological structure of the target model; the target model is then split according to the coupling degree to obtain the task units of the target model.
The topology of a model characterizes the operators the model employs, and the degree of coupling between different operators differs with the topology. When the coupling between two or more operators is high, running those operators together performs better than running them separately. Thus, operators with higher coupling can be grouped into one task unit, and that task unit then executes all task contents corresponding to the operators it contains. In other words, when task contents are distinguished as specified in the declarative semantic description, one task unit may execute more than one task content.
Because the task contents divided in the declarative semantic description are independent of each other, they can be executed independently. Therefore, when resources are sufficient, the target model is further divided into independent task units corresponding to the task contents, so that the task units can execute different task contents in parallel, further improving operation efficiency.
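The coupling-based splitting of S104 can be sketched as follows: operators are nodes of the model's topology, edges carry a coupling score, and operators whose mutual coupling exceeds a threshold are merged into one task unit via union-find. The operator names, coupling scores, and threshold are illustrative assumptions, not values from the patent:

```python
def split_by_coupling(operators, coupling_edges, threshold):
    """Group operators into task units; edges with coupling >= threshold merge."""
    parent = {op: op for op in operators}

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b, score in coupling_edges:
        if score >= threshold:          # tightly coupled: same task unit
            parent[find(a)] = find(b)

    units = {}
    for op in operators:
        units.setdefault(find(op), []).append(op)
    return sorted(sorted(u) for u in units.values())

ops = ["conv1", "bn1", "relu1", "fc"]
edges = [("conv1", "bn1", 0.9), ("bn1", "relu1", 0.8), ("relu1", "fc", 0.1)]
print(split_by_coupling(ops, edges, threshold=0.5))
# conv1/bn1/relu1 fuse into one task unit; fc runs as its own unit
```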
S106: and determining the resource requirement of each task unit when executing the task content, and determining the resource allocation information of each task unit according to the resource requirement and the declarative semantic description.
In order to ensure that the decomposed task contents can be successfully executed, the resources required by each task unit when executing its corresponding task content are determined before execution, and corresponding planning is performed. In this step, the planning of tasks is based on resource demand analysis, covering processors, memory, storage, network bandwidth, and the like.
In general, computing resources required by each task content are specified in the declarative semantic description of the target task, and the resource requirements can be directly determined according to the specification in the declarative semantic description. Specifically, for each task unit, the resource requirement of the task unit when executing the task content can be determined according to the task content executed by the task unit and the resources required by the task.
Alternatively, the Roofline Model may be used to determine the resource requirements of the task content when the corresponding fields are missing from the declarative semantic description. With the Roofline Model, the computation-to-memory-access ratio of a program is analyzed to judge which computing resource the task is better suited to run on. The Roofline Model uses computation amount and memory access amount to quantitatively describe the computational intensity of tasks, and uses computing power and bandwidth to quantitatively describe the capability of computing platforms. The goal of the Roofline Model is the floating-point computing speed a model can reach under the limits of a computing platform. More specifically, the Roofline Model answers the question: for a model with computation amount A and memory access amount B, what is the theoretical performance upper limit E achievable on a computing platform with computing power C and bandwidth D?
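Using the symbols above, the common formulation of the Roofline bound is E = min(C, D · A / B), where A / B is the task's arithmetic intensity. A minimal sketch (the platform numbers are illustrative assumptions, not values from the patent):

```python
def roofline_limit(A, B, C, D):
    """Roofline bound: A = FLOPs, B = bytes accessed, C = peak FLOP/s, D = bytes/s."""
    intensity = A / B              # arithmetic intensity, FLOPs per byte
    return min(C, D * intensity)   # memory-bound below the ridge, compute-bound above

# Example platform: 10 TFLOP/s peak compute, 1 TB/s bandwidth (illustrative).
C, D = 10e12, 1e12
print(roofline_limit(A=4e9, B=2e9, C=C, D=D))   # intensity 2: memory-bound
print(roofline_limit(A=4e10, B=2e9, C=C, D=D))  # intensity 20: compute-bound
```

A task whose bound sits on the bandwidth slope is a candidate for a memory-rich resource, while a compute-bound task is better matched to a high-FLOP accelerator.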
After the resource requirement of each task content is determined, the resources required by each task unit in executing the task content can be allocated, namely, the resource allocation information is determined. In particular, the resource allocation information may include at least resource availability assessment, task priority, parallel execution information, resource allocation policy.
Resource availability evaluation obtains the number and performance of the system's currently available resources, including compute nodes, the number of central processing units (Central Processing Unit, CPU), the number of graphics processing units (Graphics Processing Unit, GPU), memory capacity, storage, bandwidth, and the like. Task priority assigns a priority to each task content so that, when resources are limited, the most important task, i.e., the task with the highest priority, can be executed first. Parallel execution information describes, in the declarative semantic description, which task contents can be executed in parallel; these task contents can be executed on different resources at the same time. The resource allocation strategy ensures that each task content obtains an appropriate resource allocation so as to meet task demands to the greatest extent.
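A priority-first allocation policy can be sketched as follows: task units are served in descending priority, each taking resources from the available pool if it fits. Modeling the pool as a single CPU-core count, and the task names and demands, are simplifying assumptions for illustration:

```python
def allocate(task_units, available_cores):
    """Greedy priority-first allocation: (name, cores_needed, priority) tuples."""
    plan, free = {}, available_cores
    for name, need, prio in sorted(task_units, key=lambda t: -t[2]):
        if need <= free:            # highest-priority tasks are served first
            plan[name] = need
            free -= need
    return plan

units = [("preprocess", 2, 1), ("infer", 4, 3), ("postprocess", 2, 2)]
print(allocate(units, available_cores=6))
# infer (priority 3) and postprocess (priority 2) fit; preprocess must wait
```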
S108: and executing each task content by adopting each task unit according to the resource allocation information to obtain an execution result of the target task.
In this step, according to the resource allocation information determined in step S106, required computing resources may be allocated to each task unit, so that each task unit executes corresponding task content, and a final execution result of the target task is obtained. Of course, in the execution process, the computing resources can be dynamically adjusted, and the allocation of the resources is dynamically performed according to the execution condition and the resource availability of each task content, so that the task is ensured to be supported by the optimal resources in the whole execution process.
When each task unit is adopted to execute each task content, the program code corresponding to each task content can be determined according to the declarative semantic description, and each task unit then executes the program code corresponding to its task content. As described in the background, declarative semantic descriptions written based on a declarative grammar cannot be directly executed by the machine; they must first be converted into code the machine can understand and then executed.
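The execution of step S108 can be sketched as follows: task units whose contents are declared independent run in parallel on a thread pool, and their results are collected into the target task's final execution result. The task names and workloads are hypothetical placeholders for the program code derived from the declarative semantic description:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_units(task_units):
    """Run each task unit's callable in parallel and gather the results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, *args)
                   for name, (fn, args) in task_units.items()}
        return {name: f.result() for name, f in futures.items()}

# Two independent task contents executed in parallel (illustrative workloads).
units = {
    "detect": (lambda xs: [x * 2 for x in xs], ([1, 2, 3],)),
    "count":  (len, ([1, 2, 3],)),
}
print(run_task_units(units))
```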
When the task execution method based on the declarative semantic multi-model workflow description provided by the specification is adopted to execute complex intelligent computing tasks relying on multi-model cooperation, model information of the declarative semantic description and a target model can be obtained after the target task and the target model for executing the target task are determined; splitting the target model into task units with finer granularity, and determining resource allocation information according to the resource requirements of each task unit; and finally, executing the task content by adopting each task unit according to the resource allocation information to obtain the execution result of the target task. The method can allow a user to declaratively define a plurality of model tasks, and the system accepts the declarative task descriptions and automatically decomposes, schedules and executes the tasks according to the dependency relationship and constraint among the tasks, and describes the relationship between the tasks and the models in a high-level abstract manner. The method provides a flexible and extensible method for managing and executing the multi-model tasks through the workflow, so that a user can efficiently utilize computing resources, meet performance requirements and improve task efficiency.
Based on the same idea as the task execution method of the multi-model workflow description based on declarative semantics provided above, the present specification also provides a corresponding task execution device for the multi-model workflow description based on declarative semantics, as shown in fig. 2.
Fig. 2 is a schematic diagram of a task execution device of a declarative semantic-based multi-model workflow description provided in the present specification, specifically including:
a determining module 200, configured to determine a target task composed of a plurality of subtasks, and each target model required for executing the target task, where different target models are used for executing different subtasks;
an obtaining module 202, configured to obtain a declarative semantic description of the target task and model information of the target model;
the splitting module 204 is configured to split the target models to obtain a plurality of task units, where different task units are used to execute different task contents under subtasks;
an allocation module 206, configured to determine a resource requirement of each task unit when executing the task content, and determine resource allocation information of each task unit according to the resource requirement and the declarative semantic description;
And the execution module 208 is configured to execute each task content by using each task unit according to the resource allocation information, so as to obtain an execution result of the target task.
Optionally, the obtaining module 202 is specifically configured to obtain, as the declarative semantic description of the target task, a task ontology description of the target task, a subtask description of each subtask, a task runtime description of each subtask, and a task peripheral description of each subtask.
Optionally, the task ontology description at least includes an execution sequence of each subtask, an execution condition of each subtask, and a dependency relationship between the subtasks.
Optionally, the subtask description of each subtask includes at least a task name, a task identification, a dependency, a task input, a task output, a task parameter, a resolution capability, a task support resource.
Optionally, the task runtime description of each subtask includes at least a task identification, a task computation amount, and a task memory access amount.
Optionally, the task peripheral description of each subtask includes at least a task hierarchy, resources required by the task, and a task scheduling policy.
Optionally, the obtaining module 202 is specifically configured to obtain a model source code, an auxiliary tool code, model parameters, and input data of the target model.
Optionally, the splitting module 204 is specifically configured to determine, for each target model, a degree of coupling between operators required by the target model when performing the subtasks according to a topology structure of the target model; and splitting the target model according to the coupling degree to obtain each task unit of the target model.
Optionally, the allocation module 206 is specifically configured to determine, for each task unit, a resource requirement of the task unit when executing the task content according to task content executed by the task unit and resources required by the task.
Optionally, the resource allocation information at least includes resource availability evaluation, task priority, parallel execution information, and resource allocation policy.
Optionally, the executing module 208 is specifically configured to determine, according to the declarative semantic description, program codes corresponding to the task contents; and executing program codes corresponding to the task contents by adopting the task units.
The present specification also provides a computer readable storage medium storing a computer program operable to perform the task execution method of the declarative semantics-based multi-model workflow description provided in fig. 1 above.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 3. At the hardware level, as shown in fig. 3, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, so as to implement the task execution method of the declarative-semantics-based multi-model workflow description described above in fig. 1. Of course, in addition to software implementations, this specification does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the following processing flows is not limited to logic units and may also be hardware or logic devices.
Improvements to a technology could once be clearly distinguished as hardware improvements (e.g., improvements to circuit structures such as diodes, transistors, and switches) or software improvements (improvements to method flows). However, with the development of technology, many improvements of current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compiling is likewise written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for achieving the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present application.
Claims (14)
1. A method of task execution for a multi-model workflow description based on declarative semantics, comprising:
determining a target task formed by a plurality of subtasks and each target model required for executing the target task, wherein different target models are used for executing different subtasks;
acquiring declarative semantic descriptions of the target tasks and model information of the target models;
splitting each target model to obtain a plurality of task units, wherein different task units are used for executing different task contents under subtasks;
determining resource requirements of each task unit when executing the task content, and determining resource allocation information of each task unit according to the resource requirements and the declarative semantic description;
and executing each task content by adopting each task unit according to the resource allocation information to obtain an execution result of the target task.
2. The method according to claim 1, wherein obtaining the declarative semantic description of the target task comprises:
and acquiring a task ontology description of the target task, a subtask description of each subtask, a task runtime description of each subtask, and a task peripheral description of each subtask as the declarative semantic description of the target task.
3. The method of claim 2, wherein the task ontology description includes at least an execution order of each sub-task, an execution condition of each sub-task, and a dependency relationship between each sub-task.
4. The method of claim 2, wherein the subtask description for each subtask includes at least a task name, a task identification, a dependency, a task input, a task output, a task parameter, a resolution capability, a task support resource.
5. The method of claim 2, wherein the task runtime description of each subtask includes at least a task identification, a task computation amount, and a task memory access amount.
6. The method of claim 2, wherein the task peripheral description of each subtask includes at least a task hierarchy, resources required by the task, and a task scheduling policy.
7. The method of claim 1, wherein obtaining model information of the target model specifically comprises:
and obtaining a model source code, an auxiliary tool code, model parameters and input data of the target model.
8. The method of claim 1, wherein splitting each object model to obtain a plurality of task units specifically comprises:
For each target model, determining the coupling degree between operators required by the target model when the subtasks are executed according to the topological structure of the target model;
and splitting the target model according to the coupling degree to obtain each task unit of the target model.
9. The method of claim 6, wherein determining resource requirements of each task unit in executing the task content comprises:
and determining the resource requirement of each task unit when executing the task content according to the task content executed by the task unit and the resources required by the task.
10. The method of claim 1, wherein the resource allocation information comprises at least resource availability assessment, task priority, parallel execution information, resource allocation policy.
11. The method of claim 1, wherein executing each task content with each task unit specifically comprises:
determining program codes corresponding to the task contents according to the declarative semantic description;
and executing program codes corresponding to the task contents by adopting the task units.
12. A task execution device for a declarative semantics-based multi-model workflow description, comprising:
the determining module is used for determining a target task formed by a plurality of subtasks and each target model required for executing the target task, wherein different target models are used for executing different subtasks;
the acquisition module is used for acquiring the declarative semantic description of the target task and the model information of the target model;
the splitting module is used for splitting each target model to obtain a plurality of task units, wherein different task units are used for executing different task contents under subtasks;
the allocation module is used for determining the resource requirement of each task unit when executing the task content and determining the resource allocation information of each task unit according to the resource requirement and the declarative semantic description;
and the execution module is used for executing each task content by adopting each task unit according to the resource allocation information to obtain the execution result of the target task.
13. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-11.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-11 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410175197.6A CN117724823A (en) | 2024-02-07 | 2024-02-07 | Task execution method of multi-model workflow description based on declarative semantics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117724823A true CN117724823A (en) | 2024-03-19 |
Family
ID=90211021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410175197.6A Pending CN117724823A (en) | 2024-02-07 | 2024-02-07 | Task execution method of multi-model workflow description based on declarative semantics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117724823A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107066240A (en) * | 2016-10-08 | 2017-08-18 | 阿里巴巴集团控股有限公司 | The implementation method and device of assembly function |
CN113722083A (en) * | 2020-05-25 | 2021-11-30 | 中兴通讯股份有限公司 | Big data processing method and device, server and storage medium |
CN116185629A (en) * | 2023-02-22 | 2023-05-30 | 之江实验室 | Task execution method and device, storage medium and electronic equipment |
CN116225653A (en) * | 2023-03-09 | 2023-06-06 | 中国科学院软件研究所 | QOS-aware resource allocation method and device under deep learning multi-model deployment scene |
CN116383797A (en) * | 2023-05-31 | 2023-07-04 | 北京顶象技术有限公司 | Non-notch sliding verification code and generation method thereof |
CN116541176A (en) * | 2023-05-24 | 2023-08-04 | 中国电信股份有限公司北京研究院 | Optimization method and optimization device for computing power resource allocation, electronic equipment and medium |
CN116701001A (en) * | 2023-08-08 | 2023-09-05 | 国网浙江省电力有限公司信息通信分公司 | Target task allocation method and device, electronic equipment and storage medium |
CN116880995A (en) * | 2023-09-08 | 2023-10-13 | 之江实验室 | Execution method and device of model task, storage medium and electronic equipment |
CN117076535A (en) * | 2023-08-17 | 2023-11-17 | 安元科技股份有限公司 | Enterprise-level declarative domain model definition and storage model conversion method and system |
CN117311998A (en) * | 2023-11-30 | 2023-12-29 | 卓世未来(天津)科技有限公司 | Large model deployment method and system |
WO2024007849A1 (en) * | 2023-04-26 | 2024-01-11 | 之江实验室 | Distributed training container scheduling for intelligent computing |
CN117519877A (en) * | 2023-11-15 | 2024-02-06 | Oppo广东移动通信有限公司 | Rendering method and device of quick application card, storage medium and electronic equipment |
- 2024-02-07 CN CN202410175197.6A patent/CN117724823A/en active Pending
Non-Patent Citations (2)
Title |
---|
Ding Gangyi et al.: "OpenHarmony Operating System", 30 November 2022, Beijing Institute of Technology Press, pages 86-87 *
Pan Hongjun: "A Formal Semantic Model for an Object-Oriented Language", Journal of Tonghua Normal University, no. 06, 31 December 1998 (1998-12-31) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7689998B1 (en) | Systems and methods that manage processing resources | |
Posse et al. | An executable formal semantics for UML-RT | |
CN112068957B (en) | Resource allocation method, device, computer equipment and storage medium | |
Lee et al. | A systematic design space exploration of MPSoC based on synchronous data flow specification | |
Cannella et al. | Adaptivity support for MPSoCs based on process migration in polyhedral process networks | |
JP2024536659A (en) | Task execution method, apparatus, storage medium and electronic device | |
CN114416360A (en) | Resource allocation method and device and Internet of things system | |
CN116467061B (en) | Task execution method and device, storage medium and electronic equipment | |
US12088451B2 (en) | Cross-platform programmable network communication | |
CN116151363B (en) | Distributed Reinforcement Learning System | |
CN110750359B (en) | Hardware resource configuration method and device, cloud side equipment and storage medium | |
CN116011562A (en) | Operator processing method, operator processing device, electronic device and readable storage medium | |
Maruf et al. | Requirements-preserving design automation for multiprocessor embedded system applications | |
CN117724823A (en) | Task execution method of multi-model workflow description based on declarative semantics | |
CN116204324A (en) | Task execution method and device, storage medium and electronic equipment | |
US11573777B2 (en) | Method and apparatus for enabling autonomous acceleration of dataflow AI applications | |
Kreku et al. | Automatic workload generation for system-level exploration based on modified GCC compiler | |
Jaber | High-Level soc modeling and performance estimation applied to a multi-core implementation of LTE enodeb physical layer | |
CN117170669B (en) | Page display method based on front-end high-low code fusion | |
CN115051980B (en) | HTCondor super-calculation grid file transmission method and system | |
Cera | Providing adaptability to MPI applications on current parallel architectures | |
Wong et al. | Requirements for static task scheduling in real time embedded systems | |
CN118939275B (en) | Compiling method, calling method, device, equipment and program product of classification parameters | |
WO2024060256A1 (en) | Self-evolving and multi-versioning code | |
Lumpp | Software Optimization and Orchestration for Heterogeneous and Distributed Architectures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||