GB2623992A - Machine learning inference emulation - Google Patents
Machine learning inference emulation
- Publication number
- GB2623992A (application GB2216346.3A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- inference
- machine learning
- emulation
- emulator
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000010801 machine learning Methods 0.000 title claims abstract description 887
- 238000000034 method Methods 0.000 claims abstract description 120
- 238000004891 communication Methods 0.000 claims abstract description 55
- 238000004088 simulation Methods 0.000 claims description 19
- 230000004044 response Effects 0.000 claims description 18
- 238000004590 computer program Methods 0.000 claims description 15
- 230000001960 triggered effect Effects 0.000 claims description 11
- 238000012217 deletion Methods 0.000 claims description 6
- 230000037430 deletion Effects 0.000 claims description 6
- 238000013528 artificial neural network Methods 0.000 abstract description 5
- 230000002068 genetic effect Effects 0.000 abstract description 2
- 238000012706 support-vector machine Methods 0.000 abstract description 2
- 230000006870 function Effects 0.000 description 97
- 230000008569 process Effects 0.000 description 34
- 238000007726 management method Methods 0.000 description 30
- 230000015654 memory Effects 0.000 description 14
- 238000012549 training Methods 0.000 description 14
- 230000006399 behavior Effects 0.000 description 11
- 238000010200 validation analysis Methods 0.000 description 9
- 230000009471 action Effects 0.000 description 8
- 238000004519 manufacturing process Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 7
- 238000013480 data collection Methods 0.000 description 6
- 230000003993 interaction Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000012986 modification Methods 0.000 description 4
- 230000004048 modification Effects 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000001149 cognitive effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 230000003466 anti-cipated effect Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013475 authorization Methods 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000003920 cognitive function Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0866—Checking the configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W36/00—Hand-off or reselection arrangements
- H04W36/0005—Control or signalling for completing the hand-off
- H04W36/0083—Determination of parameters used for hand-off, e.g. generation or modification of neighbour cell lists
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Debugging And Monitoring (AREA)
Abstract
A method for emulating machine learning (ML) inference in a communication network. A ML emulation consumer 134 transmits, to a ML inference emulator 135, a request for instantiating inference emulation of a ML entity (e.g. a ML model, neural network, genetic algorithm, or support vector machine) of the communication network 503, the request comprising an identifier of the ML entity. The ML emulation consumer then receives, from the ML inference emulator, a ML inference emulation report based on an emulated output of the ML entity 511. The emulated output of the ML entity may be obtained by executing an inference of the ML entity 505 based on a simulated model or portion of the communication network. The validity of the ML entity may be determined based on the emulation report 512, and deployment of the ML entity may be enabled or disabled based on this determination.
Description
MACHINE LEARNING INFERENCE EMULATION
TECHNICAL FIELD
[0001] Various example embodiments generally relate to the field of communication networks. Some example embodiments relate to emulation of machine learning inference in a communication network.
BACKGROUND
[0002] Machine learning (ML) may be used to address complexity of managing communication networks, or to improve their performance. For example, in cognitive autonomous networks (CAN) intelligence and autonomy may be provided in network operations, administration and management (OAM), as well as in network procedures, for example to support increased flexibility and complexity of a radio network. Through use of ML in cognitive functions, CANs may be configured to 1) take as input higher level goals and derive appropriate performance targets, 2) learn from their environment and their individual or shared experiences therein, 3) learn to contextualize their operating condition, or 4) learn their optimal behaviour fitting to the specific environment and contexts. One use case for such cognitive automation is handover optimization.
BRIEF DESCRIPTION
[0003] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0004] According to a first aspect, a method is disclosed. The method may comprise: receiving, by a machine learning inference emulator from a machine learning emulation consumer, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; causing execution of inference of the machine learning entity based on a simulation model or a portion of the communication network to obtain an emulated output of the machine learning entity; and transmitting, to the machine learning emulation consumer, a machine learning inference emulation report based on the emulated output of the machine learning entity.
[0005] According to an example embodiment of the first aspect, the method comprises: causing instantiation of a machine learning inference emulation job for the inference emulation of the machine learning entity, in response to determining that no machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity.
[0006] According to an example embodiment of the first aspect, the method comprises: determining that a machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity; and causing the execution of the inference of the machine learning entity based on the simulation model or the portion of the communication network in association with the machine learning inference emulation job.
[0007] According to an example embodiment of the first aspect, the method comprises: causing collection of input data for the inference of the machine learning entity, wherein the collection of the input data is based on the simulation model or the portion of the communication network; and causing execution of the inference of the machine learning entity based on the input data to obtain the emulated output of the machine learning entity.
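The emulator-side flow of paragraphs [0004]-[0007] can be sketched as follows. This is a minimal illustration under assumptions, not the claimed implementation; the class and method names (`MLInferenceEmulator`, `SimulationModel`, `collect_input`, and so on) are hypothetical.

```python
class SimulationModel:
    """Stand-in for the simulation model (or portion) of the communication network."""

    def collect_input(self, entity_id):
        # Hypothetical: return simulated network measurements for the ML entity.
        return [0.1, 0.2, 0.3]


class MLInferenceEmulator:
    """Hypothetical sketch of the emulator-side flow in [0004]-[0007]."""

    def __init__(self, simulation_model):
        self.simulation_model = simulation_model
        self.jobs = {}  # ML entity identifier -> emulation job state

    def handle_instantiation_request(self, request):
        entity_id = request["ml_entity_id"]
        # [0005]/[0006]: instantiate a job only if none exists for this ML entity;
        # otherwise execute in association with the existing job.
        job = self.jobs.setdefault(entity_id, {"entity_id": entity_id, "runs": 0})
        # [0007]: collect input data based on the simulation model or network portion.
        input_data = self.simulation_model.collect_input(entity_id)
        # Execute inference of the ML entity on the collected input.
        emulated_output = self.run_inference(entity_id, input_data)
        job["runs"] += 1
        # [0004]: report the emulated output back to the ML emulation consumer.
        return {"ml_entity_id": entity_id, "emulated_output": emulated_output}

    def run_inference(self, entity_id, input_data):
        # Placeholder for executing the actual ML entity; here it only
        # summarises the input so the flow stays self-contained.
        return {"samples": len(input_data)}
```

A second request for the same ML entity reuses the existing job rather than instantiating a new one, mirroring the distinction between paragraphs [0005] and [0006].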
[0008] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for at least one characteristic of the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one characteristic of the machine learning inference emulator.
[0009] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for information on at least one execution resource available at the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one execution resource available at the machine learning inference emulator.
[0010] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for a status of the request for instantiating inference emulation of the machine learning entity or a status of the machine learning inference emulation job; and transmitting, to the machine learning emulation consumer, a notification of the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job.
[0011] According to an example embodiment of the first aspect, the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job is indicative of at least one of the following: that the request for instantiating inference emulation of the machine learning entity is pending, that the machine learning inference emulation job has been triggered, that the request for instantiating inference emulation of the machine learning entity has been suspended, or that the machine learning inference emulation job has been served.
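The four status values listed in [0011] could be modelled, for example, as an enumeration; the names and string values below are illustrative, not taken from the patent.

```python
from enum import Enum


class EmulationStatus(Enum):
    """Illustrative encoding of the statuses in [0011]."""
    PENDING = "pending"        # request received, not yet acted on
    TRIGGERED = "triggered"    # emulation job has been instantiated/started
    SUSPENDED = "suspended"    # request has been put on hold
    SERVED = "served"          # emulation job has completed


def is_active(status):
    # A request still awaiting or consuming emulator resources is
    # either pending or triggered; suspended and served are not.
    return status in (EmulationStatus.PENDING, EmulationStatus.TRIGGERED)
```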
[0012] According to an example embodiment of the first aspect, the at least one characteristic of the machine learning inference emulator comprises at least one attribute of a machine learning inference emulator object, wherein the status of the request for instantiating inference emulation of the machine learning entity comprises an attribute of a machine learning inference emulation request object, wherein the status of the machine learning inference emulation job comprises an attribute of a machine learning inference emulation job object, or wherein the information on the at least one execution resource available at the machine learning inference emulator comprises an attribute of the machine learning inference emulator object.
[0013] According to an example embodiment of the first aspect, the request for at least one characteristic of the machine learning inference emulator, the request for information on the at least one execution resource available at the machine learning inference emulator, or the request for the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job comprises an attribute read request.
[0014] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for reading at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0015] According to an example embodiment of the first aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises at least one of the following: a number of received, ongoing, or completed requests for instantiating inference emulation at the machine learning inference emulator, a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator, or a status or a completion level of the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
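The readable characteristics enumerated in [0015] (request counts, priority, job status or completion level) could be exposed, for instance, as a simple read-out object; the field names here are assumptions for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class EmulatorLoadView:
    """Hypothetical read-out of the characteristics in [0015]."""
    received_requests: int = 0
    ongoing_requests: int = 0
    completed_requests: int = 0
    # job identifier -> completion level in [0.0, 1.0]
    job_completion: dict = field(default_factory=dict)

    def completion_level(self, job_id):
        # Unknown jobs report zero completion rather than raising.
        return self.job_completion.get(job_id, 0.0)
```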
[0016] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for configuring at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, and configuring the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0017] According to an example embodiment of the first aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0018] According to an example embodiment of the first aspect, the method comprises: receiving, from the machine learning emulation consumer, a request for deleting one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and deleting the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0019] According to an example embodiment of the first aspect, the method comprises: transmitting, to the machine learning emulation consumer, a notification of the configuration of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, or a notification of the deletion of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0020] According to an example embodiment of the first aspect, the method comprises: configuring reporting of the machine learning inference emulation job, in response to receiving, from the machine learning emulation consumer, a request for configuring reporting associated with the request for instantiating inference emulation of the machine learning entity.
[0021] According to an example embodiment of the first aspect, the request for configuring reporting associated with the request for instantiating the inference emulation of the machine learning entity comprises a reporting period, and the method further comprises: transmitting, to the machine learning emulation consumer, a plurality of the machine learning inference emulation reports based on the reporting period.
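The periodic reporting of [0020]-[0021] can be sketched as below. For illustration the sketch computes when each report would be due instead of actually waiting out the period; the function and field names are hypothetical.

```python
def periodic_reports(run_inference, reporting_period_s, num_reports):
    """Produce one emulation report per reporting period.

    run_inference: callable returning the current emulated output.
    reporting_period_s: the reporting period configured by the consumer.
    num_reports: how many reports of the plurality to produce.
    """
    reports = []
    for i in range(num_reports):
        reports.append({
            "sequence": i,
            # When this report would be sent, relative to job start.
            "due_at_s": i * reporting_period_s,
            "emulated_output": run_inference(),
        })
    return reports
```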
[0022] According to a second aspect, a method is disclosed. The method may comprise transmitting, by a machine learning emulation consumer to a machine learning inference emulator, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; and receiving, from the machine learning inference emulator, a machine learning inference emulation report based on an emulated output of the machine learning entity.
[0023] According to an example embodiment of the second aspect, the method comprises: determining validity of the machine learning entity based on the machine learning inference emulation report; and determining to enable or disable deployment of the machine learning entity based on the validity of the machine learning entity.
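The consumer-side decision in [0023] amounts to checking the emulation report against some validity criterion and gating deployment on the result. A hedged sketch follows; the `accuracy` field and the threshold are assumptions, since the patent leaves the validity criteria open.

```python
def decide_deployment(emulation_report, min_accuracy=0.95):
    """Return True to enable deployment of the ML entity, False to disable it.

    The 'accuracy' key and the 0.95 threshold are illustrative only; a real
    consumer would apply whatever validity criteria fit the ML entity's
    use case (e.g. KPI targets from the emulated network behaviour).
    A report lacking the expected field is treated as invalid.
    """
    valid = emulation_report.get("accuracy", 0.0) >= min_accuracy
    return valid
```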
[0024] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for at least one characteristic of the machine learning inference emulator; and receiving, from the machine learning inference emulator, a notification of the at least one characteristic of the machine learning inference emulator.
[0025] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for information on at least one execution resource available at the machine learning inference emulator; and receiving, from the machine learning inference emulator, a notification of the at least one execution resource available at the machine learning inference emulator.
[0026] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for a status of the request for instantiating inference emulation of the machine learning entity or a status of a machine learning inference emulation job; and receiving, from the machine learning inference emulator, a notification of the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job.
[0027] According to an example embodiment of the second aspect, the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job is indicative of at least one of the following: that the request for instantiating inference emulation of the machine learning entity is pending, that the machine learning inference emulation job has been triggered, that the request for instantiating inference emulation of the machine learning entity has been suspended, or that the machine learning inference emulation job has been served.
[0028] According to an example embodiment of the second aspect, the at least one characteristic of the machine learning inference emulator comprises at least one attribute of a machine learning inference emulator object, wherein the status of the request for instantiating inference emulation of the machine learning entity comprises an attribute of a machine learning inference emulation request object, wherein the status of the machine learning inference emulation job comprises an attribute of a machine learning inference emulation job object, or wherein the information on the at least one execution resource available at the machine learning inference emulator comprises an attribute of the machine learning inference emulator object.
[0029] According to an example embodiment of the second aspect, the request for at least one characteristic of the machine learning inference emulator, the request for information on the at least one execution resource available at the machine learning inference emulator, or the request for the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job comprises an attribute read request.
[0030] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for reading at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and receiving, from the machine learning inference emulator, a notification of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0031] According to an example embodiment of the second aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises at least one of the following: a number of received, ongoing, or completed requests for instantiating inference emulation at the machine learning inference emulator, a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator, or a status or a completion level of the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0032] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for configuring at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0033] According to an example embodiment of the second aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0034] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for deleting one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0035] According to an example embodiment of the second aspect, the method comprises: receiving, from the machine learning inference emulator, a notification of the configuration of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, or a notification of the deletion of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0036] According to an example embodiment of the second aspect, the method comprises: transmitting, to the machine learning inference emulator, a request for configuring reporting associated with the request for instantiating inference emulation of the machine learning entity.
[0037] According to an example embodiment of the second aspect, the request for configuring reporting associated with the request for instantiating the inference emulation of the machine learning entity comprises a reporting period, and the method further comprises: receiving, from the machine learning inference emulator, a plurality of the machine learning inference emulation reports based on the reporting period.
[0038] According to a third aspect, an apparatus is disclosed. The apparatus may comprise means for performing a method according to the first aspect, or any example thereof.
[0039] According to a fourth aspect, an apparatus is disclosed. The apparatus may comprise means for performing a method according to the second aspect, or any example thereof.
[0040] According to a fifth aspect, a computer program or a computer program product is disclosed. The computer program or computer program product may comprise instructions, which, when executed by an apparatus, cause the apparatus to perform the method according to the first aspect, or any example thereof.
[0041] According to a sixth aspect, a computer program or a computer program product is disclosed. The computer program or computer program product may comprise instructions, which, when executed by an apparatus, cause the apparatus to perform the method according to the second aspect, or any example thereof.
[0042] According to a seventh aspect, an apparatus is disclosed. The apparatus may comprise at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: receive, by a machine learning inference emulator from a machine learning emulation consumer, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; cause execution of inference of the machine learning entity based on a simulation model or a portion of the communication network to obtain an emulated output of the machine learning entity; and transmit, to the machine learning emulation consumer, a machine learning inference emulation report based on the emulated output of the machine learning entity.
[0043] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: cause instantiation of a machine learning inference emulation job for the inference emulation of the machine learning entity, in response to determining that no machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity.
[0044] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: determine that a machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity; and cause the execution of the inference of the machine learning entity based on the simulation model or the portion of the communication network in association with the machine learning inference emulation job.
[0045] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: cause collection of input data for the inference of the machine learning entity, wherein the collection of the input data is based on the simulation model or the portion of the communication network; and cause execution of the inference of the machine learning entity based on the input data to obtain the emulated output of the machine learning entity.
[0046] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning emulation consumer, a request for at least one characteristic of the machine learning inference emulator; and transmit, to the machine learning emulation consumer, a notification of the at least one characteristic of the machine learning inference emulator.
[0047] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning emulation consumer, a request for information on at least one execution resource available at the machine learning inference emulator; and transmit, to the machine learning emulation consumer, a notification of the at least one execution resource available at the machine learning inference emulator.
[0048] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning emulation consumer, a request for a status of the request for instantiating inference emulation of the machine learning entity or a status of the machine learning inference emulation job; and transmit, to the machine learning emulation consumer, a notification of the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job.
[0049] According to an example embodiment of the seventh aspect, the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job is indicative of at least one of the following: that the request for instantiating inference emulation of the machine learning entity is pending, that the machine learning inference emulation job has been triggered, that the request for instantiating inference emulation of the machine learning entity has been suspended, or that the machine learning inference emulation job has been served.
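Purely for illustration, the status values enumerated above may be encoded as follows; the enumeration and member names are assumptions of this sketch, not standardized identifiers.

```python
from enum import Enum

class EmulationRequestStatus(Enum):
    # Statuses that apply to the request for instantiating inference emulation.
    PENDING = "pending"        # request received, job not yet instantiated
    SUSPENDED = "suspended"    # request suspended at the emulator

class EmulationJobStatus(Enum):
    # Statuses that apply to the instantiated inference emulation job.
    TRIGGERED = "triggered"    # emulation job has been instantiated/started
    SERVED = "served"          # emulation job has completed and been reported
```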
[0050] According to an example embodiment of the seventh aspect, the at least one characteristic of the machine learning inference emulator comprises at least one attribute of a machine learning inference emulator object, wherein the status of the request for instantiating inference emulation of the machine learning entity comprises an attribute of a machine learning inference emulation request object, wherein the status of the machine learning inference emulation job comprises an attribute of a machine learning inference emulation job object, or wherein the information on the at least one execution resource available at the machine learning inference emulator comprises an attribute of the machine learning inference emulator object.
[0051] According to an example embodiment of the seventh aspect, the request for at least one characteristic of the machine learning inference emulator, the request for information on the at least one execution resource available at the machine learning inference emulator, or the request for the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job comprises an attribute read request.

[0052] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning emulation consumer, a request for reading at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and transmit, to the machine learning emulation consumer, a notification of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0053] According to an example embodiment of the seventh aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises at least one of the following: a number of received, ongoing, or completed requests for instantiating inference emulation at the machine learning inference emulator, a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator, or a status or a completion level of the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0054] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning emulation consumer, a request for configuring at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and configure the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0055] According to an example embodiment of the seventh aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0056] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to receive, from the machine learning emulation consumer, a request for deleting one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and delete the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0057] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning emulation consumer, a notification of the configuration of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, or a notification of the deletion of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0058] According to an example embodiment of the seventh aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: configure reporting of the machine learning inference emulation job, in response to receiving, from the machine learning emulation consumer, a request for configuring reporting associated with the request for instantiating inference emulation of the machine learning entity.
[0059] According to an example embodiment of the seventh aspect, the request for configuring reporting associated with the request for instantiating the inference emulation of the machine learning entity comprises a reporting period, and the instructions are configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning emulation consumer, a plurality of the machine learning inference emulation reports based on the reporting period.
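As an illustrative sketch of period-based reporting, the transmission times of the plurality of reports could be derived from the configured reporting period as follows; the function name and the use of seconds as the time unit are assumptions of this sketch.

```python
def reporting_schedule(duration_s: float, reporting_period_s: float) -> list:
    """Return the time offsets (in seconds) at which emulation reports
    would be transmitted, one per elapsed reporting period."""
    times = []
    t = reporting_period_s
    while t <= duration_s:
        times.append(t)
        t += reporting_period_s
    return times
```

For example, a ten-second emulation with a 2.5-second reporting period would yield four report transmissions.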
[0060] According to an eighth aspect, an apparatus is disclosed. The apparatus may comprise at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: transmit, by a machine learning emulation consumer to a machine learning inference emulator, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; and receive, from the machine learning inference emulator, a machine learning inference emulation report based on an emulated output of the machine learning entity.
[0061] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: determine validity of the machine learning entity based on the machine learning inference emulation report; and determine to enable or disable deployment of the machine learning entity based on the validity of the machine learning entity.
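The consumer-side decision described in paragraph [0061] — determining validity of the machine learning entity from the emulation report, then enabling or disabling deployment — may be sketched as follows, under the assumptions (not stated in the aspects above) that the report carries numeric performance scores and that validity is decided against a configurable threshold.

```python
def decide_deployment(report_scores: list, threshold: float = 0.9) -> bool:
    """Return True to enable deployment (entity deemed valid),
    False to disable it. Scores and threshold are illustrative."""
    if not report_scores:
        # No evidence of correct behaviour in emulation: do not deploy.
        return False
    average = sum(report_scores) / len(report_scores)
    return average >= threshold
```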
[0062] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for at least one characteristic of the machine learning inference emulator; and receive, from the machine learning inference emulator, a notification of the at least one characteristic of the machine learning inference emulator.
[0063] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for information on at least one execution resource available at the machine learning inference emulator; and receive, from the machine learning inference emulator, a notification of the at least one execution resource available at the machine learning inference emulator.
[0064] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for a status of the request for instantiating inference emulation of the machine learning entity or a status of a machine learning inference emulation job; and receive, from the machine learning inference emulator, a notification of the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job.
[0065] According to an example embodiment of the eighth aspect, the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job is indicative of at least one of the following: that the request for instantiating inference emulation of the machine learning entity is pending, that the machine learning inference emulation job has been triggered, that the request for instantiating inference emulation of the machine learning entity has been suspended, or that the machine learning inference emulation job has been served.
[0066] According to an example embodiment of the eighth aspect, the at least one characteristic of the machine learning inference emulator comprises at least one attribute of a machine learning inference emulator object, wherein the status of the request for instantiating inference emulation of the machine learning entity comprises an attribute of a machine learning inference emulation request object, wherein the status of the machine learning inference emulation job comprises an attribute of a machine learning inference emulation job object, or wherein the information on the at least one execution resource available at the machine learning inference emulator comprises an attribute of the machine learning inference emulator object.
[0067] According to an example embodiment of the eighth aspect, the request for at least one characteristic of the machine learning inference emulator, the request for information on the at least one execution resource available at the machine learning inference emulator, or the request for the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job comprises an attribute read request.

[0068] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for reading at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and receive, from the machine learning inference emulator, a notification of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0069] According to an example embodiment of the eighth aspect, the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises at least one of the following: a number of received, ongoing, or completed requests for instantiating inference emulation at the machine learning inference emulator, a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator, or a status or a completion level of the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0070] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for configuring at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0071] According to an example embodiment of the eighth aspect, at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0072] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for deleting one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0073] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning inference emulator, a notification of the configuration of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, or a notification of the deletion of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
[0074] According to an example embodiment of the eighth aspect, the instructions may be configured to, when executed by the at least one processor, cause the apparatus to: transmit, to the machine learning inference emulator, a request for configuring reporting associated with the request for instantiating inference emulation of the machine learning entity.
[0075] According to an example embodiment of the eighth aspect, the request for configuring reporting associated with the request for instantiating the inference emulation of the machine learning entity comprises a reporting period, and the instructions are configured to, when executed by the at least one processor, cause the apparatus to: receive, from the machine learning inference emulator, a plurality of the machine learning inference emulation reports based on the reporting period.

[0076] According to a ninth aspect, a (non-transitory) computer readable medium is disclosed. The (non-transitory) computer readable medium may comprise program instructions that, when executed by an apparatus, cause the apparatus to perform a method according to the first aspect, or any example thereof.

[0077] According to a tenth aspect, a (non-transitory) computer readable medium is disclosed. The (non-transitory) computer readable medium may comprise program instructions that, when executed by an apparatus, cause the apparatus to perform a method according to the second aspect, or any example thereof.
[0078] According to some aspects, there is provided the subject matter of the independent claims. Some further aspects are defined in the dependent claims. Many of the attendant features will be more readily appreciated as they become better understood by reference to the following description considered in connection with the accompanying drawings.
LIST OF DRAWINGS
[0079] The accompanying drawings, which are included to provide a further understanding of the example embodiments and constitute a part of this specification, illustrate example embodiments and, together with the description, help to explain the example embodiments. In the drawings:

[0080] FIG. 1 illustrates an example of a communication network;

[0081] FIG. 2 illustrates an example of an apparatus configured to practise one or more example embodiments;

[0082] FIG. 3 illustrates an example of use, management, and control of an ML inference emulation process;

[0083] FIG. 4 illustrates examples of implementing an ML inference function and a network emulator;

[0084] FIG. 5 illustrates an example of messaging and operations for ML inference emulation at a communication network;

[0085] FIG. 6 illustrates an example of retrieving information on characteristics, status, or resources of a ML inference emulator;

[0086] FIG. 7 illustrates an example of instantiating and reporting of ML inference emulation jobs;

[0087] FIG. 8 illustrates an example of reading characteristics of ML inference emulation requests or jobs;

[0088] FIG. 9 illustrates an example of modifying characteristics of ML inference emulation requests or jobs;

[0089] FIG. 10 illustrates an example of deleting ML inference emulation requests or jobs;

[0090] FIG. 11 illustrates an example of a ML inference emulation process;

[0091] FIG. 12 illustrates an example of an object diagram associated with a ML inference emulator;

[0092] FIG. 13 illustrates an example of inheritance relations of ML inference emulation;

[0093] FIG. 14 illustrates an example of an information model for ML inference emulation control for an ML inference emulator contained within a ML inference function;

[0094] FIG. 15 illustrates an example of inheritance relations of ML inference emulation for a managed function;

[0095] FIG. 16 illustrates an example of a method for inference emulation; and

[0096] FIG. 17 illustrates an example of a method for requesting inference emulation.
[0097] Like references are used to designate like parts in the accompanying drawings.
DESCRIPTION
[0098] Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. The description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0099] Standardization of AI/ML capabilities in communication networks, such as for example networks defined by the 3rd Generation Partnership Project (3GPP), is ongoing, for example in order to support production of analytics. In many cases, machine learning-based functions may not be trained within the network operator's environment and therefore it may be difficult for the network operator to determine how inference based on those functions will perform in practice. In some cases, applying such an ML function for inference may result in negative consequences that were not anticipated when training the ML function, e.g., if the data used for training did not include data that would enable all scenarios to be accurately modelled. Therefore, prior to deploying the ML model or a ML-based function, for example an ML entity (MLEntity) comprising an ML model (MLModel), into a production system, for example a real communication network, it may be desired to provide the network operator with means to execute inference of the ML model in a controlled environment where the ML model or ML-based function can be evaluated against possible (unanticipated) negative impacts. Such a controlled execution environment may be called a network emulator for ML inference or an ML inference emulator. The process of executing such an ML inference emulator may be referred to as ML inference emulation. The network or its management system may be configured with management services to support ML inference emulation for any MLModel or MLEntity.
[00100] An ML inference emulator may comprise any means for realizing behaviour similar to normal behaviour of a production network or behaviour that mimics the normal behaviour of a production network. Correspondingly, an ML inference emulator may comprise any one of the following: a network simulation environment, a test network, a digital twin of a network, or the real communication network under constrained conditions, e.g., in a certain time period or for a selected set of user equipment (UE).
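The four emulator environments listed above may, purely for illustration, be captured as an enumeration; the identifier names are assumptions of this sketch, not standardized terms.

```python
from enum import Enum

class EmulatorEnvironment(Enum):
    # The four controlled execution environments an ML inference emulator
    # may be realized as, per the paragraph above.
    SIMULATION = "network simulation environment"
    TEST_NETWORK = "test network"
    DIGITAL_TWIN = "digital twin of a network"
    CONSTRAINED_REAL = "real network under constrained conditions"
```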
[00101] A specific ML task may be sequentially executed in different ML inference emulators to elevate its level of trust, for example to ensure that the same behaviour observed in the ML inference emulator is also observed in the constrained real communication network. Inference emulation may comprise running (executing) a specific ML task (ML inference) in any one of the above ML inference emulator environments, for example using specific runtime context, data sets, or operating environment.
[00102] A ML model may be an independent managed entity. In this case, ML model inference emulation may comprise a process of executing the ML model itself within the ML inference emulator. Alternatively, the ML model may be an entity that is not independently managed, but rather an attribute of a managed ML-based function or application. In this case, inference emulation may comprise implementing means and services to enable the actions generated by the ML-based function to be applied in a safe way. The latter process may correspondingly be referred to as ML inference emulation. However, the term ML entity may be used to refer to either an ML model or any entity containing ML capabilities. Example embodiments of the present disclosure provide standard-based means for inference emulation of an ML entity and also the related ML model inference emulation processes.
[00103] After training of an ML entity, validation may be performed to ensure that the training process was completed successfully. Validation may be performed for example by preserving part of the training data set and using it after training to check whether the ML entity has been trained correctly or not. However, even if the ML model were validated during development, performing inference emulation may be beneficial, for example for determining whether the ML entity containing the ML model is working correctly under certain runtime context or using certain inference emulation data set. Validation and inference emulation may be similar on a functional level: for example, both validation and inference emulation may be used to check the ML performance against given context or data to ensure the ML functionality is correct. However, inference emulation may include interaction with third parties, e.g., network operator(s) that use the ML entity or third-party systems that may rely on results provided as output of the ML entity. At least for these reasons it may be beneficial to provide standardized means for enabling multi-vendor interaction among the different systems.
[00104] In general, communication networks may not support inference emulation. For example, there may not be means for a given consumer to request for inference of a specific AI/ML capability to be executed in an emulation environment, means for a given consumer to request a specific ML inference emulator to execute a given AI/ML capability, or means for a managed function to act as an ML inference emulator and to execute AI/ML capabilities in a controlled way with no impact on the production network.
[00105] It may be desired to enable any ML entity to be tested with specific inputs and features that are applicable to a specific use case or deployment environment. The network or its management system may be therefore configured with related capabilities and to provide relevant services to enable the consumer to request inference emulation and receive feedback on the inference emulation of a specific ML entity, or application or function that contains an ML entity. Furthermore, the inference emulation may not be necessarily undertaken by human users, and therefore it may be desired to provide machine implementable interactions in an automated way. Example embodiments of the present disclosure provide means and capabilities for inference emulation of an ML entity or a function containing an ML entity.
[00106] FIG. 1 illustrates an example of a communication network.
Communication network 100 may comprise a device, represented in this example by user equipment (UE) 110, such as for example a smart phone. UE 110 may communicate with one or more access nodes 120, 122, 124, for example 5th generation access nodes (gNB), over wireless radio channel(s). Access nodes may be also referred to as access points or base stations. Communications between UE 110 and access node(s) 120, 122, 124 may be bidirectional and hence any of these entities may be configured to operate as a transmitter and/or a receiver. An access node may be associated with a cell, which may correspond to a geographical area covered by the access node. In the context of handover, an access node of a source cell may be called a source access node and an access node of a target cell may be called a target access node.
[00107] Access nodes 120, 122, 124 may be part of a radio access network (RAN) that enables UE 110 to access various services and functions of core network 130, for example ML entity 132. As discussed above, ML entity 132 may be or comprise an ML model, for example a neural network, a genetic algorithm, a support vector machine, or the like. The ML entity 132 may be configured (e.g., by training) to perform a task. Examples of tasks to be carried out in communication network 100 may include performing, or assisting in performing, target cell selection for a handover, beam selection, enhanced channel estimation, compression of channel state information, positioning, reference signal reduction, channel prediction, or the like. Multiple ML entities 132 may be provided within communication network 100. Various entities, such as for example network functions, management functions, or operators may be configured to use ML entity 132 for performing a task within communication network 100. Such entities may be configured to operate as ML emulation consumer 134 for emulating performance of ML entity 132. Core network 130 may comprise an ML inference emulator 136 configured to emulate inference of AI/ML functions or entities, for example ML entity 132. Even though particular functions, such as for example ML entity 132, ML emulation consumer 134, or ML inference emulator 136 have been illustrated as part of core network 130, one or more of these functions may be provided outside core network 130, for example as separate function(s) communicatively coupled to core network 130, or at an access node. The radio access network, comprising one or more access nodes 120, 122, 124, and core network 130 may be collectively referred to as network 140.
[00108] FIG. 2 illustrates an example of an apparatus configured to practice one or more example embodiments. Apparatus 200 may comprise a network device (e.g., a device configured to implement one or more network functions), a UE, an access node, an access point, base station, a radio network node, or a split portion thereof, or in general a device configured to implement functionality described herein. Apparatus 200 may be configured to implement network function(s) of core network 130, for example ML emulation consumer 134 or ML inference emulator 136. Although apparatus 200 is illustrated as a single device, it is appreciated that, wherever applicable, functions of apparatus 200 may be distributed to a plurality of devices.
[00109] Apparatus 200 may comprise at least one processor 202. The at least one processor 202 may comprise, for example, one or more of various processing devices, such as for example a co-processor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
[00110] Apparatus 200 may further comprise at least one memory 204. The memory 204 may be configured to store, for example, computer program code or the like, for example operating system software and application software. The memory 204 may comprise one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination thereof. For example, the memory may be embodied as magnetic storage devices (such as hard disk drives, etc.), optical magnetic storage devices, or semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.). Memory 204 is provided as an example of a (non-transitory) computer readable medium. The term "non-transitory," as used herein, is a limitation of the medium itself (i.e., tangible, not a signal) as opposed to a limitation on data storage persistency (e.g., RAM vs. ROM).
[00111] Apparatus 200 may comprise a communication interface 208 configured to enable apparatus 200 to transmit and/or receive information. Communication interface 208 may comprise an internal or external communication interface, such as for example a radio interface between UE 110 and an access node 120, 122, 124. Alternatively, or additionally, communication interface 208 may comprise an interface of core network 130, such as for example the service based interface (SBI) bus of the 5G core network. Communication interface 208 may comprise a transmitter (TX), for example a wireless radio transmitter such as a 4G or 5G radio transmitter, a Wi-Fi radio transmitter, or the like, configured to transmit radio signals. Communication interface 208 may comprise a receiver, for example a wireless radio receiver such as a 4G or 5G radio receiver, a Wi-Fi radio receiver, or the like.
Alternatively, or additionally, the transmitter or receiver may be configured to transmit/receive signals over wired media, such as for example an optical fiber or a cable. The transmitter and receiver may be combined as a transceiver. The transmitter or the receiver may be coupled to at least one antenna to transmit/receive radio signals. The transmitter or receiver may comprise analog or digital circuitry, such as for example radio frequency circuitry and baseband circuitry. Functionality of the transmitter or receiver may be partially implemented by processor 202 and program code 206. For example, processor 202 may be configured to handle a subset of operations (e.g. modulation or forward error correction coding) of the transmitter or receiver, to provide a partially software-based transmitter or receiver apparatus.
[00112] Apparatus 200 may further comprise other components and/or functions such as for example a user interface (not shown) comprising at least one input device and/or at least one output device. The input device may take various forms such as a keyboard, a touch screen, or one or more embedded control buttons, for example to enable a human operator to control apparatus 200. The output device may for example comprise a display, a speaker, or the like.
[00113] When apparatus 200 is configured to implement some functionality, one or more components of apparatus 200, such as for example the at least one processor 202 and/or the at least one memory 204, may be configured to implement this functionality. Furthermore, when the at least one processor 202 is configured to implement some functionality, this functionality may be implemented using program code 206 comprised, for example, in the at least one memory 204.
[00114] The functionality described herein may be performed, at least in part, by one or more computer program product components such as software components. According to an example embodiment, apparatus 200 comprises a processor or processor circuitry, such as for example a microcontroller, configured by the program code 206, when executed, to execute the embodiments of the operations and functionality described herein. Program code 206 is provided as an example of instructions which, when executed by the at least one processor 202, cause performance of apparatus 200.
[00115] A ML model, for example a neural network, may be implemented by software. For example, parameters (e.g. weights) of a neural network may be stored at the at least one memory 204 and structured such that flow of input data through layers of the neural network is implemented, when executing associated program instructions. Similarly, transmission of data, for example over a radio interface, may be controlled by software.
[00116] Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), graphics processing units (GPUs), or the like.
[00117] Apparatus 200 may be configured to perform or cause performance of any aspect of the method(s) described herein. Further, a computer program or a computer program product may comprise instructions for causing, when executed by apparatus 200, apparatus 200 to perform any aspect of the method(s) described herein. Further, apparatus 200 may comprise means for performing any aspect of the method(s) described herein. In one example, the means comprises the at least one processor 202, the at least one memory 204 including program code 206 (instructions) configured to, when executed by the at least one processor 202, cause apparatus 200 to perform the method(s). In general, computer program instructions may be executed on means providing generic processing functions. Such means may be embedded for example in a personal computer, a smart phone, a network device, or the like. The method(s) may be thus computer-implemented, for example based on algorithm(s) executable by the generic processing functions, an example of which is the at least one processor 202. The means may comprise transmission or reception means, for example one or more radio transmitters or receivers, which may be coupled or be configured to be coupled to one or more antennas, or transmitter(s) or receiver(s) of a wired communication interface.
[00118] FIG. 3 illustrates an example of use, management, and control of a ML inference emulation process. Example embodiments of the present disclosure provide a ML inference network emulator (ML inference emulator 136) and related services for enabling the ML inference emulation process. ML inference emulator 136 may be a network function that enables access to a digital twin (or replica) of at least part of communication network 100, e.g., a simulation environment, or access to an instantiation of a controlled scope or part of the real network. The example embodiments also provide functionality and related services for managing and controlling the ML inference emulation process and/or related requests associated with inference emulation, for example for a specific ML entity 132.
[00119] ML inference emulator 136 may be provided as a management service (MnS). ML inference emulator 136 may for example operate as an MnS producer.
ML emulation consumer 134 may operate as a MnS consumer. ML emulation consumer 134 may be also referred to as 'consumer'. It is however noted that the term consumer may refer to any type of function or entity configured to consume an ML inference emulation service. In a service-oriented approach, which may be described as interaction(s) between an MnS producer and consumer, an MnS consumer may be configured to request services from an MnS producer.
[00120] ML inference emulator 136 may transmit to ML emulation consumer 134 (MnS consumer) information on ML models, functions, or resources available at ML inference emulator 136. ML emulation consumer 134 may be an authorized consumer that is authorized to access services of ML inference emulator 136. Any suitable method for authorization may be used. ML inference emulator 136 may be configured to inform ML emulation consumer 134 about the characteristics of ML inference emulator 136, including for example its configurable aspects like supported AI/ML capabilities, ML entities, or the like.
[00121] ML emulation consumer 134 may be configured to instantiate or request ML inference emulation. ML inference emulation may be instantiated or requested with specific data, data characteristics, emulation environment features, or the like.
For example, ML emulation consumer 134 may request to execute ML inference emulation for a specific ML entity 132. Accordingly, ML inference emulator 136 may receive a request to execute ML inference emulation of ML entity 132. The request may comprise an ML inference emulation request (MLInferenceEmulationRequest). In response to receiving this request, ML inference emulator 136 may instantiate an inference emulation process, also referred to as an ML inference emulation job (MLInferenceEmulationJob). The ML inference emulation job may be associated with the received ML inference emulation request and the specified ML entity. The request for ML inference emulation may be stated with, e.g., comprise, the applicable data required for executing the ML entity at ML inference emulator 136. ML inference emulation requests and ML inference emulation jobs may be modelled as objects belonging to an associated information object class (IOC).
[00122] An authorized consumer may be therefore enabled to create an instance of the ML inference emulation process (MLInferenceEmulationJob) at ML inference emulator 136. Accordingly, ML inference emulator 136 may receive a request to instantiate a ML inference emulation job for ML entity 132. The request may include the applicable data required for executing ML entity 132 at ML inference emulator 136 and/or any applicable inference emulation features. The request may comprise ML entity 132, or a ML model thereof, for which ML inference emulation is requested. Alternatively, the request may just identify ML entity 132, which may be already available at ML inference emulator 136 or accessible by ML inference emulator 136. For example, in response to the request, ML inference emulator 136 may retrieve ML entity 132 or the associated ML model from memory of ML inference emulator 136 or from another function or device, or request another device or function to execute inference of ML entity 132. A function, such as for example ML inference emulator 136, may be configured to execute ML inference emulation of a specific ML entity using the specified data or data with specifically stated characteristics and inference emulation features.
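The request and job objects described above may be sketched in software as simple data classes. The following is a minimal illustrative sketch only: the attribute names, the status values, and the job identifier format are assumptions for exposition, not the normative IOC definitions.

```python
from dataclasses import dataclass, field

@dataclass
class MLInferenceEmulationRequest:
    # Illustrative attributes; not the normative IOC attribute names.
    request_id: str
    ml_entity_id: str                                # identifies the ML entity to be emulated
    input_data: dict = field(default_factory=dict)   # applicable data for executing the ML entity
    status: str = "PENDING"                          # e.g. PENDING or SUSPENDED

@dataclass
class MLInferenceEmulationJob:
    job_id: str
    request: MLInferenceEmulationRequest             # job is associated with the received request
    status: str = "TRIGGERED"                        # e.g. TRIGGERED or SERVED

def instantiate_job(request: MLInferenceEmulationRequest) -> MLInferenceEmulationJob:
    """Emulator-side handling: create a job bound to the received request."""
    return MLInferenceEmulationJob(job_id=f"job-{request.request_id}", request=request)
```

For example, receiving a request for ML entity "mle-132" would yield a job associated with that request and entity.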
[00123] ML inference emulator 136 may transmit to ML emulation consumer 134 information about ML entities under execution at ML inference emulator 136. ML inference emulator 136 may transmit to ML emulation consumer 134 information about available execution resources on ML inference emulator 136, optionally with information on when more resources are expected to be available for any subsequent executions of ML inference at ML inference emulator 136. [00124] ML emulation consumer 134 may be configured to control ML inference emulation requests or processes. For example, ML emulation consumer 134, or in general an authorized consumer (e.g., an operator or the function/entity that generated the ML inference emulation request) may be configured to manage the ML inference emulation request, e.g., to suspend, re-activate, or cancel the ML inference emulation request; or to adjust characteristics of the ML inference emulation. ML emulation consumer 134 or an authorized consumer may be configured to manage or control a specific ML inference emulation job, e.g., to start, suspend, or restart the ML inference emulation, or to adjust inference emulation conditions or characteristics.
[00125] ML inference emulator 136 may be configured to provide reporting on ML inference emulation. For example, ML emulation consumer 134 or an authorized consumer may be configured to request ML inference reporting from ML inference emulator 136. ML inference emulator 136 may be configured to provide a report on a specific ML inference emulator request, a specific ML inference emulation job, or outcomes of any such ML inference emulator request or ML inference emulation job. ML emulation consumer 134 or an authorized consumer (e.g., the function/entity that generated the ML inference emulator request) may be configured to define reporting characteristics, for example associated with a specific ML inference emulation job. ML emulation consumer 134 or any authorized consumer (e.g., an operator, managed network function, or management network function) may be configured to request ML inference emulator 136 to provide reporting on the state of a specific ML inference emulator. This request may be referred to as an ML inference emulator status request (MLInferenceEmulatorStatusRequest). ML emulation consumer 134 or any authorized consumer may be configured to receive from ML inference emulator 136 information about the status of running tasks and/or resource consumption in ML inference emulator 136.
[00126] Accordingly, the following features are disclosed: 1) ML inference emulator 136 may comprise an executable digital twin of communication network 100. The digital twin may be modelled as a managed function contained in either any network related ManagedFunction, ManagementFunction, or a subnetwork. A subnetwork may comprise a logical entity that groups together a set of related network functions or network elements.
ML inference emulator 136 may contain or be associated with appropriate properties and modules for accomplishing ML inference emulation, including for example one or more of the following:
- a list of ML entities, for example entities under emulated execution or entities considered for execution,
- a list of ML inference emulation jobs, or
- a list of ML inference emulator report instances associated with one or more ML inference emulator requests or ML inference emulation jobs.
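The emulator properties listed above might be held, for illustration, in a simple container that exposes them as readable attributes. The class and attribute names below are assumptions chosen for exposition, not a normative model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLInferenceEmulator:
    # Hypothetical container mirroring the three property lists above.
    ml_entities: List[str] = field(default_factory=list)       # entities under or considered for execution
    emulation_jobs: List[str] = field(default_factory=list)    # ML inference emulation job identifiers
    report_instances: List[str] = field(default_factory=list)  # reports tied to requests/jobs

    def attributes(self) -> dict:
        # Expose the lists as readable attributes of the emulator object.
        return {
            "mLEntities": list(self.ml_entities),
            "mLInferenceEmulationJobs": list(self.emulation_jobs),
            "mLInferenceEmulatorReports": list(self.report_instances),
        }
```

A consumer reading the emulator object would then see, for example, which entities are currently under emulated execution.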
2) ML inference emulator 136 may be configured to instantiate an ML inference emulation job, for example based on job creation requests or instructions received from managed function(s), management function(s), or human operator(s). ML inference emulator 136 may be configured with interfaces for instantiating the ML inference emulation job. ML inference emulator 136 may be configured to execute an ML entity using specified input data, for example input data with specific characteristics or specific expected runtime context (expectedRuntimeContext). A runtime context may for example include information on what kind of network functions the AI/ML solution may be applied to, time intervals at which the AI/ML model may activate decisions, or in general any parameters associated with inference of a ML entity or model at the real network.
3) ML inference emulator 136 may be configured to enable an operator or a management function to configure and manage one or more ML inference emulation jobs. To enable this, ML inference emulator 136 may be configured with control interface(s), for example towards the operator or management function (e.g., ML emulation consumer 134).
4) An ML inference emulation process within ML inference emulator 136 may comprise a pipeline of processes that include one or more of the following: getting the input data fitted to a particular use case and ML entity, pre-processing the input data (e.g., to format it according to input dimension(s) of the ML entity), designing or selecting an execution plan (e.g., by batching the data for the executions), running the executions, or compiling data for the inference emulation reporting. Examples of use cases include handover optimization, interference optimization, anomaly detection, or the like. These use cases may be associated with different data needs, for example different dimensions of input data for a ML entity.
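The pipeline stages named above may be sketched as follows. The stage implementations are placeholder assumptions purely for illustration: the sample data, zero-padding to the input dimension, the fixed batch size, and the use of a sum as a stand-in for model inference are all invented for this sketch.

```python
def get_input_data(use_case: str) -> list:
    # Placeholder per-use-case data; real data would come from the network or digital twin.
    data = {"handover": [1.0, 2.0, 3.0], "interference": [0.5, 1.5]}
    return data[use_case]

def preprocess(samples: list, input_dim: int) -> list:
    # Format samples to the ML entity's input dimension (zero-pad the last chunk).
    return [samples[i:i + input_dim] + [0.0] * (input_dim - len(samples[i:i + input_dim]))
            for i in range(0, len(samples), input_dim)]

def plan_execution(samples: list, batch_size: int = 2) -> list:
    # Execution plan: batch the preprocessed samples for the executions.
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

def run_emulation(use_case: str, input_dim: int) -> dict:
    # Full pipeline: get data, preprocess, plan, execute, compile report data.
    batches = plan_execution(preprocess(get_input_data(use_case), input_dim))
    outputs = [sum(sample) for batch in batches for sample in batch]  # stand-in inference
    return {"useCase": use_case, "outputs": outputs}
```

The returned dictionary corresponds to the final stage, compiling data for the inference emulation reporting.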
5) ML inference emulator 136 may be configured to enable, for example via the control interface(s), an operator or a management function to read information on the status of ML inference emulator 136, for example data on resource consumption and/or ML inference emulation jobs being executed at ML inference emulator 136.
[00127] FIG. 4 illustrates examples of implementing an ML inference function and a network emulator. ML inference emulator 136 may be implemented as a function that is concurrently an ML inference function 402 and an ML inference emulator of the real network, or as a network emulation function 404 (network emulator) capable of ML inference. ML inference emulator 136 may be therefore implemented for example as: (a) A capability of the ML inference function 402 (AI/ML function). In this case, ML inference function 402 may comprise network emulator 404. ML inference function 402 may therefore have the capability of emulating the real network conditions and to execute the AI/ML inference on its emulation environment. (b) A capability of a network emulator 404. In this case, network emulator 404 may comprise ML inference function 402. Network emulator 404 may therefore have the capability to execute ML inference.
In either case, the capabilities may be exposed towards consumers via the same ML inference emulation MnS producer, for example ML inference emulator 136. [00128] FIG. 5 illustrates an example of messaging and operations for ML inference emulation at a communication network. Further examples of the messages and operations are provided in FIG. 6 to FIG. 11.
[00129] At operation 501, ML inference emulation consumer 134, also referred to as a "consumer", may transmit, to ML inference emulator 136, a request for characteristic(s) of ML inference emulator 136, a request for a status of ML inference emulation request(s) or ML inference emulation job(s) existing at ML inference emulator 136, or a request for information on execution resource(s) available at ML inference emulator 136. Any of these requests may be transmitted as a separate message, combined within a single request message, or embedded in other messages transmitted to ML inference emulator 136. The characteristic(s), status, or available resources of ML inference emulator 136 may be provided as attribute(s) of a ML inference emulator object. The request may comprise an attribute read request of an ML inference emulator object.
[00130] Characteristics of ML inference emulator 136 may include for example a type of device where ML inference emulator 136 is instantiated (e.g., an access node such as for example a gNB, a remote cloud, or a central cloud) or a type of energy source available for ML inference emulator 136 (e.g., renewable or not).
[00131] The status of the request for instantiating inference emulation of the ML entity may indicate a processing status or phase at ML inference emulator 136. The status may indicate that the request is pending, for example in a state where the request has been received but no action (e.g., instantiation of a ML inference emulation job) has been taken based on the request. The status of the request for instantiating inference emulation of the ML entity may indicate that the request for instantiating inference emulation of the ML entity has been suspended. The request may be suspended for example by ML inference emulator 136 or another authorized consumer, e.g., the network operator. In the suspended state, inference emulation of the associated ML entity may be at least temporarily terminated.
[00132] The status of the ML inference emulation job may indicate that the ML inference emulation job has been triggered. State 'triggered' may refer to a state where the ML inference emulation job has been instantiated and is running, for example inference is being emulated for the associated ML entity. The status of the ML inference emulation job may indicate that the machine learning inference emulation job has been served. State 'served' may correspond to a state where the associated ML inference emulation job, which may be associated with the specific ML inference emulation request, has been completed, for example when an emulated output of the ML entity has been obtained. [00133] At operation 502, ML inference emulator 136 may transmit a notification of the characteristic(s), status(es), or available execution resource(s) to consumer 134. Based on the notification, consumer 134 may determine whether to transmit a request for instantiating ML inference emulation for an ML entity (cf. operation 503). For example, consumer 134 may determine to request instantiation of ML inference emulation, if there are enough execution resources available or there is already an instantiated job for the same ML entity.
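The status values described above (pending and suspended for requests, triggered and served for jobs) may be sketched as a small state helper. Only those four state names are taken from the description; the function shape and string encoding are assumptions for illustration.

```python
# Request statuses: received but not yet acted on, or at least temporarily terminated.
REQUEST_STATES = {"PENDING", "SUSPENDED"}
# Job statuses: instantiated and running, or completed.
JOB_STATES = {"TRIGGERED", "SERVED"}

def next_job_status(current: str, emulated_output_ready: bool) -> str:
    """A triggered job becomes served once the emulated output has been obtained."""
    if current not in JOB_STATES:
        raise ValueError(f"unknown job status: {current}")
    if current == "TRIGGERED" and emulated_output_ready:
        return "SERVED"
    return current
```

Under this sketch, a job stays 'triggered' while inference is being emulated and transitions to 'served' on completion.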
[00134] At operation 503, consumer 134 may transmit to ML inference emulator 136 a request for instantiating inference emulation of a ML entity of communication network 100. The request may comprise an identifier of the ML entity for which inference emulation is requested. In response to receiving the request, ML inference emulator 136 may determine whether it already has a ML inference emulation job for the identified ML entity. If no ML inference emulation job exists at ML inference emulator 136 for the ML entity, ML inference emulator 136 may cause instantiation of a ML inference emulation job for the ML entity at operation 504. ML inference emulator 136 may cause the instantiation of the ML inference emulation job by instantiating the job itself, or by requesting another function or device, such as for example a ML inference managed function (MnF), to instantiate the job, for example by transmitting a request to create a ML inference request or ML inference emulation job to the other function or device. Alternatively, ML inference emulator 136 may determine that a ML inference emulation job already exists for the ML entity at ML inference emulator 136.
[00135] At operation 505, ML inference emulator 136 may cause execution of emulated inference of the ML entity, either in association with an existing ML inference emulation job of the ML entity or the just instantiated ML inference emulation job for the ML entity, as described with reference to operation 504. ML inference emulator 136 may cause the inference by executing the inference by itself or by requesting another function or device, such as for example the ML inference MnF, to execute the inference, for example by transmitting the request to create a ML inference request or ML inference emulation job to the other function or device.
In either case, execution of the inference may be based on a simulation model (e.g., a digital twin) or a portion (e.g., data associated with a selected set of UEs during a certain time period) of communication network 100. The simulation model may be embedded within ML inference emulator 136, the ML inference MnF, or another network function or device that may be configured to execute the simulation model upon request from either ML inference emulator 136 or the ML inference MnF. [00136] ML inference emulator 136 may cause collection of input data for the inference of the ML entity. ML inference emulator 136 may cause collection of the input data by collecting the input data by itself, or by requesting another function or device, such as for example the ML inference MnF, to collect the input data, for example by transmitting the request to create a ML inference request or ML inference emulation job to the other function or device. Collection of the input data may be based on the simulation model or a portion of the communication network. For example, ML inference emulator 136 or the ML inference MnF may perform simulation of communication network 100 to obtain the input data, or collect the input data from the portion of the real network. Inference of the ML entity may be then performed based on the collected input data to obtain the emulated output of the ML entity.
[00137] In response to completion of the inference, ML inference emulator 136 may prepare and transmit a ML inference emulation report (cf. operation 511).
However, other operations, such as for example one or more of operations 506 to 510 may be performed before transmission of the ML inference emulation report.
In general, these or other operations of FIG. 5 may be performed in any suitable order and some operations (e.g., operations 506, 507, 508, 509, and/or 510) may not be present in some example embodiments.
[00138] At operation 506, consumer 134 may transmit, to ML inference emulator 136, a request for reading or configuring characteristics of ML inference emulation request(s) or job(s), or deleting ML inference emulation request(s) or job(s). A request for reading characteristics of ML inference emulation request(s) or job(s) may comprise a read request, for example an attribute read request, of ML inference emulation request or job object(s). The characteristic(s) may comprise attribute(s) of a ML inference emulator request object or ML inference emulator job object.
[00139] The characteristic(s) of the request(s) may for example comprise a number of received, ongoing, or completed requests for instantiating inference emulation at ML inference emulator 136. The characteristic(s) of the request(s) may comprise a priority of (or among) the request(s) for instantiating inference emulation at ML inference emulator 136. The characteristics of the job(s) may comprise a status or a completion level of job(s) instantiated at ML inference emulator 136. The status of a request may be for example indicative of a request being pending or suspended. The status of a job may be for example indicative of the job having been triggered or served, as described above. The completion level may be indicated for example as a percentage.
[00140] At operation 507, ML inference emulator 136 may configure or delete ML inference emulation request(s) or job(s), for example in response to receiving a request to configure or delete the request(s) or job(s) from consumer 134 at operation 506. This enables consumer 134 to control execution of requests or jobs.
Consumer 134 may be authorized to delete only its own requests or jobs, e.g. requests transmitted by consumer 134 or jobs instantiated based on requests from consumer 134.
[00141] At operation 508, ML inference emulator 136 may transmit to consumer 134 a notification of the characteristic(s) of the request(s) or job(s), their configuration, or deletion of the request(s) or job(s). This enables consumer 134 to be informed about the status of ML inference emulator 136. A notification about the configuration of the characteristic(s) of the request(s) or job(s) may comprise a notification that particular characteristic(s) have been updated or their updated values.
[00142] At operation 509, consumer 134 may transmit, to ML inference emulator 136, a request for configuring reporting by ML inference emulator 136, for example for a request for inference emulation of a certain ML entity. This reporting configuration request may comprise an identifier of an associated ML inference emulation request, job, or ML entity, for which the reporting is to be configured. The reporting configuration request may comprise one or more reporting parameters, for example a reporting period, to be configured.
[00143] At operation 510, ML inference emulator 136 may configure reporting associated with a ML inference emulation request or job, or a ML entity. This may be in response to receiving the reporting configuration request at operation 509. For example, if the reporting configuration request comprises a reporting period, ML inference emulator 136 may determine to transmit to consumer 134 a plurality of ML inference emulation reports based on the reporting period.
[00144] At operation 511, ML inference emulator 136 may transmit to consumer 134 a ML inference emulation report. The report may be generated by ML inference emulator 136 based on the emulated output of the ML entity obtained at operation 505. The report may for example include the emulated output as such, or indicate a result of a comparison of the emulated output with reference data, such as for example a desired output of the ML entity. If the reporting configuration request comprises a reporting period, ML inference emulator 136 may transmit to consumer 134 a plurality of ML inference emulation reports based on the reporting period. ML inference emulator 136 may transmit the ML inference emulation report for enabling consumer 134 to determine validity of the ML entity and/or to determine whether to deploy, enable, or disable the ML entity at communication network 100.
[00145] At operation 512, consumer 134 may determine validity (e.g., valid, non-valid, or a degree of validity) of the ML entity based on the ML inference emulation report. Consumer 134 may determine validity of the ML entity also based on an indication of validity included in the ML inference emulation report. Consumer 134 may determine to deploy the ML entity, or keep it deployed, for inference at communication network 100, in response to determining that the ML entity is valid or satisfies a condition for its degree of validity. Consumer 134 may determine not to deploy the ML entity, or to disable it, for inference at communication network 100, in response to determining that the ML entity is non-valid or does not satisfy the condition for its degree of validity.
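The consumer-side validity decision of operation 512 may be sketched as follows. The metric (fraction of emulated outputs matching the reference data) and the deployment threshold are assumptions invented for this illustration; the description only states that a degree of validity is compared against a condition.

```python
def degree_of_validity(emulated: list, reference: list) -> float:
    # Fraction of emulated outputs that match the reference (desired) outputs.
    # The tolerance-based match is an illustrative assumption.
    matches = sum(1 for e, r in zip(emulated, reference) if abs(e - r) < 1e-6)
    return matches / len(reference)

def decide_deployment(emulated: list, reference: list, threshold: float = 0.9) -> bool:
    # Deploy (or keep deployed) only if the degree of validity satisfies the condition.
    return degree_of_validity(emulated, reference) >= threshold
```

A consumer might thus deploy the ML entity only when the emulated outputs reproduce the desired outputs closely enough.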
[00146] FIG. 6 illustrates an example of retrieving information on characteristics, status, or resources of a ML inference emulator. To support means for consumers, for example consumer 134, to be informed about ML inference emulator 136, different features of ML inference emulator 136 (e.g. the characteristics of ML inference emulator 136, the ML entities under execution at ML inference emulator 136, or resources on the ML inference emulator) may be modelled as attributes on an object representing ML inference emulator 136 (MLInferenceEmulator object). Consumer 134 may be configured to read attributes of the modelled object or may request for information on the attributes to be delivered. FIG. 6 illustrates three alternatives (alt): retrieving information on characteristics of ML inference emulator 136, retrieving information on ML entities, or retrieving information on available resources of ML inference emulator 136. Each of the three alternatives may include two (sub)alternatives for retrieving the relevant information using a read operation or a generic list or request operation.
[00147] At operation 601, consumer 134 may transmit an attribute read request for the MLInferenceEmulator object: e.g. readMOIAttributes(MLInferenceEmulator). Field MLInferenceEmulator may comprise an identifier of the ML inference emulator object. Abbreviation 'MOI' may refer to a managed object instance.
[00148] At operation 602, ML inference emulator 136 may transmit a notification, e.g. notify(attributes), comprising the requested attributes.
[00149] At operation 603, consumer 134 may transmit a list or request operation for the attributes of ML inference emulator 136, e.g. list(MLInferenceEmulator.*). [00150] At operation 604, ML inference emulator 136 may transmit a notification comprising the requested data, e.g. notify(MLInferenceEmulator.*). Operations 601 and 602 may belong to a first (sub)alternative of the first alternative. Operations 603 and 604 may belong to a second (sub)alternative of the first alternative.
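The two retrieval (sub)alternatives, an attribute read on a managed object instance versus a generic list operation, may be sketched as below. The in-memory object store, its contents, and the function names are assumptions for illustration only, not the actual management service operations.

```python
from typing import Optional

# Hypothetical store of managed object instances (MOIs), keyed by object identifier.
MOI_STORE = {
    "MLInferenceEmulator": {
        "characteristics": {"deviceType": "gNB", "energySource": "renewable"},
        "resources": {"cpuFree": 4},
    },
}

def read_moi_attributes(moi_id: str, attribute: Optional[str] = None) -> dict:
    # Attribute read request: return one attribute, or all if none is named.
    moi = MOI_STORE[moi_id]
    return dict(moi) if attribute is None else {attribute: moi[attribute]}

def list_operation(pattern: str) -> dict:
    # Generic list/request operation: "MLInferenceEmulator.*" selects all attributes.
    moi_id = pattern.split(".")[0]
    return dict(MOI_STORE[moi_id])
```

Either call yields the same data; the two shapes mirror the read and list (sub)alternatives of FIG. 6.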
[00151] At operation 605, consumer 134 may transmit an attribute read request to request status of a ML entity: e.g. readMOIAttributes(MLEntity). Field MLEntity may comprise an identifier of the ML entity for which the attributes are requested.
[00152] At operation 606, ML inference emulator 136 may transmit a notification, e.g. notify(MLEntityStatus), comprising the requested status attribute(s).
[00153] At operation 607, consumer 134 may transmit a list or request operation for the attributes of the ML entity, e.g. list(MLEntity). [00154] At operation 608, ML inference emulator 136 may transmit a notification comprising the requested data, for example as a list comprising statuses of the ML entities associated with the list or request operation, e.g., notifyList(MLEntitiesStatus). Operations 605 and 606 may belong to a first (sub)alternative of the second alternative. Operations 607 and 608 may belong to a second (sub)alternative of the second alternative.
[00155] At operation 609, consumer 134 may transmit an attribute read request to request information about resources available at ML inference emulator 136, e.g. readMOIAttributes(MLInferenceEmulator.resources).
[00156] At operation 610, ML inference emulator 136 may transmit a notification, e.g. notify(MLInferenceEmulator.resources), comprising the requested attribute(s) indicative of the available resources.
[00157] At operation 611, consumer 134 may transmit a list or request operation for the information on the available resources, e.g. by message list(MLInferenceEmulator.resources).
[00158] At operation 612, ML inference emulator 136 may transmit a notification comprising the requested data, for example as a list comprising the resources available at ML inference emulator 136, e.g., by message notifyList(MLInferenceEmulator.resources). Operations 609 and 610 may belong to a first (sub)alternative of the third alternative. Operations 611 and 612 may belong to a second (sub)alternative of the third alternative. [00159] FIG. 7 illustrates an example of instantiating and reporting of ML inference emulation jobs. ML inference emulator 136 may instantiate an ML inference emulation job based on requests from consumers to create an ML inference emulation request or to create an ML inference emulation job. Requests for ML inference emulation may be transmitted/received for example using a ML inference emulator provisioning management service, implemented for example by CRUD (Create, Read, Update, Delete) operations on the respective MLInferenceEmulationRequest or MLInferenceEmulationJob objects.
[00160] At operation 701, consumer 134 may transmit, to ML inference emulator 136, a request for inference emulation, for example for instantiating a MLInferenceEmulationRequest or MLInferenceEmulationJob at ML inference emulator 136. The request may comprise an indication of the associated ML entity, e.g., an identifier of the specific ML entity that consumer 134 requests to be validated. The request may alternatively state the identifier of an ML entity that contains the ML entity which the consumer wishes to be validated. The request may be transmitted for example as a create request createMOI(MLInferenceEmulationRequest, MLEntity MD1, ...) indicating that it is a request to create a ML inference emulation request for the ML entity with identifier MD1. [00161] At operation 702, ML inference emulator 136 may transmit to consumer 134 a notification indicative of the request of operation 701 having been received. Operations 701 and 702 may be performed for instantiating a ML inference emulation request. Alternatively, a ML inference emulation job may be instantiated directly (cf. operation 703).
[00162] At operation 703, consumer 134 may transmit to ML inference emulator 136 a request to instantiate a ML inference emulation job, for example as a create request createMOI(MLInferenceEmulationJob, MLEntity MD1, ...) indicating that it is a request to create a ML inference emulation job for the ML entity with identifier MD1. [00163] At operation 704, ML inference emulator 136 may determine whether an appropriate ML inference emulation job already exists.
[00164] At operation 705, ML inference emulator 136 may instantiate a ML inference emulation job, in this example as a MLInferenceEmulationJob object. If a request is received to create a MLInferenceEmulationRequest, ML inference emulator 136 may instantiate the MLInferenceEmulationRequest and then match the MLInferenceEmulationRequest to an existing MLInferenceEmulationJob. Alternatively, ML inference emulator 136 may instantiate a new MLInferenceEmulationJob, for example if no appropriate MLInferenceEmulationJob exists (e.g. for the requested ML entity) at ML inference emulator 136. [00165] At operation 706, ML inference emulator 136 may notify consumer 134 (who initiated the request) about the action taken for the request, for example that a MLInferenceEmulationJob has been instantiated for the MLInferenceEmulationRequest.
[00166] At operation 707, ML inference emulator 136 may notify consumer 134 about the action taken for the request, in this case for example that requirements of the request have been added to an existing MLInferenceEmulationJob, for example a job with identifier JB1. Operation 707 may be an alternative to operation 706. Operations 705 and 706 may be performed if no fitting MLInferenceEmulationJob is available. Operation 707 may be performed if ML inference emulator 136 determined to add the requirements to an existing MLInferenceEmulationJob.
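The matching logic of operations 704 to 707 can be sketched as follows. The class, function, and identifier names are illustrative assumptions; the sketch only shows the branch between reusing a fitting job and instantiating a new one.

```python
# Illustrative sketch of operations 704-707: match a new
# MLInferenceEmulationRequest against existing jobs, either appending
# its requirements to a fitting job or instantiating a new one.

class EmulationJob:
    def __init__(self, job_id, entity_id):
        self.job_id = job_id
        self.entity_id = entity_id
        self.requirements = []

def handle_request(jobs, entity_id, requirements, next_id="JB1"):
    # Operation 704: check whether an appropriate job already exists.
    for job in jobs:
        if job.entity_id == entity_id:
            # Operation 707: add requirements to the existing job.
            job.requirements.append(requirements)
            return job, "requirements added to existing job"
    # Operations 705/706: instantiate a new MLInferenceEmulationJob.
    job = EmulationJob(next_id, entity_id)
    job.requirements.append(requirements)
    jobs.append(job)
    return job, "new job instantiated"

jobs = []
job, action = handle_request(jobs, "MD1", {"reportingPeriod": 60})
```

A second request for the same ML entity would then take the operation-707 branch and only extend the existing job's requirements.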
[00167] Operations 708, 709, and 710 may comprise inference of the identified ML entity (MD1). For a given MLInferenceEmulationRequest, ML inference emulator 136 may comprise or execute a function with an executable simulation model (e.g., a digital twin) that instantiates and triggers an MLInferenceEmulationJob as a process, through which the specific inference for the specific ML entity is to be executed on the simulation model.
[00168] At operation 708, ML inference emulator 136 may collect inference data. ML inference emulator 136 may for example identify and obtain the required data, for example as specified in the MLInferenceEmulationRequest or as identified as the applicable data of the ML entity, for example by metadata associated with the ML entity.
[00169] At operation 709, ML inference emulator 136 may perform inference of the ML entity (MD1). Inference may be performed with the data collected at operation 708. Operation 709 may comprise determining an execution plan and executing the inference to realize the outcomes, e.g., the emulated output of ML entity MD1.
[00170] At operation 710, ML inference emulator 136 may collect inference report data. The inference report data may for example comprise the emulated output of ML entity MD1 or data derived based on the output, for example any suitable performance indicator or validity indicator of ML entity MD1. Collecting the report data may comprise compiling reports on the inference and/or on the inference emulation process. This data may then be shared with consumer 134 as part of a ML inference emulation report.
[00171] At operation 711, ML inference emulator 136 may transmit a ML inference emulation report to consumer 134, for example as a notification Notify(MLInferenceEmulationReport).
Subsequent to instantiating the inference emulation process (cf. operation 705) or after completing the inference emulation (cf. operation 709), ML inference emulator 136, which may be embodied as a function, may configure reporting for the corresponding MLInferenceEmulationRequest. If there exists a MLInferenceEmulationJob instance (object) with the same characteristics as those stated in the MLInferenceEmulationRequest, ML inference emulator 136 may append the new reporting requirements on to the existing MLInferenceEmulationJob instance. Otherwise, the requirements as defined in the MLInferenceEmulationRequest may be added to the newly instantiated MLInferenceEmulationJob. Subsequent to a completed MLInferenceEmulationJob, ML inference emulator 136 may ensure that consumer 134 gets the outcomes of the inference emulation process, for example by means of the notify process of operation 711.
[00172] FIG. 8 illustrates an example of reading characteristics of ML inference emulation requests or jobs. ML inference emulator 136 may be configured with a control interface to enable consumer 134 (e.g., network operator) to configure and manage one or more MLInferenceEmulationRequests and/or MLInferenceEmulationJobs. Operations described with reference to FIG. 8 to FIG. 10 may be preceded by ML inference emulation request and/or job instantiation, for example as described above with reference to FIG. 6 or FIG. 7. [00173] At operation 801, consumer 134 may read the characteristics of submitted MLInferenceEmulationRequests or the instantiated MLInferenceEmulationJobs. This may be achieved using for example a read request (e.g. readMOI), for example by message readMOI(List(MLInferenceEmulationRequests, MLInferenceEmulationJobs)), which may be used to request ML inference emulator 136 to list the requests and jobs instantiated at ML inference emulator 136. For example, consumer 134 may be configured to read one or more of the following: - the number of submitted, ongoing or completed MLInferenceEmulationRequests, - features (e.g. inferences, priorities, etc.) of different submitted or ongoing MLInferenceEmulationRequests, - status or completion levels of different MLInferenceEmulationRequests, or - status or completion level of the MLInferenceEmulationJob or read the outcomes of the job.
[00174] At operation 802, ML inference emulator 136 may report the requested data, for example by means of file or streaming reporting. A report may for example comprise an identifier of a MLInferenceEmulationJob or MLInferenceEmulationRequest and data associated with that.
[00175] FIG. 9 illustrates an example of modifying characteristics of ML inference emulation requests or jobs. ML inference emulator 136 may be configured with a control interface to enable consumer 134 to configure the MLInferenceEmulationRequests or the new or ongoing MLInferenceEmulationJobs. For example, a network operator (one type of consumer) may assign priorities to one or more MLInferenceEmulationJobs, for example to indicate that in case of resource constraints some particular MLInferenceEmulationJobs with higher priority are configured to be executed first.
Similarly, the network operator may change the priorities of one or more MLInferenceEmulationRequests to indicate those that are to be prioritized regarding the instantiation of the related MLInferenceEmulationJobs. In general, this enables a consumer to update a priority of a MLInferenceEmulationRequest submitted earlier by that consumer to ML inference emulator 136.
[00176] At operation 901, consumer 134 may transmit a modification request for one or more attributes of MLInferenceEmulationRequest(s) or MLInferenceEmulationJob(s), for example as message modifyMOIAttributes(List(MLInferenceEmulationRequests, MLInferenceEmulationJobID), attributes). The modification request may indicate the attributes to be updated for particular MLInferenceEmulationRequests or MLInferenceEmulationJobIDs, for example by means of a list of requests or jobs associated with the updated attributes.
[00177] At operation 902, in response to receiving the modification request, ML inference emulator 136 may notify consumer 134 about successfully updated attributes of the associated MLInferenceEmulationRequest(s) or MLInferenceEmulationJob(s), for example by message Notify(changed MLInferenceEmulationJobID, Data).
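Operations 901 and 902 can be sketched in a few lines. The dictionary-based store and the function name are assumptions made for illustration; they stand in for the modifyMOIAttributes message and the resulting notification.

```python
# Hedged sketch of operations 901-902: a consumer updates attributes
# (e.g. priorities) on identified requests or jobs and the emulator
# notifies which instances were successfully changed.

def modify_moi_attributes(store, targets, attributes):
    """Apply attribute updates to each identified request/job and
    return a notification naming the successfully changed IDs."""
    changed = []
    for moi_id in targets:
        if moi_id in store:
            store[moi_id].update(attributes)
            changed.append(moi_id)
    # Operation 902: notify the consumer about the updated instances.
    return {"notify": "changed", "ids": changed}

store = {"JB1": {"priority": 5}, "JB2": {"priority": 5}}
result = modify_moi_attributes(store, ["JB1"], {"priority": 1})
```

Only the listed targets are touched, mirroring the list of requests or jobs carried in the modification request.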
[00178] FIG. 10 illustrates an example of deleting ML inference emulation requests or jobs. ML inference emulator 136 may enable consumer 134 to delete unwanted MLInferenceEmulationRequests or MLInferenceEmulationJobs, including both ongoing and already completed jobs, e.g., jobs associated with status 'Served'. [00179] At operation 1001, consumer 134 may transmit a request to delete MLInferenceEmulationRequest(s) or MLInferenceEmulationJob(s), for example using a deleteMOI procedure and transmitting a message deleteMOI(List(MLInferenceEmulationRequests, MLInferenceEmulationJobID)).
The delete request may therefore comprise identifiers of ML inference emulation request(s) or job(s), for example as a list.
[00180] At operation 1002, in response to receiving a delete request, ML inference emulator 136 may notify consumer 134 about successfully deleted MLInferenceEmulationRequests or MLInferenceEmulationJobs, for example by message Notify('MLInferenceEmulationJobs List(MLInferenceEmulationJobID) deleted').
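The deletion flow of operations 1001 and 1002 can be sketched as follows. The storage layout and function name are assumptions; the notification string mirrors the Notify message format described above.

```python
# Minimal sketch of operations 1001-1002: deleting emulation requests
# or jobs by identifier list and notifying the consumer of what was
# actually removed.

def delete_moi(store, ids):
    # Remove each identified instance; ignore unknown identifiers.
    deleted = [i for i in ids if store.pop(i, None) is not None]
    # Operation 1002: notification analogous to
    # Notify('MLInferenceEmulationJobs List(...) deleted').
    return f"MLInferenceEmulationJobs {deleted} deleted"

store = {"JB1": {"status": "Served"}, "JB2": {"status": "Triggered"}}
message = delete_moi(store, ["JB1"])
```

Both completed ('Served') and ongoing jobs can be named in the list, as the text allows.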
[00181] FIG. 11 illustrates an example of a ML inference emulation process. As described above, for example with reference to FIG. 6, ML inference emulator 136 may be configured to control a managed function (MnF), in this example ML inference MnF 138, to at least partially perform the ML inference emulation job and/or the instantiation thereof. FIG. 11 illustrates an example of an end-to-end flow of action for ML inference emulation. This process may be performed on the assumption that creation of an MLInferenceEmulationJob has been triggered for ML inference emulation, which may be performed by a network emulator that contains a ML inference function or a ML inference function with access to a network emulator, as described with reference to FIG. 6. [00182] To initiate the procedure, consumer 134 may transmit to ML inference emulator 136 a request to create a MLInferenceEmulationRequest or MLInferenceEmulationJob. The request may comprise an identifier of a task for ML inference (Tsk1). The request may comprise an identifier of the ML entity (MD1). Performance of the identified task may be configured to be emulated for the identified ML entity. The request may comprise an indication of a reporting period for the MLInferenceEmulationRequest or a MLInferenceJob. MLInferenceJob may be configured for performance of actual inference. This may include executing the actions on the real network under actual production conditions. By contrast, MLInferenceEmulationJob may be configured for performance of inference in a controlled environment that is not the actual/entire production network (e.g., the digital twin or a controlled portion of the real network). The request may be transmitted as a createMOI message. ML inference emulator 136 may forward the request to ML inference MnF 138. Operations 1101 to 1110 (first alternative) may be performed for example when ML inference is hosted on a network emulator, such as for example in case of FIG. 4(b). [00183] At operation 1101, once a MLInferenceEmulationJob has been instantiated, ML inference emulator 136 may identify a corresponding management function configured for the requested ML inference, e.g., a specific MnF for the ML entity identified in the request. ML inference emulator 136 may then instantiate the MLInferenceEmulationJob.
[00184] At operation 1102, ML inference emulator 136 may transmit to ML inference MnF 138 a request to create a MLInferenceEmulationRequest or MLInferenceJob. Creation of an MLInferenceJob on the identified ML inference MnF 138 may be triggered by the inference emulation request at initiation of the procedure (e.g., the createMOI request). Creation of the MLInferenceJob may be performed according to ML inference MnF requirements. For example, the used attributes in creating the MLInferenceJob may be equal to the attributes specified in the MLInferenceEmulationRequest or MLInferenceEmulationJob. ML inference emulator 136 may configure the reporting period of the MLInferenceJob according to the reporting period specified in the MLInferenceEmulationRequest or MLInferenceEmulationJob. ML inference MnF 138 may be instructed, for example as part of the MLInferenceEmulationRequest or MLInferenceEmulationJob, to execute ML inference on ML inference emulator 136 instead of the normal network environment (e.g., a defined ML inference emulator environment for execution of the ML inference).
[00185] At operation 1103, ML inference MnF 138 may instantiate the MLInferenceJob as configured at operation 1102.
[00186] At operation 1104, ML inference MnF 138 may acknowledge the job instantiation by transmitting a notification of the job instantiation to ML inference emulator 136. The notification may comprise an identifier of the task (Tsk1) associated with the instantiated MLInferenceJob.
[00187] At operation 1105, ML inference emulator 136 may notify consumer 134 about the creation of the complete MLInferenceEmulationJob with the related MLInferenceJob. This notification may comprise an identifier (JB1) of the ML inference emulation job. The notifications of operations 1104 or 1105 may be transmitted by a NotifyMOICreation message.
[00188] At operation 1106, ML inference emulator 136 may perform data collection of inference data. For example, in order to perform the requested ML inference, ML inference MnF 138 may run local data collection jobs or may consume other data collection services produced by ML inference emulator 136.
These services may be similar or identical to data collection services offered by a live network.
[00189] At operation 1107, ML inference MnF 138 may execute the task to emulate the inference as requested.
[00190] At operation 1108, the MLInferenceJob or ML inference MnF 138 may transmit a ML inference report (MLInferenceReport) towards ML inference emulator 136. This report may for example comprise the emulated output of the ML entity, data derived based on the emulated output, or data related to performance of the inference process.
[00191] At operation 1109, ML inference emulator 136 may compile a ML inference emulation report. For example, MLInferenceReports received by ML inference emulator 136 from ML inference MnF 138 may be compiled into a MLInferenceEmulationReport on the corresponding MLInferenceEmulationJob, where it is made available to the owner or consumer of the MLInferenceEmulationJob.
[00192] At operation 1110, ML inference emulator 136 may transmit the ML inference emulation report to consumer 134, for example as a notification comprising the MLInferenceEmulationReport.
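The first alternative (operations 1101 to 1110) can be condensed into a short sketch: the emulator delegates inference to a managed function and compiles the returned reports. All class, method, and field names are illustrative assumptions, and the "inference" itself is a stand-in computation.

```python
# Sketch of the first alternative of FIG. 11: the emulator delegates
# inference to an MnF and compiles the returned MLInferenceReports
# into one MLInferenceEmulationReport for the consumer.

class InferenceMnF:
    def execute(self, entity_id, data):
        # Operations 1103/1107: instantiate the MLInferenceJob and run
        # it on the collected data (stand-in computation here).
        return {"entity": entity_id, "output": [x * 2 for x in data]}

class InferenceEmulator:
    def __init__(self, mnf):
        self.mnf = mnf

    def run_emulation_job(self, job_id, entity_id, data):
        # Operations 1102-1108: trigger the MnF and receive its report.
        report = self.mnf.execute(entity_id, data)
        # Operation 1109: compile the emulation report (ops 1110 would
        # then notify the consumer with this structure).
        return {"job": job_id, "reports": [report]}

emulator = InferenceEmulator(InferenceMnF())
emulation_report = emulator.run_emulation_job("JB1", "MD1", [1, 2, 3])
```

In the second alternative, described next, the MnF would return its report to the consumer directly rather than through the emulator.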
[00193] Operations 1111 to 1115 (second alternative) may be performed for example when the ML inference function has a network emulator, such as for example in case of FIG. 4(a). In this case, ML inference MnF 138 may act as a core function comprising the network emulator.
[00194] At operation 1111, ML inference MnF 138 may instantiate the MLInferenceJob, which may be configured to concurrently act as the MLInferenceEmulationJob.
[00195] At operation 1112, ML inference MnF 138 may acknowledge the job instantiation via a notification to consumer 134, for example similar to operation 1104.
However, ML inference MnF 138 may transmit the notification to consumer 134 directly and not via ML inference emulator 136. The notification may comprise an identifier (Tsk1) of the MLInferenceEmulationJob instead of the MLInferenceJob. [00196] At operation 1113, ML inference emulator 136 may perform data collection of inference data, for example similar to operation 1106. Data collection may comprise collecting data from the network emulator (network emulation function) contained within ML inference MnF 138.
[00197] At operation 1114, ML inference MnF 138 may execute the task, for example similar to operation 1107.
[00198] At operation 1115, ML inference MnF 138 may transmit the ML inference emulation report to consumer 134, for example as a notification comprising the MLInferenceEmulationReport.
[00199] At operation 1116, ML inference emulator 136 may delete the MLInferenceEmulationJob. For example, after completion of the ML inference, ML inference MnF 138 may be decommissioned and the associated resources may be released. At this point, the MLInferenceEmulationJob may be marked as completed. For example, the status of the MLInferenceEmulationJob may be changed to 'Served'.
[00200] The procedure of FIG. 11 enables instantiation and controlling of ML inference emulation requests or jobs when ML inference is performed by a MnF of ML inference emulator 136. Note that, even if not illustrated in FIG. 11, consumer 134 may be configured to modify the MLInferenceJob via the MLInferenceEmulationJob.
[00201] Some use cases, requirements, and example solutions are provided below for ML inference emulation: [00202] A trained MLEntity may be used for inference within a stated scope, for example on a managed function or in a management function. Accordingly, there may be an ML inference MnS producer that may be responsible for executing the inference.
[00203] After training of an MLEntity, validation may be performed to ensure the training process has been completed successfully. Validation may be done for example by preserving part of the training data set and using it after training to check whether the MLEntity has been trained correctly or not. However, even after the MLEntity is validated during development, inference emulation may be performed to check if the function containing the MLEntity is working correctly under certain runtime context or using certain inference emulation data set.
Validation and inference emulation may be similar on a functional level; for example, both of them check the ML performance against given context or data to ensure the ML functionality is functioning correctly. But inference emulation may involve interaction with third parties, e.g., the consumers (e.g., operators) that plan to use the MLEntity or third-party systems, which may rely on the results computed by the MLEntity. At least for these reasons, example embodiments of the present disclosure enable to support at least one of the following: - a consumer to request for a specific AI/ML capability to be executed in a ML inference emulator environment, - a consumer to request a specific ML inference emulator to execute a given AI/ML capability, - a managed function to act as a ML inference emulator and execute AI/ML capabilities in a controlled way.
[00204] The network or its management system may therefore be configured with the capabilities and to provide the services needed to enable the consumer to request inference emulation, and also to receive feedback on the inference emulation of a specific MLEntity or of an application or function that contains an MLEntity.
[00205] One or more of the following features, described below as example requirements for a network management system, may be applied: - REQ-AI/MLEMUL-1: A network management system (e.g., a 3GPP management system) may be configured to inform authorized AI/ML MnS consumer about the characteristics of a ML Inference Emulator.
- REQ-AI/MLEMUL-2: The network management system may be configured to inform authorized AI/ML MnS consumer about the available execution resources on the ML inference emulator, including information on when extra resources may be availed for any new executions on that ML inference emulator.
- REQ-AI/MLEMUL-3: The network management system may be configured to allow an authorized consumer to request an ML Inference Emulator to execute ML inference emulation for a specific MLEntity.
- REQ-AI/MLEMUL-4: The network management system may be configured to allow an authorized consumer to request for ML Inference Emulation for a specific MLEntity using specified data or data with specifically stated characteristics and inference emulation features.
- REQ-AI/MLEMUL-5: The network management system may be configured to allow an authorized AI/ML MnS consumer to create an instance of the ML inference emulation process.
- REQ-AI/MLEMUL-6: The network management system may be configured to inform authorized AI/ML MnS consumer about the MLEntities under execution in a specific ML inference emulator. - REQ-AI/MLEMUL-7: The network management system may be configured to allow an authorized AI/ML MnS consumer (e.g., an operator) to manage or control a specific ML Inference Emulation Request or ML Inference Emulation process, e.g. to start, suspend or restart the inference emulation; or to adjust the inference emulation conditions or characteristics.
- REQ-AI/MLEMUL-8: The network management system may be configured to allow an authorized AI/ML MnS consumer to request reporting, and for the ML Inference Emulator to report, on a specific ML Inference Emulation request, on a specific ML Inference emulation process, or on the outcomes of any such ML Inference Emulation request or process.
- REQ-AI/MLEMUL-9: The network management system may be configured to inform authorized AI/ML MnS consumer (e.g. an operator, managed, or management network function) about the state of a specific ML Inference Emulator, including information about the status of running tasks as well as resource consumption in the specified ML Inference Emulator. [00206] Some examples for implementing the above features, or to generally provide support for ML inference emulation, are provided below: - Providing an information object class (IOC) with properties of ML inference emulator 136. This IOC may be called an MLInferenceEmulator and it may be name-contained in either a Subnetwork, a ManagedFunction or a ManagementFunction.
- Providing an IOC for the request for ML inference emulation, the request including the consumer's requirements for inference emulation. This IOC may be called an MLInferenceEmulationRequest and it may be name-contained on the MLInferenceEmulator. A MLInferenceEmulationRequest object may be associated with at least one MLEntity for which the inference emulation is configured to be executed.
- Providing an IOC for the process of ML inference emulation from the objects for which inference emulation is configured to be instantiated. This IOC may be called an MLInferenceEmulationJob and it may be name-contained on the MLInferenceEmulator. MLInferenceEmulationJob may be associated with at least one MLEntity for which the inference emulation is configured to be executed.
- Providing a datatype for a report on ML inference emulation. The data type may be used for reporting on ML emulation for one or more MLInferenceEmulationRequests or MLInferenceEmulationJobs. This datatype may be called an MLInferenceEmulationReport and it may be name-contained on the MLInferenceEmulator. - Providing IOCs and datatypes for request, process, and reporting on ML inference. Updating these IOCs with the features of the ML inference emulation, for example as provided above.
Relationships of the various IOCs are further described below with reference to FIG. 12 to FIG. 15.
[00207] FIG. 12 illustrates an example of an object diagram associated with a ML inference emulator. A proxy class object ManagedEntity may represent one of the following IOCs: a Subnetwork, a ManagedFunction, or a ManagementFunction. An object of class MLInferenceEmulator may be associated with one or more MLEntity objects, MLInferenceEmulationRequest objects, MLInferenceEmulationJob objects, or MLInferenceEmulationReport objects (cardinality 1...* or *...*). A MLEntity object may be associated with one MLInferenceEmulationRequest object, and vice versa (cardinality 1...1). A MLEntity object may be associated with one MLInferenceEmulationJob object, and vice versa. A MLInferenceEmulationRequest object may be associated with one MLInferenceEmulationJob object, and vice versa. A MLInferenceEmulationJob object may be associated with one or more MLInferenceEmulationReport objects, and vice versa (cardinality *...*). A MLInferenceEmulator object may be associated with one or more MLInferenceEmulationReport objects, and vice versa (cardinality *...*). Note that a MLInferenceEmulationJob object may be directly instantiated instead of separating the information provided by consumer 134 into a separate MLInferenceEmulationRequest object. Subsequently, the MLEntity object may be associated with either the MLInferenceEmulationRequest or the MLInferenceEmulationJob objects. Inheritance relations of the different IOCs are illustrated in FIG. 13. IOC "Top" may be a logical class that collects the generic information that may be applicable to any other class. FIG. 13 illustrates an example of how the information classes for ML inference emulator, ML inference emulation job, and ML inference emulation request may be related to (e.g., re-use) the information (e.g., attributes) applicable to classes "Top" and "ManagedFunction". [00208] FIG.
14 illustrates an example of an information model for ML inference emulation control for a ML inference emulator contained within a ML inference function. When a ML inference function supports capability to emulate network behaviour, requesting for ML inference emulation of a given MLEntity may imply that consumer 134 instantiates the ML inference with an indication to execute the ML inference in an emulation environment. Correspondingly, the information model for the ML inference may be extended with the capability to support inference emulation such that the ML inference may (besides other classes) contain one or more MLInferenceEmulator objects (each with an IOC which can be used to instantiate one or more MLInferenceEmulationJobs), or such that MLInferenceEmulationJob object(s) are associated with the given MLInferenceEmulationRequest or MLInferenceJob object(s).
[00209] A proxy class object ManagedEntity may again represent one of the following IOCs: a Subnetwork, a ManagedFunction, or a ManagementFunction. An object of class MLInferenceFunction may be associated with one or more MLEntity objects, MLInferenceEmulationRequest objects, MLInferenceEmulationJob objects, MLInferenceEmulationReport objects, or MLInferenceEmulator objects.
However, a MLEntity, MLInferenceJob, MLInferenceEmulationJob, MLInferenceEmulationReport, or MLInferenceEmulator object may be associated with one MLInferenceFunction object (cardinality 1...*). A MLEntity object may be associated with one or more MLInferenceEmulationJob objects, but a MLInferenceEmulationJob object may be associated with one MLEntity object (cardinality 1...*). A MLInferenceJob object may be associated with one or more MLInferenceEmulationJob objects, but a MLInferenceEmulationJob object may be associated with one MLInferenceJob object (cardinality *...1).
A MLInferenceEmulationJob object may be associated with one or more MLInferenceEmulationReport objects, but a MLInferenceEmulationReport object may be associated with one MLInferenceEmulationJob object (cardinality 1...*). A MLInferenceEmulator object may be associated with one or more MLInferenceEmulationJob or MLInferenceEmulationRequest objects, but a MLInferenceEmulationJob or MLInferenceEmulationRequest object may be associated with one MLInferenceEmulator object (cardinality 1...*). Inheritance relations of the different IOCs are illustrated in FIG. 15.
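The containment and cardinality rules above can be sketched as simple data structures. The field names are assumptions for illustration; the point is only that each emulation job references exactly one ML entity and one inference job, while a function may contain many jobs.

```python
# Rough sketch of the FIG. 14 cardinalities: one function contains
# many emulation jobs; each job points to exactly one MLEntity and
# one MLInferenceJob, and may accumulate one or more reports.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EmulationJobRef:
    entity_id: str          # cardinality 1: exactly one MLEntity
    inference_job_id: str   # cardinality 1: exactly one MLInferenceJob
    report_ids: List[str] = field(default_factory=list)  # 1..* reports

@dataclass
class InferenceFunction:
    jobs: List[EmulationJobRef] = field(default_factory=list)  # 1..* jobs

    def add_job(self, entity_id, inference_job_id):
        job = EmulationJobRef(entity_id, inference_job_id)
        self.jobs.append(job)
        return job

fn = InferenceFunction()
fn.add_job("MD1", "IJ1")
```

Representing the single-valued associations as scalar fields (rather than lists) is what enforces the "exactly one" side of each cardinality.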
[00210] Examples of IOC class definitions are provided below: [00211] MLInferenceEmulator: This object class may represent properties of ML inference emulator 136. Each MLInferenceEmulator may be a managed object instantiable from the MLInferenceEmulator information object class and name-contained in either a Subnetwork, a ManagedFunction or a ManagementFunction. MLInferenceEmulator may be a type of managedFunction. For example, MLInferenceEmulator may be a subclass of a managedFunction and inherit the capabilities of the managedFunction. [00212] As described above, MLInferenceEmulator may be associated with one or more MLEntity objects. For example, the MLEntity objects associated with MLInferenceEmulator may be associated via a list of ML entity identifiers (MLEntityIdentifier). The MLEntityIdentifier may identify the MLEntity and optionally a version thereof.
[00213] MLInferenceEmulator object may contain one or more MLInferenceEmulationJobs, where each MLInferenceEmulationJob may be instantiated following or in response to a received request to create the MLInferenceEmulationJob ManagedObjectInstance (MOI). Alternatively, MLInferenceEmulator may allow to instantiate one or more MLInferenceEmulationRequests and as such it may name-contain one or more MLInferenceEmulationRequests. MLInferenceEmulator may be associated with one or more MLInferenceEmulationReports generated for any one or more of the MLInferenceEmulationJobs. Each MLInferenceEmulator may state the amount of resources consumed by ongoing MLInferenceEmulationJobs. Alternatively, this may be indicated by a percentage / amount of available resources (e.g. CPU 30% / 3 GB available).
[00214] The MLInferenceEmulator object may include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLEntitys | CM | T | F | F | F
MLInferenceEmulationJob | M | T | F | F | F
MLInferenceEmulationResourceState | M | T | F | F | T
MLInferenceEmulationRequest | O | T | T | F | T
MLInferenceEmulationReports | O | T | F | F | F

The Support Qualifier column may indicate whether the attribute is mandatory (M), optional (O), or conditionally mandatory (CM), e.g. mandatory under certain conditions. The isReadable, isWritable, isInvariant, and isNotifyable columns may indicate whether the associated attribute is readable, writable, invariant, or notifyable, respectively (T = True, F = False). Note that the values of the tables are provided as examples and other values may be alternatively used.
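The qualifier table above can also be represented programmatically, e.g. by a management service that checks whether a consumer may write a given attribute. A sketch under the assumption that the example values above are used; the dictionary encoding and helper name are illustrative, not part of the IOC definition.

```python
# Per-attribute properties, taken from the example table above:
# (support qualifier, isReadable, isWritable, isInvariant, isNotifyable)
EMULATOR_ATTRIBUTES = {
    "MLEntitys":                         ("CM", True, False, False, False),
    "MLInferenceEmulationJob":           ("M",  True, False, False, False),
    "MLInferenceEmulationResourceState": ("M",  True, False, False, True),
    "MLInferenceEmulationRequest":       ("O",  True, True,  False, True),
    "MLInferenceEmulationReports":       ("O",  True, False, False, False),
}

def is_writable(attr: str) -> bool:
    """Return True if a consumer may write the attribute (isWritable flag)."""
    return EMULATOR_ATTRIBUTES[attr][2]

def is_mandatory(attr: str) -> bool:
    """Return True if the attribute is mandatory (support qualifier M)."""
    return EMULATOR_ATTRIBUTES[attr][0] == "M"
```

For instance, under the example values only MLInferenceEmulationRequest would be writable by a consumer, while MLEntitys would be read-only.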
[00215] MLInferenceEmulationRequest: This object class may represent properties of a ML inference emulation request. For each request to undertake inference emulation, a consumer may create a new MLInferenceEmulationRequest on the MLInferenceEmulator. MLInferenceEmulationRequest may be an information object class that is instantiated for each request for inference emulation. [00216] Each MLInferenceEmulationRequest may be associated with exactly one MLEntity. Each MLInferenceEmulationRequest may contain specific reporting requirements that define how the MLInferenceEmulator is configured to report about the MLInferenceEmulationRequest or the associated MLInferenceEmulationJob. Such requirements may e.g., include a reporting period (reportingPeriod).
[00217] MLInferenceEmulationRequest may be associated with an indication of its source to identify where the request is coming from. The source may be used to prioritize among different MLInferenceEmulationRequests from different sources. The sources may for example be an enumeration defined for network functions, operator roles, or other functional differentiations. [00218] MLInferenceEmulationRequest may have a RequestStatus field that may be used to track the status of the specific MLInferenceEmulationRequest or the associated MLInferenceEmulationJob. The RequestStatus may be written by MLInferenceEmulator, for example when there is a change in the status of the ML inference emulation progress. The RequestStatus field may comprise an enumeration, for example with the following values:
- "Pending": This value may indicate that the MLInferenceEmulationRequest has been received but no action has been taken thereof. When other actions are taken regarding the MLInferenceEmulationRequest, the RequestStatus may be changed to one of the following values:
- "Triggered": when a corresponding MLInferenceEmulationJob has been instantiated and is running.
- "Suspended": the status when for one reason or another the MLInferenceEmulationRequest has been suspended by the MLInferenceEmulator or another authorized consumer, e.g., the operator.
- "Served": the status when an MLInferenceEmulationJob associated to the specific MLInferenceEmulationRequest has run to completion.
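The RequestStatus values above can be sketched as an enumeration with a transition check. The transition table here is an illustrative reading of the description (the text does not exhaustively define a transition graph, so the allowed transitions are an assumption):

```python
from enum import Enum

class RequestStatus(Enum):
    PENDING = "Pending"      # request received, no action taken yet
    TRIGGERED = "Triggered"  # a corresponding job has been instantiated and is running
    SUSPENDED = "Suspended"  # suspended by the emulator or an authorized consumer
    SERVED = "Served"        # the associated job has run to completion

# Assumed transition graph, read from the value descriptions above.
ALLOWED_TRANSITIONS = {
    RequestStatus.PENDING: {RequestStatus.TRIGGERED, RequestStatus.SUSPENDED},
    RequestStatus.TRIGGERED: {RequestStatus.SUSPENDED, RequestStatus.SERVED},
    RequestStatus.SUSPENDED: {RequestStatus.TRIGGERED},
    RequestStatus.SERVED: set(),  # terminal: the request has been served
}

def transition(current: RequestStatus, target: RequestStatus) -> RequestStatus:
    """Move to a new status, rejecting transitions not in the assumed graph."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

For example, a Pending request may be Triggered once a job is instantiated, and a Triggered request becomes Served when the job completes.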
The reporting requirements contained in the MLInferenceEmulationRequest may be mapped to an existing MLInferenceEmulationJob instance or a new instance may be created as needed.
[00219] The MLInferenceEmulationRequest object may include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceEmulationRequestID | M | T | F | F | F
MLEntity | M | T | F | F | F
MLinference | M | T | F | F | F
Source | M | T | T | F | T
RequestStatus | M | T | T | F | T
reportingPeriod | O | T | T | F | T

[00220] MLInferenceEmulationJob: This object class may represent properties of a ML inference emulation job. For each MLEntity to be executed under ML inference emulation, a MLInferenceEmulationJob may be provided.
MLInferenceEmulationJob may be instantiated for each MLEntity or may apply to multiple MLEntitys. MLInferenceEmulationJob may therefore be associated with one or more MLEntitys.
[00221] MLInferenceEmulationJob may be instantiated directly by an authorized consumer. Alternatively, MLInferenceEmulationJob may be instantiated by MLInferenceEmulator, for example in response to a MLInferenceEmulationRequest instantiated directly by an authorized consumer. In the latter case, the MLInferenceEmulationJob may be associated with one or more MLInferenceEmulationRequests. Correspondingly, the MLInferenceEmulationRequests and the MLInferenceEmulationJobs may be conditionally mandatory such that at least one of them may be required to be associated with an instance of MLInferenceEmulationJob. Each MLInferenceEmulationJob may have a status attribute used to indicate the level of success of the MLInferenceEmulationJob. For example, in the case where a MLInferenceEmulationJob is instantiated for exactly one MLEntity and one AI/ML inference process, the status may reflect the status of the internal job (e.g., training) executed for that MLEntity.
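The two instantiation paths described above (directly by an authorized consumer, or by the emulator in response to a request) can be sketched as follows. The mapping policy — reuse an existing job covering the same ML entity, otherwise create a new one — mirrors the "mapped to an existing instance or a new instance may be created" language; the class and function names are illustrative assumptions.

```python
import itertools
from dataclasses import dataclass, field

_job_ids = itertools.count(1)  # hypothetical job-id generator

@dataclass
class EmulationRequest:
    ml_entity_id: str
    status: str = "Pending"

@dataclass
class EmulationJob:
    job_id: str
    ml_entity_ids: list
    requests: list = field(default_factory=list)  # associated requests, when emulator-instantiated

def instantiate_job_for_request(request: EmulationRequest, jobs: list) -> EmulationJob:
    """Emulator path: map the request onto an existing job covering the
    same ML entity, or create a new job as needed, then mark the request
    as Triggered (cf. the RequestStatus values above)."""
    for job in jobs:
        if request.ml_entity_id in job.ml_entity_ids:
            job.requests.append(request)
            break
    else:
        job = EmulationJob(f"job-{next(_job_ids)}", [request.ml_entity_id], [request])
        jobs.append(job)
    request.status = "Triggered"
    return job
```

A consumer could instead construct an `EmulationJob` directly, corresponding to the first instantiation path, in which case the `requests` list may stay empty.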
[00222] MLInferenceEmulationJob may have a source to identify the entity which instantiated it and which may be used to prioritize among different MLInferenceEmulationJobs from different sources. The sources may for example be an enumeration defined for network functions, operator roles, or other functional differentiations. Each MLInferenceEmulationJob may have attributes specifying the reporting period (e.g., periodically with certain time interval, after completion, etc.). The reporting requirements contained in the MLInferenceEmulationJob may be mapped to an existing MLInferenceEmulationJob instance or a new instance may be created when needed. The MLInferenceEmulationJob object may also represent the capability of compiling and delivering reports and notifications about MLInferenceEmulator or its associated MLInferenceEmulationRequests and/or MLInferenceEmulationJobs. MLInferenceEmulator may be associated with one or more instances of MLInferenceEmulationJobs.
[00223] MLInferenceEmulationJob may be configured to report on one or more MLInferenceEmulationRequests and/or one or more MLInferenceEmulationJobs. MLInferenceEmulationJob may include a ML inference emulation reporting matrix (MLInferenceEmulationReportingMatrix) that may define the frequencies (e.g., time intervals) at which reports are configured to be sent, the specific entities for which reports are configured to be sent, for example at specific time instants, and/or the objects for which reporting is configured to be performed in different reports. MLInferenceEmulationJob may be associated with one or more MLInferenceEmulationReports.
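One possible reading of the reporting matrix (MLInferenceEmulationReportingMatrix) is a mapping from each report to the objects it covers and its sending frequency. The encoding below is an assumption for illustration; the text does not fix a concrete structure for the matrix.

```python
# Hypothetical reporting matrix: per report, which jobs/requests are
# covered and at what interval (seconds) the report is sent.
reporting_matrix = {
    "report-A": {"objects": ["job-1"], "interval_s": 60},
    "report-B": {"objects": ["job-1", "request-7"], "interval_s": 300},
}

def reports_covering(object_id: str) -> list:
    """Return the names of the reports in which a given job or request
    is configured to be reported."""
    return sorted(name for name, cfg in reporting_matrix.items()
                  if object_id in cfg["objects"])
```

Under this encoding, "job-1" is reported both every minute (report-A) and every five minutes (report-B), while "request-7" appears only in report-B.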
[00224] The MLInferenceEmulationJob object may have one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceEmulationJobID | M | T | F | F | F
MLEntityID | M | T | F | F | F
MLInferenceEmulationRequest | O | T | T | F | T
Source | M | T | T | F | T
ProgressStatus | M | T | T | F | T
reportingPeriod | O | T | T | F | T
MLInferenceEmulationReportingMatrix | M | T | F | F | F
MLInferenceEmulationReports | O | T | F | F | F

[00225] MLInferenceRequest: This object class may represent properties of a ML inference request. To support a case where an inference function is configured to emulate network behaviour, the information object class for the MLInferenceRequest may be structured to include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceRequestID | M | T | F | F | F
mlModel | M | T | F | F | F
Source | M | T | T | F | T
RequestStatus | M | T | T | F | T
MLInferenceData | O | T | T | F | T
expectedRuntimeContext | O | T | T | F | T
MLInferenceEmulation | M | T | F | F | F
Attributes related to role | | | | |
MLInferenceJobRef | M | T | F | F | F
MLInferenceEmulationJobRef | O | T | T | F | T

[00226] MLInferenceJob: This object class may represent properties of a ML inference job.
To support a case where a ML inference function is configured to emulate network behaviour, the information object class for the MLInferenceJob may be structured to include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceJobID | M | T | F | F | F
mlEntityID | M | T | F | F | F
Source | M | T | T | F | T
MLInferenceData | O | T | T | F | T
expectedRuntimeContext | O | T | T | F | T
ProgressStatus | M | T | T | F | T
MLTestingReporting | M | T | F | F | T
MLTestingReportingPeriod | O | T | T | F | T
Attributes related to role | | | | |
MLInferenceRequestRef | M | T | T | F | T
MLInferenceEmulationJobID | M | T | F | F | F

[00227] Examples of datatype definitions for MLEntity, MLInferenceEmulationReport, and MLInferenceReport are provided below: [00228] MLEntity: This datatype may represent properties of a ML entity. For each MLEntity under inference emulation, a MLInferenceEmulationJob may be instantiated. The MLInferenceEmulator may instantiate an MLInferenceEmulationJob for each MLInferenceEmulationRequest. Alternatively, authorized consumers may directly instantiate MLInferenceEmulationJobs to execute ML inference emulation for specific MLEntitys. MLEntity may include attributes inherited from a top IOC.
[00229] MLInferenceEmulationReport: This datatype may represent the properties of a ML inference emulation report. MLInferenceEmulator may generate one or more MLInferenceEmulationReports. Each MLInferenceEmulationReport may be associated with one or more MLEntitys. MLInferenceEmulationReports may be associated with the MLInferenceEmulationJob instance. For example, MLInferenceEmulationJob may provide/instantiate reports about MLEntitys or about the MLInferenceEmulationJob that is associated with the MLEntitys for which inference emulation is requested and/or executed.
MLInferenceEmulationJob may also provide reporting on specific MLInferenceEmulationRequests.
[00230] MLInferenceEmulationReport may include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceEmulationReportID | M | T | F | F | F
MLEntitys | M | T | F | F | F
Attributes related to role | | | | |
MLInferenceEmulationRequests | O | T | F | F | F
MLInferenceEmulationJobRef | M | T | F | F | F

[00231] MLInferenceReport: This datatype may represent the properties of a ML inference report. To support a case where an inference function is configured to emulate network behaviour, the information object class for the ML inference reporting may be structured to include one or more of the following attributes:

Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
---|---|---|---|---|---
MLInferenceReportID | M | T | F | F | F
MLInferenceRequests | CM | T | T | F | T
MLInferenceJobs | CM | T | T | F | T
MLInferenceReportingMatrix | M | T | F | F | F
Attributes related to role | | | | |
MLInferenceEmulationRequestRef | M | T | F | F | F
MLInferenceEmulationJobRef | M | T | F | F | F

[00232] Examples of attribute definitions are provided in the following table. The first column indicates the name of the attribute. The second column provides a definition of the attribute. Optionally, examples of allowed values for the attribute are provided. The third column provides examples of properties of the attribute, for example the type or multiplicity of the attribute, whether the attribute is ordered, unique, nullable, or has a particular default value. The properties are however provided merely as examples and therefore the information indicated in the third column may be optional.
Attribute name | Documentation and allowed values | Properties
---|---|---
MLEntitysList | Indicates the list of ML enabled functions and models available at the MLInferenceEmulator function | type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True
MLInferenceEmulationJobID | Indicates an identifier for a specific instantiated MLInferenceEmulationJob | type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True
MLEntityID | Indicates an identifier for a specific MLEntity. It may include the version identifiers of any such MLEntity | type: String; multiplicity: *; isOrdered: False; isUnique: True; defaultValue: None; isNullable: True
Source | Indicates a managed object that initiates the request for MLInferenceEmulator | type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True
MLinference | Indicates the AI/ML task needed to be executed in the ML inference emulator. Allowed values: Training, Validation, Testing, Inference | type: Enum; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True
ProgressStatus | Indicates the status of the MLInferenceEmulator process as evaluated by the MLInferenceEmulator function | type: DN (see TS 32.156 [12]); multiplicity: *; isOrdered: False; isUnique: True; defaultValue: None; isNullable: True
reportingPeriod | Defines how the MLInferenceEmulator may report about the MLInferenceEmulationRequest or the associated MLInferenceEmulationJob | type: DN (see TS 32.156 [12]); multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True
MLInferenceEmulationReportingID | Indicates an identifier for a specific MLInferenceEmulationJob instance | type: integer; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: False
MLInferenceEmulationReportID | Indicates an identifier for a specific MLInferenceEmulator report | type: integer; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: False
emulatorResourceConsumption | Indicates the amount of resources used by ongoing MLInferenceEmulationJobs, or the percentage / amount of available resources, e.g. CPU | type: string / integer; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: 0; isNullable: False

[00233] Example embodiments of the present disclosure enable an authorized consumer to control and manage ML inference emulation processes, including post-training inference emulation and pre-deployment inference emulation. This enables validating a ML entity against a specified data set or given expected runtime context. Moreover, management functions are enabled to interact with ML inference emulation functions, to control the ML inference emulation processes, or read relevant information about the status of ML inference emulation functions. This improves validation of ML-based functions at communication network 100 and thereby provides more efficient testing of the ML-based functions before their deployment at the network. Erroneous behaviour of ML-based functions may therefore be more efficiently avoided.
[00234] FIG. 16 illustrates an example of a method for inference emulation. The method may be performed by a network device.
[00235] At 1601, the method may comprise receiving, by a machine learning inference emulator from a machine learning emulation consumer, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity.
[00236] At 1602, the method may comprise causing execution of inference of the machine learning entity based on a simulation model or a portion of the communication network to obtain an emulated output of the machine learning entity.
[00237] At 1603, the method may comprise transmitting, to the machine learning emulation consumer, a machine learning inference emulation report based on the emulated output of the machine learning entity.
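The three steps of FIG. 16 (1601 receive request, 1602 execute emulated inference, 1603 report) can be sketched end-to-end. The simulation model here is a stand-in stub and all function names are illustrative assumptions, not part of the claimed method.

```python
def receive_request(request: dict) -> dict:
    # 1601: the emulator receives a request carrying the identifier
    # of the machine learning entity to be emulated
    assert "ml_entity_id" in request
    return request

def execute_emulated_inference(ml_entity_id: str, simulation_model) -> list:
    # 1602: run inference of the ML entity against a simulation model
    # (or a portion of the network) to obtain an emulated output;
    # the doubling below is a stand-in for the ML entity's inference
    inputs = simulation_model()
    return [x * 2 for x in inputs]

def build_report(ml_entity_id: str, emulated_output: list) -> dict:
    # 1603: compile the ML inference emulation report that is
    # transmitted back to the emulation consumer
    return {"ml_entity_id": ml_entity_id, "emulated_output": emulated_output}

request = receive_request({"ml_entity_id": "entity-1"})
output = execute_emulated_inference(request["ml_entity_id"], lambda: [1, 2, 3])
report = build_report(request["ml_entity_id"], output)
```

The consumer side (FIG. 17) would then inspect such a report, e.g. to determine validity of the ML entity before deployment.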
[00238] FIG. 17 illustrates an example of a method for requesting inference emulation. The method may be performed by a network device. [00239] At 1701, the method may comprise transmitting, by a machine learning emulation consumer to a machine learning inference emulator, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity. At 1702, the method may comprise receiving, from the machine learning inference emulator, a machine learning inference emulation report based on an emulated output of the machine learning entity.
[00240] Examples of the methods are explained above, for example with regard to functionalities of ML inference emulator 136 or ML inference emulation consumer 134, or in general apparatus 200, and are not repeated here. It should be understood that embodiments described may be combined in different ways unless explicitly disallowed.
[00241] Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
[00242] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items. [00243] The steps or operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the example embodiments described above may be combined with aspects of any of the other example embodiments described to form further example embodiments without losing the effect sought.
[00244] The term 'comprising' is used herein to mean including the method, blocks, or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements. [00245] As used herein, "at least one of the following: <a list of two or more elements>" and "at least one of <a list of two or more elements>" and similar wording, where the list of two or more elements are joined by "and" or "or", mean at least any one of the elements, or at least any two or more of the elements, or at least all the elements. Term "or" may be understood to cover also a case where both of the items separated by "or" are included. Hence, "or" may be understood as an inclusive "or" rather than an exclusive "or".
[00246] Although subjects may be referred to as 'first' or 'second' subjects, this does not necessarily indicate any order or importance of the subjects. Instead, such attributes may be used solely for the purpose of making a difference between subjects.
[00247] As used in this application, the term 'circuitry' may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation. This definition of circuitry applies to all uses of this term in this application, including in any claims.
[00248] As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device.
[00249] It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from scope of this specification.
Claims (25)
CLAIMS
- 1. A method, comprising: receiving, by a machine learning inference emulator from a machine learning emulation consumer, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; causing execution of inference of the machine learning entity based on a simulation model or a portion of the communication network to obtain an emulated output of the machine learning entity; and transmitting, to the machine learning emulation consumer, a machine learning inference emulation report based on the emulated output of the machine learning entity.
- 2. The method according to claim 1, further comprising: causing instantiation of a machine learning inference emulation job for the inference emulation of the machine learning entity, in response to determining that no machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity.
- 3. The method according to claim 1, further comprising: determining that a machine learning inference emulation job exists at the machine learning inference emulator for the machine learning entity; and causing the execution of the inference of the machine learning entity based on the simulation model or the portion of the communication network in association with the machine learning inference emulation job.
- 4. The method according to any preceding claim, further comprising: causing collection of input data for the inference of the machine learning entity, wherein the collection of the input data is based on the simulation model or the portion of the communication network; and causing execution of the inference of the machine learning entity based on the input data to obtain the emulated output of the machine learning entity.
- 5. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for at least one characteristic of the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one characteristic of the machine learning inference emulator.
- 6. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for information on at least one execution resource available at the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one execution resource available at the machine learning inference emulator.
- 7. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for a status of the request for instantiating inference emulation of the machine learning entity or a status of the machine learning inference emulation job; and transmitting, to the machine learning emulation consumer, a notification of the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job.
- 8. The method according to claim 7, wherein the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job is indicative of at least one of the following: that the request for instantiating inference emulation of the machine learning entity is pending, that the machine learning inference emulation job has been triggered, that the request for instantiating inference emulation of the machine learning entity has been suspended, or that the machine learning inference emulation job has been served.
- 9. The method according to any of claims 5 to 8, wherein the at least one characteristic of the machine learning inference emulator comprises at least one attribute of a machine learning inference emulator object, wherein the status of the request for instantiating inference emulation of the machine learning entity comprises an attribute of a machine learning inference emulation request object, wherein the status of the machine learning inference emulation job comprises an attribute of a machine learning inference emulation job object, or wherein the information on the at least one execution resource available at the machine learning inference emulator comprises an attribute of the machine learning inference emulator object.
- 10. The method according to claim 9, wherein the request for at least one characteristic of the machine learning inference emulator, the request for information on the at least one execution resource available at the machine learning inference emulator, or the request for the status of the request for instantiating inference emulation of the machine learning entity or the status of the machine learning inference emulation job comprises an attribute read request.
- 11. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for reading at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and transmitting, to the machine learning emulation consumer, a notification of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 12. The method according to claim 11, wherein the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises at least one of the following: a number of received, ongoing, or completed requests for instantiating inference emulation at the machine learning inference emulator, a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator, or a status or a completion level of the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 13. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for configuring at least one characteristic of one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and configuring the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 14. The method according to claim 13, wherein the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator comprises a priority of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 15. The method according to any preceding claim, further comprising: receiving, from the machine learning emulation consumer, a request for deleting one or more requests for instantiating inference emulation at the machine learning inference emulator or one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator; and deleting the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 16. The method according to claim 14 or 15, further comprising: transmitting, to the machine learning emulation consumer, a notification of the configuration of the at least one characteristic of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator, or a notification of the deletion of the one or more requests for instantiating inference emulation at the machine learning inference emulator or the one or more machine learning inference emulation jobs instantiated at the machine learning inference emulator.
- 17. The method according to any of claims 2 to 16, further comprising: configuring reporting of the machine learning inference emulation job, in response to receiving, from the machine learning emulation consumer, a request for configuring reporting associated with the request for instantiating inference emulation of the machine learning entity.
- 18. The method according to claim 17, wherein the request for configuring reporting associated with the request for instantiating the inference emulation of the machine learning entity comprises a reporting period, and wherein the method further comprises: transmitting, to the machine learning inference emulation consumer, a plurality of the machine learning inference emulation reports based on the reporting period.
- 19. A method, comprising: transmitting, by a machine learning emulation consumer to a machine learning inference emulator, a request for instantiating inference emulation of a machine learning entity of a communication network, wherein the request for instantiating the inference emulation comprises an identifier of the machine learning entity; and receiving, from the machine learning inference emulator, a machine learning inference emulation report based on an emulated output of the machine learning entity.
- 20. The method according to claim 19, further comprising: determining validity of the machine learning entity based on the machine learning inference emulation report; and determining to enable or disable deployment of the machine learning entity based on the validity of the machine learning entity.
- 21. The method according to claim 19 or 20, further comprising: transmitting, to the machine learning inference emulator, a request for at least one characteristic of the machine learning inference emulator; and receiving, from the machine learning inference emulator, a notification of the at least one characteristic of the machine learning inference emulator.
- 22. An apparatus comprising means for performing the method according to any of claims 1 to 18.
- 23. An apparatus comprising means for performing the method according to any of claims 19 to 21.
- 24. A computer program comprising instructions, which when executed by an apparatus, cause the apparatus to perform the method according to any of claims 1 to 18.
- 25. A computer program comprising instructions, which when executed by an apparatus, cause the apparatus to perform the method according to any of claims 19 to 21.
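The exchange recited in claims 19 and 20 — a consumer transmitting an inference-emulation request carrying the identifier of a machine learning entity, the emulator returning an emulation report, and the consumer deciding validity and deployment from that report — can be sketched as follows. This is a minimal illustrative model only; the class names, fields, and the validity threshold are assumptions, not part of the application.

```python
# Hypothetical sketch of claims 19-20: consumer -> emulator request with an
# ML entity identifier; emulator -> consumer report based on emulated output;
# consumer determines validity and whether to enable deployment.
# All names and values are illustrative, not from the application text.
from dataclasses import dataclass


@dataclass
class InferenceEmulationRequest:
    ml_entity_id: str        # identifier of the machine learning entity (claim 19)
    reporting_period_s: int  # reporting period for periodic reports (claim 18)


@dataclass
class InferenceEmulationReport:
    ml_entity_id: str
    emulated_output: float   # stand-in for the entity's emulated inference output


class MLInferenceEmulator:
    def instantiate(self, req: InferenceEmulationRequest) -> InferenceEmulationReport:
        # Run the ML entity against emulated (non-live) network data, so its
        # behaviour can be observed without affecting the real network.
        emulated_output = 0.93  # placeholder emulation result
        return InferenceEmulationReport(req.ml_entity_id, emulated_output)


class MLEmulationConsumer:
    VALIDITY_THRESHOLD = 0.9  # hypothetical acceptance criterion

    def validate(self, emulator: MLInferenceEmulator, entity_id: str) -> bool:
        report = emulator.instantiate(InferenceEmulationRequest(entity_id, 60))
        # Claim 20: determine validity from the report and decide whether
        # to enable or disable deployment of the ML entity.
        return report.emulated_output >= self.VALIDITY_THRESHOLD


consumer = MLEmulationConsumer()
print(consumer.validate(MLInferenceEmulator(), "ml-entity-42"))  # → True
```

In this sketch a single report drives the decision; under claim 18 the emulator would instead stream a plurality of reports at the configured reporting period, with the consumer aggregating them before deciding.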
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2216346.3A GB2623992A (en) | 2022-11-03 | 2022-11-03 | Machine learning inference emulation |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202216346D0 GB202216346D0 (en) | 2022-12-21 |
GB2623992A true GB2623992A (en) | 2024-05-08 |
Family
ID=84839772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2216346.3A Pending GB2623992A (en) | 2022-11-03 | 2022-11-03 | Machine learning inference emulation |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2623992A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210092026A1 (en) * | 2019-09-23 | 2021-03-25 | Cisco Technology, Inc. | Model training for on-premise execution in a network assurance system |
2022
- 2022-11-03: GB application GB2216346.3A (GB2623992A) filed; status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11349725B2 (en) | Method and apparatus for providing cognitive functions and facilitating management in cognitive network management systems | |
US10523569B2 (en) | Dynamic creation and management of ephemeral coordinated feedback instances | |
US20160055077A1 (en) | Method, device, and program storage device for autonomous software product testing | |
US10536348B2 (en) | Operational micro-services design, development, deployment | |
EP4042636B1 (en) | Orchestrating sandboxing of cognitive network management functions | |
US11381463B2 (en) | System and method for a generic key performance indicator platform | |
US11189100B2 (en) | Systems and methods for optimizing extended reality experiences | |
US10608907B2 (en) | Open-loop control assistant to guide human-machine interaction | |
US20220345568A1 (en) | User-based chaos system | |
US20200104123A1 (en) | Intelligent agent framework | |
GB2623992A (en) | Machine learning inference emulation | |
EP3002681A1 (en) | Methods and apparatus for customizing and using a reusable database framework for fault processing applications | |
US12086833B2 (en) | Apparatuses and methods for facilitating a generation and use of models | |
EP4293519A1 (en) | Machine learning model testing | |
US20240251254A1 (en) | System and method for o-cloud node reconfiguration in a telecommunications system | |
US12143263B2 (en) | System and method for cordon of O-cloud node | |
US20240107442A1 (en) | System and method for o-cloud node shutdown in idle times to save energy consumption | |
US20240354609A1 (en) | Modeling of ai rule-engine behavior in quantum computers | |
US20240250878A1 (en) | System and method for providing a cloud resource optimization policy in telecommunications system | |
WO2024205618A1 (en) | System and method for managing status of o-cloud resource | |
US20240250879A1 (en) | Non-real-time ric architecture supporting coordinated ran and core information sharing and control | |
WO2024144758A1 (en) | Intent based optimization of ran and o-cloud resources in smo o-ran framework | |
EP4385223A1 (en) | Apparatus, method, and computer program | |
Mwanje et al. | Towards Actualizing Network Autonomy | |
KR20240065271A (en) | Data analysis model management methods, electronic equipment and storage media |