WO2024145925A1 - Model performance monitor mechanism for ai/ml positioning - Google Patents
Model performance monitor mechanism for AI/ML positioning
- Publication number
- WO2024145925A1 (PCT/CN2023/071056)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- pru
- positioning
- assistance data
- location
- Prior art date
- 2023-01-06
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/02—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
- G01S5/0205—Details
- G01S5/0236—Assistance data, e.g. base station almanac
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
Abstract
This disclosure describes a mechanism to monitor the performance of an AI/ML positioning model. Model monitoring is an important method for improving the generalization capability of AI/ML models. This invention introduces a method to monitor a model with a PRU (Positioning Reference Unit). Because the PRU location is known, the loss of the AI/ML model can be obtained. A PRU can provide assistance data for AI/ML model training; the assistance data includes model input data and model output labels. The network can send a model to be monitored to the PRU. This model may or may not be trained. If the model is not trained, the network sends assistance data to the PRU and the PRU trains the model. The PRU then runs inference with the model to obtain an estimated location and calculates the difference between the known location and the estimated location.
Description
The present disclosure relates generally to wireless communications, and more specifically, to techniques for positioning a user equipment (UE) with Artificial Intelligence (AI)/Machine Learning (ML).
3GPP (The 3rd Generation Partnership Project) approved a study item on AI/ML for positioning accuracy enhancement. The performance of an AI/ML model changes with the environment. If the performance of a model degrades, the system adjusts the model architecture, parameters, and so on to improve the performance. AI/ML model performance monitoring is therefore an important topic in AI/ML positioning.
SUMMARY
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
This invention introduces a PRU-based (Positioning Reference Unit) AI/ML model performance monitoring mechanism. A PRU is a piece of equipment at a known location that can provide assistance data for AI/ML model training. The assistance data includes model input data and model output labels. The network can send a model to be monitored to the PRU. This model may or may not be trained. If the model is not trained, the network sends assistance data to the PRU and the PRU trains the model. The PRU then runs inference with the model to obtain an estimated location and calculates the difference between the known location and the estimated location. This difference is reported as the model loss.
To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed figures set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
Figure 1 illustrates an indoor factory deployment scenario.
Figure 2 illustrates the high-level flow of this invention.
The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements” ) . These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Figure 1 shows an example of an indoor factory deployment scenario. There are 18 gNBs in the indoor factory, and the locations of all gNBs are fixed. UEs are randomly distributed in the factory. PRUs are deployed at known locations to provide AI/ML assistance data. There are also some clutter objects in the factory.
Assistance data includes AI/ML model input data and model output labels. For example, a gNB sends a positioning reference signal to the PRU, and the PRU estimates the channel delay profiles based on the received reference signal. These estimated channel delay profiles can be used as AI/ML model input. The model output label may be the location of the PRU, or the TOA (time of arrival) of the channel path between the gNB and the PRU. The model output label may also be an NLOS (non-line of sight) or LOS (line of sight) indicator of the channel path between the gNB and the PRU. The assistance data is used for AI/ML model training.
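For illustration only, one possible representation of a single assistance data record is sketched below; the class and field names are assumptions, not structures defined by this disclosure, but they capture the input/label pairing described above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AssistanceDataRecord:
    """One assistance data record provided by a PRU (illustrative sketch only).

    The disclosure only requires model input data (e.g. channel delay profiles)
    plus a model output label; the concrete fields here are hypothetical.
    """
    pru_id: str                                # which PRU produced this record
    delay_profiles: List[List[float]]          # per-gNB channel delay profiles (model input)
    label_location: Optional[Tuple[float, float]] = None  # (x, y) known PRU location, or
    label_toa: Optional[List[float]] = None    # per-gNB time of arrival, or
    label_los: Optional[List[bool]] = None     # per-gNB LOS/NLOS indicator
```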
The high-level flow of this invention is shown in Figure 2. In the first step, the network notifies the PRU to monitor the performance of one model and sends the model to the PRU. If the model is not trained, the network sends assistance data to the PRU in the second step, and the PRU trains the model with the received assistance data. In the third step, the PRU measures the model input and runs inference with the model to obtain the model output. In the fourth step, the PRU calculates the model loss. In the last step, the PRU reports the loss to the network.
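The five steps can be restated as PRU-side pseudocode, as in the minimal sketch below; the object interfaces (receive_model, measure_model_input, and so on) are placeholder names, not signaling messages defined by this disclosure.

```python
def monitor_model_at_pru(network, pru):
    """PRU-side monitoring flow following Figure 2 (illustrative sketch only)."""
    # Step 1: the network notifies the PRU and sends the model to be monitored.
    model = network.receive_model()

    # Step 2: if the model is untrained, receive assistance data and train it.
    if not model.is_trained:
        assistance_data = network.receive_assistance_data()
        model.train(assistance_data)

    # Step 3: measure the model input (e.g. channel delay profiles) and run inference.
    model_input = pru.measure_model_input()
    model_output = model.infer(model_input)

    # Step 4: compute the model loss against the PRU's known location.
    estimated_location = pru.location_from_output(model_output)
    loss = pru.compute_loss(pru.known_location, estimated_location)

    # Step 5: report the loss to the network.
    network.report_loss(loss)
```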
There are several different AI/ML models on the network side for positioning a UE, and different models achieve different performance in different environments. For example, consider model M1, model M2, model M3, and so on, together with assistance data D1 from PRU1, assistance data D2 from PRU2, assistance data D3 from PRU3, and so on. Different PRUs can provide different assistance data to the network. In this example, PRU1 is used to monitor the performance of M1.
If Network don’t train model M1 or trained M1 with assistance data from PRU1, network will send un-trained model M1 and assistance data D2/D3/…to PRU1. These assistance data should not include assistance data D1 from PRU1. During training stage, PRU1 train M1 based on received assistance data.
If the network has trained model M1 with assistance data from PRU2/PRU3/… (but not from PRU1), PRU1 does not need to train M1 again, and the network does not send assistance data to PRU1.
PRU1 measures the reference signal for positioning from the network to obtain the model input. With the model input and the trained model, PRU1 runs inference and obtains the model output.
If the monitored model is an AI/ML assisted positioning model, the model output is not an estimated PRU1 location, and PRU1 calculates the PRU1 location based on the model output. For example, if the AI/ML model output is TOA, PRU1 can use a TDoA (Time Difference of Arrival) positioning algorithm to estimate the location from the TOA values. If the monitored model is a direct AI/ML positioning model, the model output is the PRU1 location, and PRU1 uses that location directly to calculate the model loss. The model loss is the difference between the estimated location and the known location.
For example, the loss can be computed as the Euclidean distance loss = sqrt((x_k - x_e)^2 + (y_k - y_e)^2), where (x_k, y_k) is the known location and (x_e, y_e) is the estimated location.
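As a minimal sketch of this calculation, the snippet below assumes the loss is the Euclidean distance above, and also sketches the conventional step that an AI/ML assisted positioning model requires, namely mapping per-gNB TOA outputs to a location before the loss is computed; the solver shown (a simple Gauss-Newton range fit with NumPy) is an illustrative choice, not a method specified by this disclosure.

```python
import numpy as np

def location_from_toa(toa_s, gnb_xy, speed_of_light=3.0e8, iters=20):
    """Illustrative Gauss-Newton solve: per-gNB TOA values -> 2D location.

    Assumes the TOA values are referenced to a common time base, so each TOA
    times the speed of light is a range to the corresponding gNB. A real
    system would typically form TDoA pairs to remove the clock offset.
    """
    ranges = np.asarray(toa_s) * speed_of_light
    gnb_xy = np.asarray(gnb_xy, dtype=float)
    p = gnb_xy.mean(axis=0)                  # initial guess: centroid of the gNBs
    for _ in range(iters):
        diff = p - gnb_xy                    # (N, 2) vectors from each gNB to the estimate
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges
        residual = dist - ranges
        jac = diff / dist[:, None]           # Jacobian of the predicted ranges
        step, *_ = np.linalg.lstsq(jac, residual, rcond=None)
        p = p - step
    return p

def model_loss(known_xy, estimated_xy):
    """Euclidean distance between the known and estimated PRU locations."""
    return float(np.linalg.norm(np.asarray(known_xy) - np.asarray(estimated_xy)))
```

For a direct AI/ML positioning model the first function is unnecessary, and model_loss is applied to the model output directly.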
PRU1 sends the loss to the network. The network then chooses a model with a small loss to position a UE.
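The network-side selection can be as simple as choosing the model with the smallest reported loss, as in the sketch below; the dictionary of reported losses is an assumed, illustrative representation.

```python
def select_positioning_model(reported_losses):
    """Return the model identifier with the smallest PRU-reported loss.

    `reported_losses` maps model id -> loss in meters, e.g.
    {"M1": 1.8, "M2": 0.6, "M3": 2.4} -> "M2".
    """
    return min(reported_losses, key=reported_losses.get)
```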
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “UE,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”
While aspects of the present disclosure have been described in conjunction with the specific embodiments thereof that are proposed as examples, alternatives, modifications, and variations to the examples may be made. Accordingly, embodiments as set forth herein are intended to be illustrative and not limiting. There are changes that may be made without departing from the scope of the claims set forth below.
Claims (8)
- A method of wireless communication of a positioning reference unit (PRU), comprising:
receiving a configuration of a reference signal for positioning from a network;
receiving an AI/ML model to be monitored from the network;
receiving assistance data from the network if the AI/ML model is not trained;
training the AI/ML model based on the received assistance data if the model is not trained;
measuring the received reference signal for positioning from a base station;
inferencing to obtain a model output based on the measurement results and the trained model;
determining, based on the model output, an estimated location of the PRU;
calculating a model loss based on the known PRU location and the estimated PRU location; and
sending, to the network, the model loss.
- The method of claim 1, wherein the AI/ML model comprises one of: a direct AI/ML positioning model, or an AI/ML assisted positioning model.
- The method of claim 1, wherein the assistance data contains: a set of delay profiles of the channel, and a set of labels labelling each delay profile in the set.
- The method of claim 1, wherein the assistance data does not contain data from the current PRU.
- The method of claim 1, wherein the AI/ML model is a trained model or an un-trained model.
- The method of claim 2, wherein the model output of direct AI/ML positioning is an estimated PRU location.
- The method of claim 2, wherein the model output of AI/ML assisted positioning is not an estimated PRU location.
- The method of claim 7, wherein the PRU uses a conventional non-AI/ML method to obtain an estimated PRU location based on the model output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2023/071056 WO2024145925A1 (en) | 2023-01-06 | 2023-01-06 | Model performance monitor mechanism for ai/ml positioning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2023/071056 WO2024145925A1 (en) | 2023-01-06 | 2023-01-06 | Model performance monitor mechanism for ai/ml positioning |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024145925A1 (en) | 2024-07-11 |
Family
ID=91803542
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2023/071056 WO2024145925A1 (en) | 2023-01-06 | 2023-01-06 | Model performance monitor mechanism for ai/ml positioning |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024145925A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022155244A2 (en) * | 2021-01-12 | 2022-07-21 | Idac Holdings, Inc. | Methods and apparatus for training based positioning in wireless communication systems |
US20220317230A1 (en) * | 2021-04-01 | 2022-10-06 | Qualcomm Incorporated | Positioning reference signal (prs) processing window for low latency positioning measurement reporting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23914082; Country of ref document: EP; Kind code of ref document: A1 |