
CN110493710B - Trajectory reconstruction method and apparatus, computer device and storage medium - Google Patents

Trajectory reconstruction method and apparatus, computer device and storage medium

Info

Publication number
CN110493710B
CN110493710B
Authority
CN
China
Prior art keywords
model
time point
model parameter
positioning
time points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910660363.0A
Other languages
Chinese (zh)
Other versions
CN110493710A (en)
Inventor
尹峰
谢昂
崔曙光
艾渤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute of Big Data SRIBD
Chinese University of Hong Kong CUHK
Original Assignee
Shenzhen Research Institute of Big Data SRIBD
Chinese University of Hong Kong CUHK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute of Big Data SRIBD, Chinese University of Hong Kong CUHK filed Critical Shenzhen Research Institute of Big Data SRIBD
Priority to CN201910660363.0A priority Critical patent/CN110493710B/en
Publication of CN110493710A publication Critical patent/CN110493710A/en
Priority to PCT/CN2020/098555 priority patent/WO2021012879A1/en
Application granted granted Critical
Publication of CN110493710B publication Critical patent/CN110493710B/en
Priority to ZA2022/01092A priority patent/ZA202201092B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • G01S11/06Systems for determining distance or velocity not using reflection or reradiation using radio waves using intensity measurements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/025Services making use of location information using location based information parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Navigation (AREA)

Abstract

The application relates to a track reconstruction method, a track reconstruction device, computer equipment and a storage medium. The method comprises the following steps: acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network; generating initial positioning coordinates of each time point according to the signal intensity data of the wireless fidelity network of each time point; generating a displacement vector of each time point according to the inertial sensor data of each time point; inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points; and reconstructing the track according to each final positioning coordinate. By adopting the method, more accurate position estimation can be obtained under a complex object motion mode and a wireless propagation environment, and the precision of track reconstruction is improved.

Description

Trajectory reconstruction method and apparatus, computer device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a trajectory reconstruction method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of wireless communication network technology, mobile intelligent terminals have gradually penetrated every aspect of daily life, and navigation has become closely combined with them. Global Positioning System (GPS) satellite positioning is the most common way to acquire position information, but because satellite signals are easily blocked by buildings or interfered with by other factors, GPS positioning technology is not suitable for indoor occasions or for complex environments with dense high-rise buildings. With the development of wireless mobile communication technology, indoor positioning has become a hot research field to which researchers attach increasing importance.
Traditional indoor navigation techniques perform indoor positioning and trajectory reconstruction by collecting a single type of data. For example, traditional Bluetooth indoor positioning, WiFi positioning and ZigBee positioning techniques all rely on a single kind of measurement data. When the indoor wireless propagation environment is relatively complex and strongly interferes with that single measurement, the positioning result can deviate significantly, which in turn degrades the reconstructed movement trajectory.
Therefore, because it is difficult to effectively represent observation data collected in a complex wireless propagation environment, conventional indoor navigation technology cannot achieve high positioning accuracy and consequently cannot recover an accurate indoor motion trajectory.
Disclosure of Invention
In view of the above, it is necessary to provide a trajectory reconstruction method, a device, a computer device and a storage medium with high positioning accuracy and high trajectory reconstruction accuracy.
In a first aspect, an embodiment of the present invention provides a track reconstruction method, where the method includes:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and reconstructing the track according to each final positioning coordinate.
In a second aspect, an embodiment of the present invention provides a trajectory reconstruction apparatus, where the apparatus includes:
the acquisition module is used for acquiring inertial sensor data of the terminal equipment at a plurality of time points and signal intensity data of the wireless fidelity network;
the wireless fidelity network positioning module is used for generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network at the time points;
the displacement vector determination module is used for generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
the model positioning module is used for inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and the track reconstruction module is used for reconstructing a track according to each final positioning coordinate.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and reconstructing the track according to each final positioning coordinate.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and reconstructing the track according to each final positioning coordinate.
The track reconstruction method, the device, the computer equipment and the storage medium acquire inertial sensor data of the terminal equipment at a plurality of time points and signal intensity data of the wireless fidelity network; generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point; generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point; inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points; and reconstructing the track according to each final positioning coordinate. According to the track reconstruction method provided by the embodiment of the application, the displacement vector and the initial positioning coordinate corresponding to each time point are generated according to the inertial sensor data of each time point and the signal intensity data of the wireless fidelity network, the initial positioning coordinate, the displacement vector and the final positioning coordinate corresponding to each time point are modeled through the global positioning model, more accurate position estimation can be obtained in a complex object motion mode and in a complex wireless propagation environment, and the track reconstruction precision is improved.
Drawings
Fig. 1 is an implementation environment diagram of a trajectory reconstruction method provided in an embodiment of the present application;
fig. 2 is a flowchart of a track reconstruction method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 4 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 5 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 6 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 7 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 8 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 9 is a flowchart of another track reconstruction method provided in an embodiment of the present application;
fig. 10 is a block diagram of a trajectory reconstruction apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The trajectory reconstruction method provided by the application can be applied to the application environment shown in fig. 1, wherein the terminal 102 and the server 104 communicate via a network. In one embodiment, the terminal 102 and the server 104 communicate wirelessly (over 4G/5G or WiFi) using the MQTT protocol: the data received by the server 104 is first forwarded by the Mosquitto broker software running the MQTT protocol, then received and processed by a JavaScript script running on the Node.js platform, and the result is returned to the terminal 102. In an embodiment, the terminal 102 is internally provided with inertial sensors such as an accelerometer, a gyroscope, a magnetometer and a barometer, as well as a wireless access module, and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
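To illustrate the data path described above, the following Python sketch shows how a terminal-side client might publish one batch of sensor readings to a Mosquitto broker over MQTT. The broker address, topic name, payload layout and the use of the paho-mqtt library are illustrative assumptions, not part of the patented method.

```python
# Hypothetical sketch: publishing one batch of terminal sensor data to an MQTT
# broker such as Mosquitto; the broker address, topic name and payload layout
# are illustrative assumptions (paho-mqtt's one-shot publish helper is used).
import json
import paho.mqtt.publish as publish

BROKER_HOST = "broker.example.com"   # assumed Mosquitto broker address
TOPIC = "trajectory/sensor_data"     # assumed topic name

payload = {
    "timestamp": 1563500000.0,
    "accel": [0.01, 0.02, 9.81],          # m/s^2, illustrative values
    "gyro": [0.001, -0.002, 0.0],         # rad/s
    "wifi_rssi": {"ap_1": -52, "ap_2": -67},
}
publish.single(TOPIC, json.dumps(payload), qos=1, hostname=BROKER_HOST, port=1883)
```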
Referring to fig. 2, it shows a track reconstruction method provided in this embodiment, which is described by taking the method applied to the terminal in fig. 1 as an example, and includes the following steps:
step 202, acquiring inertial sensor data of the terminal device at a plurality of time points and signal intensity data of the wireless fidelity network.
In one embodiment of the present application, the terminal device is provided with inertial sensors such as an accelerometer, a gyroscope, a magnetometer and a barometer. In another embodiment of the present application, the terminal device is provided with a linear acceleration sensor, which is a composite sensor based on an accelerometer and a gyroscope, and with a rotation vector sensor, which is a composite sensor based on an accelerometer, a gyroscope and a magnetometer. Specifically, the inertial sensor data of the terminal device at a plurality of time points is acquired by the inertial sensors.
In one embodiment of the application, a WiFi module is built in the terminal device. Specifically, the WiFi module may acquire signal strength data of the WiFi networks of the terminal device at multiple time points, where the signal strength data of the WiFi networks at each time point includes signal strength data of the WiFi networks of multiple wireless access points received at the current time point.
And step 204, generating initial positioning coordinates corresponding to each time point according to the signal intensity data of the wireless fidelity network at each time point.
In an embodiment of the application, the distance from the terminal device to each wireless access point at the current time point is calculated according to the WiFi signal strength values of the multiple wireless access points corresponding to each time point, and the initial positioning coordinates of the terminal device at the current time point can be obtained through a geometric algorithm according to the actual positions of the wireless access points.
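As a rough illustration of this step, the sketch below inverts a log-distance path-loss model to obtain per-access-point distances and then applies a linearised least-squares multilateration as the "geometric algorithm". The path-loss parameter values and access point positions are assumptions for illustration; the patent does not prescribe this particular geometric algorithm.

```python
# Hypothetical sketch: inverting RSSI to distances and multilaterating an
# initial coordinate (path-loss parameters and AP positions are assumptions).
import numpy as np

def rssi_to_distance(rssi, A=-40.0, B=-25.0, d0=1.0):
    """Invert r = A + B*log10(d/d0) for d (illustrative parameter values)."""
    return d0 * 10.0 ** ((rssi - A) / B)

def initial_coordinate(ap_positions, rssi_values):
    """Linearised least-squares multilateration against the first AP."""
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    p = np.asarray(ap_positions, dtype=float)
    p0, d0_ = p[0], d[0]
    # Subtracting the first circle equation linearises the system A x = b.
    A_mat = 2.0 * (p[1:] - p0)
    b = (d0_**2 - d[1:]**2) + np.sum(p[1:]**2 - p0**2, axis=1)
    x, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
    return x

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
print(initial_coordinate(aps, [-55, -60, -63, -70]))
```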
And step 206, generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point.
In an embodiment of the present application, a moving direction of the terminal device corresponding to each time point is calculated according to an acceleration value, an angular velocity value, and a direction value of the terminal device corresponding to each time point, and a displacement vector corresponding to each time point can be obtained by combining a preset unit step length, where the displacement vector includes a step length from a current time point to a next time point and a moving direction.
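A minimal sketch of forming the displacement vector from a heading angle and a preset unit step length is given below; the step length value and the heading convention (clockwise from magnetic north) are assumptions for illustration.

```python
# Hypothetical sketch: building a per-step displacement vector from a heading
# angle and a preset unit step length (values are illustrative assumptions).
import numpy as np

UNIT_STEP_M = 0.7   # assumed preset unit step length in metres

def displacement_vector(heading_rad, step_length=UNIT_STEP_M):
    """Return the 2-D displacement u_t = step * (sin(heading), cos(heading)),
    with heading measured clockwise from magnetic north."""
    return step_length * np.array([np.sin(heading_rad), np.cos(heading_rad)])

u_t = displacement_vector(np.deg2rad(30.0))
```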
And 208, inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into the global positioning model to obtain final positioning coordinates of the time points.
In one embodiment of the present application, the global positioning model represents a mapping relationship between input parameters (initial positioning coordinates and displacement vectors corresponding to each time point) and output parameters (final positioning coordinates of each time point).
In an embodiment of the application, the global positioning model may be one of a Gaussian process state space model, a variational Gaussian process state space model, a Convolutional Neural Network (CNN) model, a Deep Belief Network (DBN) model, a Restricted Boltzmann Machine (RBM) model, or an autoencoder (AutoEncoder) model.
And step 210, reconstructing a track according to each final positioning coordinate.
In an embodiment of the present application, each final positioning coordinate reflects a positioning coordinate of the terminal device at each time, and the movement track of the terminal device may be reconstructed in a manner of sequentially connecting each final positioning coordinate in time order.
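The reconstruction itself reduces to ordering the final coordinates chronologically and connecting them; a minimal sketch (function and variable names are illustrative) is:

```python
# Hypothetical sketch: reconstructing the trajectory by connecting the final
# positioning coordinates in chronological order.
import numpy as np

def reconstruct_trajectory(timestamps, final_coords):
    """Sort the final coordinates by time and return the ordered polyline."""
    order = np.argsort(timestamps)
    return np.asarray(final_coords)[order]

traj = reconstruct_trajectory([2.0, 1.0, 3.0], [[1.2, 0.4], [0.0, 0.0], [2.1, 1.0]])
```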
In the track reconstruction method provided by the embodiment of the application, inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network are collected; generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point; generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point; inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points; and reconstructing the track according to each final positioning coordinate. According to the track reconstruction method provided by the embodiment of the application, the displacement vector and the initial positioning coordinate corresponding to each time point are generated according to the inertial sensor data of each time point and the signal intensity data of the wireless fidelity network, the initial positioning coordinate, the displacement vector and the final positioning coordinate corresponding to each time point are modeled through the global positioning model, more accurate position estimation can be obtained in a complex object motion mode and in a complex wireless propagation environment, and the track reconstruction precision is improved.
If a plurality of mobile terminals exist in the implementation environment, the global positioning model can be determined according to a plurality of local positioning models established by the plurality of mobile terminals. Therefore, please refer to fig. 3, which shows a flowchart of another trajectory reconstruction method provided in the present embodiment, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 2, the step 208 may specifically include the following steps:
step 302, training an initial positioning model according to the initial positioning coordinates and the displacement vectors corresponding to the time points to obtain a local positioning model; the local positioning model comprises a plurality of model parameters.
In an embodiment of the present application, after the initial positioning model is trained according to the initial positioning coordinates and the displacement vectors corresponding to the time points, a plurality of model parameters of the local positioning model can be obtained, and according to the plurality of model parameters, the local positioning model of the terminal device can be directly determined. The positioning model is determined by a mean function and a covariance function, wherein a part of model parameters are used for representing model parameters of the covariance function, and a part of model parameters are used for representing a covariance matrix of noise.
And 304, uploading the plurality of model parameters to the central node, so that the central node performs balancing operation on each model parameter of the plurality of local positioning models, and obtains a plurality of balanced model parameters.
In an embodiment of the present application, the central node is the server 104 in the implementation environment of fig. 1. The central node is in communication connection with the terminal device 102 and with the other terminal devices in the current scene. After the local positioning model is established on the terminal device 102, the plurality of model parameters contained in the model are uploaded to the central node; the central node also receives and stores the model parameters uploaded by the other terminal devices. To improve the positioning accuracy of the global positioning model, the central node performs balanced coordination on the model parameters of the local positioning models uploaded by the terminal devices and obtains the equalized model parameters of the global positioning model. In an example of the present application, each time a terminal device connected to the central node uploads the model parameters of its local positioning model, the central node updates the model parameters of the global positioning model and sends the updated model parameters of the global positioning model to all terminal devices connected to the central node.
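For illustration only, the sketch below realises the central node's balancing operation as a (weighted) average of the parameter sets uploaded by the terminals; the patent does not fix the balancing rule to averaging, so this scheme, and the parameter names used, are assumptions.

```python
# Hypothetical sketch of the central-node "balancing" step, realised here as a
# weighted average of the model parameters uploaded by each terminal; the exact
# balancing rule of the patented method is not reproduced.
import numpy as np

def equalize_parameters(uploaded, weights=None):
    """uploaded: list of dicts {param_name: np.ndarray}, one dict per terminal."""
    weights = np.ones(len(uploaded)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    names = uploaded[0].keys()
    return {
        name: sum(w * terminal[name] for w, terminal in zip(weights, uploaded))
        for name in names
    }

global_params = equalize_parameters([
    {"theta_g": np.array([1.0, 0.2]), "R": np.eye(2) * 0.5},
    {"theta_g": np.array([1.4, 0.3]), "R": np.eye(2) * 0.7},
])
```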
And step 306, receiving the equalized model parameters returned by the central node.
And 308, establishing a global positioning model according to the equalized model parameters.
In an embodiment of the present application, the number of the received equalized model parameters is the same as the number of the model parameters uploaded to the central node, and a single global positioning model can be determined by the equalized model parameters.
And 310, inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points.
In an embodiment of the present application, after receiving the equalized model parameters, the terminal device may directly use the model parameters to re-establish the global positioning model, and perform an operation on the input initial positioning coordinates and the input displacement vectors corresponding to the time points according to the global positioning model to obtain final positioning coordinates of the time points.
In the track reconstruction method provided by the embodiment of the application, an initial positioning model is trained according to the initial positioning coordinates and displacement vectors corresponding to all time points to obtain a local positioning model; the local positioning model comprises a plurality of model parameters; the plurality of model parameters are uploaded to a central node, so that the central node performs a balancing operation on the model parameters of the plurality of local positioning models and obtains a plurality of equalized model parameters; the equalized model parameters returned by the central node are received; a global positioning model is established according to the equalized model parameters; and the initial positioning coordinates and the displacement vectors corresponding to the time points are input into the global positioning model to obtain the final positioning coordinates of the time points. In this method, the central node performs the balancing operation on the model parameters of the local positioning models uploaded by the individual terminal devices and obtains the equalized model parameters of the global positioning model. This distributed model-building process enlarges the model scale and improves the accuracy of positioning based on the global positioning model; because only the model parameters of the local models are transmitted, and the computation related to model training and trajectory reconstruction stays on the mobile terminal, the privacy of the user's mobile phone data is protected.
Referring to fig. 4, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. Based on the embodiment shown in fig. 2, the local positioning model is a local variational gaussian process state space model, the plurality of model parameters include a first model parameter, a second model parameter, a third model parameter and a fourth model parameter, and the step 302 may specifically include the following steps:
step 402, modeling the mapping relation between the initial positioning coordinates corresponding to each time point and the real position coordinates by using a Gaussian process to obtain an observation quantity sub-model, wherein the observation quantity sub-model comprises a first model parameter and a second model parameter.
In one embodiment of the present application, WiFi received signal strength based WiFi positioning technology can only provide a rough location estimate, and the accuracy of the location estimate depends on the specific location of the pedestrian indoors, the distribution of wireless access points, and the complex wave propagation environment indoors. The method utilizes the Gaussian process to model the mapping relation between the initial positioning coordinate and the real position coordinate provided by the WiFi positioning technology.
Specifically, the Gaussian process describing the mapping relation between the initial positioning coordinates and the real position coordinates may be defined as:

y = g(x) + r

where g(x), whose hyperparameters are the first model parameter θ_g, follows a Gaussian process distribution

g(x) ~ GP(m_g(x), k_g(x, x'))

R is the covariance matrix of the observation noise r, and the mean function of the Gaussian process is m_g(x) = x.
In one embodiment of the present application, a unique observation quantity submodel is determined by the first model parameter and the second model parameter.
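The following sketch illustrates an observation submodel of this form with mean function m_g(x) = x; the squared-exponential choice for k_g and the numeric values standing in for θ_g and R are assumptions made only for illustration.

```python
# Hypothetical sketch of the observation submodel y = g(x) + r with prior mean
# m_g(x) = x; the squared-exponential covariance and the parameter values for
# theta_g and R are illustrative assumptions.
import numpy as np

def k_g(X1, X2, theta_g=(1.0, 2.0)):
    """Squared-exponential covariance with variance and length-scale theta_g."""
    var, ell = theta_g
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(X_train, Y_train, X_test, theta_g=(1.0, 2.0), R_scalar=0.3):
    """Posterior mean of g at X_test given noisy training pairs (x_n, y_n)."""
    K = k_g(X_train, X_train, theta_g) + R_scalar * np.eye(len(X_train))
    K_star = k_g(X_test, X_train, theta_g)
    # Residuals are zero-mean because the prior mean is m_g(x) = x.
    alpha = np.linalg.solve(K, Y_train - X_train)
    return X_test + K_star @ alpha
```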
And 404, modeling the mapping relation between the displacement vector corresponding to each time point and the real position coordinate by using a Gaussian process to obtain a state evolution sub-model, wherein the state evolution sub-model comprises a third model parameter and a fourth model parameter.
In one embodiment of the present application, the state evolution submodel is also represented by a Gaussian process, and its mean function can be characterized by the pedestrian dead reckoning technique, i.e.:

m_f(x_{t-1}, u_t) = x_{t-1} + u_t

where x_t represents the real position coordinates at the t-th time point and u_t represents the displacement vector at the t-th time point.
In one embodiment of the present application, a unique state evolution submodel is determined by the third model parameter and the fourth model parameter.
And 406, obtaining a local variation Gaussian process state space model according to the observation quantity submodel and the state evolution submodel.
In one embodiment of the present application, the local variational Gaussian process state space model can be represented by the observation quantity submodel and the state evolution submodel together. In an alternative embodiment, the local variational Gaussian process state space model further includes the probability distribution of the initial real position coordinates, denoted p(x_0).
according to the track reconstruction method provided by the embodiment of the application, the local variational Gaussian process state space model is divided into the state evolution sub-model and the observed quantity sub-model, and the state evolution sub-model and the observed quantity sub-model are respectively learned according to the sequence, so that the learning complexity of the local variational Gaussian process state space model is simplified, and the track reconstruction efficiency is improved.
The embodiment of the present application further provides another trajectory reconstruction method, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the above embodiment, the step 402 may specifically further include the following steps:
uploading the first model parameter, the second model parameter, the third model parameter and the fourth model parameter to the central node, so that the central node performs balancing operation on each model parameter of the plurality of local variational Gaussian process state space models, and obtains the balanced first model parameter, second model parameter, third model parameter and fourth model parameter.
The embodiment of the present application further provides another trajectory reconstruction method, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the above-described embodiment, the observation quantity model includes:
g(x) ~ GP(m_g(x), k_g(x, x'))

g_t = g(x_t)

y_t | g_t ~ N(y_t | g_t, R)

where m_g(x) represents the mean function of the Gaussian process g(x), k_g(x, x') represents the covariance function of the Gaussian process g(x) and is determined by the first model parameter θ_g, g_t represents the mean of the probability distribution y_t | g_t, and R represents the covariance of the probability distribution y_t | g_t; R is the second model parameter.
Referring to fig. 5, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 4, the step 402 may specifically include the following steps:
Step 502, generating a corresponding observed-quantity log marginal likelihood function according to the training data set {x_{1:N}, y_{1:N}}.

In one embodiment of the present application, the observed-quantity log marginal likelihood function is expressed as:

log p(y_{1:N} | x_{1:N}; θ_g, R) = log N(y_{1:N} | m_g(x_{1:N}), K_g + I_N ⊗ R)

where K_g is the covariance matrix built from k_g(x, x') on the training inputs, I_N is an identity matrix, and the training data set {x_{1:N}, y_{1:N}} describes the mapping relation between the initial positioning coordinates corresponding to each time point and the real positions.
And 504, obtaining a first model parameter and a second model parameter by maximizing the observed quantity log marginal likelihood function.
In an embodiment of the present application, the observed-quantity log marginal likelihood function may be maximized by a Conjugate Gradient (CG) method or the L-BFGS algorithm, thereby obtaining the first model parameter θ_g and the second model parameter R.
Step 506, determining an observation quantity sub-model according to the first model parameter and the second model parameter.
In one embodiment of the present application, the obtained first model parameter θ_g and second model parameter R determine a unique observation quantity submodel.
By the trajectory reconstruction method provided by this embodiment, the first model parameter and the second model parameter that determine the observed quantity submodel can be obtained by maximizing the observed-quantity log marginal likelihood function, and the observed quantity submodel is thereby obtained. This reduces the training workload of the local variational Gaussian process state space model and improves computational efficiency.
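A hedged sketch of this hyperparameter fit is shown below, using SciPy's L-BFGS-B optimiser as mentioned above; the squared-exponential kernel and the simplification of the noise covariance to an isotropic R = r·I are assumptions made for illustration.

```python
# Hypothetical sketch: fitting the first and second model parameters by
# maximising the log marginal likelihood with L-BFGS-B; the kernel form and
# the isotropic noise R = r*I are simplifying assumptions.
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, Y):
    var, ell, r = np.exp(log_params)           # positivity via log-parameterisation
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = var * np.exp(-0.5 * d2 / ell**2) + r * np.eye(len(X))
    resid = Y - X                               # prior mean is m_g(x) = x
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    return (0.5 * np.sum(resid * alpha)
            + Y.shape[1] * np.sum(np.log(np.diag(L)))
            + 0.5 * resid.size * np.log(2 * np.pi))

def fit_observation_model(X, Y):
    res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 2.0, 0.3]),
                   args=(X, Y), method="L-BFGS-B")
    return np.exp(res.x)   # (kernel variance, length-scale, noise) estimates
```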
The embodiment of the present application further provides another trajectory reconstruction method, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the above-described embodiments, the state evolution submodel includes:
f(x, u) ~ GP(m_f(x, u), k_f((x, u), (x', u')))

f_t = f(x_{t-1}, u_t)

x_t | f_t ~ N(x_t | f_t, Q)

where m_f(x, u) represents the mean function of the Gaussian process f(x, u), k_f((x, u), (x', u')) represents the covariance function of the Gaussian process f(x, u) and is determined by the third model parameter θ_f, f_t represents the mean of the probability distribution x_t | f_t, and Q represents the covariance of the probability distribution x_t | f_t; Q is the fourth model parameter.
Referring to fig. 6, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 4, the step 404 may specifically include the following steps:
Step 602, obtaining M inducing points {z_{1:M}, v_{1:M}}.
In one embodiment of the present application, to handle the case where neither the input nor the output of the state evolution submodel is observable, and to reduce the computational complexity of learning the state evolution submodel, the method adopts a variational sparse Gaussian process framework and introduces M auxiliary inducing points v_{1:M} to reflect the information contained in the observations. These inducing points are the values of the corresponding Gaussian process at the inducing inputs z_{1:M}, and they follow a joint Gaussian distribution with the unknown function values f_{1:T}.
Step 604, based on the M inducing points {z_{1:M}, v_{1:M}}, obtaining the evidence lower bound of the local variational Gaussian process state space model, wherein the evidence lower bound is a functional of the posterior approximations q(x_{0:T}) and q(v_{1:M}), the third model parameter θ_f, the fourth model parameter Q, and the inducing inputs z_{1:M}.
In an embodiment of the present application, based on the introduced M inducing points, the evidence lower bound of the initial local variational Gaussian process state space model may be obtained, where the evidence lower bound is expressed as:

log p(y_{1:T}) ≥ E_{q(x_{0:T}, f_{1:T}, v_{1:M})}[ log p(y_{1:T}, x_{0:T}, f_{1:T}, v_{1:M}) − log q(x_{0:T}, f_{1:T}, v_{1:M}) ]

where q(x_{0:T}, f_{1:T}, v_{1:M}) is the variational approximation of the posterior distribution p(x_{0:T}, f_{1:T}, v_{1:M} | y_{1:T}). The joint distribution of all variables in the Gaussian process state space model can be written as:

p(y_{1:T}, x_{0:T}, f_{1:T}, v_{1:M}) = p(x_0) p(v_{1:M}) ∏_{t=1}^{T} p(f_t | f_{1:t-1}, v_{1:M}, x̂_{0:t-1}) p(x_t | f_t) p(y_t | x_t)

where p(f_{1:T} | v_{1:M}) is the posterior of the Gaussian process at the inputs x̂_{0:T-1}, and x̂_{t-1} = (x_{t-1}, u_t) is the input of the state evolution submodel at time t. In order to make the evidence lower bound easier to process, the invention selects the following approximate form of the posterior distribution:

q(x_{0:T}, f_{1:T}, v_{1:M}) = q(x_{0:T}) p(f_{1:T} | v_{1:M}) q(v_{1:M})

Thus, the evidence lower bound of the Gaussian process state space model can be written as a functional of the posterior approximations q(x_{0:T}) and q(v_{1:M}), the model parameters θ_f and Q, and the inducing inputs z_{1:M}:

L(q(x_{0:T}), q(v_{1:M}), θ_f, Q, z_{1:M}) = −KL(q(v_{1:M}) || p(v_{1:M})) + H(q(x_{0:T})) + E_q[log p(x_0)] + Σ_{t=1}^{T} E_q[log p(x_t | f_t)] + Σ_{t=1}^{T} E_q[log p(y_t | x_t)]

where KL(·||·) represents the KL divergence and H(·) represents the information entropy.
Step 606, fixing {θ_f, Q, z_{1:M}} and maximizing the evidence lower bound to obtain q(x_{0:T}) and q(v_{1:M}).

In one embodiment of the present application, when {θ_f, Q, z_{1:M}} are fixed, the optimal q(x_{0:T}) and q(v_{1:M}) can be derived by maximizing the evidence lower bound above using variational methods. The optimal q*(v_{1:M}) obtained in this way is a Gaussian distribution, and its characteristic parameters (natural parameters) follow from the maximization in closed form. The optimal q*(x_{0:T}) is obtained in the same way and is used as the posterior approximation q(x_{0:T}).
Step 608, fixing q(x_{0:T}) and q(v_{1:M}) and maximizing the evidence lower bound to obtain the third model parameter θ_f and the fourth model parameter Q.

In one embodiment of the present application, the model parameters θ_f and Q and the inducing inputs z_{1:M} can be obtained by maximizing the evidence lower bound. The invention adopts a gradient-based optimization scheme, in which the gradient of the evidence lower bound with respect to the parameters {θ_f, Q, z_{1:M}} is computed and used to update them.
and step 610, determining a state evolution sub-model according to the third model parameter and the fourth model parameter.
In one embodiment of the present application, the obtained third model parameter θ_f and fourth model parameter Q determine a unique state evolution submodel.
By the trajectory reconstruction method provided by this embodiment, the evidence lower bound of the Gaussian process state space model is maximized while alternately fixing {θ_f, Q, z_{1:M}} and {q(x_{0:T}), q(v_{1:M})}, which yields the third model parameter and the fourth model parameter that determine the state evolution submodel, and hence the state evolution submodel itself. This reduces the training workload of the local variational Gaussian process state space model and improves computational efficiency.
Referring to fig. 7, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 2, the step 204 may specifically include the following steps:
step 702, establishing a corresponding positioning log-likelihood function according to the signal intensity data of the wireless fidelity network at each time point.
In an embodiment of the present application, the signal strength data of the wireless fidelity network at each time point includes the signal strengths of the wireless fidelity networks of the plurality of wireless access points received by the terminal device at the current time point. For each wireless access point, the mapping relation between the signal strength data of the wireless fidelity network and the scanning distance may be represented by a linear logarithmic model:

r_i = A + B log10(d_i / d_0) + ε_i

where r_i is the signal strength of the wireless fidelity network of the i-th wireless access point obtained over multiple acquisitions, A and B are signal path loss parameters, d_0 is a reference distance, and d_i is the scanning distance, which represents the distance between the wireless access point and the terminal device location. The noise ε_i is assumed to be independent and identically distributed Gaussian noise with mean 0 and variance σ². The unknown parameters A, B and σ of the linear logarithmic model for each wireless access point may be estimated by a linear least squares method.

In an embodiment of the present application, after obtaining the parameters of the linear logarithmic models of all wireless access points, the invention obtains the initial positioning coordinates based on Maximum Likelihood Estimation (MLE). Assume that one scan at an arbitrary position yields the wireless fidelity network signal strengths of M_AP wireless access points, where each measured signal strength is attributed to an independent wireless access point. The positioning log-likelihood function l(x) sums, over the M_AP wireless access points, the log-likelihood of each measured signal strength under its linear logarithmic model; access points whose signal strength falls below the cutoff limit P_dec of the wireless fidelity network signal strength contribute censored terms expressed through the standard Gaussian error function erf(·).
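For illustration, the sketch below fits the path-loss parameters A, B and the noise standard deviation σ of a single access point by linear least squares; the sample distances and signal strengths are made-up values.

```python
# Hypothetical sketch: estimating the path-loss parameters A, B and the noise
# standard deviation sigma of one access point by linear least squares
# (measurement values are illustrative).
import numpy as np

def fit_path_loss(distances, rssi, d0=1.0):
    """Fit r = A + B*log10(d/d0) + noise and return (A, B, sigma)."""
    X = np.column_stack([np.ones(len(distances)),
                         np.log10(np.asarray(distances, float) / d0)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(rssi, float), rcond=None)
    A_hat, B_hat = coef
    sigma_hat = np.sqrt(np.mean((np.asarray(rssi) - X @ coef) ** 2))
    return A_hat, B_hat, sigma_hat

A_hat, B_hat, sigma_hat = fit_path_loss([1, 2, 4, 8], [-40.5, -47.9, -55.2, -62.8])
```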
and 704, obtaining initial positioning coordinates corresponding to each time point by maximizing a positioning log-likelihood function.
In one embodiment of the present application, the initial positioning coordinates corresponding to each time point can be obtained by maximizing the log-likelihood function, that is:
y = arg max_x l(x)

where y represents the initial positioning coordinate.
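A simplified sketch of this maximisation is given below; it keeps only the Gaussian terms of the detected access points and omits the censored erf(·) terms, which is a deliberate simplification of the likelihood described above, and the optimiser choice is an assumption.

```python
# Hypothetical sketch: obtaining the initial positioning coordinate by
# maximising a positioning log-likelihood l(x); censored (below-cutoff)
# access points are ignored here as a simplifying assumption.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(x, ap_positions, rssi, A, B, sigma, d0=1.0):
    d = np.linalg.norm(np.asarray(ap_positions, float) - x, axis=1)
    mu = A + B * np.log10(np.maximum(d, 1e-6) / d0)
    return 0.5 * np.sum(((np.asarray(rssi) - mu) / sigma) ** 2)

def wifi_initial_coordinate(ap_positions, rssi, A, B, sigma, x0=(0.0, 0.0)):
    res = minimize(neg_log_likelihood, x0=np.asarray(x0, float),
                   args=(ap_positions, rssi, A, B, sigma), method="Nelder-Mead")
    return res.x   # y = arg max_x l(x)
```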
By the track reconstruction method provided by the embodiment, a positioning log-likelihood function can be established through the linear log model of each wireless access point and the signal intensity data of the wireless fidelity network at each time point, and the initial positioning coordinates of each time point can be obtained by maximizing the positioning log-likelihood function. The accuracy of the description of the initial positioning coordinates on the real position coordinates can be improved, the accuracy of the subsequent final position coordinates is further ensured, and the accuracy of the reconstructed track is improved.
Referring to fig. 8, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 2, the displacement vector is used to represent the motion direction and the step size of the terminal device at the time point, and the step 206 may specifically include the following steps:
and step 802, acquiring a rotation vector signal of the terminal equipment through the rotation vector sensor.
And step 804, acquiring the motion direction of each time point according to the rotation vector signal, wherein the motion direction is a declination angle between the orientation of the terminal equipment and the north pole of the geomagnetic field.
Step 806, determining the displacement vector of each time point according to the preset step length and the motion direction of each time point.
In an embodiment of the present application, the terminal device is kept flat with its front end facing the walking direction. The rotation vector sensor is a composite sensor based on an accelerometer, a gyroscope and a magnetometer; the quaternion it outputs is processed to obtain, at each time point, the declination angle between the orientation of the terminal device and the north pole of the geomagnetic field, and this angle is taken as the motion direction. Combining the motion direction of each time point with the preset unit step length determines the displacement vector of each time point.
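The heading extraction can be sketched as follows; the quaternion component order (x, y, z, w) and the yaw formula for a flat-held device are assumptions about the sensor output format.

```python
# Hypothetical sketch: converting the rotation-vector quaternion reported by
# the sensor into a heading (yaw) angle relative to magnetic north; the
# quaternion component order (x, y, z, w) is an assumption.
import numpy as np

def heading_from_quaternion(qx, qy, qz, qw):
    """Yaw about the vertical axis, in radians, for a flat-held device."""
    return np.arctan2(2.0 * (qw * qz + qx * qy), 1.0 - 2.0 * (qy**2 + qz**2))

heading = heading_from_quaternion(0.0, 0.0, 0.2588, 0.9659)  # ~30 degrees
```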
The displacement vector of each time point obtained by the embodiment can more accurately reflect the movement direction and the movement distance from each time point to the next time point, so that the accuracy of the subsequent final position coordinates is ensured, and the precision of the reconstructed track is improved.
Referring to fig. 9, a flowchart of another trajectory reconstruction method provided in this embodiment is shown, which can be applied to the terminal 102 in the implementation environment described above. On the basis of the embodiment shown in fig. 2, the method may further include the following steps:
and step 902, acquiring a linear acceleration signal of the terminal device relative to the gravity direction through a linear acceleration sensor.
Step 904, a moving average process is performed on the linear acceleration signal.
Step 906, performing local peak detection on the linear acceleration signal after the moving average processing to obtain a plurality of time points, wherein each time point is a time point when the linear acceleration signal is at a local peak.
In an embodiment of the present application, the terminal device is kept flat with its front end facing the walking direction. The linear acceleration sensor is a composite sensor based on an accelerometer and a gyroscope. The linear acceleration signal of the terminal device relative to the gravity direction output by the linear acceleration sensor is subjected to moving average processing, and local peaks of the moving-averaged signal are detected to obtain a plurality of time points; each time point is a time point at which the linear acceleration signal is at a local peak, i.e., a moment at which a footstep falls.
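A minimal sketch of this step-detection procedure is shown below; the moving-average window, peak height threshold and minimum step interval are illustrative assumptions.

```python
# Hypothetical sketch: smoothing the vertical linear-acceleration signal with
# a moving average and marking local peaks as foot-fall time points; window
# size and peak threshold are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def detect_step_times(timestamps, lin_acc_z, window=5, min_height=1.0, min_gap_s=0.3):
    kernel = np.ones(window) / window
    smoothed = np.convolve(lin_acc_z, kernel, mode="same")
    dt = np.median(np.diff(np.asarray(timestamps, float)))
    peaks, _ = find_peaks(smoothed, height=min_height,
                          distance=max(1, int(round(min_gap_s / dt))))
    return np.asarray(timestamps)[peaks]
```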
The plurality of time points obtained in this embodiment more accurately reflect the moments at which footsteps fall, which ensures the accuracy of the subsequent final position coordinates and improves the precision of the reconstructed trajectory.
It should be understood that, although the steps in the above-described flowcharts are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in the above-described flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or the stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least a portion of the sub-steps or stages of other steps.
Referring to fig. 10, a block diagram of a trajectory reconstruction apparatus 1000 according to an embodiment of the present application is shown. As shown in fig. 10, the trajectory reconstruction apparatus 1000 may include: an acquisition module 1001, a wireless fidelity network positioning module 1002, a displacement vector determination module 1003, a model positioning module 1004, and a trajectory reconstruction module 1005, wherein:
the acquisition module 1001 is configured to acquire inertial sensor data of the terminal device at multiple time points and signal strength data of the wireless fidelity network.
The wifi network positioning module 1002 is configured to generate an initial positioning coordinate corresponding to each time point according to the signal intensity data of the wifi network at each time point.
The displacement vector determination module 1003 is configured to generate a displacement vector corresponding to each time point according to the inertial sensor data of each time point.
The model positioning module 1004 is configured to input the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model, so as to obtain final positioning coordinates of the time points.
The track reconstructing module 1005 is configured to perform track reconstruction according to each final positioning coordinate.
In an embodiment of the present application, the model positioning module 1004 is specifically configured to: training an initial positioning model according to the initial positioning coordinates and the displacement vectors corresponding to the time points to obtain a local positioning model; the local positioning model comprises a plurality of model parameters; uploading the plurality of model parameters to a central node, so that the central node performs balancing operation on each model parameter of the plurality of local positioning models, and obtains a plurality of balanced model parameters; receiving the equalized model parameters returned by the central node; establishing the global positioning model according to the equalized model parameters; and inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into the global positioning model to obtain final positioning coordinates of the time points.
In an embodiment of the present application, the local positioning model is a local variational gaussian process state space model, the plurality of model parameters include a first model parameter, a second model parameter, a third model parameter and a fourth model parameter, and the model positioning module 1004 is further specifically configured to: modeling the mapping relation between the initial positioning coordinates corresponding to each time point and the real position coordinates by utilizing a Gaussian process to obtain an observation quantity sub-model, wherein the observation quantity sub-model comprises the first model parameters and the second model parameters; modeling the mapping relation between the displacement vector corresponding to each time point and the real position coordinate by utilizing a Gaussian process to obtain a state evolution sub-model, wherein the state evolution sub-model comprises the third model parameter and the fourth model parameter; and obtaining the local variation Gaussian process state space model according to the observation quantity sub-model and the state evolution sub-model.
In an embodiment of the present application, the plurality of model parameters include a first model parameter, a second model parameter, a third model parameter and a fourth model parameter, and the model positioning module 1004 is further specifically configured to: uploading the first model parameter, the second model parameter, the third model parameter and the fourth model parameter to a central node, so that the central node performs a balancing operation on each model parameter of the plurality of local variational Gaussian process state space models, and obtains the balanced first model parameter, the balanced second model parameter, the balanced third model parameter and the balanced fourth model parameter.
In one embodiment of the present application, the observation quantity model includes:
g(x) ~ GP(m_g(x), k_g(x, x'))

g_t = g(x_t)

y_t | g_t ~ N(y_t | g_t, R)

where m_g(x) represents the mean function of the Gaussian process g(x), k_g(x, x') represents the covariance function of the Gaussian process g(x) and is determined by the first model parameter θ_g, g_t represents the mean of the probability distribution y_t | g_t, and R represents the covariance of the probability distribution y_t | g_t; R is the second model parameter.
In an embodiment of the present application, the model positioning module 1004 is further specifically configured to: generate a corresponding observed-quantity log marginal likelihood function from the training data set {x_{1:N}, y_{1:N}}:

log p(y_{1:N} | x_{1:N}; θ_g, R) = log N(y_{1:N} | m_g(x_{1:N}), K_g + I_N ⊗ R)

where K_g is the covariance matrix built from k_g(x, x') on the training inputs, I_N is an identity matrix, and the training data set {x_{1:N}, y_{1:N}} describes the mapping relation between the initial positioning coordinates corresponding to each time point and the real positions; obtain the first model parameter and the second model parameter by maximizing the observed-quantity log marginal likelihood function; and determine the observation quantity submodel according to the first model parameter and the second model parameter.
In one embodiment of the present application, the state evolution submodel includes:
f(x, u) ~ GP(m_f(x, u), k_f((x, u), (x', u')))

f_t = f(x_{t-1}, u_t)

x_t | f_t ~ N(x_t | f_t, Q)

where m_f(x, u) represents the mean function of the Gaussian process f(x, u), k_f((x, u), (x', u')) represents the covariance function of the Gaussian process f(x, u) and is determined by the third model parameter θ_f, f_t represents the mean of the probability distribution x_t | f_t, and Q represents the covariance of the probability distribution x_t | f_t; Q is the fourth model parameter.
In an embodiment of the present application, the model positioning module 1004 is further specifically configured to: obtain M inducing points {z_{1:M}, v_{1:M}}; based on the M inducing points {z_{1:M}, v_{1:M}}, obtain the evidence lower bound of the local variational Gaussian process state space model, where the evidence lower bound is a functional of the posterior approximations q(x_{0:T}) and q(v_{1:M}), the third model parameter θ_f, the fourth model parameter Q, and the inducing inputs z_{1:M}; fix {θ_f, Q, z_{1:M}} and maximize the evidence lower bound to obtain q(x_{0:T}) and q(v_{1:M}); fix q(x_{0:T}) and q(v_{1:M}) and maximize the evidence lower bound to obtain the third model parameter θ_f and the fourth model parameter Q; and determine the state evolution submodel according to the third model parameter and the fourth model parameter.
In an embodiment of the present application, the wifi network positioning module 1002 is specifically configured to: establishing a corresponding positioning log-likelihood function according to the signal intensity data of the wireless fidelity network at each time point; and obtaining initial positioning coordinates corresponding to each time point by maximizing the positioning log-likelihood function.
In an embodiment of the present application, the displacement vector determining module 1003 is specifically configured to: acquiring a rotation vector signal of the terminal equipment through a rotation vector sensor; obtaining the motion direction of each time point according to the rotation vector signal, wherein the motion direction is a declination angle between the orientation of the terminal equipment and the north pole of the geomagnetic field; and determining the displacement vector of each time point according to the preset step length and the motion direction of each time point.
In an embodiment of the present application, the displacement vector determining module 1003 is further specifically configured to: acquiring a linear acceleration signal of the terminal equipment relative to the gravity direction through a linear acceleration sensor; performing moving average processing on the linear acceleration signal; and carrying out local peak detection on the linear acceleration signal subjected to the moving average processing to obtain a plurality of time points, wherein each time point is the time when the linear acceleration signal is at a local peak value.
For specific limitations of the trajectory reconstruction device, reference may be made to the above limitations of the trajectory reconstruction method, which are not described herein again. The modules in the trajectory reconstruction device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a trajectory reconstruction method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and reconstructing the track according to each final positioning coordinate.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into a global positioning model to obtain final positioning coordinates of the time points;
and reconstructing the track according to each final positioning coordinate.
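Taken together, the steps recited in the two embodiments above amount to the pipeline sketched below; every sub-procedure is reduced to a placeholder stub (the global positioning model here simply dead-reckons from the first wireless fidelity fix), so the sketch shows only the data flow, not the disclosed implementation.

```python
# End-to-end sketch of the recited steps: acquire per-time-point data, derive
# initial coordinates and displacement vectors, query a (pre-built) global
# positioning model, and reconstruct the trajectory. All bodies are stubs.
import numpy as np

def wifi_initial_coordinates(rss_per_time):
    # Placeholder: one initial (x, y) per time point from Wi-Fi signal strength.
    return [np.zeros(2) for _ in rss_per_time]

def displacement_vectors(imu_per_time):
    # Placeholder: one displacement vector per time point from inertial data.
    return [np.array([0.7, 0.0]) for _ in imu_per_time]

def global_positioning_model(init_coords, displacements):
    # Placeholder for the trained global model: here, dead-reckon from the
    # first Wi-Fi fix by accumulating displacement vectors.
    coords = [np.asarray(init_coords[0], float)]
    for d in displacements[1:]:
        coords.append(coords[-1] + d)
    return coords

def reconstruct_trajectory(rss_per_time, imu_per_time):
    init = wifi_initial_coordinates(rss_per_time)
    disp = displacement_vectors(imu_per_time)
    final = global_positioning_model(init, disp)
    return np.vstack(final)  # ordered final positioning coordinates = trajectory

if __name__ == "__main__":
    T = 5
    print(reconstruct_trajectory([None] * T, [None] * T))
```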
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The above-mentioned embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A method for trajectory reconstruction, the method comprising:
acquiring inertial sensor data of terminal equipment at a plurality of time points and signal intensity data of a wireless fidelity network;
generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network of each time point;
generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
training an initial positioning model according to the initial positioning coordinates and the displacement vectors corresponding to the time points to obtain a local positioning model; the local positioning model comprises a plurality of model parameters;
uploading the plurality of model parameters to a central node, so that the central node performs balancing operation on each model parameter of the plurality of local positioning models, and obtains a plurality of balanced model parameters;
receiving the equalized model parameters returned by the central node;
establishing a global positioning model according to the equalized model parameters;
inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into the global positioning model to obtain final positioning coordinates of the time points;
and reconstructing a track according to each final positioning coordinate.
2. The method according to claim 1, wherein the local positioning model is a local variational gaussian process state space model, the plurality of model parameters include a first model parameter, a second model parameter, a third model parameter and a fourth model parameter, and the training of the initial positioning model according to the initial positioning coordinates and the displacement vectors corresponding to the time points to obtain the local positioning model comprises:
modeling the mapping relation between the initial positioning coordinates corresponding to each time point and the real position coordinates by utilizing a Gaussian process to obtain an observation quantity submodel, wherein the observation quantity submodel comprises the first model parameters and the second model parameters, and the second model parameters represent a covariance matrix of probability distribution;
modeling the mapping relation between the displacement vector corresponding to each time point and the real position coordinate by utilizing a Gaussian process to obtain a state evolution sub-model, wherein the state evolution sub-model comprises the third model parameter and the fourth model parameter, and the fourth model parameter represents a covariance matrix of probability distribution;
and obtaining the local variation Gaussian process state space model according to the observation quantity sub-model and the state evolution sub-model.
3. The method of claim 2, wherein uploading the plurality of model parameters to a central node to enable the central node to perform a balancing operation on each model parameter of the plurality of local positioning models and obtain a plurality of balanced model parameters comprises:
uploading the first model parameter, the second model parameter, the third model parameter and the fourth model parameter to a central node, so that the central node performs a balancing operation on each model parameter of the plurality of local variational Gaussian process state space models, and obtains the balanced first model parameter, the balanced second model parameter, the balanced third model parameter and the balanced fourth model parameter.
4. The method of claim 3, wherein the observation quantity submodel comprises:

g(·) ~ GP(m_g(·), k_g(·, ·)),

g_t = g(x_t),

y_t | g_t ~ N(g_t, R),

in the formula, m_g(·) represents the mean function of the Gaussian process g, k_g(·, ·) represents the covariance function of the Gaussian process g and is determined by the first model parameter θ_g, g_t represents the mean of the probability distribution of the observation y_t (the initial positioning coordinates at the t-th time point), R represents the covariance matrix of that probability distribution and is the second model parameter, and x_t represents the true position coordinates at the t-th time point.
5. The method of claim 4, wherein the modeling the mapping relationship between the initial positioning coordinates corresponding to each time point and the real position by using the Gaussian process to obtain the observation quantity submodel comprises:

generating, from a training data set D_g, a corresponding observed quantity log marginal likelihood function:

log p(y_{1:T} | x_{1:T}) = log N(y_{1:T} | m_g(x_{1:T}), K_g(x_{1:T}, x_{1:T}) + R ⊗ I),

in the formula, I is an identity matrix, K_g(x_{1:T}, x_{1:T}) is the covariance matrix obtained by evaluating the covariance function k_g(·, ·) at x_{1:T}, and the training data set D_g describes the mapping relationship between the initial positioning coordinates corresponding to each time point and the real position;

obtaining the first model parameter and the second model parameter by maximizing the observed quantity log marginal likelihood function;

and determining the observation quantity submodel according to the first model parameter and the second model parameter.
6. The method of claim 2, wherein the state evolution submodel comprises:

f(·) ~ GP(m_f(·), k_f(·, ·)),

f_t = f(x_t, u_t),

x_{t+1} | f_t ~ N(f_t, Q),

in the formula, m_f(·) represents the mean function of the Gaussian process f, k_f(·, ·) represents the covariance function of the Gaussian process f and is determined by the third model parameter θ_f, f_t represents the mean of the probability distribution of the next state x_{t+1}, Q represents the covariance matrix of that probability distribution and is the fourth model parameter, x_t represents the true position coordinates at the t-th time point, and u_t represents the displacement vector at the t-th time point.
7. The method according to claim 6, wherein the modeling the mapping relationship between the displacement vector and the real position corresponding to each time point by using the Gaussian process to obtain the state evolution submodel comprises:

obtaining M induction points {z_{1:M}, v_{1:M}}, wherein z_{1:M} represents M inducing input values of the corresponding Gaussian process, and v_{1:M} represents M auxiliary induction points used for reflecting the information contained in the observed quantity, the M auxiliary induction points being the values of the corresponding Gaussian process at the inducing inputs z_{1:M};

obtaining, based on the M induction points {z_{1:M}, v_{1:M}}, an evidence lower bound of the local variational Gaussian process state space model, wherein the evidence lower bound is a functional of the approximate posteriors q(x_{0:T}) and q(v_{1:M}), the third model parameter θ_f, the fourth model parameter Q, and the inducing inputs z_{1:M};

fixing {θ_f, Q, z_{1:M}} and maximizing the evidence lower bound to obtain q(x_{0:T}) and q(v_{1:M});

fixing q(x_{0:T}) and q(v_{1:M}) and maximizing the evidence lower bound to obtain the third model parameter θ_f and the fourth model parameter Q;

and determining the state evolution submodel according to the third model parameter and the fourth model parameter.
8. The method of claim 1, wherein generating initial positioning coordinates corresponding to each of the time points according to the signal strength data of the wireless fidelity network at each of the time points comprises:
establishing a corresponding positioning log-likelihood function according to the signal intensity data of the wireless fidelity network at each time point;
and obtaining initial positioning coordinates corresponding to each time point by maximizing the positioning log-likelihood function.
9. The method according to claim 1, wherein the displacement vector is used to represent a motion direction and a step size of the terminal device at the time point; generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point, including:
acquiring a rotation vector signal of the terminal equipment through a rotation vector sensor;
obtaining the motion direction of each time point according to the rotation vector signal, wherein the motion direction is a declination angle between the orientation of the terminal equipment and the north pole of the geomagnetic field;
and determining the displacement vector of each time point according to the preset step length and the motion direction of each time point.
10. The method of claim 1, further comprising:
acquiring a linear acceleration signal of the terminal equipment relative to the gravity direction through a linear acceleration sensor;
performing moving average processing on the linear acceleration signal;
and carrying out local peak detection on the linear acceleration signal subjected to the moving average processing to obtain a plurality of time points, wherein each time point is the time when the linear acceleration signal is at a local peak value.
11. A trajectory reconstruction apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring inertial sensor data of the terminal equipment at a plurality of time points and signal intensity data of the wireless fidelity network;
the wireless fidelity network positioning module is used for generating initial positioning coordinates corresponding to the time points according to the signal intensity data of the wireless fidelity network at the time points;
the displacement vector determination module is used for generating a displacement vector corresponding to each time point according to the inertial sensor data of each time point;
the training module is used for training an initial positioning model according to the initial positioning coordinates and the displacement vectors corresponding to the time points to obtain a local positioning model; the local positioning model comprises a plurality of model parameters;
the balance operation module is used for uploading the plurality of model parameters to a central node so that the central node performs balance operation on each model parameter of the plurality of local positioning models to obtain a plurality of balanced model parameters;
a receiving module, configured to receive the equalized plurality of model parameters returned by the central node;
the establishing module is used for establishing a global positioning model according to the equalized model parameters;
the input module is used for inputting the initial positioning coordinates and the displacement vectors corresponding to the time points into the global positioning model to obtain final positioning coordinates of the time points;
and the track reconstruction module is used for reconstructing a track according to each final positioning coordinate.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN201910660363.0A 2019-07-22 2019-07-22 Trajectory reconstruction method and apparatus, computer device and storage medium Active CN110493710B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910660363.0A CN110493710B (en) 2019-07-22 2019-07-22 Trajectory reconstruction method and apparatus, computer device and storage medium
PCT/CN2020/098555 WO2021012879A1 (en) 2019-07-22 2020-06-28 Trajectory reconstruction method and apparatus, computer device and storage medium
ZA2022/01092A ZA202201092B (en) 2019-07-22 2022-01-24 Trajectory reconstruction method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910660363.0A CN110493710B (en) 2019-07-22 2019-07-22 Trajectory reconstruction method and apparatus, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110493710A CN110493710A (en) 2019-11-22
CN110493710B true CN110493710B (en) 2021-01-26

Family

ID=68547551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910660363.0A Active CN110493710B (en) 2019-07-22 2019-07-22 Trajectory reconstruction method and apparatus, computer device and storage medium

Country Status (3)

Country Link
CN (1) CN110493710B (en)
WO (1) WO2021012879A1 (en)
ZA (1) ZA202201092B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493710B (en) * 2019-07-22 2021-01-26 香港中文大学(深圳) Trajectory reconstruction method and apparatus, computer device and storage medium
CN111028347B (en) * 2019-12-24 2021-06-22 贝壳找房(北京)科技有限公司 Method and system for reconstructing a three-dimensional model of a physical workspace
CN113157844B (en) * 2021-04-26 2022-09-20 上海德衡数据科技有限公司 Agricultural Internet of things method, system and device based on Beidou positioning module
CN113820658B (en) * 2021-08-18 2024-09-20 上海闻泰电子科技有限公司 Wireless positioning method, wireless positioning device, electronic equipment and storage medium
CN113766634B (en) * 2021-08-31 2023-08-04 深圳Tcl新技术有限公司 Positioning method and device based on 5G, computer equipment and storage medium
CN113987700B (en) * 2021-10-22 2024-06-28 中国南方电网有限责任公司超高压输电公司检修试验中心 Mechanical stress calculation method and system for switching core of converter transformer on-load tap-changer
CN115470913B (en) * 2022-09-29 2024-07-26 南京师范大学 Reconstruction method and device of PIR sensor network behavior track based on quantum migration
CN117292085B (en) * 2023-11-27 2024-02-09 浙江大学 Entity interaction control method and device supporting three-dimensional modeling

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650431A (en) * 2009-05-20 2010-02-17 北京派瑞根科技开发有限公司 Method for obtaining self-movement locus of object on a basis of network computing
CN102087109A (en) * 2009-12-04 2011-06-08 财团法人资讯工业策进会 System, device and method for estimating position
CN106162555A (en) * 2016-09-26 2016-11-23 湘潭大学 Indoor orientation method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009030076A1 (en) * 2009-06-23 2010-12-30 Symeo Gmbh Synthetic aperture imaging method, method for determining a relative velocity between a wave-based sensor and an object or apparatus for performing the methods
CN202904027U (en) * 2012-04-11 2013-04-24 四川中软科技有限公司 Autonomous indoor positioning system
CN103776447B (en) * 2014-01-28 2016-08-17 无锡智感星际科技有限公司 One closely intelligent movable equipment room localization method
CN105163382A (en) * 2015-05-07 2015-12-16 中国科学院信息工程研究所 Indoor region location optimization method and system
CN105157706A (en) * 2015-08-25 2015-12-16 武汉易得路位置科技有限公司 WiFi hotspot position measuring method based on multi-sensor information
CN107302754A (en) * 2017-05-10 2017-10-27 广东工业大学 A kind of indoor positioning simple and easy method based on WiFi and PDR
CN109470238B (en) * 2017-09-08 2023-09-01 中兴通讯股份有限公司 Positioning method and device and mobile terminal
CN108668249B (en) * 2018-07-10 2021-01-22 北京物资学院 Indoor positioning method and device for mobile terminal
CN108761514A (en) * 2018-08-03 2018-11-06 北斗国信智能科技(北京)有限公司 A kind of positioning system and localization method merging the Big Dipper or GPS and sensor
CN110493710B (en) * 2019-07-22 2021-01-26 香港中文大学(深圳) Trajectory reconstruction method and apparatus, computer device and storage medium

Also Published As

Publication number Publication date
CN110493710A (en) 2019-11-22
ZA202201092B (en) 2023-11-29
WO2021012879A1 (en) 2021-01-28

Similar Documents

Publication Publication Date Title
CN110493710B (en) Trajectory reconstruction method and apparatus, computer device and storage medium
CN107635204B (en) Indoor fusion positioning method and device assisted by exercise behaviors and storage medium
US20160371394A1 (en) Indoor localization using crowdsourced data
Sweeney et al. Solving for relative pose with a partially known rotation is a quadratic eigenvalue problem
CN105044668A (en) Wifi fingerprint database construction method based on multi-sensor device
CN106471338A (en) Determine position in geographic area for the mobile device
CN111090688B (en) Smoothing processing method and device for time sequence data
WO2023082797A1 (en) Positioning method, positioning apparatus, storage medium, and electronic device
CN111707294B (en) Pedestrian navigation zero-speed interval detection method and device based on optimal interval estimation
CN111028346B (en) Reconstruction method and device of video object
US10733718B1 (en) Corruption detection for digital three-dimensional environment reconstruction
Huang et al. VariFi: Variational Inference for Indoor Pedestrian Localization and Tracking Using IMU and WiFi RSS
JP4921847B2 (en) 3D position estimation device for an object
CN115049769A (en) Character animation generation method and device, computer equipment and storage medium
TWI706295B (en) System and method for determining a trajectory
US20120144916A1 (en) Single gyroscope-based approach to determining spatial gait parameters
JP2020113116A (en) Motion generator, motion generation method, and program
TWI812053B (en) Positioning method, electronic equipment and computer-readable storage medium
TWI764842B (en) Ranging-type positioning system and method based on crowdsourced calibration
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product
CN116597402A (en) Scene perception method and related equipment thereof
CN108680175A (en) Synchronous superposition method and device based on rodent models
CN112880675A (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
US20230354258A1 (en) Data processing method and apparatus
Li et al. An enhanced transition model for unsupervised localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant