CN112733452B - Track prediction method, track prediction device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN112733452B (granted publication); application CN202110044510.9A
- Authority
- CN
- China
- Prior art keywords
- coordinates
- predicted
- time
- long
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Abstract
Embodiments of the invention provide a track prediction method, a track prediction apparatus, an electronic device, and a readable storage medium, which relate to the field of computer technology.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a track prediction method, a track prediction apparatus, an electronic device, and a readable storage medium.
Background
At present, in order to provide better service, a service platform often runs simulations in advance through a simulation system. For example, platform A is a designated-driving (chauffeur) service platform; while providing this service, platform A needs to dispatch orders to its designated drivers, and to make the dispatching more reasonable, the order dispatching needs to be simulated by the simulation system.
During simulation, in order for the simulation system to accurately reflect reality, it needs to predict the travel track of each driver (especially idle drivers). In the related art, a random probability matrix or a heat map is generally used for this prediction; however, as the prediction horizon grows, the accuracy of such methods keeps decreasing, and the simulation effect of the simulation system deteriorates accordingly.
Disclosure of Invention
In view of this, embodiments of the present invention provide a track prediction method, apparatus, electronic device, and readable storage medium to improve the accuracy of track prediction.
In a first aspect, a track prediction method is provided, applied to an electronic device, and the method includes:
acquiring a target feature state, where the target feature state is generated at least based on the starting coordinates of a target to be predicted; and
inputting the target feature state into a pre-trained long short-term memory model to determine a predicted trajectory comprising a plurality of predicted coordinates.
The long short-term memory model comprises a sequence of neuron units connected in series. Each neuron unit corresponds to a different first time node, and adjacent first time nodes are separated by a preset first time step. Each neuron unit is configured to receive at least first input information and second input information: the first input information comprises the cell state or hidden-layer features of the preceding neuron unit, and the second input information comprises the target feature state, or an intermediate feature state determined from the predicted coordinates output by the previous neuron unit in the sequence.
In a second aspect, a trajectory prediction apparatus is provided, applied to an electronic device, and the apparatus comprises:
a first acquisition module, configured to acquire a target feature state, the target feature state being generated at least based on the starting coordinates of a target to be predicted; and
a first determination module, configured to input the target feature state into a pre-trained long short-term memory model to determine a predicted track comprising a plurality of predicted coordinates.
The long short-term memory model comprises a sequence of neuron units connected in series. Each neuron unit corresponds to a different first time node, and adjacent first time nodes are separated by a preset first time step. Each neuron unit is configured to receive at least first input information and second input information: the first input information comprises the cell state or hidden-layer features of the preceding neuron unit, and the second input information comprises the target feature state, or an intermediate feature state determined from the predicted coordinates output by the previous neuron unit in the sequence.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing one or more computer program instructions which, when executed by the processor, implement the method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method according to the first aspect.
In embodiments of the invention, the long short-term memory model predicts, in chronological order, the coordinates corresponding to each first time node based on the plurality of serially connected neuron units. An LSTM learns the correlations within time-series data well and alleviates long-term dependency problems; moreover, the predicted coordinates at each first time node are determined by a neuron unit from the coordinates output by the preceding unit, so the embodiments can predict the coordinates of a series of subsequent first time nodes from a single initial target feature state. The predicted coordinates therefore remain strongly correlated with one another, and the model can still output an accurate predicted track as the prediction horizon extends.
Drawings
The above and other objects, features and advantages of embodiments of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a conventional trajectory prediction provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a track prediction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a track prediction process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another track prediction process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of determining the closest path coordinates according to an embodiment of the present invention;
FIG. 6 is a flowchart of another track prediction method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of training a long short-term memory model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another process for training a long short-term memory model according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a track prediction apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention is described below based on embodiments, but it is not limited to these embodiments. Certain specific details are set forth in the following detailed description; however, the invention can be fully understood by those skilled in the art without some of these details. Well-known methods, procedures, flows, components, and circuits are not described in detail so as not to obscure the essence of the invention.
Moreover, those of ordinary skill in the art will appreciate that the drawings are provided herein for illustrative purposes and that the drawings are not necessarily drawn to scale.
Unless the context clearly requires otherwise, the words "comprise," "comprising," and the like in the description are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, it is the meaning of "including but not limited to".
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
At present, in order to provide better service, a service platform often runs simulations in advance through a simulation system. For example, a certain platform is a designated-driving service platform; while providing this service, the platform needs to dispatch orders to its designated drivers.
During simulation, in order for the simulation system to accurately reflect reality, it needs to predict the wandering track of each driver (especially idle drivers). In the related art, a random probability matrix or a heat map is generally used to predict a driver's wandering track; however, as the prediction horizon grows, the accuracy of such methods keeps decreasing, and the simulation effect of the simulation system deteriorates accordingly.
A wandering track generally represents the movement of a target to be predicted while it is idle (if the target to be predicted is a driver, "idle" means the driver currently has no order). As shown in fig. 1, which is a schematic diagram of conventional track prediction provided in an embodiment of the present invention, the diagram includes a map 11, a track prediction algorithm 12, a start coordinate A, and an end coordinate B, where A is the starting coordinate of the target to be predicted, and the dotted line between A and B represents the track along which the target wanders from A to B.
To predict the movement track of the target, the start coordinate A may be input into the track prediction algorithm 12; after receiving A, the algorithm predicts the track of the target over a period of time X (such as the route shown by the dashed line in fig. 1) and determines the end coordinate B.
The track prediction algorithm 12 may be built on a random probability matrix or a heat map; however, the accuracy of such related-art algorithms gradually decreases as the time X increases, which in turn degrades the simulation effect of the simulation system.
To ensure that track prediction remains accurate over a long period of time, an embodiment of the invention provides a track prediction method applicable to an electronic device. The electronic device may be a server or a terminal device: the server may be a single server, a distributed server cluster, or a cloud server, and the terminal device may be a smartphone, a tablet computer, a personal computer (PC), or the like.
The track prediction method provided by the embodiment of the present invention is described in detail below with reference to fig. 2; it specifically includes the following steps:
in step 21, a target feature state is acquired.
The target characteristic state is generated at least based on the initial coordinates of the target to be predicted.
In an alternative embodiment, the target feature state may be generated based on the starting coordinates of the target to be predicted and a first preference parameter. Specifically, step 21 may be performed as follows: acquiring the first preference parameter of the target to be predicted, and performing feature fusion on the first preference parameter and the starting coordinates to determine the target feature state.
In the embodiment of the present invention, personalized features characterize the data that affects the travel track of the target to be predicted. Taking a designated-driving service platform as an example, if the target to be predicted is a driver on the platform, the personalized features may characterize data such as the driver's order-taking area preference and trip-time preference.
The order-taking area preference characterizes the areas in which the driver prefers to take orders (such as business districts or office-building areas), and the trip-time preference characterizes the periods in which the driver prefers to travel (such as 13:00-17:00). After the driver's personalized features are determined, they may be preprocessed (numerical normalization, one-hot encoding, etc.) to obtain the driver's first preference parameter, and the electronic device may then determine the driver's target feature state from the first preference parameter and the driver's starting coordinates.
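As a minimal sketch of this preprocessing and fusion step (the function and field names are illustrative, not taken from the patent), the first preference parameter can be built by one-hot encoding the categorical preference and min-max normalizing the numeric ones, then concatenating it with the starting coordinates:

```python
def one_hot(index, size):
    # Encode a categorical preference (e.g. preferred order-taking area) as a one-hot vector.
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def normalize(value, lo, hi):
    # Min-max normalize a numeric feature into [0, 1].
    return (value - lo) / (hi - lo)

def build_feature_state(lat, lng, area_pref, n_areas, trip_hour):
    # Fuse the starting coordinates with the first preference parameter by concatenation.
    coords = [normalize(lat, -90.0, 90.0), normalize(lng, -180.0, 180.0)]
    prefs = one_hot(area_pref, n_areas) + [normalize(trip_hour, 0.0, 24.0)]
    return coords + prefs

state = build_feature_state(31.23, 121.47, area_pref=2, n_areas=4, trip_hour=15)
```

Concatenation is only one possible fusion; the patent leaves the exact fusion operation open, and a learned embedding could replace the one-hot vector.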
By adding the first preference parameter to the target feature state, the state represents the current condition of the target to be predicted at a finer granularity, which in turn allows the electronic device to predict the target's wandering track more accurately.
In another alternative implementation, information such as the speed and altitude of the target to be predicted may also be added to the target feature state. Because speed and altitude affect the distance the target can travel within a period of time, including them in the target feature state lets the electronic device predict the travel track more accurately.
In step 22, the target feature states are input into a pre-trained Long Short-Term Memory (LSTM) model to determine a predicted trajectory comprising a plurality of predicted coordinates.
LSTM is a recurrent neural network specifically designed to overcome the long-term dependency problem of ordinary recurrent neural networks.
In an embodiment of the present invention, the long short-term memory model comprises a sequence of neuron units connected in series. Each neuron unit corresponds to a different first time node, and adjacent first time nodes are separated by a preset first time step. Each neuron unit is configured to receive at least first input information and second input information and to output the predicted coordinates of its first time node: the first input information comprises the cell state or hidden-layer features of the preceding neuron unit, and the second input information comprises the target feature state, or an intermediate feature state determined from the predicted coordinates output by the previous neuron unit in the sequence.
That is, the long short-term memory model comprises N serially connected neuron units: the first unit receives its first and second input information and outputs the first predicted coordinate after one first time step, and the i-th unit (2 ≤ i ≤ N, N ≥ 2) likewise outputs the i-th predicted coordinate after one first time step according to its first and second input information.
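The chain of units described above can be sketched as an autoregressive rollout. This is an illustrative reconstruction, not the patent's exact implementation: `step_fn` stands in for one neuron-unit computation and `fuse_fn` for the feature-fusion step, and the toy stand-ins below merely drift the coordinate by a fixed amount.

```python
def rollout(initial_state, step_fn, fuse_fn, n_steps):
    # Each unit receives the previous unit's hidden/cell state plus a feature
    # state and emits the predicted coordinate for its time node; the next
    # feature state is fused from the coordinate just predicted.
    hidden, cell = None, None
    feature_state = initial_state
    trajectory = []
    for _ in range(n_steps):
        hidden, cell, coord = step_fn(feature_state, hidden, cell)
        trajectory.append(coord)
        feature_state = fuse_fn(coord)
    return trajectory

# Toy stand-ins: drift the coordinate north-east by a fixed amount each step.
def toy_step(feat, hidden, cell):
    lat, lng = feat
    return None, None, (lat + 0.01, lng + 0.01)

track = rollout((31.23, 121.47), toy_step, lambda coord: coord, n_steps=3)
```

The key property this sketch captures is that every predicted coordinate feeds the next prediction, which is what gives the coordinates their strong mutual correlation.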
It should be noted that "first" in "first time node" and "first time step" merely distinguishes them from the "second" time node and time step introduced later. In practice, the first time step may have a fixed duration, or it may be a variable-length step that changes according to a preset rule; the embodiments of the invention do not limit this.
For example, the long short-term memory model may predict the track of the target to be predicted over one day; specifically, the number of first time nodes may be 24, i.e., each first time node corresponds to one full hour of the day, and the first time step is 1 hour.
For another example, with 24 first time nodes over a day (00:00-24:00), relatively dense first time nodes may be placed in peak periods and correspondingly sparse nodes in off-peak or late-night periods. That is, with a variable-length first time step, different numbers of first time nodes can be set in different periods so that prediction focuses on peak periods. Of course, the first time nodes and first time step may also be set in other applicable ways, which are not detailed here.
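A variable-length schedule of this kind can be sketched as follows; the step sizes and peak windows are illustrative assumptions, not values from the patent.

```python
def build_time_nodes(peak_ranges, peak_step_min=30, off_peak_step_min=120):
    # Place first time nodes over one day (minutes 0..1440): denser during
    # peak ranges, sparser otherwise.  peak_ranges are (start_min, end_min) pairs.
    nodes, t = [], 0
    while t < 24 * 60:
        nodes.append(t)
        in_peak = any(s <= t < e for s, e in peak_ranges)
        t += peak_step_min if in_peak else off_peak_step_min
    return nodes

# Example: morning (07:00-09:00) and evening (17:00-19:00) peaks.
nodes = build_time_nodes([(7 * 60, 9 * 60), (17 * 60, 19 * 60)])
```

Each entry of `nodes` would then correspond to one neuron unit in the sequence, so the model spends more of its units on the busy hours.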
In embodiments of the invention, the long short-term memory model predicts, in chronological order, the coordinates corresponding to each first time node based on the plurality of serially connected neuron units. An LSTM learns the correlations within time-series data well and alleviates long-term dependency problems; moreover, the predicted coordinates at each first time node are determined by a neuron unit from the coordinates output by the preceding unit, so the embodiments can predict the coordinates of a series of subsequent first time nodes from a single initial target feature state. The predicted coordinates therefore remain strongly correlated with one another, and the model can still output an accurate predicted track as the prediction horizon extends.
In an alternative embodiment, the long short-term memory model may be built on a two-layer unidirectional LSTM.
Specifically, as shown in fig. 3, which is a schematic diagram of a track prediction process according to an embodiment of the present invention, the diagram includes a plurality of first time nodes t1-tn separated by the first time step, neuron units 311-31n, neuron units 321-32n, and a linear function 33.
In practice, the model shown in fig. 3 is the two-layer unidirectional LSTM provided by the embodiment; the long short-term memory model may also be built from LSTMs with other structures (for example, a single-layer LSTM), which the embodiment of the present invention does not limit.
In connection with what is shown in fig. 2, during trajectory prediction the neuron unit 311 may receive a feature state x1 and a cell state c1, where x1 is the target feature state generated from the starting coordinates of the target to be predicted and the first preference parameter.
The cell state is the core of an LSTM for storing key information; it is updated through linear operations on the data stream, which avoids the gradient vanishing that long sequence dependencies would otherwise cause. When the data stream passes through a hidden-layer memory unit, the unit performs a series of operations to decide which part of the old information the cell state discards and which new information is added.
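The gate mechanism just described can be sketched for a single scalar cell. This is the generic LSTM cell step for illustration only; the weight names (`wf`, `uf`, etc.) are assumptions, not the patent's notation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    # Forget gate: which part of the old cell state to discard.
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])
    # Input gate and candidate: which new information to add.
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])
    # Output gate: what to expose as the hidden state.
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])
    c = f * c_prev + i * g  # linear update; helps gradients survive long sequences
    h = o * math.tanh(c)    # hidden state passed on to the next unit
    return h, c

# With all-zero weights every gate is 0.5 and the candidate is 0,
# so the cell state simply halves.
zero_w = {k: 0.0 for k in ("wf", "uf", "bf", "wi", "ui", "bi", "wg", "ug", "bg", "wo", "uo", "bo")}
h, c = lstm_cell_step(0.0, 0.0, 2.0, zero_w)
```

The additive form of the cell-state update (`c = f * c_prev + i * g`) is exactly the "linear operation on the data stream" the text refers to.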
After the neuron unit 311 processes the feature state x1 and the cell state c1, it passes its output vertically to the neuron unit 321 for further computation; at the same time, it passes its hidden state and cell state laterally to the neuron unit 312, the hidden state carrying the information obtained from that processing.
After receiving the data from the neuron unit 311, the neuron unit 321 may determine, from that data and the cell state c2, the predicted coordinate (lat, lng) corresponding to the first time node t2; at the same time, the neuron unit 321 passes its hidden-layer features and cell state laterally to the neuron unit 322. Here lat denotes the latitude and lng the longitude of the predicted coordinate.
The long short-term memory model may then generate an intermediate feature state x2 from the predicted coordinates of the first time node t2 and the first preference parameter of the target to be predicted.
Repeating the same processing, the long short-term memory model shown in fig. 3 gradually determines the predicted coordinates of each first time node and, when the condition for ending trajectory prediction is met, outputs the end-point coordinates through the linear function 33. The embodiment of the invention can then determine the wandering track of the target to be predicted from the predicted coordinates of first time nodes t2-t(n+1) and the starting coordinates at first time node t1.
Compared with a traditional single-layer LSTM, a long short-term memory model built on a two-layer unidirectional LSTM has stronger information-processing capability; that is, it can determine the predicted track of the target to be predicted more accurately.
Viewed from the result, the embodiment of the present invention determines the wandering track of the target to be predicted from the predicted coordinates that the long short-term memory model generates in chronological order. As shown in fig. 4, which is a schematic diagram of another trajectory prediction process provided by the embodiment, the diagram includes a map 41, a long short-term memory model 42, a start coordinate A, predicted coordinate points 1-5, and an end coordinate B.
As shown in fig. 4, after the long short-term memory model 42 receives the start coordinate A, it may determine, step by step in chronological order and following the processing flow of fig. 3, the predicted coordinates corresponding to each first time node, i.e., predicted coordinates 1-5 and the end coordinate B. Finally, the coordinates output by the model 42 form the wandering track of the target to be predicted (the track shown by the dashed line in fig. 4).
As can be seen from fig. 4, the wandering track is formed by connecting adjacent coordinates, and each predicted coordinate is determined by the model 42 from the previous coordinate, so the predicted coordinates are strongly correlated; in other words, the long short-term memory model 42 can output an accurate wandering track.
As can be seen from fig. 2, the predicted coordinates of each first time node are determined by a neuron unit from the coordinates output by the previous neuron unit. In an alternative embodiment, the intermediate feature state may be determined as follows: determine the path coordinate closest to the predicted coordinate output by the previous neuron unit in the sequence, and generate the intermediate feature state from that closest path coordinate.
In practice, to make the predicted track more useful to a simulation system, the path coordinate near the predicted coordinate can be used as the track point, so that the track of the target to be predicted better matches the real track; that is, the prediction effect of the electronic device is better.
Referring specifically to fig. 5, fig. 5 is a schematic diagram of determining a closest path coordinate according to an embodiment of the present invention, where the schematic diagram includes: map 51, predicted coordinates X and path coordinates Y.
After the long short-term memory model determines the predicted coordinate X, the path coordinate Y nearest to X may be determined (if X already lies on a road, X itself is used as the path coordinate); the model may then generate the intermediate feature state from the path coordinate Y.
In practice, the target to be predicted usually travels on roads, so taking the path coordinate Y as the track point makes the predicted track more consistent with the real track.
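A minimal sketch of this snapping step, using the haversine distance over a small set of candidate road points (the distance metric and sample coordinates are illustrative assumptions; a real system would query a road network index):

```python
import math

def haversine_m(a, b):
    # Great-circle distance in meters between two (lat, lng) points.
    lat1, lng1, lat2, lng2 = map(math.radians, (*a, *b))
    dlat, dlng = lat2 - lat1, lng2 - lng1
    s = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlng / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(s))

def snap_to_road(predicted, road_points):
    # Replace a raw predicted coordinate with the closest known path (road)
    # coordinate; a prediction already on a road point is kept unchanged.
    return min(road_points, key=lambda p: haversine_m(predicted, p))

roads = [(31.2300, 121.4700), (31.2310, 121.4712), (31.2325, 121.4730)]
snapped = snap_to_road((31.2312, 121.4710), roads)
```

The snapped coordinate, rather than the raw model output, is then fused with the preference parameter to form the next intermediate feature state.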
Further, in an alternative embodiment, generating the intermediate feature state from the closest path coordinate may be performed as follows: acquire the first preference parameter of the target to be predicted, and perform feature fusion on the first preference parameter and the closest path coordinate to determine the intermediate feature state.
When the intermediate feature state is determined by combining the closest path coordinate with the first preference parameter, it both reflects the real track of the target to be predicted and represents the target's current condition at a finer granularity; that is, such an intermediate feature state allows the wandering track of the target to be predicted to be determined more accurately.
To give the long short-term memory model a good prediction effect, the embodiment of the invention pre-trains the long short-term memory model; specifically, as shown in fig. 6, the long short-term memory model may be trained based on the following steps:
at step 61, a training set is acquired.
The training set comprises historical feature states corresponding to second time nodes of a sample target in a historical time sequence, the second time nodes are time nodes spaced by a second time step in the historical time sequence, and the historical feature states are generated at least based on the historical coordinates of the sample target in the historical time sequence.
It should be noted that the "second" in "second time node" and "second time step" merely distinguishes these terms from the "first" terms above; in practical applications, the second time step may be a fixed-duration time step or a variable-length time step that changes according to a preset rule, which is not limited by the embodiment of the present invention.
In practical applications, the service platform can acquire the historical service data it stores and construct a training set based on that data; the targets in the historical service data are the sample targets of the training set, and the coordinates of a sample target at each second time node in the historical time sequence are its historical coordinates.
In an alternative embodiment, the historical feature state corresponding to each second time node is generated based on the following steps: acquire a second preference parameter of the sample target, and perform feature fusion processing on the second preference parameter and the historical coordinates to determine the historical feature state.
Wherein the second preference parameter comprises at least a personalized feature of the sample target.
By adding the second preference parameter to the historical feature state, the historical feature state can represent the state of the sample target at a finer granularity; compared with training the long short-term memory model using only the historical coordinates, this trains the model over more dimensions.
In step 62, each historical feature state is sequentially input into the long short-term memory model, with the historical time sequence as the input sequence and the second time step as the time interval, and the output coordinates of the long short-term memory model at each second time node are determined.
Specifically, as shown in fig. 7, fig. 7 is a schematic diagram of training a long short-term memory model according to an embodiment of the present invention, where the diagram includes: a plurality of second time nodes t1'-tn' spaced by the second time step, neuron units 711-71n, neuron units 721-72n, and a linear function 73.
The long short-term memory model shown in fig. 7 has the same structure as the long short-term memory model shown in fig. 3; the prime symbol (') in fig. 7 is used to distinguish the symbols of fig. 7 from those of fig. 3.
In connection with what is described in fig. 6, during training the neuron unit 711 may receive the historical feature state x1' and the cell state c1'; the neuron unit 711 may then pass the hidden layer state laterally to the neuron unit 712 and the intermediate data longitudinally to the neuron unit 721; the neuron unit 721 may then receive the cell state c2' and the intermediate data transmitted by the neuron unit 711, and output the output coordinates corresponding to the second time node t1'.
Unlike the trajectory prediction process (i.e., the process shown in fig. 3), each historical feature state during training is a feature state predetermined from real historical data; that is, the historical feature states received by the neuron units 712-71n are not feature states determined from the output coordinates of the previous neuron unit.
In the embodiment of the invention, the historical feature state corresponding to each second time node is predetermined from real historical data, so a training set built from these historical feature states can effectively train the long short-term memory model.
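This difference between training (every input drawn from real history, i.e. teacher forcing) and prediction (each input derived from the previous output) can be sketched as follows; the `lstm_step` dummy and its trivial dynamics are illustrative stand-ins for the neuron units of figs. 3 and 7, not the actual model:

```python
def lstm_step(feature_state):
    """Placeholder for one neuron-unit column: returns output coordinates."""
    x, y = feature_state
    return (x + 1.0, y)  # dummy dynamics

def run_training_pass(historical_states):
    # Fig. 7: teacher forcing; every input comes from real historical data.
    return [lstm_step(s) for s in historical_states]

def run_prediction(initial_state, n_steps):
    # Fig. 3: autoregressive; each input is derived from the previous output.
    outputs, state = [], initial_state
    for _ in range(n_steps):
        coord = lstm_step(state)
        outputs.append(coord)
        state = coord  # intermediate feature state from the previous output
    return outputs

print(run_training_pass([(0.0, 0.0), (5.0, 0.0)]))  # [(1.0, 0.0), (6.0, 0.0)]
print(run_prediction((0.0, 0.0), 3))                # [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
```

Note how a wrong output at one step does not contaminate later training inputs, which is exactly why the predetermined historical feature states train the model effectively.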
In step 63, the model parameters of the long short-term memory model are adjusted based on the output coordinates and the historical coordinates corresponding to the second time nodes.
In the embodiment of the invention, an Adam optimizer can be used to adjust the model parameters of the long short-term memory model. Adam is an extension of stochastic gradient descent that is widely used for training deep learning models; unlike the classical stochastic gradient descent algorithm, it maintains exponential moving averages of the gradient and the squared gradient, so it can quickly reach a good training effect.
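A single Adam update can be written directly from these moving averages. This is a minimal sketch using the commonly cited default hyperparameters, not the optimizer implementation the patent relies on:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and the squared gradient (v), with bias correction at step t."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)        # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) for a few steps
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 101):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(float(theta[0]) < 1.0)  # parameter moved toward the minimum
```

In a real training loop `grad` would be the gradient of the first loss function with respect to the model parameters, produced by back propagation.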
In another alternative implementation, as shown in fig. 8, fig. 8 is a schematic diagram of another way of training the long short-term memory model according to an embodiment of the present invention, where the diagram includes: a plurality of second time nodes t1'-tn' spaced by the second time step, neuron units 711-71n, neuron units 721-72n, and a linear function 73.
The model shown in fig. 8 is the same as that shown in fig. 7, except that when adjusting the model parameters of the long short-term memory model, the embodiment shown in fig. 8 can do so based only on the output coordinates output by the neuron unit 72n and the historical coordinates corresponding to the second time node tn', which can effectively improve training efficiency and training speed.
In addition, when adjusting the model parameters of the long short-term memory model, the adjustment can be performed based on a preset loss function; specifically, the process of adjusting the model parameters may be performed as follows: adjust the model parameters of the long short-term memory model through a preset first loss function, based on each output coordinate and the historical coordinates corresponding to each second time node.
It should be noted that, in the process of adjusting the model parameters, the adjustment may be based on all the output coordinates and the historical coordinates corresponding to each second time node (as in the process shown in fig. 7), based only on the last output coordinate and the historical coordinate corresponding to the last second time node (as in the process shown in fig. 8), or based on only part of the output coordinates and the historical coordinates corresponding to part of the second time nodes, which is not limited in the embodiment of the present invention.
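Assuming a mean-squared-error first loss function (the patent does not name one), the all-step variant of fig. 7 and the last-step variant of fig. 8 differ only in which outputs enter the loss:

```python
def mse(a, b):
    """Mean squared error between two coordinate tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def loss_all_steps(outputs, targets):
    # Fig. 7 variant: every second time node contributes to the loss.
    return sum(mse(o, t) for o, t in zip(outputs, targets)) / len(outputs)

def loss_last_step(outputs, targets):
    # Fig. 8 variant: only the final output coordinate is supervised.
    return mse(outputs[-1], targets[-1])

outputs = [(0.0, 0.0), (1.0, 1.0)]
targets = [(0.0, 1.0), (1.0, 1.0)]
print(loss_all_steps(outputs, targets))  # 0.25
print(loss_last_step(outputs, targets))  # 0.0
```

A partial-step variant would simply restrict the sum in `loss_all_steps` to a chosen subset of the second time nodes.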
In the process of training the long short-term memory model, the model can also be optimized through a preset optimization algorithm; the process can be performed as follows: based on each output coordinate and the historical coordinates corresponding to each second time node, adjust the model parameters of the long short-term memory model through a preset first loss function and an optimization algorithm.
Wherein the optimization algorithm comprises at least one of a gradient clipping algorithm (Gradient Clipping) and a layer normalization algorithm (Layer Normalization).
The gradient clipping algorithm can be used to solve the gradient explosion problem during model back propagation: by clipping gradients whose magnitude exceeds a threshold, it can suppress gradient explosion while preserving important information.
The layer normalization algorithm is a normalization algorithm that normalizes across the different channels of the same sample target; it helps accelerate model convergence and improves training speed and training efficiency.
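Both techniques can be sketched in a few lines (clip-by-norm and per-sample layer normalization; the concrete threshold and epsilon are illustrative defaults, not values from the patent):

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    """Rescale the gradient if its L2 norm exceeds max_norm,
    preserving its direction (the "important information")."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

def layer_norm(x, eps=1e-5):
    """Normalize the channels of a single sample to zero mean, unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

g = clip_by_norm(np.array([3.0, 4.0]))     # norm 5.0, rescaled toward 1.0
h = layer_norm(np.array([1.0, 2.0, 3.0]))  # mean ~0 after normalization
print(np.linalg.norm(g), h.mean())
```

Because layer normalization statistics are computed per sample rather than per batch, it suits recurrent models such as the long short-term memory model here.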
In an alternative embodiment, the loss function may be replaced during training to further adjust the model parameters of the long short-term memory model; specifically, the process may be performed as follows: determine a second loss function, and adjust the model parameters of the long short-term memory model through the second loss function based on each output coordinate and the historical coordinates corresponding to each second time node.
With a single loss function, the model parameters may fail to reach an optimum; replacing the loss function allows the model parameters of the long short-term memory model to be adjusted further, so that they are brought closer to an optimum.
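As a hedged illustration of such a swap (the patent names neither loss; squared error as the first loss and a Huber-style loss as the second are assumptions), training can start with one loss and switch to another part-way through:

```python
def mse_loss(pred, target):
    """Hypothetical first loss function: squared error."""
    return (pred - target) ** 2

def huber_loss(pred, target, delta=1.0):
    """Hypothetical second loss function: less sensitive to outlier coordinates."""
    err = abs(pred - target)
    return 0.5 * err ** 2 if err <= delta else delta * (err - 0.5 * delta)

loss_fn = mse_loss          # train first with the first loss function
print(loss_fn(3.0, 0.0))    # 9.0
loss_fn = huber_loss        # "determine a second loss function" and continue
print(loss_fn(3.0, 0.0))    # 2.5
```

The switch changes only which gradient drives the parameter adjustment; the model and training loop are otherwise unchanged.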
In addition, in the embodiment of the present invention, the model may be further optimized by replacing the activation function; specifically, the process may be performed as follows: replace an activation function in the long short-term memory model, and adjust the model parameters of the long short-term memory model based on each output coordinate and the historical coordinates corresponding to each second time node.
In a neural network model, the activation function maps the inputs of neurons to outputs, increasing the nonlinear expressiveness of the neural network model and thereby its representational power.
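For context, a conventional LSTM cell uses the sigmoid for its gates (outputs in (0, 1), acting as soft switches) and tanh for the candidate state (outputs in (-1, 1)); which activation this embodiment replaces, and with what, is left open by the text:

```python
import math

def sigmoid(x):
    """Maps any real input into (0, 1); conventionally used for LSTM gates."""
    return 1.0 / (1.0 + math.exp(-x))

# tanh maps into (-1, 1); conventionally used for the candidate cell state.
for x in (-2.0, 0.0, 2.0):
    print(round(sigmoid(x), 3), round(math.tanh(x), 3))
```

Replacing an activation changes the range and gradient of these mappings, which is why the model parameters must be re-adjusted afterwards as described above.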
Based on the same technical concept, the embodiment of the invention further provides a track prediction device, as shown in fig. 9, where the device includes: a first acquisition module 91 and a first determination module 92.
The first obtaining module 91 is configured to obtain a target feature state, where the target feature state is generated at least based on the initial coordinates of a target to be predicted.
A first determining module 92 is configured to input the target feature state into a pre-trained long short-term memory model to determine a predicted trajectory comprising a plurality of predicted coordinates.
The long short-term memory model comprises a neuron unit sequence, the neuron unit sequence comprises a plurality of neuron units connected in series in order, each neuron unit corresponds to a different first time node, adjacent first time nodes are separated by a preset first time step, and each neuron unit is configured to receive at least first input information and second input information, where the first input information comprises the cell state or hidden layer features of the previous neuron unit, and the second input information comprises the target feature state or an intermediate feature state determined according to the predicted coordinates output by the previous neuron unit in the neuron unit sequence.
Optionally, the intermediate feature state is determined based on the following modules:
and the second determining module is used for determining the path coordinate with the closest predicted coordinate output by the last neuron unit in the neuron unit sequence, and the path coordinate is used for representing the coordinate on the road.
And the generating module is used for generating an intermediate characteristic state according to the closest path coordinates.
Optionally, the generating module is specifically configured to:
and acquiring a first preference parameter of the target to be predicted, wherein the first preference parameter at least comprises personalized characteristics of the target to be predicted.
And carrying out feature fusion processing on the first preference parameter and the closest path coordinate so as to determine an intermediate feature state.
Optionally, the target feature state is determined based on the following modules:
the second acquisition module is used for acquiring first preference parameters of the target to be predicted, wherein the first preference parameters at least comprise personalized features of the target to be predicted.
And the first feature fusion module is used for carrying out feature fusion processing on the first preference parameter and the initial coordinate so as to determine a target feature state.
Optionally, the long short-term memory model is trained based on the following modules:
The third acquisition module is configured to acquire a training set, where the training set includes historical feature states corresponding to second time nodes of a sample target in a historical time sequence, the second time nodes are time nodes spaced by a second time step in the historical time sequence, and the historical feature states are generated at least based on each historical coordinate of the sample target in the historical time sequence.
And the third determining module is used for sequentially inputting each historical feature state into the long short-term memory model, with the historical time sequence as the input sequence and the second time step as the time interval, and determining the output coordinates of the long short-term memory model at each second time node.
The first adjusting module is used for adjusting the model parameters of the long short-term memory model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
Optionally, the historical feature state corresponding to each second time node is generated based on the following modules:
a fourth obtaining module, configured to obtain a second preference parameter of the sample target, where the second preference parameter includes at least a personalized feature of the sample target.
And the second feature fusion module is used for performing feature fusion processing on the second preference parameter and the historical coordinates to determine the historical feature state.
Optionally, the adjusting module is specifically configured to:
and adjusting model parameters of the long-period and short-period memory model through a preset first loss function based on the output coordinates and the historical coordinates corresponding to the second time nodes.
Optionally, the adjusting module is specifically further configured to:
based on the output coordinates and the historical coordinates corresponding to the second time nodes, model parameters of the long-term and short-term memory model are adjusted through a preset first loss function and an optimization algorithm, wherein the optimization algorithm comprises at least one algorithm of a gradient clipping algorithm and a layer standardization algorithm.
Optionally, the apparatus further includes:
and a fourth determining module, configured to determine a second loss function.
And the second adjusting module is used for adjusting the model parameters of the long short-term memory model through the second loss function based on the output coordinates and the historical coordinates corresponding to the second time nodes.
Optionally, the apparatus further includes:
and the replacing module is used for replacing the activation function in the long-term and short-term memory model.
And the third adjustment module is used for adjusting the model parameters of the long-term and short-term memory model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
According to the embodiment of the invention, the long short-term memory model can sequentially predict, in time order, the predicted coordinates corresponding to each first time node based on the plurality of neuron units connected in series. In this process, the long short-term memory model can learn the correlation between time-series data well and alleviate problems such as long time dependence. In addition, in the embodiment of the invention, the predicted coordinate of each first time node is determined by a neuron unit based on the predicted coordinates output by the previous neuron unit; that is, the predicted coordinates of a plurality of subsequent first time nodes can be predicted from the initial target feature state alone, so that the predicted coordinates are strongly correlated with one another. Because of this strong correlation, the long short-term memory model can still output an accurate predicted trajectory as time goes on.
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 10, the electronic device includes a general computer hardware structure comprising at least a processor 101 and a memory 102, connected by a bus 103. The memory 102 is adapted to store instructions or programs executable by the processor 101. The processor 101 may be a standalone microprocessor or a collection of one or more microprocessors; it implements the processing of data and the control of other devices by executing the instructions stored in the memory 102, thereby performing the method flows of the embodiments of the invention described above. The bus 103 connects the above components together and connects them to a display controller 104, a display device, and input/output (I/O) devices 105. The input/output (I/O) devices 105 may be a mouse, keyboard, modem, network interface, touch input device, somatosensory input device, printer, or other devices known in the art. Typically, the input/output devices 105 are connected to the system through input/output (I/O) controllers 106.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus (device) or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may employ a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each of the flows in the flowchart may be implemented by computer program instructions.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
Another embodiment of the present invention is directed to a non-volatile storage medium storing a computer readable program for causing a computer to perform some or all of the method embodiments described above.
That is, it will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Another embodiment of the invention relates to a computer program product comprising a computer program/instruction which, when executed by a processor, can implement some or all of the above-described method embodiments.
That is, those skilled in the art will appreciate that embodiments of the invention may be implemented by a processor executing a computer program product (computer program/instructions) to instruct associated hardware, including the processor itself, to carry out all or part of the steps of the methods of the embodiments described above.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations may be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (23)
1. A method of trajectory prediction, the method comprising:
acquiring a target characteristic state, wherein the target characteristic state is generated at least based on the initial coordinates of a target to be predicted; and
inputting the target feature state into a pre-trained long short-term memory model to determine a predicted trajectory comprising a plurality of predicted coordinates;
wherein the long short-term memory model comprises a neuron unit sequence, the neuron unit sequence comprises a plurality of neuron units connected in series in order, each neuron unit corresponds to a different first time node, adjacent first time nodes are separated by a preset first time step, each neuron unit is configured to receive at least first input information and second input information, the first input information comprises the cell state or hidden layer features of the previous neuron unit, and the second input information comprises the target feature state or an intermediate feature state determined according to the predicted coordinates output by the previous neuron unit in the neuron unit sequence;
the predicted coordinates are generated by the long short-term memory model in the time order corresponding to the first time nodes, and the predicted trajectory is determined by connecting the predicted coordinates one by one in the time order corresponding to the first time nodes.
2. The method of claim 1, wherein the intermediate feature state is determined based on the steps of:
determining a path coordinate closest to a predicted coordinate output by a previous neuron unit in the sequence of neuron units, wherein the path coordinate is used for representing a coordinate on a road; and
and generating an intermediate feature state according to the closest path coordinates.
3. The method of claim 2, wherein generating an intermediate feature state from the closest path coordinates comprises:
acquiring a first preference parameter of the target to be predicted, wherein the first preference parameter at least comprises personalized features of the target to be predicted; and
and carrying out feature fusion processing on the first preference parameter and the closest path coordinate so as to determine an intermediate feature state.
4. The method of claim 1, wherein the target feature state is determined based on the steps of:
Acquiring a first preference parameter of the target to be predicted, wherein the first preference parameter at least comprises personalized features of the target to be predicted; and
and carrying out feature fusion processing on the first preference parameter and the initial coordinates to determine a target feature state.
5. The method of claim 1, wherein the long short-term memory model is trained based on the steps of:
acquiring a training set, wherein the training set comprises historical feature states corresponding to second time nodes of a sample target in a historical time sequence, the second time nodes are time nodes spaced by a second time step in the historical time sequence, and the historical feature states are generated at least based on historical coordinates of the sample target in the historical time sequence;
sequentially inputting each historical feature state into the long short-term memory model, with the historical time sequence as the input sequence and the second time step as the time interval, and determining the output coordinates of the long short-term memory model at each second time node; and
adjusting the model parameters of the long short-term memory model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
6. The method of claim 5, wherein the historical feature state corresponding to each of the second time nodes is generated based on the steps of:
acquiring a second preference parameter of the sample target, wherein the second preference parameter at least comprises personalized features of the sample target; and
and carrying out feature fusion processing on the second preference parameter and the historical coordinates to determine a historical feature state.
7. The method of claim 6, wherein adjusting the model parameters of the long short-term memory model based on each of the output coordinates and the historical coordinates corresponding to each of the second time nodes comprises:
adjusting the model parameters of the long short-term memory model through a preset first loss function based on the output coordinates and the historical coordinates corresponding to the second time nodes.
8. The method of claim 7, wherein adjusting the model parameters of the long short-term memory model through a preset first loss function based on each of the output coordinates and the historical coordinates corresponding to each of the second time nodes comprises:
adjusting the model parameters of the long short-term memory model through a preset first loss function and an optimization algorithm based on the output coordinates and the historical coordinates corresponding to the second time nodes, wherein the optimization algorithm comprises at least one of a gradient clipping algorithm and a layer normalization algorithm.
9. The method of claim 7, wherein after adjusting the model parameters of the long short-term memory model through a preset first loss function based on each of the output coordinates and the historical coordinates corresponding to each of the second time nodes, the method further comprises:
determining a second loss function; and
adjusting the model parameters of the long short-term memory model through the second loss function based on the output coordinates and the historical coordinates corresponding to the second time nodes.
10. The method according to any one of claims 5-9, further comprising:
replacing an activation function in the long short-term memory model; and
adjusting the model parameters of the long short-term memory model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
11. A trajectory prediction device, the device comprising:
the first acquisition module is used for acquiring a target characteristic state, and the target characteristic state is generated at least based on the initial coordinates of a target to be predicted; and
a first determining module for inputting the target feature state into a pre-trained long short-term memory model to determine a predicted trajectory comprising a plurality of predicted coordinates;
wherein the long short-term memory model comprises a neuron unit sequence, the neuron unit sequence comprises a plurality of neuron units connected in series in order, each neuron unit corresponds to a different first time node, adjacent first time nodes are separated by a preset first time step, each neuron unit is configured to receive at least first input information and second input information, the first input information comprises the cell state or hidden layer features of the previous neuron unit, and the second input information comprises the target feature state or an intermediate feature state determined according to the predicted coordinates output by the previous neuron unit in the neuron unit sequence;
the predicted coordinates are generated by the long short-term memory model in the time order corresponding to the first time nodes, and the predicted trajectory is determined by connecting the predicted coordinates one by one in the time order corresponding to the first time nodes.
12. The apparatus of claim 11, wherein the intermediate feature state is determined based on the following module:
a second determining module, configured to determine a path coordinate that is closest to a predicted coordinate output by a previous neuron unit in the sequence of neuron units, where the path coordinate is used to characterize a coordinate on a road; and
And the generating module is used for generating an intermediate characteristic state according to the closest path coordinates.
13. The apparatus according to claim 12, wherein the generating module is specifically configured to:
acquiring a first preference parameter of the target to be predicted, wherein the first preference parameter at least comprises personalized features of the target to be predicted; and
and carrying out feature fusion processing on the first preference parameter and the closest path coordinate so as to determine an intermediate feature state.
14. The apparatus of claim 11, wherein the target feature state is determined based on the following module:
the second acquisition module is used for acquiring first preference parameters of the target to be predicted, wherein the first preference parameters at least comprise personalized features of the target to be predicted; and
and the first feature fusion module is used for carrying out feature fusion processing on the first preference parameter and the initial coordinate so as to determine a target feature state.
15. The apparatus of claim 11, wherein the long short-term memory model is trained based on the following modules:
the third acquisition module is used for acquiring a training set, wherein the training set comprises historical feature states corresponding to second time nodes of a sample target in a historical time sequence, the second time nodes are time nodes spaced by a second time step in the historical time sequence, and the historical feature states are generated at least based on each historical coordinate of the sample target in the historical time sequence;
the third determining module is used for sequentially inputting each historical feature state into the long short-term memory model, with the historical time sequence as the input sequence and the second time step as the time interval, and determining the output coordinates of the long short-term memory model at each second time node; and
the first adjusting module is used for adjusting the model parameters of the long short-term memory model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
16. The apparatus of claim 15, wherein the historical feature state corresponding to each of the second time nodes is generated based on the following modules:
a fourth obtaining module, configured to obtain a second preference parameter of the sample target, where the second preference parameter includes at least a personalized feature of the sample target; and
and the second feature fusion module is used for carrying out feature fusion processing on the second preference parameter and the history coordinates so as to determine the history feature state.
17. The apparatus of claim 16, wherein the first adjusting module is specifically configured to:
adjust the model parameters of the LSTM model via a preset first loss function, based on the output coordinates and the historical coordinates corresponding to the second time nodes.
18. The apparatus of claim 17, wherein the first adjusting module is further configured to:
adjust the model parameters of the LSTM model via the preset first loss function and an optimization algorithm, based on the output coordinates and the historical coordinates corresponding to the second time nodes, wherein the optimization algorithm comprises at least one of a gradient clipping algorithm and a layer normalization algorithm.
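Of the two optimization algorithms named in claim 18, gradient clipping is the simpler to illustrate: if the L2 norm of the gradient vector exceeds a threshold, the vector is rescaled to that threshold, which stabilizes LSTM training against exploding gradients. The specific clip-by-global-norm variant below is an assumption; the claim does not fix the variant:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Gradient clipping by global L2 norm: if the gradient vector's
    norm exceeds max_norm, scale every component down uniformly so
    the clipped vector has norm exactly max_norm."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]
    return grads

# A gradient of norm 5 clipped with max_norm=1 keeps its direction
# but is scaled down to unit norm.
clipped = clip_grad_norm([3.0, 4.0], 1.0)
```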
19. The apparatus of claim 17, further comprising:
a fourth determining module, configured to determine a second loss function; and
a second adjusting module, configured to adjust the model parameters of the LSTM model via the second loss function, based on the output coordinates and the historical coordinates corresponding to the second time nodes.
20. The apparatus according to any one of claims 15-19, further comprising:
a replacing module, configured to replace the activation function in the LSTM model; and
a third adjusting module, configured to adjust the model parameters of the LSTM model based on the output coordinates and the historical coordinates corresponding to the second time nodes.
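The activation replacement of claim 20 can be sketched as parameterizing the cell activation so a different function can be swapped in before retraining. The default `tanh` and the `softsign` alternative are illustrative assumptions; the claim does not say which function replaces which:

```python
import math

def cell_activation(x, fn=math.tanh):
    """LSTM cell-state activation with a swappable function.
    The replacing module of claim 20 corresponds to passing a
    different `fn` before the model parameters are re-adjusted."""
    return fn(x)

# softsign is one common drop-in alternative to tanh.
softsign = lambda x: x / (1 + abs(x))
y_default = cell_activation(1.0)           # tanh(1.0)
y_swapped = cell_activation(1.0, softsign) # 1 / (1 + 1)
```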
21. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions which, when executed by the processor, implement the method of any one of claims 1-10.
22. A computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any one of claims 1-10.
23. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110044510.9A CN112733452B (en) | 2021-01-13 | 2021-01-13 | Track prediction method, track prediction device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112733452A CN112733452A (en) | 2021-04-30 |
CN112733452B true CN112733452B (en) | 2024-03-29 |
Family
ID=75593113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110044510.9A Active CN112733452B (en) | 2021-01-13 | 2021-01-13 | Track prediction method, track prediction device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112733452B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114936331A (en) * | 2022-04-18 | 2022-08-23 | 北京大学 | Position prediction method, position prediction device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239859A (en) * | 2017-06-05 | 2017-10-10 | 国网山东省电力公司电力科学研究院 | The heating load forecasting method of Recognition with Recurrent Neural Network is remembered based on series connection shot and long term |
CN107610464A (en) * | 2017-08-11 | 2018-01-19 | 河海大学 | A kind of trajectory predictions method based on Gaussian Mixture time series models |
CN109583151A (en) * | 2019-02-20 | 2019-04-05 | 百度在线网络技术(北京)有限公司 | The driving trace prediction technique and device of vehicle |
CN109910909A (en) * | 2019-02-25 | 2019-06-21 | 清华大学 | A kind of interactive prediction technique of vehicle track net connection of more vehicle motion states |
WO2020009246A1 (en) * | 2018-07-06 | 2020-01-09 | 日本電信電話株式会社 | Time-series learning device, time-series learning method, time-series prediction device, time-series prediction method, and program |
CN111091708A (en) * | 2019-12-13 | 2020-05-01 | 中国科学院深圳先进技术研究院 | Vehicle track prediction method and device |
CN111275225A (en) * | 2018-12-04 | 2020-06-12 | 北京嘀嘀无限科技发展有限公司 | Empty vehicle track prediction method, prediction device, server and readable storage medium |
2021-01-13 — CN202110044510.9A patent/CN112733452B/en (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cai et al. | PSO-ELM: A hybrid learning model for short-term traffic flow forecasting | |
CN110474815B (en) | Bandwidth prediction method and device, electronic equipment and storage medium | |
El-Tantawy et al. | Design of reinforcement learning parameters for seamless application of adaptive traffic signal control | |
CN107945507B (en) | Travel time prediction method and device | |
CN107346464B (en) | Service index prediction method and device | |
EP3035314A1 (en) | A traffic data fusion system and the related method for providing a traffic state for a network of roads | |
CN109887284B (en) | Smart city traffic signal control recommendation method, system and device | |
CN106850289B (en) | Service combination method combining Gaussian process and reinforcement learning | |
CN114780739B (en) | Time sequence knowledge graph completion method and system based on time graph convolution network | |
CN112907970A (en) | Variable lane steering control method based on vehicle queuing length change rate | |
CN117094535B (en) | Artificial intelligence-based energy supply management method and system | |
Zhao et al. | Adaptive swarm intelligent offloading based on digital twin-assisted prediction in VEC | |
Ivanjko et al. | Ramp metering control based on the Q-learning algorithm | |
CN112733452B (en) | Track prediction method, track prediction device, electronic equipment and readable storage medium | |
CN113506445A (en) | Real-time traffic guidance system and method considering long-term behavior change compliance of travelers | |
CN115311860B (en) | Online federal learning method of traffic flow prediction model | |
CN117037479A (en) | Signal transmission system for measuring traffic state by using road network sensor | |
Wang et al. | A new traffic speed forecasting method based on bi-pattern recognition | |
CN116662815B (en) | Training method of time prediction model and related equipment | |
CN116151478A (en) | Short-time traffic flow prediction method, device and medium for improving sparrow search algorithm | |
Pang et al. | Scalable reinforcement learning framework for traffic signal control under communication delays | |
CN111813881B (en) | Method, apparatus, device and storage medium for journey information processing | |
Migawa et al. | Simulation of the model of technical object availability control | |
Vu et al. | Bus running time prediction using a statistical pattern recognition technique | |
Tréca et al. | Fast bootstrapping for reinforcement learning-based traffic signal control systems using queueing theory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||