
CN110610271B - Multi-vehicle track prediction method based on long and short memory network - Google Patents


Info

Publication number
CN110610271B
Authority
CN
China
Prior art keywords
vehicle
network
vehicles
probability
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910876478.3A
Other languages
Chinese (zh)
Other versions
CN110610271A (en)
Inventor
Mengyin Fu (付梦印)
Ting Zhang (张婷)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN201910876478.3A
Publication of CN110610271A
Application granted
Publication of CN110610271B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06N3/045 — Computing arrangements based on biological models; neural networks; architecture: combinations of networks
    • G06N3/049 — Computing arrangements based on biological models; neural networks; architecture: temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Computing arrangements based on biological models; neural networks: learning methods
    • G06Q10/04 — Administration; management: forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention discloses a multi-vehicle trajectory prediction method based on a long short-term memory network. The method takes features of the historical trajectories of the host vehicle and the adjacent vehicles as input and fully accounts for the mutual influence of the vehicles' positions and driving behaviours. After encoding and decoding by the network, the output future trajectories of the adjacent vehicles are further fed into a mixture density network to estimate the probability distribution of each vehicle position. In every training iteration, the error of the mixture density network, the root mean square error of the trajectory result and the parameter regularization term of the encoder-decoder network are combined into a loss function that guides the update of the network parameters, which improves the prediction accuracy of the applied network. The trained neural network can predict probabilistic position information for the adjacent vehicles; arranged in time order, this position information forms continuous trajectories and can assist the host vehicle in decision-making and planning.

Description

Multi-vehicle track prediction method based on long and short memory network
Technical Field
The invention relates to the technical field of automatic control, and in particular to a trajectory prediction method for multiple vehicles based on a long short-term memory network.
Background
Autonomous driving and related research have advanced greatly over the past few decades. However, achieving high levels of autonomous driving in complex traffic environments such as highways still presents significant challenges. In such environments the driving behaviour of unmanned vehicles changes highly dynamically, and the driving strategies and positions of the different road participants influence one another; if the estimates of the positions of the other vehicles in the surroundings are lost, the resulting driving behaviour may be too aggressive or too conservative. For example, a vehicle in an adjacent lane that suddenly changes lanes may cause a rear-end collision if no predictive precautions are taken.
Under these conditions, the future trajectories and speeds of the other vehicles on the road need to be predicted, so that the predicted information can be combined with the state information of the host vehicle to support decision-making and path planning for the unmanned vehicle, helping it avoid obstacles efficiently and drive more comfortably. In the field of trajectory prediction for unmanned vehicles, most existing methods randomly select a target vehicle from abundant Internet-of-Vehicles data and then predict the trajectory of a single vehicle around that target. Such an approach leads to the following problems: 1. it does not match the limited sensor field of view of an unmanned vehicle during actual driving; generally, a host vehicle travelling on a road can only obtain, through its sensors, the position information of the adjacent vehicles around it; 2. each prediction produces the future trajectory of only one vehicle, which is inefficient; 3. the mutual influence among the driving behaviours of the vehicles within a vehicle cluster of a certain range is ignored, so that the predicted results cannot be applied to real situations.
Disclosure of Invention
In view of this, the invention provides a multi-vehicle trajectory prediction method based on a long short-term memory network, which can accurately predict the positions of the adjacent vehicles.
A multiple vehicle trajectory prediction method includes the following steps:
step 0, sampling the position of the main vehicle, the lane where the main vehicle is located, the distance between the main vehicle and the front vehicle and the positions of adjacent vehicles around the main vehicle within a set time S1 to obtain historical data; sampling the positions of adjacent vehicles around the main vehicle in a set time period S2 after the time period S1 to obtain prediction data serving as a label of historical data during neural network training; the historical data and the corresponding labels form a piece of training data;
by analogy, obtaining a plurality of pieces of training data;
step 1, constructing a plurality of encoders using a deep long and short memory network as a basic unit and a plurality of decoders using the deep long and short memory network as a basic unit, thereby forming a neural network;
step 2, randomly selecting a set number of pieces of training data from the training data obtained in step 0 and feeding each piece of training data into the encoder in chronological order; the encoder encodes the input data into a coding vector, which is then sent to the decoder as its initial hidden state; the decoder outputs the predicted position of each adjacent vehicle at every sampling moment in the time period S2;
step 3, inputting the predicted values output by the decoder in step 2 into a fully connected layer network and mapping them into feature data; feeding the feature data into a mixture density network which, through training, outputs 6 groups of parameters describing the Gaussian distribution at the time t_{i-1} preceding the current time t_i,

    ( π̂^(j)_{k,t_{i-1}}, μ̂^(j)_{x,k,t_{i-1}}, μ̂^(j)_{y,k,t_{i-1}}, σ̂^(j)_{x,k,t_{i-1}}, σ̂^(j)_{y,k,t_{i-1}}, ρ̂^(j)_{k,t_{i-1}} ),  k = 1, …, 6,

wherein the superscript (j) denotes the index of the vehicle, k denotes the index of the Gaussian distribution, and the subscript denotes the time; π̂^(j)_{k} is the weighting coefficient among the parameters of the k-th Gaussian distribution, μ̂^(j)_{x,k} and μ̂^(j)_{y,k} are the physical quantities describing the means of the x-direction and y-direction positions, σ̂^(j)_{x,k} and σ̂^(j)_{y,k} are the standard deviations in the x and y directions, and ρ̂^(j)_{k} is the correlation coefficient between the x and y directions;
step 4, exponentiating and normalizing the weights output in step 3 to obtain updated weights, taking the exponential of the standard deviations output in step 3, passing the correlation coefficients through the hyperbolic tangent function, and then substituting these values into formula (1) to compute the probability of vehicle (j) being at its position at time t_i:

    P( x^(j)_{t_i}, y^(j)_{t_i} | X ) = Σ_{k=1}^{6} π^(j)_{k,t_{i-1}} · N_k( x^(j)_{t_i}, y^(j)_{t_i} ),        (1)

where N_k denotes the k-th bivariate Gaussian distribution defined by the parameters above;
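As an illustration of the parameter constraints in step 4 (exponentiate and normalise the weights, exponentiate the standard deviations, hyperbolic tangent for the correlation coefficients), a minimal PyTorch sketch is given below; the function name, tensor layout and batch handling are assumptions made for this example and are not part of the patent.

    import torch
    import torch.nn.functional as F

    def constrain_mdn_params(raw):
        """Map raw mixture-density outputs onto their valid ranges.

        raw: tensor of shape (batch, 6, 6) holding, per Gaussian component,
             (pi_hat, mu_x_hat, mu_y_hat, sigma_x_hat, sigma_y_hat, rho_hat).
        """
        pi_hat, mu_x, mu_y, sig_x_hat, sig_y_hat, rho_hat = raw.unbind(dim=-1)
        pi = F.softmax(pi_hat, dim=-1)      # weights: exponentiate and normalise
        sig_x = torch.exp(sig_x_hat)        # standard deviations: exponential
        sig_y = torch.exp(sig_y_hat)
        rho = torch.tanh(rho_hat)           # correlation coefficient: tanh
        return pi, mu_x, mu_y, sig_x, sig_y, rho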
step 5, calculating the root mean square error between the position predictions output by the decoder and the labels, denoted F_1;
based on the probability result at each sampling moment of each vehicle, obtaining the sum of the probabilities of all adjacent vehicles at each sampling moment, then summing over all sampling moments, and finally taking the negative logarithm of the sum to obtain F_2;
summing the squares of the neural network parameters to obtain the regularization term F_3;
the final loss function of the neural network being

    Loss = F_1 + W_2 · F_2 + W_1 · F_3,

where W_1 and W_2 are the weights adjusting the parameter regularization term and the mixture density network error, respectively;
and 6, updating each parameter by the Adam gradient descent method based on the final loss function calculated in step 5, namely: computing the gradient of the loss function with respect to each parameter of the neural network, updating each parameter according to its gradient value, and returning to step 2 with the updated parameters to continue iterating until the rate of change of the loss function falls below a set threshold, at which point the iteration stops;
and 7, predicting the position of the adjacent vehicle by using the neural network of the current updated parameters.
Preferably, the number of adjacent vehicles is at most 6.
Preferably, the adjacent vehicle position includes x-direction position information and y-direction speed information.
Preferably, the time period S1 is 50S; the time period S2 is less than 30S.
Preferably, the sampling frequency is 10 Hz.
The invention has the following beneficial effects:
the invention relates to a track prediction method of multiple vehicles based on a long and short memory network, which is based on a network architecture of a coder decoder taking the long and short memory network as a unit, and is based on the view angle of an unmanned vehicle, utilizes the historical tracks of surrounding vehicles recorded by a vehicle-mounted sensor, combines the characteristics of the position coordinate information of the vehicle, the lane where the vehicle is located, the anti-collision distance with the front vehicle and the like, and generates future track prediction of 6 vehicles at most at one time through a constructed neural network architecture; the neural network takes the characteristics of the history tracks and the like of the main vehicle and the adjacent vehicles as input, and fully considers the mutual influence of the positions of the vehicles and the driving behaviors; after the network coding and decoding, the output future track of the adjacent vehicle is further input into the mixed density network, the probability distribution of the vehicle position is estimated, the error of the mixed density network, the root mean square error of the track result and the parameter regularization term of the coder decoder network are combined in each training to form a loss function and guide the updating of the network parameters, so that the prediction accuracy of the applied network can be improved, the trained neural network can predict the position information with probability of the adjacent vehicle, the position information forms a continuous track according to the time sequence, and the main vehicle can be assisted to make decision and plan.
Drawings
FIG. 1 is a network framework diagram of the multiple vehicle trajectory prediction method based on long and short memory networks according to the present invention;
FIG. 2 is a schematic diagram of a coordinate system of data used in the present invention;
FIGS. 3-1, 3-2 and 3-3 show the ground-truth and predicted trajectories of the 6 adjacent vehicles in three different, randomly selected scenes of the embodiment.
FIG. 4 shows the trajectory prediction results for three vehicles ahead of the host vehicle in the lane-keeping case, wherein:
the left part of FIG. 4-1 is a schematic diagram of the driving scenario and the right part shows the three predicted trajectories with probability bands;
FIG. 4-2 is a probability map of the spatio-temporal trajectories, with the x-axis and y-axis representing actual position and the z-axis representing time, in which three spatio-temporal trajectories with probability bands can be observed;
FIGS. 4-3 are cross-sections of the spatio-temporal trajectory probability map taken parallel to the x-y plane at t = 2 s and t = 3 s on the time axis, showing the instantaneous positions, with probability bands, of the three vehicles A, B and C and illustrating that the model can predict the probability distribution of the positions of the adjacent vehicles within the prediction horizon.
FIG. 5 shows the three predicted vehicles A, B and C ahead of the host vehicle in the lane-change case.
FIG. 5-1 includes a schematic diagram of the driving scenario and the three predicted trajectories with probability bands.
FIG. 5-2 is a probability map of the spatio-temporal trajectories, with the x-axis and y-axis representing actual position and the z-axis representing time, in which three spatio-temporal trajectories with probability bands can be observed.
FIGS. 5-3 are cross-sections of the spatio-temporal trajectory probability map taken parallel to the x-y plane at t = 2 s and t = 3 s; at t = 2 s the instantaneous positions, with probability bands, of the three vehicles A, B and C are observed, whereas at t = 3 s only the instantaneous positions of the two vehicles A and C are observed.
Detailed Description
Aiming at the situation that in a highway environment the position and speed of a vehicle change rapidly and vehicles in adjacent lanes influence one another, the invention works from the viewpoint of the host (unmanned) vehicle: an on-board lidar provides the relative position information of the adjacent vehicles (or a vision sensor provides image data from which the relative positions of the surrounding vehicles are obtained by a target tracking method), the trajectories over the past 5 s of the 6 surrounding vehicles that directly influence the host vehicle are recorded, and an encoder-decoder neural network based on a long short-term memory network, combined with a mixture density network, outputs in a single pass the trajectories of the surrounding vehicle group over the next 3 s together with their probability distributions, thereby assisting the host vehicle in decision-making and planning. The specific procedure comprises the following steps:
Step 1, data preprocessing. The coordinate system of the whole system takes the position of the geometric center of the host vehicle at the start time as the origin, the tangential direction of the road's forward direction as the positive y-axis, and the direction perpendicular to the road edge and pointing to the right as the positive x-axis. One piece of training data fed into the neural network is denoted X; it comprises 50 frames of feature data sampled at 10 Hz over the past 5 s and ending at the current time. The feature data of one frame,

    x_{t_i} = ( Lane_ID, Space_Hdwy, x^(0)_{t_i}, y^(0)_{t_i}, x^(1)_{t_i}, v^(1)_{y,t_i}, …, x^(6)_{t_i}, v^(6)_{y,t_i} ),

consist of the lane line in which the host vehicle is located (Lane_ID), the anti-collision headway between the host vehicle and the preceding vehicle (Space_Hdwy), the position coordinates of the host vehicle, and the position coordinates of the 6 surrounding vehicles relative to the host vehicle; except for the lane-line label, all physical quantities are in meters. The trajectories of the surrounding vehicles over the 3 s following the current time are taken as the label Y of this piece of training data X, also at 10 Hz, 30 frames in total. The output of the network is the predicted trajectories Ŷ of the 6 vehicles, in the same format as Y. After the training set and the test set are prepared, the data are normalized and the sample order is shuffled. The input data X and its label Y are

    X = { x_{t_{i-49}}, …, x_{t_{i-1}}, x_{t_i} },
    Y = { y_{t_{i+1}}, y_{t_{i+2}}, …, y_{t_{i+30}} },
    y_{t} = ( x^(1)_{t}, v^(1)_{y,t}, …, x^(6)_{t}, v^(6)_{y,t} ).

Specifically, in the formulas above v^(j)_{y} represents the velocity in the y direction, not a distance; this is because during driving the y-direction speed is usually limited within a certain range and is therefore better suited as an input to the neural network. The y-direction displacement is obtained by integrating the predicted y-direction velocity.
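A sliding-window preprocessing routine of the kind described in step 1 (50 historical frames at 10 Hz as input X, the following 30 frames of surrounding-vehicle positions and y-velocities as label Y, followed by normalization and shuffling) might look as follows; the array layout, column order and function name are assumptions for illustration only.

    import numpy as np

    def make_samples(frames, hist_len=50, pred_len=30):
        """Slice one recorded drive into (X, Y) training pairs.

        frames: array of shape (T, 16); each row is one 10 Hz frame
                [Lane_ID, Space_Hdwy, x0, y0, x1, vy1, ..., x6, vy6]
                with positions in metres relative to the host vehicle.
        Returns X of shape (N, hist_len, 16) and Y of shape (N, pred_len, 12),
        the 12 label columns being (x, vy) of the 6 surrounding vehicles.
        """
        neighbour_cols = slice(4, 16)
        X, Y = [], []
        for t in range(hist_len, len(frames) - pred_len + 1):
            X.append(frames[t - hist_len:t])                  # past 5 s of features
            Y.append(frames[t:t + pred_len, neighbour_cols])  # next 3 s of neighbours
        X, Y = np.asarray(X), np.asarray(Y)
        mean, std = X.mean(axis=(0, 1)), X.std(axis=(0, 1)) + 1e-8
        X = (X - mean) / std                                  # normalise the features
        perm = np.random.permutation(len(X))                  # shuffle the sample order
        return X[perm], Y[perm]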
And 2, the data preprocessed in step 1 are organized into batches. The batches are fed, in chronological order, into an encoder whose basic unit is a deep long short-term memory network. The encoder converts the input historical trajectory data into a coding vector that contains the high-dimensional features extracted from the surrounding vehicle cluster. The coding vector is then sent to a decoder whose basic unit is likewise a deep long short-term memory network: the coding vector serves as the initial hidden state of the decoder, and the last of the 50 frames of historical data is used as the decoder's input at the initial time, so that the predicted trajectory joins the historical trajectory naturally. The decoder outputs, in chronological order, the position information of the 6 vehicles over the next 3 s. To make the network pay more attention to the part of the historical trajectory close to the current time, an attention mechanism assigns different weights to the input historical trajectory, reflecting how strongly positions at different historical times influence the future positions.
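The encoder-decoder arrangement of step 2 — a deep LSTM encoder whose final state initialises a deep LSTM decoder, the last historical frame being fed as the decoder's first input and the encoder outputs being re-weighted by an attention mechanism — could be sketched in PyTorch roughly as follows. Layer sizes, the dot-product form of the attention and all module names are assumptions, not the patent's prescribed implementation.

    import torch
    import torch.nn as nn

    class Seq2SeqPredictor(nn.Module):
        def __init__(self, in_dim=16, out_dim=12, hidden=128, layers=2, pred_len=30):
            super().__init__()
            self.encoder = nn.LSTM(in_dim, hidden, layers, batch_first=True)
            self.decoder = nn.LSTM(out_dim + hidden, hidden, layers, batch_first=True)
            self.attn = nn.Linear(hidden, hidden)   # simple learned dot-product attention
            self.out = nn.Linear(hidden, out_dim)
            self.pred_len = pred_len

        def forward(self, hist):                    # hist: (B, 50, in_dim)
            enc_out, state = self.encoder(hist)     # encoder state initialises the decoder
            step = hist[:, -1, 4:16]                # last historical frame's neighbour features
            preds = []
            for _ in range(self.pred_len):
                # attention over encoder outputs, weighted by the current decoder state
                query = state[0][-1].unsqueeze(1)                        # (B, 1, hidden)
                scores = torch.softmax(
                    (self.attn(enc_out) * query).sum(-1, keepdim=True), dim=1)
                context = (scores * enc_out).sum(dim=1)                  # (B, hidden)
                dec_in = torch.cat([step, context], dim=-1).unsqueeze(1)
                dec_out, state = self.decoder(dec_in, state)
                step = self.out(dec_out.squeeze(1))                      # next-frame prediction
                preds.append(step)
            return torch.stack(preds, dim=1)        # (B, pred_len, out_dim)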
And 3, because the future trajectories of vehicles follow a multi-modal distribution (vehicles with the same historical trajectory can produce different future trajectories under the influence of driver habits and the like), a mixture density network is added on top of the original encoder-decoder architecture to estimate the probability distribution of the future trajectory; the error produced by the mixture density network is fed back to the original network and, combined with the original loss function, jointly guides the optimization of the network parameters. It is assumed that the estimated x and y coordinates of each vehicle are each Gaussian distributed and that the two variables are correlated. The output of step 2 is further mapped through a fully connected layer network into features, which are fed into the mixture density network; after training, it outputs 6 groups of Gaussian-distribution parameters. Consider the probability of the position coordinates of the predicted vehicle labelled (j) at time t_i:
First, the mixture density network takes the prediction processed by the fully connected layer and outputs 6 groups of parameters describing the Gaussian distribution,

    ( π̂^(j)_{k,t_{i-1}}, μ̂^(j)_{x,k,t_{i-1}}, μ̂^(j)_{y,k,t_{i-1}}, σ̂^(j)_{x,k,t_{i-1}}, σ̂^(j)_{y,k,t_{i-1}}, ρ̂^(j)_{k,t_{i-1}} ),  k = 1, …, 6

(because the input of the mixture density network is correlated with the data at time t_{i-1}, the time subscript of its output parameters is written t_{i-1}), where the superscript (j) denotes the index of the vehicle, k the index of the Gaussian distribution, and the subscript the time. Accordingly, π̂^(j)_{k} is the weighting coefficient among the parameters of the k-th Gaussian distribution, μ̂^(j)_{x,k} and μ̂^(j)_{y,k} describe the means of the x-direction and y-direction positions, σ̂^(j)_{x,k} and σ̂^(j)_{y,k} are the standard deviations in the x and y directions, and ρ̂^(j)_{k} is the correlation coefficient between the x and y directions. To make the parameters output by the mixture density network satisfy the required ranges, the values directly output by the network are further normalized:

    π^(j)_{k} = exp(π̂^(j)_{k}) / Σ_{k'=1}^{6} exp(π̂^(j)_{k'}),
    μ^(j)_{x,k} = μ̂^(j)_{x,k},   μ^(j)_{y,k} = μ̂^(j)_{y,k},
    σ^(j)_{x,k} = exp(σ̂^(j)_{x,k}),   σ^(j)_{y,k} = exp(σ̂^(j)_{y,k}),
    ρ^(j)_{k} = tanh(ρ̂^(j)_{k});

that is, the Gaussian weights are exponentiated and then normalized; the means are substituted directly without further processing; the standard deviations are exponentiated; and the correlation coefficients are passed through the hyperbolic tangent function.
The distribution of the position coordinates of each vehicle in the x and y directions at each moment is fitted by a weighted combination of several Gaussian distributions, and the probability of vehicle (j) being at its position at time t_i is computed as

    P( x^(j)_{t_i}, y^(j)_{t_i} | X ) = Σ_{k=1}^{6} π^(j)_{k,t_{i-1}} · N_k( x^(j)_{t_i}, y^(j)_{t_i} ),        (1)

where X is the historical trajectory of the 6 vehicles and N_k denotes the k-th Gaussian distribution used to fit the vehicle position, whose probability density function is computed from the parameters output by the mixture density network:

    N_k(x, y) = 1 / ( 2π σ^(j)_{x,k} σ^(j)_{y,k} √(1 − (ρ^(j)_{k})²) ) · exp( −Z_k / ( 2 (1 − (ρ^(j)_{k})²) ) ),
    Z_k = ( (x − μ^(j)_{x,k}) / σ^(j)_{x,k} )² + ( (y − μ^(j)_{y,k}) / σ^(j)_{y,k} )² − 2 ρ^(j)_{k} (x − μ^(j)_{x,k})(y − μ^(j)_{y,k}) / ( σ^(j)_{x,k} σ^(j)_{y,k} ).

In conclusion, the probability density functions of the 6 Gaussian distributions are combined by weighted fitting to obtain the probability of the vehicle labelled (j) being at its position at time t_i.
And 4, an error function is constructed to compute gradients and thereby update the parameters of the neural network. On top of the loss function of the original encoder-decoder neural network, the error of the mixture density network is used as feedback to jointly build the error of the whole neural network, which accelerates parameter updating and convergence of the neural network and yields the probability distribution corresponding to the predicted trajectory result. The loss function of the neural network consists of three parts, namely:
(1) the difference between the predicted vehicle trajectory values output by the decoder and the true trajectory values, i.e. the root mean square error;
(2) the negative-logarithm form of the probability result obtained in the mixture density network;
(3) the regularization term of the network parameters updated during training.
The final loss function is

    Loss = √( (1 / (T·J)) Σ_{i=1}^{T} Σ_{j=1}^{J} [ (x̂^(j)_{t_i} − x^(j)_{t_i})² + (ŷ^(j)_{t_i} − y^(j)_{t_i})² ] )
           − W_2 · log( Σ_{i=1}^{T} Σ_{j=1}^{J} P( x^(j)_{t_i}, y^(j)_{t_i} | X ) )
           + W_1 · Σ_{n=1}^{N} θ_n²,

where T denotes the time range and takes the value 30 (3 s at 10 Hz); J denotes the number of vehicles within the calculation range and is 6; N denotes the number of parameter variables θ_n participating in the update in the network; K denotes the number of Gaussian distributions used to fit the mean and variance (entering through the sum over k inside P) and is 6; W_1 and W_2 denote the weights adjusting the parameter regularization term and the mixture density network error, respectively, and are set manually according to experience.
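Under the same assumptions, the three-part loss could be assembled as below; the weights w1 and w2 correspond to W_1 and W_2 and are placeholder values to be set from experience, as the text states.

    import torch

    def total_loss(pred, target, mix_prob, params, w1=1e-4, w2=1.0):
        """Combined loss: F1 (trajectory RMSE) + W2*F2 (MDN term) + W1*F3 (L2 term).

        pred, target: (B, T, 12) predicted and true neighbour trajectories.
        mix_prob:     (B, T, 6) mixture probabilities of the predicted positions, one per vehicle.
        params:       iterable of the trainable network parameters.
        """
        f1 = torch.sqrt(torch.mean((pred - target) ** 2))        # root mean square error
        f2 = -torch.log(mix_prob.sum() + 1e-12)                  # negative log of summed probabilities
        f3 = sum((p ** 2).sum() for p in params)                 # squared-parameter regularisation
        return f1 + w2 * f2 + w1 * f3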
And 6, the gradients are computed with the Adam gradient descent algorithm and the parameters of the neural network are updated; at each iteration the gradient g_t of the loss with respect to the parameters θ is computed, the moment estimates are updated as m_t = β_1·m_{t-1} + (1 − β_1)·g_t and v_t = β_2·v_{t-1} + (1 − β_2)·g_t², the bias-corrected estimates m̂_t = m_t / (1 − β_1^t) and v̂_t = v_t / (1 − β_2^t) are formed, and the parameters are updated as θ_t = θ_{t-1} − α·m̂_t / ( √(v̂_t) + ε ), where α is the learning rate and β_1, β_2, ε are the Adam hyper-parameters.
When the error convergence of the network slows down and the rate of change of the error falls below a threshold δ, the network has finished learning the task; the network model is then saved and the training process terminates. The positions of the 6 vehicles output in time order by the decoder and the corresponding Gaussian-distribution parameters output by the mixture density network are combined to form continuous predicted trajectories with probability bands. Two-dimensional top views, three-dimensional views and cross-sectional views over the whole prediction time are drawn from the trajectory prediction results for the decision-making and planning of the unmanned vehicle.
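Finally, a training loop in the spirit of step 6 — Adam updates that stop once the relative change of the loss falls below the threshold δ — might be organised as follows; the model/head interfaces, the default values and the checkpoint name are all illustrative assumptions (total_loss is the sketch given earlier).

    import torch

    def train(model, mdn_head, loader, lr=1e-3, delta=1e-4, max_epochs=200):
        """Adam training that stops when the loss change rate drops below delta."""
        params = list(model.parameters()) + list(mdn_head.parameters())
        opt = torch.optim.Adam(params, lr=lr)
        prev = float("inf")
        for epoch in range(max_epochs):
            running = 0.0
            for X, Y in loader:                      # mini-batches of history/label pairs
                pred = model(X)                      # predicted future trajectories
                mix_prob = mdn_head(pred, Y)         # per-position mixture probabilities (assumed interface)
                loss = total_loss(pred, Y, mix_prob, params)
                opt.zero_grad()
                loss.backward()                      # gradients of the combined loss
                opt.step()                           # Adam parameter update
                running += loss.item()
            if abs(prev - running) / max(prev, 1e-12) < delta:
                break                                # convergence: change rate below threshold
            prev = running
        torch.save({"model": model.state_dict(), "mdn": mdn_head.state_dict()},
                   "trajectory_predictor.pt")        # save the trained network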
Example (b):
in this embodiment, the specific hyper-parameters of the neural network are shown in the following table:
Figure BDA0002204453750000091
as shown in fig. 2, the figure includes an unmanned vehicle, i.e., a host vehicle (vehicle numbered (0)) and 6 surrounding vehicles (6 vehicles numbered (1) - (6)), the origin O of the coordinate system is at the geometric center of the host vehicle, the y-axis is along the tangential direction of the road advancing direction, the x-axis is perpendicular to the y-axis, and the left edge line of the road is directed to the right edge line.
As shown in FIG. 3, the encoder-decoder structure based on the long short-term memory network yields the ground-truth and predicted trajectory results of the vehicles in 3 scenarios (FIG. 3-1, FIG. 3-2 and FIG. 3-3), each test scenario containing the test results of the vehicles numbered (1)-(6). To show the generalization capability of the network, the trajectory curve corresponding to each label is actually formed by connecting, end to end, the trajectories of 16 groups of vehicles with the same label, corresponding to 48 s and comprising 480 frames of data.
As shown in FIG. 4, for the lane-keeping driving case, only the three predicted vehicles A, B and C ahead of the host vehicle are considered in the figure for ease of observation. FIG. 4-1 includes a schematic diagram of the driving scenario and the three predicted trajectories with probability bands. FIG. 4-2 is a probability map of the spatio-temporal trajectories, with the x-axis and y-axis representing actual position and the z-axis representing time, in which three spatio-temporal trajectories with probability bands can be observed. FIGS. 4-3 are cross-sections of the spatio-temporal trajectory probability map taken parallel to the x-y plane at t = 2 s and t = 3 s, in which the instantaneous positions, with probability bands, of the three vehicles A, B and C are observed at both times.
As shown in FIG. 5, for the case in which a vehicle changes lanes, only the three predicted vehicles A, B and C ahead of the host vehicle are considered in the figure for ease of observation. FIG. 5-1 includes a schematic diagram of the driving scenario and the three predicted trajectories with probability bands. FIG. 5-2 is a probability map of the spatio-temporal trajectories, with the x-axis and y-axis representing actual position and the z-axis representing time, in which three spatio-temporal trajectories with probability bands can be observed. FIGS. 5-3 are cross-sections of the spatio-temporal trajectory probability map taken parallel to the x-y plane at t = 2 s and t = 3 s; at t = 2 s the instantaneous positions, with probability bands, of the three vehicles A, B and C are observed, whereas at t = 3 s only the instantaneous positions, with probability bands, of the two vehicles A and C are observed.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A multiple vehicle trajectory prediction method is characterized by comprising the following steps:
step 0, sampling the position of the main vehicle, the lane where the main vehicle is located, the distance between the main vehicle and the front vehicle and the positions of adjacent vehicles around the main vehicle within a set time S1 to obtain historical data; sampling the positions of adjacent vehicles around the main vehicle in a set time period S2 after the time period S1 to obtain prediction data serving as a label of historical data during neural network training; the historical data and the corresponding labels form a piece of training data;
by parity of reasoning, a plurality of pieces of training data are obtained;
step 1, constructing a plurality of encoders taking a deep long and short memory network as a basic unit and a plurality of decoders taking the deep long and short memory network as a basic unit, thereby forming a neural network;
step 2, randomly selecting training data with a set number from the training data obtained in the step 0, and sequentially sending each piece of training data into an encoder according to the time sequence; the encoder converts input data into a coded vector after encoding, and then sends the coded vector into the decoder, wherein the coded vector is used as an initial hidden state of the decoder; the decoder outputs the predicted value of the position of the adjacent vehicle at each sampling moment in the time period S2;
step 3, inputting the predicted values output by the decoder in step 2 into a fully connected layer network and mapping them into feature data; feeding the feature data into a mixture density network which, through training, outputs 6 groups of parameters describing the Gaussian distribution at the time t_{i-1} preceding the current time t_i,

    ( π̂^(j)_{k,t_{i-1}}, μ̂^(j)_{x,k,t_{i-1}}, μ̂^(j)_{y,k,t_{i-1}}, σ̂^(j)_{x,k,t_{i-1}}, σ̂^(j)_{y,k,t_{i-1}}, ρ̂^(j)_{k,t_{i-1}} ),  k = 1, …, 6,

wherein the superscript (j) denotes the index of the vehicle, k denotes the index of the Gaussian distribution, and the subscript denotes the time; π̂^(j)_{k} is the weighting coefficient among the parameters of the k-th Gaussian distribution, μ̂^(j)_{x,k} and μ̂^(j)_{y,k} are the physical quantities describing the means of the x-direction and y-direction positions, σ̂^(j)_{x,k} and σ̂^(j)_{y,k} are the standard deviations in the x and y directions, and ρ̂^(j)_{k} is the correlation coefficient between the x and y directions;
step 4, exponentiating and normalizing the weights output in step 3 to obtain updated weights, taking the exponential of the standard deviations output in step 3, passing the correlation coefficients through the hyperbolic tangent function, and then substituting these values into formula (1) to compute the probability of vehicle (j) being at its position at time t_i:

    P( x^(j)_{t_i}, y^(j)_{t_i} | X ) = Σ_{k=1}^{6} π^(j)_{k,t_{i-1}} · N_k( x^(j)_{t_i}, y^(j)_{t_i} ),        (1)

wherein N_k denotes the k-th Gaussian distribution used to fit the vehicle position;
step 5, calculating the root mean square error between the position predictions output by the decoder and the labels, denoted F_1;
based on the probability result at each sampling moment of each vehicle, obtaining the sum of the probabilities of all adjacent vehicles at each sampling moment, then summing over all sampling moments, and finally taking the negative logarithm of the sum to obtain F_2;
summing the squares of the neural network parameters to obtain the regularization term F_3;
the final loss function of the neural network being

    Loss = F_1 + W_2 · F_2 + W_1 · F_3,

wherein W_1 and W_2 are the weights adjusting the parameter regularization term and the mixture density network error, respectively;
and 6, updating each parameter by the Adam gradient descent method based on the final loss function calculated in step 5, namely: computing the gradient of the loss function with respect to each parameter of the neural network, updating each parameter according to its gradient value, and returning to step 2 with the updated parameters to continue iterating until the rate of change of the loss function falls below a set threshold, at which point the iteration stops;
and 7, predicting the position of the adjacent vehicle by using the neural network of the current updated parameters.
2. The method of claim 1, wherein the number of adjacent vehicles is at most 6.
3. The method of claim 1, wherein the neighboring vehicle position comprises x-direction position information and y-direction speed information.
4. The multiple vehicle trajectory prediction method according to claim 1, characterized in that the time period S1 is 50S; the time period S2 is less than 30S.
5. The method of claim 4, wherein the sampling frequency is 10 Hz.
CN201910876478.3A 2019-09-17 2019-09-17 Multi-vehicle track prediction method based on long and short memory network Active CN110610271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910876478.3A CN110610271B (en) 2019-09-17 2019-09-17 Multi-vehicle track prediction method based on long and short memory network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910876478.3A CN110610271B (en) 2019-09-17 2019-09-17 Multi-vehicle track prediction method based on long and short memory network

Publications (2)

Publication Number Publication Date
CN110610271A CN110610271A (en) 2019-12-24
CN110610271B (en) 2022-05-13

Family

ID=68892800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910876478.3A Active CN110610271B (en) 2019-09-17 2019-09-17 Multi-vehicle track prediction method based on long and short memory network

Country Status (1)

Country Link
CN (1) CN110610271B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11900224B2 (en) 2019-12-26 2024-02-13 Waymo Llc Generating trajectory labels from short-term intention and long-term result
CN111161322B (en) * 2019-12-31 2022-10-18 大连理工大学 LSTM neural network pedestrian trajectory prediction method based on human-vehicle interaction
CN111340543B (en) * 2020-02-25 2024-03-22 上海大学 Fine granularity taxi taking demand prediction method based on attention mechanism
CN111401233A (en) * 2020-03-13 2020-07-10 商汤集团有限公司 Trajectory prediction method, apparatus, electronic device, and medium
CN111767475A (en) * 2020-03-20 2020-10-13 吉利汽车研究院(宁波)有限公司 Target pedestrian trajectory prediction model training method and device, electronic equipment and storage medium
CN111114543B (en) * 2020-03-26 2020-07-03 北京三快在线科技有限公司 Trajectory prediction method and device
CN113496268B (en) * 2020-04-08 2024-07-05 北京图森智途科技有限公司 Track prediction method and device
WO2021212379A1 (en) * 2020-04-22 2021-10-28 华为技术有限公司 Lane line detection method and apparatus
CN111583715B (en) * 2020-04-29 2022-06-03 宁波吉利汽车研究开发有限公司 Vehicle track prediction method, vehicle collision early warning method, device and storage medium
CN114189449B (en) * 2020-08-25 2023-03-24 同济大学 Open type unmanned vehicle group model of expressway scene and vehicle group forming method
US11873006B2 (en) * 2020-09-21 2024-01-16 GM Global Technology Operations LLC Virtual lane estimation using a recursive self-organizing map
CN112215487B (en) * 2020-10-10 2023-05-23 吉林大学 Vehicle running risk prediction method based on neural network model
CN114440908B (en) * 2020-10-31 2023-07-28 华为技术有限公司 Method and device for planning driving path of vehicle, intelligent vehicle and storage medium
CN114543800B (en) * 2020-11-18 2024-08-30 蘑菇车联信息科技有限公司 Processing method and related device for vehicle movement track estimation
CN112885079B (en) * 2021-01-11 2022-11-29 成都语动未来科技有限公司 Vehicle track prediction method based on global attention and state sharing
CN112906793B (en) * 2021-02-22 2023-12-22 深圳市市政设计研究院有限公司 Monitoring data repairing method and system for bridge health monitoring system
CN113076599A (en) * 2021-04-15 2021-07-06 河南大学 Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN113240115B (en) * 2021-06-08 2023-06-06 深圳数联天下智能科技有限公司 Training method for generating face change image model and related device
CN113435356B (en) * 2021-06-30 2023-02-28 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113642682B (en) * 2021-10-14 2021-12-31 北京理工大学 Trajectory primitive extraction and analysis method and system under multi-vehicle interaction environment
CN114757355B (en) * 2022-04-08 2024-04-02 中国科学技术大学 Track data set variability measurement method, system, equipment and storage medium
CN115371678A (en) * 2022-08-22 2022-11-22 广州足下科技有限公司 Autonomous navigation method for mobile robot
CN118658304B (en) * 2024-08-19 2024-10-25 青岛理工大学 Speed field-based networked autonomous vehicle position prediction and risk quantification method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019023628A1 (en) * 2017-07-27 2019-01-31 Waymo Llc Neural networks for vehicle trajectory planning
CN109583151A (en) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 The driving trace prediction technique and device of vehicle
CN109910909A (en) * 2019-02-25 2019-06-21 清华大学 A kind of interactive prediction technique of vehicle track net connection of more vehicle motion states

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019023628A1 (en) * 2017-07-27 2019-01-31 Waymo Llc Neural networks for vehicle trajectory planning
CN109583151A (en) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 The driving trace prediction technique and device of vehicle
CN109910909A (en) * 2019-02-25 2019-06-21 清华大学 A kind of interactive prediction technique of vehicle track net connection of more vehicle motion states

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mengyin Fu, Ting Zhang, Wenjie Song, Yi Yang, Meiling Wang, "Trajectory Prediction-Based Local Spatio-Temporal Navigation Map for Autonomous Driving in Dynamic Highway Environments," IEEE Transactions on Intelligent Transportation Systems, 2021. *

Also Published As

Publication number Publication date
CN110610271A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN110610271B (en) Multi-vehicle track prediction method based on long and short memory network
Huang et al. Conditional predictive behavior planning with inverse reinforcement learning for human-like autonomous driving
CN114384920A (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
Grigorescu et al. Neurotrajectory: A neuroevolutionary approach to local state trajectory learning for autonomous vehicles
CN112734808B (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
CN113044064B (en) Vehicle self-adaptive automatic driving decision method and system based on meta reinforcement learning
CN112651374B (en) Future trajectory prediction method based on social information and automatic driving system
CN114005280A (en) Vehicle track prediction method based on uncertainty estimation
CN114399743B (en) Method for generating future track of obstacle
Huang et al. Deductive reinforcement learning for visual autonomous urban driving navigation
CN115147790B (en) Future track prediction method of vehicle based on graph neural network
Cho et al. Deep predictive autonomous driving using multi-agent joint trajectory prediction and traffic rules
CN110986945B (en) Local navigation method and system based on semantic altitude map
CN116540731B (en) Path planning method and system integrating LSTM and SAC algorithms
CN114185339A (en) Mobile robot path planning method in dynamic environment
CN115547040A (en) Driving behavior prediction method based on inner neural network under safety potential field
CN115062202A (en) Method, device, equipment and storage medium for predicting driving behavior intention and track
Khalil et al. Exploiting multi-modal fusion for urban autonomous driving using latent deep reinforcement learning
CN116679711A (en) Robot obstacle avoidance method based on model-based reinforcement learning and model-free reinforcement learning
Doellinger et al. Environment-aware multi-target tracking of pedestrians
CN114516336A (en) Vehicle track prediction method considering road constraint conditions
CN114297529A (en) Moving cluster trajectory prediction method based on space attention network
Ginerica et al. Observenet control: A vision-dynamics learning approach to predictive control in autonomous vehicles
Yoon et al. Trajectory Prediction Using Graph-Based Deep Learning for Longitudinal Control of Autonomous Vehicles: A Proactive Approach for Autonomous Driving in Urban Dynamic Traffic Environments
CN117408406A (en) Planning coupling multi-target vehicle track prediction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant