CN114065870A - Vehicle track generation method and device - Google Patents
- Publication number: CN114065870A
- Application number: CN202111404830.7A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- track
- vector
- generator
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The invention relates to a vehicle track generation method, which comprises the steps of transmitting a real multi-vehicle track and a generated multi-vehicle track obtained based on multi-vehicle position conditions to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector; transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator based on the second logit vector; training a generator in an adversarial learning mode based on the first and second feature vectors and the first and second logit vectors to obtain a target generator; and transmitting the multi-vehicle position condition and the Gaussian process samples to the target generator to obtain a target multi-vehicle track. In this process, the generator is trained based on the multi-vehicle position conditions and an adversarial algorithm, so the target generator can produce multi-vehicle tracks that follow the same distribution as the real data, and the accuracy and realism of multi-vehicle track generation are improved.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a vehicle track generation method and device.
Background
With the continuous improvement of mechanization and intelligence levels, intelligent driving is becoming a research hotspot. In intelligent driving research, a large number of vehicle trajectories are required for the design and verification of driving algorithms or the analysis and study of driving behaviors. However, collecting a large number of vehicle trajectories not only requires expensive equipment and cumbersome trajectory processing steps, but also raises privacy concerns. Therefore, multi-vehicle trajectory generation needs to be implemented.
In the existing vehicle track generation process, only a single vehicle track can be generated, and the interaction between the generated track and surrounding vehicles is not considered, so that the generation of a multi-vehicle track is inaccurate.
Disclosure of Invention
In view of the above, the invention provides a vehicle track generation method and device, which are used for solving the problem that in the existing vehicle track generation process, only a single vehicle track can be generated, and the interaction between the generated track and surrounding vehicles is not considered, so that the generation of multiple vehicle tracks is inaccurate. The specific scheme is as follows:
a vehicle trajectory generation method, comprising:
acquiring multi-vehicle position conditions and Gaussian process samples during training, and transmitting the multi-vehicle position conditions and the Gaussian process samples to a generator to obtain a generated multi-vehicle track;
acquiring a real multi-vehicle track, transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector;
when training is carried out, transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator in an adversarial learning mode based on the second logit vector;
when training is carried out, training the generator in an adversarial learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator;
and after training is finished, transmitting the multi-vehicle position condition and the Gaussian process sampling to the target generator to obtain a target multi-vehicle track.
Optionally, the method for generating a multi-vehicle trajectory by transmitting the multi-vehicle position condition and the gaussian process sample to a generator includes:
generating an initial hidden state of forward sequential expression and reverse sequential expression by the multi-vehicle position condition based on MLP coding;
the results of the initial hidden state and the Gaussian process sampling after MLP coding are subjected to bidirectional GRU to obtain forward sequence expression and reverse sequence expression;
transforming the time sequence characteristics of Gaussian process sampling subjected to MLP coding based on the forward sequence expression and the reverse sequence expression to obtain a time sequence coding set;
subtracting the time sequence codes in the time sequence code sets to obtain relative code sets, carrying out MLP (multilayer perceptron) transformation on the relative codes in the relative code sets, and then carrying out average pooling operation to obtain target relative code sets;
and connecting the target relative coding set and the time sequence coding set in series, and then obtaining the generated multi-vehicle track through MLP coding.
Optionally, the method includes obtaining a real multi-vehicle track, and transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, where the method includes:
obtaining sequential codes of the real multi-vehicle track and the generated multi-vehicle track based on MLP codes and bidirectional GRUs;
determining a multi-vehicle relative trajectory based on the real multi-vehicle trajectory and the generated multi-vehicle trajectory;
obtaining a spatial relationship code by the multi-vehicle relative track based on MLP coding and average pooling operation;
a first logit vector and a first feature vector are determined based on the sequential encoding and the spatial relationship encoding.
Optionally, the method for obtaining a second logit vector and a second feature vector by transferring the real multi-vehicle trajectory and the generated multi-vehicle trajectory to a regression discriminator includes:
selecting a target vehicle, and extracting a historical track and a future track of the target vehicle from the real multi-vehicle track and the generated multi-vehicle track;
obtaining a first code and a second code after the historical track and the future track are coded based on MLP;
calculating relative tracks of the historical tracks and historical tracks of other surrounding vehicles, and obtaining a third code after the relative tracks are based on MLP coding and average pooling operation;
and determining a second logit vector and a second feature vector based on an FC layer after the first code, the second code and the third code are connected in series.
Optionally, the method for training the generator based on the first feature vector and the second feature vector in an adversarial learning manner to obtain the target generator includes:
determining a classification discriminator loss function matching the generator based on the first logit vector;
determining a regression discriminator loss function matching the generator based on the second logit vector;
determining a generator loss function based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector;
and enabling the classification discriminator loss function, the regression discriminator loss function and the generator loss function to reach Nash equilibrium based on a back propagation algorithm to obtain the target generator.
A vehicle trajectory generation device comprising:
the first generation module is used for acquiring multi-vehicle position conditions and Gaussian process samples during training, and transmitting the multi-vehicle position conditions and the Gaussian process samples to the generator to obtain a generated multi-vehicle track;
the first training module is used for acquiring a real multi-vehicle track, transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector;
the second training module is used for transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector when training is carried out, and training the regression discriminator in an adversarial learning mode based on the second logit vector;
the third training module is used for training the generator in an adversarial learning mode based on the first feature vector and the second feature vector to obtain a target generator when training is carried out;
and the second generation module is used for transmitting the multi-vehicle position condition and the Gaussian process sampling to the target generator to obtain a target multi-vehicle track after the training is finished.
The above apparatus, optionally, the first generating module includes:
a first generating unit, configured to generate initial hidden states of forward sequential expression and reverse sequential expression for the multi-vehicle position condition based on MLP coding;
a first transformation unit, configured to pass results of the initial hidden state and the gaussian process samples after being MLP encoded through a bidirectional GRU to obtain a forward sequential expression and a reverse sequential expression;
the second transformation unit is used for transforming the time sequence characteristics of the Gaussian process sampling subjected to the MLP coding based on the forward sequence expression and the reverse sequence expression to obtain a time sequence coding set;
the transformation and pooling unit is used for subtracting the time sequence codes in the time sequence code sets to obtain relative code sets, carrying out MLP (multilayer perceptron) transformation on the relative codes in the relative code sets and then carrying out average pooling operation to obtain target relative code sets;
and the first coding unit is used for connecting the target relative coding set and the time sequence coding set in series and then obtaining the generated multi-vehicle track through MLP coding.
The above apparatus, optionally, the first training module includes:
a second generating unit, configured to obtain sequential codes from the real multi-vehicle trajectory and the generated multi-vehicle trajectory based on MLP codes and bidirectional GRUs;
a first determining unit for determining a multi-vehicle relative trajectory based on the real multi-vehicle trajectory and the generated multi-vehicle trajectory;
the third generating unit is used for obtaining a spatial relationship code from the multi-vehicle relative track based on MLP coding and average pooling operation;
a second determining unit for determining a first logit vector and a first feature vector based on the sequential encoding and the spatial relationship encoding.
The above apparatus, optionally, the second training module includes:
the extracting unit is used for selecting a target vehicle and extracting a historical track and a future track of the target vehicle from the real multi-vehicle track and the generated multi-vehicle track;
the second coding unit is used for obtaining a first code and a second code after the historical track and the future track are coded based on MLP;
the coding and pooling unit is used for calculating the relative track between the historical track and the historical tracks of other surrounding vehicles, and obtaining a third code after the relative track is based on MLP coding and average pooling operation;
and a third determining unit, configured to determine a second logit vector and a second feature vector based on an FC layer after concatenating the first code, the second code, and the third code.
The above apparatus, optionally, the third training module includes:
a fourth determining unit for determining a classification discriminator loss function matched with the generator based on the first logit vector;
a fifth determining unit for determining a regression discriminator loss function matched with the generator based on the second logit vector;
a sixth determining unit for determining a generator loss function based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector;
a seventh determining unit, configured to enable the classification discriminator loss function, the regression discriminator loss function, and the generator loss function to reach nash equilibrium based on a back propagation algorithm, so as to obtain the target generator.
Compared with the prior art, the invention has the following advantages:
The invention discloses a vehicle track generation method and a vehicle track generation device, which comprise the following steps: transmitting the real multi-vehicle track and a generated multi-vehicle track obtained based on the multi-vehicle position condition to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector; transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator based on the second logit vector; training a generator in an adversarial learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator; and transmitting the multi-vehicle position condition and the Gaussian process sampling to a target generator to obtain a target multi-vehicle track. In this process, the target generator is trained based on the multi-vehicle position conditions and an adversarial algorithm, so it can generate multi-vehicle tracks that follow the same distribution as the real data, and the accuracy and realism of multi-vehicle track generation are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a structural block diagram of a vehicle trajectory generation method disclosed in an embodiment of the present application;
FIG. 2 is a flowchart of a vehicle trajectory generation method disclosed in an embodiment of the present application;
FIG. 3 is a block diagram of a generator according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a classification discriminator disclosed in an embodiment of the present application;
FIG. 5 is a block diagram of a regression discriminator disclosed in an embodiment of the present application;
FIG. 6 is a block diagram of a vehicle trajectory prediction model according to an embodiment of the present disclosure;
fig. 7 is a block diagram of a vehicle trajectory generation device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The invention discloses a vehicle track generation method and a vehicle track generation device, which are applied to the process of generating multi-vehicle trajectories. Trajectory generation methods in the prior art either belong to the scope of trajectory planning, fail to incorporate real trajectory data, or fail to reflect the spatio-temporal interaction characteristics among multiple vehicles. In order to generate trajectories that conform to the multi-vehicle spatio-temporal interaction characteristics present in real data, the invention provides a generative model trained in an adversarial learning manner. The generative model is a neural network model whose inputs are samples from a preset distribution and condition data, and whose output is the multi-vehicle trajectories in a scene; during training it learns the multi-vehicle spatio-temporal interaction characteristics in the real data, so that in actual use the trained generative model can generate multi-vehicle trajectories similar to the real data, which can be applied to vehicle trajectory prediction, target tracking, simulator construction, data compression, data expansion, anomaly detection, and other applications.
The method uses existing vehicle trajectory data to train the generative model. After training stabilizes, the generative model can generate multi-vehicle trajectories of a certain length according to the relative position conditions of the vehicles, while preserving spatio-temporal characteristics similar to those of the training trajectory data. The method uses a generative model to realize, for the first time, multi-vehicle trajectory generation that conforms to the spatio-temporal interaction characteristics of actual multi-vehicle motion scenes, solves the problem that existing vehicle trajectory generation technology cannot reflect the inter-vehicle interaction characteristics in real data, and can be used for scientific research and practical applications such as model training, algorithm verification, and simulation platform construction.
In the embodiment of the invention, the overall structure block diagram of the method is shown in fig. 1, and the generative model is trained with real multi-vehicle trajectory data using adversarial learning. During training, the generator, the classification discriminator and the regression discriminator, which are three neural networks, learn in an adversarial manner. Adversarial learning is a training method based on game theory, which achieves its goal by designing a suitable minimax game. Based on adversarial learning, researchers have proposed the GAN (Generative Adversarial Network) to train generative models. A GAN includes a generator and a discriminator. In the invention, the input of the generator is the multi-vehicle position conditions and Gaussian process samples, and its output is a generated multi-vehicle trajectory; the goal of the generator is to make the discriminator unable to distinguish the generated trajectory from the real trajectory, while the discriminator is responsible for discriminating multi-vehicle trajectories with the goal of correctly distinguishing generated trajectories from real trajectories. Since the two goals are mutually contradictory, a minimax problem, i.e. a game, is formed; when Nash equilibrium is reached, the trajectories generated by the generator are similar to the real trajectories. Here, the classification discriminator and the regression discriminator are both discriminators, but they discriminate the trajectories from different aspects: the input of the classification discriminator is the real multi-vehicle trajectory and the generated multi-vehicle trajectory, and its output is per-vehicle logits and per-vehicle features; the input of the regression discriminator is likewise the generated multi-vehicle trajectory and the real multi-vehicle trajectory, and its output is an overall logit and overall features. After training stabilizes, the generator is used to generate multi-vehicle trajectories according to the vehicle position conditions and Gaussian process samples. To make the generated multi-vehicle trajectories conform to the spatio-temporal interaction characteristics of real trajectories, the regression discriminator and the classification discriminator represent and learn the spatio-temporal interaction characteristics of multi-vehicle trajectories from multiple aspects, so that, under the effect of the discriminators, the multi-vehicle trajectories generated by the generator have the same spatio-temporal interaction characteristics as the real multi-vehicle trajectories.
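For orientation, the minimax objective of a standard GAN, which the adversarial training described here instantiates with two discriminators, can be written as follows; this is the generic GAN formulation rather than a formula reproduced from the patent:

$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
$$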
The execution flow of the method is shown in fig. 2, and comprises the following steps:
S101, acquiring multi-vehicle position conditions and Gaussian process samples during training, and transmitting the multi-vehicle position conditions and the Gaussian process samples to a generator to obtain a generated multi-vehicle track;
In the embodiment of the present invention, a schematic diagram of the generator is shown in fig. 3. The input of the generator has two parts: one part is a preset Gaussian process sample, and the other part is a multi-vehicle position condition (C_1, C_2, …, C_n); the two inputs are first encoded via MLP. In order to perform the time-series characteristic transformation on the Gaussian process samples, a bidirectional GRU (Gated Recurrent Unit) is adopted. The GRU is a neural network structure that can express time-series characteristics. In the bidirectional GRU unit, the encoded multi-vehicle position conditions are set as the initial hidden states of the forward sequential expression and the reverse sequential expression respectively, so that the time-series characteristics of the MLP-encoded Gaussian process samples are transformed. The forward-transformed encoding and the backward-transformed encoding are averaged (AVE) to obtain the time sequence encoding set; this encoding integrates the multi-vehicle position conditions through the time-series transformation of the Gaussian process samples, and its meaning is a time sequence encoding that integrates the relative position conditions of the vehicles. In order to encode the spatial relative relationship for generating the multi-vehicle trajectory, the time sequence encodings are subtracted pairwise to obtain a relative encoding set. The relative encoding set is transformed by MLP, and since each vehicle performs relative encoding calculation with every other vehicle, it then goes through AVG POOL, i.e. an averaging operation. The target relative encoding set output by AVG POOL is concatenated (CAT) with the time sequence encoding set and then transformed through MLP to obtain the generated multi-vehicle trajectory. Through these time-series and spatial-relationship transformations, the relative position relationship of the vehicles in the input conditions is integrated into the generation process, the input Gaussian process samples undergo time-series and spatial-relationship transformations, and under the supervised training of the loss functions the generated multi-vehicle trajectory and the real multi-vehicle trajectory have consistent spatio-temporal interaction characteristics. Let G(·) represent the GRU network function and M(·) the MLP unit; the specific calculation flow can be formulated in terms of these operations.
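To make the data flow above concrete, the following is a minimal PyTorch-style sketch of such a generator. It is an illustration under assumptions: the module name, layer widths, activations, and the way the forward and backward GRU outputs are averaged are choices made for the sketch, not details taken from the patent.

```python
import torch
import torch.nn as nn

class MultiVehicleGenerator(nn.Module):
    """Sketch: MLP-encode conditions and GP samples, run a bidirectional GRU whose
    initial hidden states are the encoded conditions, average forward/backward codes,
    form pairwise relative encodings with average pooling, concatenate, and decode."""
    def __init__(self, cond_dim=2, noise_dim=1, hidden=64, out_dim=2):
        super().__init__()
        self.cond_mlp = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.noise_mlp = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU())
        self.gru = nn.GRU(hidden, hidden, bidirectional=True, batch_first=True)
        self.rel_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.out_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, cond, z):
        # cond: (n, cond_dim) multi-vehicle position conditions C_1..C_n
        # z:    (n, T, noise_dim) Gaussian process samples, one sequence per vehicle
        h0 = self.cond_mlp(cond).unsqueeze(0).repeat(2, 1, 1)   # initial hidden states for both GRU directions
        seq, _ = self.gru(self.noise_mlp(z), h0)                # (n, T, 2 * hidden)
        h = seq.size(-1) // 2
        seq = 0.5 * (seq[..., :h] + seq[..., h:])               # average forward and backward codes -> time sequence encodings
        rel = seq.unsqueeze(1) - seq.unsqueeze(0)               # pairwise subtraction -> relative encoding set (n, n, T, hidden)
        rel = self.rel_mlp(rel).mean(dim=1)                     # MLP transform + average pooling over the other vehicles
        fused = torch.cat([rel, seq], dim=-1)                   # concatenate (CAT) relative and time sequence encodings
        return self.out_mlp(fused)                              # generated multi-vehicle trajectory (n, T, out_dim)

# Example call (hypothetical sizes): 4 vehicles, 30 time steps
# traj = MultiVehicleGenerator()(torch.randn(4, 2), torch.randn(4, 30, 1))
```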
S102, obtaining a real multi-vehicle track, transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector;
In the embodiment of the invention, the real multi-vehicle track is obtained, wherein the real multi-vehicle track can be obtained from a preset data set or at a designated position.
Further, the real multi-vehicle trajectory and the generated multi-vehicle trajectory are transmitted to the classification discriminator to obtain a first logit vector and a first feature vector. The real multi-vehicle trajectory is processed first. For convenience of statement and without loss of generality, taking vehicle trajectory prediction as an example, the target vehicle to be predicted is taken as the center and the influence of the surrounding vehicles is considered. Suppose there are n vehicles in the scene. For vehicle j = 1, 2, …, n, its coordinate sequence (the sequence of the vehicle's coordinates from time 1 to time t) is first centered by subtracting the coordinates of the target vehicle at the current time t_h, and then normalized;
the specific processing procedure of the classification discriminator is as follows: firstly, using MLP to encode the real multi-vehicle track and the track of each vehicle in the generated multi-vehicle track, and then using bidirectional GRU to forward encode the time sequence characteristics contained in the real multi-vehicle track and the track of each vehicle in the generated multi-vehicle track GcdfAnd reverse coding GcdbAnd summing the forward code and the backward code to obtainThen averaging is carried out to obtain sequential codes capable of comprehensively expressing time sequence characteristicsWhere j represents j cars. In order to express the spatial interaction relationship between vehicles, firstly, the coordinates of the vehicles in the input multi-vehicle track are subtracted pairwise, then MLP is used for encoding, and since each vehicle carries out relative encoding calculation with other vehicles, AVG POOL (amplitude versus voltage) is needed, namely, averaging operation is carried out, so that the encoding of the spatial relationship between the vehicle j and other vehicles is obtainedThe sequential coding and the space relation coding are connected in series, and a first feature vector of the features for expressing the comprehensive space-time interaction characteristics of the vehicle j can be obtained through an FC layerThe signature again passes through an FC layer, and a first logit output for vehicle j is obtainedThe probability distribution of the vehicle over k +1 classes is characterized. Let G (g) represent the function of GRU network, M (g) represent MLP unit, FC (g) representThe FC layer, the calculation flow, can be formulated as
Further, the discriminators are trained first, using the real trajectory data of each batch, where a batch is a term from neural network training: because training on the complete data set consumes a large amount of memory, only a randomly drawn part of the data, called a batch, is used each time the network is trained. When training the classification discriminator, the parameters of the regression discriminator and the generator are fixed. Similarly, when training the regression discriminator, the parameters of the classification discriminator and the generator are kept unchanged. In each batch, the generator is trained once for each time the regression discriminator and the classification discriminator are trained. When training the generator, the parameters of the two discriminators are fixed. The specific structure of the classification discriminator is shown in fig. 4. MLP denotes a multilayer perceptron, a neural network structure comprising several linear layers with nonlinear activation functions between them, which forms a nonlinear mapping and realizes the extraction and expression of features; FC is a fully connected layer; AVG POOL is an average pooling operation; CAT is a concatenation operation. The input of the classification discriminator is the real multi-vehicle trajectory and the generated multi-vehicle trajectory (the generated multi-vehicle trajectory being the output of the generator), j = 1, 2, …, n. Correspondingly, the classification discriminator outputs a first logit vector and a first feature vector for each trajectory, whereas the regression discriminator outputs a single logit vector and feature vector for all n vehicle trajectories together. The classification discriminator loss function is a semi-supervised loss function: specifically, if each vehicle has k possible classes, the (k + 1)-th class is defined as the class of generated trajectories. The classification discriminator corresponds to the loss function
L_cd = L_s + L_us (11)
Wherein,
in the loss function, x, y to pdata(x, y) refers to the joint distribution of input and output in real data obeying input and output, x-G refers to the distribution of x obeying generators, Ex~Glog[p(y=k+1∣x)]Refers to the expectation calculation, which represents the expectation of the probability that the input x, i.e., the trajectory, belongs to the (k + 1) th class, specifically the expectation of the formula, i.e., the probability that the generated trajectory is classified into the generated trajectory. L issExpectation referring to the probability that a true trajectory belongs to the category to which it really belongs, LusMeans that the real track is not the generated track and the generated track is the generated trackExpectation of probability of trajectory. In a network, it is desirable to average the output of the samples. In particular, the method of manufacturing a semiconductor device,
wherein,and the first logit vector corresponding to the real track of the i-car is output by the representing classification discriminator, and the distribution of the real track of the i-car on k real categories is represented.Corresponding to the components in the output logit vector of the generated trajectory. G (z) refers to the output of the generator, D (x) refers to the output of the discriminator. At the minimum of LcdIn the process of (1), LsAnd LusWill be minimized and the classification discriminator will adjust its own network parameters using a back propagation algorithm such that LsMinimization, i.e. the correct classification of the real trajectory into its real category; at the same time make LusMinimize, i.e., not classify the real trajectory into the category of the generated trajectory, and correctly classify the generated trajectory into the generated trajectory. However, due to the existence of the antagonistic part in the generator loss function, when nash equilibrium is reached, the classification discriminator cannot realize correct classification, only can confuse the real track and the generated track, and at the moment, the alignment of the space-time characteristics of the generated track and the real track is realized.
S103, when training is carried out, transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator in an adversarial learning mode based on the second logit vector;
In the embodiment of the present invention, after the training of S102 is completed for each batch of data, the real multi-vehicle trajectory and the generated multi-vehicle trajectory are transmitted to a regression discriminator to obtain a second logit vector and a second feature vector. A structural block diagram of the regression discriminator is shown in fig. 5, and its specific calculation flow is as follows: the target vehicle is selected, where the selection can be set based on experience or specific conditions; in the embodiment of the invention, the specific selection rule is not limited.
First, the historical trajectory of the target vehicle and the future trajectory of the target vehicle are extracted from the multi-vehicle trajectories, and the differences between the historical trajectory of the target vehicle and the historical trajectories of the surrounding vehicles i (relative trajectories), i = 1, …, n, are calculated. The historical trajectory and the future trajectory of the target vehicle are first MLP-encoded as the first code and the second code, and the relative trajectories are encoded as the third code by another MLP and an average pooling operation. The three codes are concatenated (CAT), and a second feature vector F_rd representing the spatio-temporal dependency relationship in the multi-vehicle scene is obtained through an FC layer; after another FC layer, a second logit vector L_rd is obtained, which represents the degree of realism of the scene formed by the multi-vehicle trajectories. Using notation similar to the above, the process can be formulated in terms of these operations.
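A corresponding PyTorch-style sketch of the regression discriminator is shown below; the history/future lengths t_h and t_f and the hidden size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegressionDiscriminator(nn.Module):
    """Sketch: MLP-encode the target vehicle's history and future, MLP-encode and
    average-pool the relative histories of surrounding vehicles, concatenate the
    three codes, and apply FC layers to get one scene-level feature vector and one logit."""
    def __init__(self, coord_dim=2, t_h=15, t_f=25, hidden=64):
        super().__init__()
        self.hist_mlp = nn.Sequential(nn.Linear(t_h * coord_dim, hidden), nn.ReLU())
        self.fut_mlp = nn.Sequential(nn.Linear(t_f * coord_dim, hidden), nn.ReLU())
        self.rel_mlp = nn.Sequential(nn.Linear(t_h * coord_dim, hidden), nn.ReLU())
        self.fc_feat = nn.Linear(3 * hidden, hidden)
        self.fc_logit = nn.Linear(hidden, 1)

    def forward(self, hist, fut, rel_hists):
        # hist: (t_h, coord_dim) target history; fut: (t_f, coord_dim) target future
        # rel_hists: (n - 1, t_h, coord_dim) relative histories w.r.t. surrounding vehicles
        e1 = self.hist_mlp(hist.flatten())                     # first code
        e2 = self.fut_mlp(fut.flatten())                       # second code
        e3 = self.rel_mlp(rel_hists.flatten(1)).mean(dim=0)    # third code: MLP + average pooling
        feat = self.fc_feat(torch.cat([e1, e2, e3], dim=-1))   # second feature vector F_rd
        return self.fc_logit(feat), feat                       # second logit L_rd, second feature vector
```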
Further, the regression discriminator training loss function is L_rd = -E_{x~p_data}[log D(x)] - E_{z~noise}[log(1 - D(G(z)))].
The two parts correspond, respectively, to the distance of the expected logit of the real multi-vehicle trajectory from 1 and the distance of the expected logit of the generated multi-vehicle trajectory from 0 (this can be understood as a cross-entropy loss: the loss of the real trajectory's expected logit against label 1 and the loss of the generated trajectory's expected logit against label 0). Unlike the classification discriminator, which identifies each trajectory among the generated multi-vehicle trajectories and must classify each into a different category, the regression discriminator identifies the spatio-temporal dependency relationship of the whole traffic scene formed by the trajectories, so its output logit represents the degree of realism of the whole scene, and the closer it is to 1, the more realistic the scene. In minimizing the loss function, the regression discriminator adjusts its own network parameters using a back propagation algorithm so that D(x) in the loss function tends to 1 and D(G(z)) in E_{z~noise} log(1 - D(G(z))) tends to 0; in other words, it tries to push the output logit of the real multi-vehicle trajectory as close to 1 as possible, identifying it as real, and the output logit of the generated multi-vehicle trajectory as close to 0 as possible, identifying it as fake. However, due to the adversarial term -E_{z~noise} log D(G(z)) in the generator loss function (the generator minimizes this loss in an attempt to drive D(G(z)) toward 1), the generator loss and the regression discriminator loss compete; when the game reaches Nash equilibrium, D(x) and D(G(z)) both equal 0.5, i.e. the spatio-temporal interaction characteristics contained in the generated and real trajectories cannot be distinguished.
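Expressed directly on the scene logits, a sketch of this discriminator loss (with a sigmoid turning the raw logit into D(·)) is:

```python
import torch

def regression_discriminator_loss(logit_real, logit_fake):
    """Sketch of L_rd: push D(x) for real multi-vehicle trajectories toward 1 and
    D(G(z)) for generated multi-vehicle trajectories toward 0."""
    d_real = torch.sigmoid(logit_real)
    d_fake = torch.sigmoid(logit_fake)
    return -(torch.log(d_real + 1e-8).mean() + torch.log(1.0 - d_fake + 1e-8).mean())
```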
S104, training the generator in an adversarial learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator when training is carried out;
In the embodiment of the present invention, after the training of S103 is completed for each batch of data, the generator is trained in an adversarial learning manner based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector; the specific training process is as follows:
As shown in FIG. 3, the input to the generator includes the multi-vehicle position conditions (C_1, C_2, ..., C_n) and the m Gaussian process samples generated for each of the n vehicles. The loss function is divided into two parts, corresponding to the classification discriminator and the regression discriminator respectively, specifically L_g = L_gcd + L_grd, wherein
corresponds to the part for the classification discriminator, where D(·) represents the output logit of the classification discriminator, i.e. the first logit vector,
corresponds to the part for the regression discriminator, where f(x) represents the output of the network's intermediate layer, i.e. the output first and second feature vectors (see the features in figs. 4 and 5); the remaining parameters have the same meaning, and here D(·) represents the output logit of the regression discriminator, i.e. the second logit vector. In both L_gcd and L_grd, the loss function can be divided into two parts. The first part minimizes the distance between the feature vectors of the generated multi-vehicle trajectories and the feature vectors of the real multi-vehicle trajectories obtained from the discriminator, so that the spatio-temporal characteristics of the generated multi-vehicle trajectories approach those of the real multi-vehicle trajectories, which speeds up and stabilizes convergence. The second part, -E_{z~noise} log D(G(z)), is the logit obtained when the multi-vehicle trajectories generated by the generator pass through the discriminator, representing the degree of realism of the generated trajectories; in the process of minimizing this term with a back propagation algorithm, the generator tries to adjust its network parameters so that the discriminator's output logit D(G(z)) for the generated trajectories is maximized, i.e. tends to 1. This is exactly opposite to the discriminator term E_{z~noise} log(1 - D(G(z))), in which the discriminator tries to adjust its parameters so that the output logit D(G(z)) of the generated trajectories tends to 0; this is the game, a minimax game. When the game reaches Nash equilibrium, i.e. finally converges stably, D(G(z)) is approximately 0.5, and at the same time the discriminator's output logit D(x) for the real multi-vehicle trajectory is also approximately 0.5, which shows that the spatio-temporal interaction characteristics of the multi-vehicle trajectories generated by the generator are consistent with those of the real multi-vehicle trajectories. A target generator is thereby obtained, and the multi-vehicle trajectories it outputs cannot be distinguished by the discriminators.
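The following sketch assembles a generator loss of this shape from the discriminator outputs sketched earlier; the exact feature-matching distance and the way D(G(z)) is read off the classification logits are assumptions of the sketch, not the patent's precise formulas.

```python
import torch
import torch.nn.functional as F

def generator_loss(feat_real_cd, feat_fake_cd, logit_fake_cd,
                   feat_real_rd, feat_fake_rd, logit_fake_rd, k):
    """Sketch of L_g = L_gcd + L_grd: a feature-matching term plus an adversarial
    term -E log D(G(z)) for each discriminator."""
    fm_cd = (feat_fake_cd.mean(dim=0) - feat_real_cd.mean(dim=0)).pow(2).sum()   # feature matching (classification)
    fm_rd = (feat_fake_rd - feat_real_rd).pow(2).sum()                            # feature matching (regression)
    p_real_like = 1.0 - F.softmax(logit_fake_cd, dim=-1)[:, k]                    # prob. a generated trajectory is not in the generated class
    adv_cd = -torch.log(p_real_like + 1e-8).mean()                                # -E log D(G(z)), classification discriminator
    adv_rd = -torch.log(torch.sigmoid(logit_fake_rd) + 1e-8).mean()               # -E log D(G(z)), regression discriminator
    return (fm_cd + adv_cd) + (fm_rd + adv_rd)
```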
And S105, after the training is finished, transmitting the multi-vehicle position condition and the Gaussian process sampling to the target generator to obtain a target multi-vehicle track.
In the embodiment of the invention, after the target generator determines, the multi-vehicle position condition and the Gaussian process sampling are transmitted to the target generator to obtain the target multi-vehicle track, wherein the target multi-vehicle track can be applied to vehicle track prediction, target tracking, simulator construction, data compression, data expansion, abnormality detection and the like.
The invention discloses a vehicle track generation method, which comprises the following steps: transmitting the real multi-vehicle track and a generated multi-vehicle track obtained based on the multi-vehicle position condition to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector; transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator based on the second logit vector; training a generator in an adversarial learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator; and transmitting the multi-vehicle position condition and the Gaussian process sampling to a target generator to obtain a target multi-vehicle track. In this process, the target generator is trained based on the multi-vehicle position conditions and an adversarial algorithm, so it can generate multi-vehicle tracks that follow the same distribution as the real data, and the accuracy and realism of multi-vehicle track generation are improved.
Further, in the training process, the training order of the generator, the classification discriminator and the regression discriminator is not limited and can be arbitrary; repeated training among the generator, the classification discriminator and the regression discriminator is required until a target generator meeting the requirements is obtained. The target generator serves as the generative model, and, with the multi-vehicle position conditions and Gaussian process samples as conditions, target multi-vehicle trajectories meeting the conditions can be generated, as in the sketch below.
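One possible wiring of this alternating scheme, per batch, is sketched below; the helper loss callables and the contents of `batch` are assumptions that stand in for the losses and data described above.

```python
import torch

def adversarial_step(generator, cls_disc, reg_disc, batch,
                     opt_g, opt_cd, opt_rd,
                     cd_loss_fn, rd_loss_fn, g_loss_fn):
    """Sketch of one batch: update the classification discriminator, then the
    regression discriminator, then the generator, each time holding the other
    networks' parameters fixed."""
    real, cond, z, labels = batch                      # real trajectories, position conditions, GP samples, class labels

    opt_cd.zero_grad()
    fake = generator(cond, z).detach()                 # detach: the generator is fixed during discriminator updates
    cd_loss_fn(cls_disc, real, fake, labels).backward()
    opt_cd.step()

    opt_rd.zero_grad()
    rd_loss_fn(reg_disc, real, fake).backward()
    opt_rd.step()

    opt_g.zero_grad()
    g_loss_fn(generator, cls_disc, reg_disc, real, cond, z, labels).backward()
    opt_g.step()
```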
One embodiment of the present system is the generation of a vehicle trajectory on the US-101 public data set. The specific implementation process comprises the following steps:
Because the US-101 data set contains many trajectories, without loss of generality, data from the first time period, i.e. vehicle trajectory data recorded in the morning from 7:50 to 8:05, are taken; cases with 3 and 4 surrounding vehicles are selected; preprocessing is performed and the vehicle trajectories are segmented into continuous pieces of 8.0 s; the data are divided into training, validation and test sets in a 7:1:2 ratio; and data normalization is performed;
The generator is trained as described above. Training was performed using the Adam optimizer, with a batch_size of 200 and a learning rate γ of 0.0001.
In order to verify that the generated trajectories maintain the spatio-temporal interaction characteristics between vehicles in the real data, the TSTR and TRTR methods are used for verification on the vehicle trajectory prediction application. TRTR (Train on Real, Test on Real) refers to training on real data and testing the trajectory prediction performance on real data. TSTR (Train on Synthetic, Test on Real) refers to training on generated data and testing the trajectory prediction performance on real data. If the RMSE (Root Mean Square Error) performance of TSTR is close to that of TRTR, it can be concluded that the inter-vehicle interaction relationships in the generated data are consistent with the real inter-vehicle interaction relationships. The model used for target-vehicle trajectory prediction, shown in fig. 6, predicts the future trajectory of the target vehicle using the historical trajectories of the target vehicle and the surrounding vehicles. The model inputs are the historical trajectory of the target vehicle and the relative coordinates of the target vehicle and the surrounding vehicles, i.e. the trajectory differences. An LSTM layer characterizes the historical trajectory, while MLP and AVG POOL extract the spatial interaction relationships contained in the relative trajectories; the two are concatenated to represent the spatio-temporal interaction characteristics of the target vehicle and the surrounding vehicles, which is the encoder stage. The concatenated codes pass through an LSTM network (a time-series relation expression network whose function is similar to that of the GRU; they can be used interchangeably) and an MLP network, and the predicted future trajectory of the target vehicle is output; this stage is called the decoder. Let LSTM(·) represent the LSTM network; the flow can be formulated in terms of these operations.
The training loss function of the network is the RMSE error between the predicted trajectory and the real trajectory of the target vehicle; that is, if a batch of BS samples is trained, then at time t, with the predicted trajectories and the corresponding real trajectories of the batch, the loss function at that moment is the RMSE between them.
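As a small illustration, the per-moment RMSE loss described here could be computed as follows; the tensor shapes are assumptions.

```python
import torch

def rmse_at_time(pred_t, true_t):
    """Sketch: RMSE between predicted and real target-vehicle positions at one time
    step over a batch. pred_t, true_t: (BS, 2) coordinates."""
    return torch.sqrt(((pred_t - true_t) ** 2).sum(dim=-1).mean())
```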
In the actual implementation, the vehicle relative position condition C_i, i = 1, …, n, is C_i = (C_xi, C_yi), where C_xi has three possible values: -1 indicates that the vehicle is in the lane to the left of the target vehicle, 0 that it is in the same lane as the target vehicle, and 1 that it is in the lane to the right of the target vehicle; C_yi indicates the proximity of the vehicle to the target vehicle along the y-axis (generally the direction of travel along the roadway). Gaussian process sampling uses an RBF kernel: first, the interval [0, 3] is evenly divided into t_h + t_f steps to obtain a multidimensional mean vector.
An RBF kernel matrix is then calculated from the mean vector to serve as the covariance matrix of a multidimensional Gaussian distribution, and the mean vector and covariance matrix together serve as the parameters of the multidimensional Gaussian distribution from which samples are drawn, giving the Gaussian process samples. Since Gaussian process sampling can be implemented in various ways and the generator only needs one fixed prior, any Gaussian process sampling scheme can be used as the generator input as long as its sampling method and parameters are fixed.
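A small NumPy sketch of this sampling procedure is given below; the RBF length scale and the numerical jitter are assumptions, while the mean vector and kernel construction follow the description above.

```python
import numpy as np

def sample_gp_rbf(t_h, t_f, n_samples, length_scale=1.0, seed=None):
    """Sketch: divide [0, 3] evenly into t_h + t_f steps to get the mean vector,
    build an RBF kernel matrix over it as the covariance, and draw samples from
    the resulting multivariate Gaussian."""
    rng = np.random.default_rng(seed)
    mean = np.linspace(0.0, 3.0, t_h + t_f)                      # mean vector
    diff = mean[:, None] - mean[None, :]
    cov = np.exp(-0.5 * (diff / length_scale) ** 2)              # RBF kernel matrix as covariance
    cov += 1e-6 * np.eye(len(mean))                              # jitter for numerical stability (assumption)
    return rng.multivariate_normal(mean, cov, size=n_samples)    # (n_samples, t_h + t_f)
```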
As shown in Table 1, although the TSTR performance is slightly inferior to that of TRTR, the difference is small, so it can be considered that the inter-vehicle relationships in the real multi-vehicle trajectory data are maintained in the generated multi-vehicle trajectory data.
TABLE 1 RMSE (m) error comparison of TSTR and TRTR in the US-101 data set
Prediction time (s) | TRTR | TSTR
1.0 | 0.87 | 0.90 |
2.0 | 2.01 | 2.06 |
3.0 | 3.47 | 3.59 |
4.0 | 5.29 | 5.53 |
5.0 | 7.52 | 7.90 |
Based on the above method for generating a vehicle trajectory, an embodiment of the present invention provides a device for generating a vehicle trajectory, where a structural block diagram of the device is shown in fig. 7, and the device includes:
a first generation module 201, a first training module 202, a second training module 203, a third training module 204, and a second generation module 205.
Wherein,
the first generation module 201 is configured to obtain a multi-vehicle position condition and a gaussian process sample during training, and transmit the multi-vehicle position condition and the gaussian process sample to a generator to obtain a generated multi-vehicle track;
the first training module 202 is configured to obtain a real multi-vehicle trajectory, transmit the real multi-vehicle trajectory and the generated multi-vehicle trajectory to a classification discriminator to obtain a first logit vector and a first feature vector, and train the classification discriminator in an adversarial learning manner based on the first logit vector;
the second training module 203 is configured to transmit the real multi-vehicle trajectory and the generated multi-vehicle trajectory to a regression discriminator to obtain a second logit vector and a second feature vector during training, and train the regression discriminator in an adversarial learning manner based on the second logit vector;
the third training module 204 is configured to train the generator in an adversarial learning manner based on the first feature vector and the second feature vector during training to obtain a target generator;
the second generating module 205 is configured to transmit the multi-vehicle position condition and the gaussian process sample to the target generator to obtain a target multi-vehicle trajectory after the training is completed.
The invention provides a vehicle track generation device, comprising: transmitting the real multi-vehicle track and a generated multi-vehicle track obtained based on the multi-vehicle position condition to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an adversarial learning mode based on the first logit vector; transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator based on the second logit vector; training a generator in an adversarial learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator; and transmitting the multi-vehicle position condition and the Gaussian process sampling to a target generator to obtain a target multi-vehicle track. In this process, the target generator is trained based on the multi-vehicle position conditions and an adversarial algorithm, so it can generate multi-vehicle tracks that follow the same distribution as the real data, and the accuracy and realism of multi-vehicle track generation are improved.
In this embodiment of the present invention, the first generating module 201 includes:
a first generation unit 206, a first transformation unit 207, a second transformation unit 208, a transformation and pooling unit 209 and a first encoding unit 210.
Wherein,
the first generating unit 206, configured to generate initial hidden states of forward sequential expression and reverse sequential expression for the multi-vehicle location condition based on MLP coding;
the first transforming unit 207 is configured to pass the results of the initial hidden state and the gaussian process samples after being MLP encoded through a bidirectional GRU to obtain a forward sequential expression and a reverse sequential expression;
the second transforming unit 208 is configured to transform the time series characteristics of the MLP-coded gaussian process samples based on the forward sequential expression and the reverse sequential expression to obtain a time series coding set;
the transforming and pooling unit 209 is configured to subtract the time-series codes in the time-series code sets to obtain relative code sets, perform MLP transformation on the relative codes in the relative code sets, and then perform average pooling operation to obtain target relative code sets;
the first encoding unit 210 is configured to concatenate the target relative encoding set and the time sequence encoding set, and then obtain the generated multi-vehicle trajectory through MLP encoding.
In this embodiment of the present invention, the first training module 202 includes:
a second generating unit 211, a first determining unit 212, a third generating unit 213, and a second determining unit 214.
Wherein,
the second generating unit 211 is configured to obtain sequential codes from the real multi-vehicle trajectory and the generated multi-vehicle trajectory based on MLP codes and bidirectional GRUs;
the first determining unit 212 is configured to determine a multi-vehicle relative trajectory based on the real multi-vehicle trajectory and the generated multi-vehicle trajectory;
the third generating unit 213 is configured to obtain a spatial relationship code from the multi-vehicle relative trajectory based on MLP coding and average pooling operation;
the second determining unit 214 is configured to determine a first logit vector and a first feature vector based on the sequential encoding and the spatial relationship encoding.
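A hedged sketch of the classification discriminator described by units 211-214 is given below: sequential codes come from MLP coding plus a bidirectional GRU, a spatial relationship code comes from the multi-vehicle relative trajectories via MLP coding and average pooling, and a fully connected head yields the first feature vector and the first logit vector. The scene-level averaging of the logit and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ClassificationDiscriminator(nn.Module):
    def __init__(self, pos_dim=2, hidden=64):
        super().__init__()
        self.point_enc = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU())
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.rel_enc = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU())
        self.feature_fc = nn.Linear(3 * hidden, hidden)
        self.logit_fc = nn.Linear(hidden, 1)

    def forward(self, traj):
        # traj: (V, T, pos_dim) one real or generated multi-vehicle trajectory
        _, h_n = self.gru(self.point_enc(traj))                    # h_n: (2, V, hidden)
        seq_code = h_n.transpose(0, 1).reshape(traj.size(0), -1)   # sequential codes (V, 2*hidden)
        rel = traj.unsqueeze(1) - traj.unsqueeze(0)                # multi-vehicle relative trajectories
        spatial_code = self.rel_enc(rel).mean(dim=(1, 2))          # MLP coding + average pooling
        feature = torch.relu(self.feature_fc(torch.cat([seq_code, spatial_code], dim=-1)))
        return self.logit_fc(feature).mean(), feature              # first logit, first feature vectors
```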
In an embodiment of the present invention, the second training module 203 includes:
an extraction unit 215, a second encoding unit 216, an encoding and pooling unit 217 and a third determination unit 218.
Wherein,
the extracting unit 215 is configured to select a target vehicle, and extract a historical track and a future track of the target vehicle from the real multi-vehicle track and the generated multi-vehicle track;
the second encoding unit 216 is configured to obtain a first code and a second code after the historical track and the future track are encoded based on MLP;
the encoding and pooling unit 217 is configured to calculate a relative trajectory between the historical trajectory and historical trajectories of other surrounding vehicles, and obtain a third code based on MLP encoding and average pooling operation on the relative trajectory;
the third determining unit 218 is configured to determine a second logit vector and a second feature vector based on an FC layer after concatenating the first code, the second code, and the third code.
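A hedged sketch of the regression discriminator described by units 215-218: the target vehicle's historical and future tracks give the first and second codes, its relative tracks with respect to the surrounding vehicles give the pooled third code, and an FC layer produces the second logit vector and the second feature vector. The history/future lengths and the flattened MLP encodings are assumptions for this example only.

```python
import torch
import torch.nn as nn

class RegressionDiscriminator(nn.Module):
    def __init__(self, pos_dim=2, hist_len=10, fut_len=20, hidden=64):
        super().__init__()
        self.hist_len = hist_len
        self.hist_enc = nn.Sequential(nn.Linear(hist_len * pos_dim, hidden), nn.ReLU())
        self.fut_enc = nn.Sequential(nn.Linear(fut_len * pos_dim, hidden), nn.ReLU())
        self.rel_enc = nn.Sequential(nn.Linear(hist_len * pos_dim, hidden), nn.ReLU())
        self.feature_fc = nn.Linear(3 * hidden, hidden)
        self.logit_fc = nn.Linear(hidden, 1)

    def forward(self, traj, target=0):
        # traj: (V, hist_len + fut_len, pos_dim); the target's track is split in time.
        hist = traj[:, :self.hist_len]                        # histories of all vehicles
        fut = traj[target, self.hist_len:]                    # future track of the target vehicle
        code1 = self.hist_enc(hist[target].flatten())         # first code: target history
        code2 = self.fut_enc(fut.flatten())                   # second code: target future
        rel = hist[target] - hist                             # relative tracks w.r.t. surrounding vehicles
        code3 = self.rel_enc(rel.flatten(1)).mean(dim=0)      # third code: MLP coding + average pooling
        feature = torch.relu(self.feature_fc(torch.cat([code1, code2, code3])))
        return self.logit_fc(feature), feature                # second logit, second feature vector
```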
In an embodiment of the present invention, the third training module 204 includes:
a fourth determination unit 219, a fifth determination unit 220, a sixth determination unit 221, and a seventh determination unit 222.
Wherein,
the fourth determining unit 219 is configured to determine a classification discriminator loss function matched with the generator based on the first logit vector;
the fifth determining unit 220 is configured to determine a regression discriminator loss function matched with the generator based on the second logit vector;
the sixth determining unit 221 is configured to determine a generator loss function based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector;
the seventh determining unit 222 is configured to make the classification discriminator loss function, the regression discriminator loss function, and the generator loss function reach nash equilibrium based on a back propagation algorithm, so as to obtain the target generator.
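A hedged sketch of one adversarial update consistent with units 219-222 follows, assuming generator and discriminator modules with the interfaces sketched above (each discriminator returning a logit and a feature vector). The binary cross-entropy adversarial losses and the mean-squared feature-matching term are assumptions; the embodiment only requires that the three losses be driven toward Nash equilibrium by back propagation.

```python
import torch
import torch.nn.functional as F

def bce(logit, target_is_real):
    target = torch.ones_like(logit) if target_is_real else torch.zeros_like(logit)
    return F.binary_cross_entropy_with_logits(logit, target)

def train_step(gen, disc_c, disc_r, opt_g, opt_dc, opt_dr, cond, noise, real_traj):
    """One alternating update of both discriminators and the generator."""
    fake_traj = gen(cond, noise)

    # Classification discriminator: separate real scenes from generated scenes.
    logit_real_c, feat_real_c = disc_c(real_traj)
    logit_fake_c, _ = disc_c(fake_traj.detach())
    loss_dc = bce(logit_real_c, True) + bce(logit_fake_c, False)
    opt_dc.zero_grad(); loss_dc.backward(); opt_dc.step()

    # Regression discriminator: judge the target vehicle's future track.
    logit_real_r, feat_real_r = disc_r(real_traj)
    logit_fake_r, _ = disc_r(fake_traj.detach())
    loss_dr = bce(logit_real_r, True) + bce(logit_fake_r, False)
    opt_dr.zero_grad(); loss_dr.backward(); opt_dr.step()

    # Generator: adversarial terms from both logits plus feature matching
    # on the first and second feature vectors.
    logit_c, feat_fake_c = disc_c(fake_traj)
    logit_r, feat_fake_r = disc_r(fake_traj)
    loss_g = (bce(logit_c, True) + bce(logit_r, True)
              + F.mse_loss(feat_fake_c.mean(0), feat_real_c.detach().mean(0))
              + F.mse_loss(feat_fake_r, feat_real_r.detach()))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_dc.item(), loss_dr.item(), loss_g.item()
```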
It should be noted that the embodiments in this specification are described in a progressive manner. Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in a plurality of software and/or hardware when implementing the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The vehicle track generation method and device provided by the invention have been described in detail above. Specific examples are applied herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as a limitation of the present invention.
Claims (10)
1. A vehicle trajectory generation method, characterized by comprising:
acquiring multi-vehicle position conditions and Gaussian process samples during training, and transmitting the multi-vehicle position conditions and the Gaussian process samples to a generator to obtain a generated multi-vehicle track;
acquiring a real multi-vehicle track, transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an antagonistic learning mode based on the first logit vector;
when training is carried out, transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector, and training the regression discriminator in an antagonistic learning mode based on the second logit vector;
when training is carried out, training the generator in an antagonistic learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator;
and after training is finished, transmitting the multi-vehicle position condition and the Gaussian process sampling to the target generator to obtain a target multi-vehicle track.
2. The method of claim 1, wherein transmitting the multi-vehicle position condition and the Gaussian process samples to the generator to obtain the generated multi-vehicle trajectory comprises:
generating an initial hidden state of forward sequential expression and reverse sequential expression by the multi-vehicle position condition based on MLP coding;
the results of the initial hidden state and the Gaussian process sampling after MLP coding are subjected to bidirectional GRU to obtain forward sequence expression and reverse sequence expression;
transforming the time sequence characteristics of Gaussian process sampling subjected to MLP coding based on the forward sequence expression and the reverse sequence expression to obtain a time sequence coding set;
subtracting the time sequence codes in the time sequence code set to obtain a relative code set, carrying out MLP transformation on the relative codes in the relative code set, and then carrying out an average pooling operation to obtain a target relative code set;
and connecting the target relative coding set and the time sequence coding set in series, and then obtaining the generated multi-vehicle track through MLP coding.
3. The method of claim 1, wherein acquiring a real multi-vehicle trajectory, and transmitting the real multi-vehicle trajectory and the generated multi-vehicle trajectory to a classification discriminator to obtain a first logit vector and a first feature vector comprises:
obtaining sequential codes of the real multi-vehicle track and the generated multi-vehicle track based on MLP codes and bidirectional GRUs;
determining a multi-vehicle relative trajectory based on the real multi-vehicle trajectory and the generated multi-vehicle trajectory;
obtaining a spatial relationship code by the multi-vehicle relative track based on MLP coding and average pooling operation;
a first logit vector and a first feature vector are determined based on the sequential encoding and the spatial relationship encoding.
4. The method of claim 1, wherein transmitting the real multi-vehicle trajectory and the generated multi-vehicle trajectory to a regression discriminator to obtain a second logit vector and a second feature vector comprises:
selecting a target vehicle, and extracting a historical track and a future track of the target vehicle from the real multi-vehicle track and the generated multi-vehicle track;
obtaining a first code and a second code after the historical track and the future track are coded based on MLP;
calculating relative tracks between the historical track and the historical tracks of other surrounding vehicles, and obtaining a third code from the relative tracks based on MLP coding and an average pooling operation;
and determining a second logit vector and a second feature vector based on an FC layer after the first code, the second code and the third code are connected in series.
5. The method of claim 1, wherein training the generator in an antagonistic learning mode based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector to obtain a target generator comprises:
determining a classification discriminator loss function matching the generator based on the first logit vector;
determining a regression discriminator loss function matching the generator based on the second logit vector;
determining a generator loss function based on the first eigenvector, the second eigenvector, the first logit vector, and the second logit vector;
and enabling the classification discriminator loss function, the regression discriminator loss function and the generator loss function to reach Nash equilibrium based on a back propagation algorithm to obtain the target generator.
6. A vehicle trajectory generation device characterized by comprising:
the first generation module is used for acquiring multi-vehicle position conditions and Gaussian process samples during training, and transmitting the multi-vehicle position conditions and the Gaussian process samples to the generator to obtain a generated multi-vehicle track;
the first training module is used for acquiring a real multi-vehicle track, transmitting the real multi-vehicle track and the generated multi-vehicle track to a classification discriminator to obtain a first logit vector and a first feature vector, and training the classification discriminator in an antagonistic learning mode based on the first logit vector;
the second training module is used for transmitting the real multi-vehicle track and the generated multi-vehicle track to a regression discriminator to obtain a second logit vector and a second feature vector when training is carried out, and training the regression discriminator in an antagonistic learning mode based on the second logit vector;
the third training module is used for training the generator in an antagonistic learning mode based on the first feature vector, the second feature vector, the first logit vector and the second logit vector to obtain a target generator when training is carried out;
and the second generation module is used for transmitting the multi-vehicle position condition and the Gaussian process sampling to the target generator to obtain a target multi-vehicle track after the training is finished.
7. The apparatus of claim 6, wherein the first generating module comprises:
a first generating unit, configured to generate initial hidden states of forward sequential expression and reverse sequential expression for the multi-vehicle position condition based on MLP coding;
a first transformation unit, configured to pass results of the initial hidden state and the gaussian process samples after being MLP encoded through a bidirectional GRU to obtain a forward sequential expression and a reverse sequential expression;
the second transformation unit is used for transforming the time sequence characteristics of the Gaussian process sampling subjected to the MLP coding based on the forward sequence expression and the reverse sequence expression to obtain a time sequence coding set;
the transformation and pooling unit is used for subtracting the time sequence codes in the time sequence code set to obtain a relative code set, carrying out MLP transformation on the relative codes in the relative code set and then carrying out an average pooling operation to obtain a target relative code set;
and the first coding unit is used for connecting the target relative coding set and the time sequence coding set in series and then obtaining the generated multi-vehicle track through MLP coding.
8. The apparatus of claim 6, wherein the first training module comprises:
a second generating unit, configured to obtain sequential codes from the real multi-vehicle trajectory and the generated multi-vehicle trajectory based on MLP coding and bidirectional GRUs;
a first determining unit for determining a multi-vehicle relative trajectory based on the real multi-vehicle trajectory and the generated multi-vehicle trajectory;
the third generating unit is used for obtaining a spatial relationship code from the multi-vehicle relative track based on MLP coding and average pooling operation;
a second determining unit for determining a first logit vector and a first feature vector based on the sequential encoding and the spatial relationship encoding.
9. The apparatus of claim 6, wherein the second training module comprises:
the extracting unit is used for selecting a target vehicle and extracting a historical track and a future track of the target vehicle from the real multi-vehicle track and the generated multi-vehicle track;
the second coding unit is used for obtaining a first code and a second code after the historical track and the future track are coded based on MLP;
the coding and pooling unit is used for calculating the relative track between the historical track and the historical tracks of other surrounding vehicles, and obtaining a third code after the relative track is based on MLP coding and average pooling operation;
and a third determining unit, configured to determine a second logit vector and a second feature vector based on an FC layer after concatenating the first code, the second code, and the third code.
10. The apparatus of claim 6, wherein the third training module comprises:
a fourth determining unit for determining a classification discriminator loss function matched with the generator based on the first logit vector;
a fifth determining unit for determining a regression discriminator loss function matched with the generator based on the second logit vector;
a sixth determining unit for determining a generator loss function based on the first feature vector, the second feature vector, the first logit vector, and the second logit vector;
a seventh determining unit, configured to enable the classification discriminator loss function, the regression discriminator loss function, and the generator loss function to reach nash equilibrium based on a back propagation algorithm, so as to obtain the target generator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111404830.7A CN114065870A (en) | 2021-11-24 | 2021-11-24 | Vehicle track generation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114065870A true CN114065870A (en) | 2022-02-18 |
Family
ID=80275929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111404830.7A Pending CN114065870A (en) | 2021-11-24 | 2021-11-24 | Vehicle track generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114065870A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210163038A1 (en) * | 2019-11-27 | 2021-06-03 | Toyota Research Institute, Inc. | Methods and systems for diversity-aware vehicle motion prediction via latent semantic sampling |
CN111339867A (en) * | 2020-02-18 | 2020-06-26 | 广东工业大学 | Pedestrian trajectory prediction method based on generation of countermeasure network |
CN112257850A (en) * | 2020-10-26 | 2021-01-22 | 河南大学 | Vehicle track prediction method based on generation countermeasure network |
Non-Patent Citations (3)
Title |
---|
PENG BAO ET AL: "Lifelong Vehicle Trajectory Prediction Framework Based on Generative Replay", 《ARXIV:2111.07511V1》, 15 November 2021 (2021-11-15), pages 1 - 12 *
孔德江; 汤斯亮; 吴飞: "Location prediction method based on a spatio-temporal embedded generative adversarial network" (时空嵌入式生成对抗网络的地点预测方法), Pattern Recognition and Artificial Intelligence (模式识别与人工智能), no. 01, 15 January 2018 (2018-01-15) *
温惠英; 张伟罡; 赵胜: "Vehicle lane-changing trajectory prediction model based on a generative adversarial network" (基于生成对抗网络的车辆换道轨迹预测模型), Journal of South China University of Technology (Natural Science Edition) (华南理工大学学报(自然科学版)), no. 05, 15 May 2020 (2020-05-15) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115691134A (en) * | 2022-10-31 | 2023-02-03 | 吉林大学 | Intelligent automobile test scene library construction method based on countermeasure generation network |
CN115691134B (en) * | 2022-10-31 | 2024-10-01 | 吉林大学 | Intelligent automobile test scene library construction method based on countermeasure generation network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Eiffert et al. | Probabilistic crowd GAN: Multimodal pedestrian trajectory prediction using a graph vehicle-pedestrian attention network | |
Huang et al. | DiversityGAN: Diversity-aware vehicle motion prediction via latent semantic sampling | |
CN111931902B (en) | Generating countermeasure network model and vehicle track prediction method using generating countermeasure network model | |
CN111401233A (en) | Trajectory prediction method, apparatus, electronic device, and medium | |
Hu et al. | A framework for probabilistic generic traffic scene prediction | |
US11654934B2 (en) | Methods and systems for diversity-aware vehicle motion prediction via latent semantic sampling | |
CN115829171B (en) | Pedestrian track prediction method combining space-time information and social interaction characteristics | |
Khosravi et al. | Crowd emotion prediction for human-vehicle interaction through modified transfer learning and fuzzy logic ranking | |
Zhou et al. | Grouptron: Dynamic multi-scale graph convolutional networks for group-aware dense crowd trajectory forecasting | |
CN110705600A (en) | Cross-correlation entropy based multi-depth learning model fusion method, terminal device and readable storage medium | |
CN118277770A (en) | Obstacle sensing method and device, electronic equipment and storage medium | |
CN112766339A (en) | Trajectory recognition model training method and trajectory recognition method | |
Li | Image semantic segmentation method based on GAN network and ENet model | |
CN114723784B (en) | Pedestrian motion trail prediction method based on domain adaptation technology | |
Mirus et al. | An investigation of vehicle behavior prediction using a vector power representation to encode spatial positions of multiple objects and neural networks | |
CN116338571A (en) | RSSI fingerprint positioning method based on self-encoder and attention mechanism | |
Sun et al. | Vision-based traffic conflict detection using trajectory learning and prediction | |
Balasubramanian et al. | Traffic scenario clustering by iterative optimisation of self-supervised networks using a random forest activation pattern similarity | |
CN115293237A (en) | Vehicle track prediction method based on deep learning | |
CN114065870A (en) | Vehicle track generation method and device | |
Huang et al. | Diversity-aware vehicle motion prediction via latent semantic sampling | |
CN117437507A (en) | Prejudice evaluation method for evaluating image recognition model | |
CN116523002A (en) | Method and system for predicting dynamic graph generation countermeasure network track of multi-source heterogeneous data | |
Bellazi et al. | Towards an machine learning-based edge computing oriented monitoring system for the desert border surveillance use case | |
CN113360772A (en) | Interpretable recommendation model training method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||