
CN111339867B - Pedestrian trajectory prediction method based on a generative adversarial network - Google Patents

Pedestrian trajectory prediction method based on a generative adversarial network Download PDF

Info

Publication number
CN111339867B
CN111339867B CN202010098815.3A
Authority
CN
China
Prior art keywords
pedestrian
model
data
discriminator
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010098815.3A
Other languages
Chinese (zh)
Other versions
CN111339867A (en)
Inventor
曾伟良
陈漪皓
姚若愚
朱明洲
黎曦琦
郑宇凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202010098815.3A priority Critical patent/CN111339867B/en
Publication of CN111339867A publication Critical patent/CN111339867A/en
Application granted granted Critical
Publication of CN111339867B publication Critical patent/CN111339867B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0463 Neocognitrons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a pedestrian trajectory prediction method which mainly comprises the following steps: preprocessing the data and converting it into a matrix of shape [number of pedestrians, 4]; inputting the preprocessed data into an encoder to perform the encoding function of the generative adversarial network; feeding the encoder output into a pooling layer so that the hidden information of all pedestrians in the same scene is shared; feeding the pooled output vector into a decoder to perform the decoding function of the generative adversarial network and preliminarily obtain predicted pedestrian trajectory data; and taking the output vector of the generator (composed of the encoder, the pooling layer and the decoder) as the input vector of the discriminator of the generative adversarial network for discrimination, finally obtaining the optimal model.

Description

Pedestrian trajectory prediction method based on a generative adversarial network
Technical Field
The invention relates to a pedestrian trajectory prediction method based on a generative adversarial network, suitable for predicting the future trajectories of pedestrians in complex scenes.
Background
With rapid technological progress and the rise of artificial intelligence, autonomous driving is gradually entering people's lives. In many first-tier cities at home and abroad, new things such as unmanned buses and unmanned school buses are beginning to appear. Predicting the future trajectory of nearby pedestrians can strongly assist applications on unmanned platforms such as driverless cars, because pedestrians and vehicles share almost the same motion logic: a pedestrian instinctively reacts to the current environment, for example by evading or overtaking, while a driverless vehicle must likewise decide through its internal system whether to decelerate, stop or bypass an obstacle ahead. To advance the unmanned-driving field and widen its application range, one can therefore start with pedestrians, whose target volume is small. Once the pedestrian trajectory prediction method is mastered, the model can be generalized, with slight adjustment, to vehicles such as automobiles.
At present, the traditional methods that perform well in the trajectory prediction field were basically proposed in the last century. For example: (1) the social force model, which treats the interaction between a pedestrian and the environment as attractive and repulsive forces whose resultant drives the pedestrian's motion; (2) Gaussian processes, which predict parameters such as the pedestrian's speed, acceleration and angular deviation through combinations of random variables over a continuous domain (time or space); (3) kernelized correlation filtering, which designs a filter template such that the response is maximal when applied to the tracked target, the location of the maximum response being the location of the target.
However, with the vigorous development of machine learning and neural-network-based deep learning over the last decade, these new algorithms have brought great breakthroughs to the trajectory prediction field, specifically: smaller average and final errors when predicting pedestrian trajectories in complex situations, stronger model generalization, and the ability to account for the interaction behavior among pedestrians. This poses a serious challenge to the traditional algorithms and is a great contribution to the industry. The neural-network-based methods that achieve excellent performance in trajectory prediction roughly include: (1) recurrent neural networks, whose training retains memory of past data; (2) gated recurrent units, whose reset and update gates analyse past time-series features in order to predict future data.
However, the above methods also have disadvantages. It is difficult for a recurrent neural network to remember information over long distances, yet access to long-distance context is very important for time-series problems; this drawback therefore limits the context range a recurrent neural network can access. In addition, algorithms based on recurrent neural networks and gated recurrent units can only provide the single most probable predicted path, and cannot consider the multi-modal possibilities of a pedestrian's future movement, which contradicts the objective fact that several plausible motion paths may exist while a pedestrian moves. The above methods therefore do not treat the pedestrian trajectory prediction problem in complex scenes comprehensively and in detail.
Disclosure of Invention
In order to overcome the defects and shortcomings of both the traditional methods and the various neural-network-based methods in the current trajectory prediction field, the invention provides a method that considers the interaction behavior among pedestrians and combines sequence prediction with a generative adversarial network.
In order to achieve the purpose, the technical scheme of the invention is as follows: the method comprises the following steps:
A. Data pre-processing
Through image processing and video calibration techniques, the motion trajectory of each pedestrian in a given scene, i.e. the pedestrian's world coordinates (x, y) at each moment, is acquired. The frame id and the pedestrian number ped id of the current acquisition moment are recorded at the same time. All the acquired pedestrian information is converted into a matrix of shape [number of pedestrians, 4]: column 1 is the acquisition-time frame id, column 2 the pedestrian number ped id, column 3 the abscissa x of the pedestrian's world coordinates, and column 4 the ordinate y. The interval between two adjacent acquisition moments is set to 0.4 s (i.e. 2.5 fps) by resampling. Finally, the rows are arranged from top to bottom with frame ids increasing with acquisition time.
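The preprocessing of step A can be sketched as follows. This is a minimal numpy illustration; the raw capture rate of 25 fps, the input layout and the helper name `preprocess` are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

# Sketch of step A: each row is [frame_id, ped_id, x, y]; frames are
# resampled to a 0.4 s step (2.5 fps) and sorted by acquisition time.

def preprocess(raw, fps=25, target_dt=0.4):
    """raw: rows of [frame_id, ped_id, x, y] captured at `fps` frames/second."""
    raw = np.asarray(raw, dtype=float)
    step = int(round(target_dt * fps))          # keep every `step`-th frame
    mask = (raw[:, 0] % step) == 0
    data = raw[mask]
    # frame ids ascending from top to bottom, stable within a frame
    return data[np.argsort(data[:, 0], kind="stable")]

# tiny synthetic example: two pedestrians observed at 25 fps for 30 frames
raw = [[f, p, 0.1 * f + p, 0.05 * f] for f in range(0, 30) for p in (1, 2)]
matrix = preprocess(raw)
print(matrix.shape)   # one row per (kept frame, pedestrian), 4 columns
```

After resampling, only frames 0, 10 and 20 survive, giving a [6, 4] matrix for this toy input.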
B. The data is passed through the generator to produce a preliminary predicted trajectory. The generator of the invention consists of an encoder, a pooling layer and a decoder.
B1, inputting the preprocessed data into the coder for coding
An encoder: first, the position of each pedestrian is embedded and passed through a single-layer MLP to obtain a fixed-length vector e_t^i, where i denotes the i-th pedestrian and t the current moment. The embedded vector is then converted into long short-term memory sequence information through a long short-term memory (LSTM) unit, encoding the pedestrian's historical trajectory. The final hidden state h_t^i contains the information of the entire track. The formulas are:

e_t^i = φ(x_t^i, y_t^i; W_e),
h_t^i = LSTM(h_{t-1}^i, e_t^i; W_encoder),

where φ is the embedding function with tanh nonlinear activation, x_t^i and y_t^i are the coordinates of the i-th pedestrian at time t, W_e is the embedding weight vector, and W_encoder is the LSTM weight shared by all pedestrians.
B2 pooling the output of the encoder
A pooling layer: most existing algorithms assign an LSTM unit to each pedestrian and let the encoder learn one person's motion state while storing his or her historical trajectory, but they cannot capture well the interaction behavior among pedestrians in motion. The invention therefore proposes a "pooling layer" operation that can effectively combine information from the different encoders, so that the hidden-layer motion information of each pedestrian is shared. After the observation time t_obs, the hidden states of all pedestrians are pooled into a tensor P_i, where i denotes the i-th pedestrian. The final output ĥ_t^i serves as part of the decoder input. The formulas are:

P_i = Pool(h_t^1, ..., h_t^n),
ĥ_t^i = γ(P_i, h_t^i; W_c),

where γ(.) is a multilayer perceptron (fully connected layers with multiple hidden layers) using the tanh activation function, and W_c is the embedding weight vector.
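The pooling layer can be sketched as follows. The choice of element-wise max-pooling, the sizes and the helper name are assumptions; the patent only states that all pedestrians' hidden states are pooled into a tensor and passed through a tanh multilayer perceptron.

```python
import numpy as np

# Sketch of step B2: pool the hidden states of everyone in the scene into
# one tensor, then combine it with each pedestrian's own hidden state via
# a tanh MLP gamma(P_i, h_t^i; W_c), so that information is shared globally.

rng = np.random.default_rng(1)
H = 32
W_c = rng.standard_normal((2 * H, H)) * 0.1     # embedding weight of gamma

def pool_hidden(hidden_states):
    """hidden_states: (n_pedestrians, H) array of encoder hidden states."""
    P = hidden_states.max(axis=0)               # element-wise max-pool (assumed)
    out = []
    for h_i in hidden_states:                   # gamma(P, h_i; W_c) per pedestrian
        out.append(np.tanh(np.concatenate([P, h_i]) @ W_c))
    return np.stack(out)                        # one shared-info vector per pedestrian

hidden = rng.standard_normal((5, H))            # 5 pedestrians in the scene
pooled = pool_hidden(hidden)
print(pooled.shape)
```

Each row of `pooled` depends on every pedestrian's hidden state through `P`, which is what makes the hidden information "shared" across the scene.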
B3, inputting the pooled data into a decoder for decoding
A decoder: the decoder is implemented with an LSTM sequence model and generates the predicted track. Unlike most other current uses of LSTM, the decoder can be viewed as a generator with input conditions. The formulas are:

e_t^i = φ(x_{t-1}^i, y_{t-1}^i; W_d),
h_t^i = LSTM(γ(P_i, h_{t-1}^i), e_t^i; W_decoder),

where W_d is the embedding weight and W_decoder the LSTM weight.
C. Inputting the preliminary predicted track generated by the generator into a discriminator for discrimination
The discriminator structure: the discriminator consists of an encoder. The preliminary prediction data generated after the original data passes through the generator is packed into an input vector T and sent to the discriminator for comparison and verification against the real future trajectory. A multilayer perceptron is applied to the last hidden layer of the decoder to obtain a classification score; ideally, the discriminator learns subtle interaction rules and labels trajectories that fail to comply with them as "false". The loss function value is updated through successive iterations. Meanwhile, parameters such as the internal weights and bias terms of the model are updated by back propagation, and the error loss after the discriminator is observed over repeated new iterations until a local or even global optimal solution is reached.
Discriminator objective function:
(1) optimize the generator G: min_G V(D, G) = E_{z~p_z(z)}[log(1 - D(G(z)))];
(2) optimize the discriminator D: max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))].
When the discriminator output is 0.5, the discrimination model can no longer tell the real data from the generated prediction data, and the optimal generative model is obtained.
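The behaviour of the objective V(D, G) around this 0.5 equilibrium can be checked numerically. In the sketch below the discriminator scores are made up for illustration; only the formula itself comes from the text.

```python
import numpy as np

# Numeric illustration of the minimax objective of step C:
# V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

def v_value(d_real, d_fake):
    """d_real/d_fake: discriminator outputs in (0, 1) for real/generated samples."""
    d_real, d_fake = np.asarray(d_real), np.asarray(d_fake)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# a confident discriminator scores real tracks high and generated ones low
strong = v_value([0.9, 0.95], [0.05, 0.1])
# at the optimum the discriminator outputs 0.5 everywhere and V = 2*log(0.5)
optimum = v_value([0.5, 0.5], [0.5, 0.5])
print(round(optimum, 4))   # -1.3863
```

A discriminator that separates real from generated data achieves a higher V than the 0.5 equilibrium, which is why V dropping toward 2·log(0.5) signals that the generator has become indistinguishable.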
The identification formula is as follows:
h_dis = LSTM(T; W_dis),
Y_out = MLP(h_dis; W_out),

where γ(.) is a multilayer perceptron (fully connected layers with multiple hidden layers) using the tanh activation function, the vector T is the encapsulating vector of the preliminary prediction data, W_dis is the discriminator weight vector, and Y_out is the classification score predicting whether the trajectory is true or false.
Discriminator loss function: an L2 loss function is adopted. In each scene, k possible predictions are generated, and the loss keeps only the best of them:

L_variety = min_k ||Y_i - Ŷ_i^(k)||_2,

where Y_i is the real trajectory of the i-th pedestrian and Ŷ_i^(k) is the k-th predicted trajectory.
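The "k possible predictions per scene" idea can be sketched as a best-of-k L2 loss. The exact pairing of ground truth and samples is an assumption based on the text; array shapes and the helper name are illustrative.

```python
import numpy as np

# Sketch of the L2 loss of step C: generate k candidate trajectories and
# penalize only the one closest to the ground truth, which encourages the
# generator to cover multiple plausible futures.

def variety_l2_loss(y_true, y_pred_k):
    """y_true: (T, 2) ground-truth track; y_pred_k: (k, T, 2) k predictions."""
    per_point = np.linalg.norm(y_pred_k - y_true[None], axis=-1)  # (k, T)
    dists = per_point.sum(axis=-1)                                # (k,)
    return dists.min()                          # keep only the best sample

y_true = np.array([[0.0, 0.0], [1.0, 0.0]])
y_pred_k = np.array([
    [[0.0, 0.0], [1.0, 0.0]],                  # perfect sample
    [[5.0, 5.0], [6.0, 5.0]],                  # bad sample, ignored by min
])
print(variety_l2_loss(y_true, y_pred_k))       # 0.0
```

Because only the closest of the k samples is penalized, the other samples are free to explore alternative paths, matching the multi-modal prediction goal stated later in the document.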
D. training set-test set-validation set partitioning
The training set, test set and validation set are divided in a 6:2:2 ratio. During training, the validation set is used to continuously verify the training effect of the model; after training, the test set is used to test the final effect of the model.
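A 6:2:2 split can be sketched as below. Shuffling before the split is an assumption; the patent only fixes the ratio.

```python
import numpy as np

# Sketch of step D: split sample indices into 60% training, 20% test,
# 20% validation after a random shuffle.

def split_622(n_samples, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(n_samples * 0.6)
    n_test = int(n_samples * 0.2)
    return (idx[:n_train],                      # 60% training
            idx[n_train:n_train + n_test],      # 20% test
            idx[n_train + n_test:])             # 20% validation

train, test, val = split_622(100)
print(len(train), len(test), len(val))   # 60 20 20
```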
E. Training the model with the preprocessed data
The preprocessed data are imported into the model for training: 8 observation points are input, and the future prediction points of the trajectory are generated. The model parameters are continuously updated over tens of thousands of iterations, as shown by the decreasing loss value, until the optimal parameters are found. When the discriminator output is 0.5, the discriminator can no longer distinguish the real track from the model-generated track, the training effect is optimal, and the model weight file is saved.
F. Test model
Using the trained model weight file, the model effect is tested on the test set. The number of predicted points is set, generally to 8 or 12, and the average Euclidean distance error (ADE) and the final Euclidean distance error (FDE), in meters (m), are observed.
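The two evaluation metrics of step F can be sketched directly: ADE averages the Euclidean error over all predicted points, FDE takes the error at the final point. The array shapes are assumptions.

```python
import numpy as np

# Sketch of step F: ADE = mean per-point Euclidean error, FDE = error at
# the last predicted point; both in meters when coordinates are in meters.

def ade_fde(y_true, y_pred):
    """y_true, y_pred: (T, 2) trajectories in world coordinates."""
    err = np.linalg.norm(y_pred - y_true, axis=-1)   # per-point Euclidean error
    return err.mean(), err[-1]                       # (ADE, FDE)

y_true = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
y_pred = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(y_true, y_pred)
print(ade, fde)   # 1.0 2.0
```

Note that FDE can be much larger than ADE when a prediction drifts progressively away from the ground truth, as in this toy example.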
G. Visualization
And carrying out visual operation on the trained, verified and tested model to obtain the trajectory which can be selected by the pedestrian in a period of time in the future.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention improves on the ordinary generative adversarial network and provides a pedestrian trajectory prediction method for complex scenes. It breaks through most previous algorithms, which do not consider the mutual interference and interaction behavior among pedestrians, and is suitable for pedestrian trajectory prediction in complex, crowded environments (such as shopping malls or outside concert venues). It effectively solves the difficulty of predicting people's motion trajectories in environments with heavy pedestrian traffic. Accurate pedestrian trajectory prediction that fully considers these mutual influences can thus provide a pedestrian path planning scheme for many high-traffic places in daily life and reduce the risk to pedestrians in unexpected situations.
2. Unlike previous algorithms that provide only one future trajectory, the invention offers multiple path prediction schemes for a moving pedestrian and generates several reasonable and effective predicted paths, which accords with the objective fact that a pedestrian's direction and path of movement are not unique. For example, a pedestrian may subjectively stop, slow down or change direction while moving, or do so in response to changes in the scene; and when a couple walk together, their movements tend to be consistent. It is therefore of great importance to consider the factors by which people in such groups influence one another. The invention provides a reasonable and feasible scheme for this multiplicity of pedestrian choices.
3. The invention provides a planning algorithm for pedestrian trajectories. The model has strong generalization capability: it is suitable for predicting pedestrian trajectories and also performs well on vehicle trajectories. With slight improvement it can therefore be applied to the path planning problem of vehicles moving on the road, greatly relieving the trajectory prediction bottleneck in the unmanned-driving field.
Drawings
FIG. 1 is a flow chart for practicing the present invention;
FIG. 2 is a schematic diagram of pre-processed data;
FIG. 3 is a schematic diagram of an output predicted trajectory.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 2, the data is processed into a matrix of shape [number of pedestrians, 4]. Column 1 is the acquisition-time frame id, column 2 the pedestrian number ped id, column 3 the pedestrian abscissa x, and column 4 the pedestrian ordinate y. The difference between adjacent frame ids corresponds to a sampling time interval of 0.4 seconds. At this point the raw data have been obtained.
Inputting the preprocessed pedestrian track sequence into an encoder for encoding. Firstly, a weight vector is distributed to the current position of each pedestrian, and a hyperbolic tangent function is used for activation to obtain the current state vector of the pedestrian. Then, taking each pedestrian as an LSTM unit, inputting the current state vector and past hidden vector of the pedestrian to obtain the current own hidden vector of each pedestrian, wherein the current own hidden vector comprises the position and direction of the current pedestrian and the position and direction information of a past period of time.
The hidden vector of each pedestrian obtained in the previous step is pooled, so that the hidden states of all pedestrians are pooled into a tensor P_i and all pedestrian information is shared globally. A pooling output weight vector is assigned to each pedestrian at the current moment, taking the pedestrian's current hidden state and the pooling tensor as input. An output vector is then generated by embedding the weight vector through a multilayer perceptron that uses the hyperbolic tangent as activation function.
And inputting the output vector after the pooling into a decoder for decoding. Firstly, a hyperbolic tangent function is used for activating coordinate information of a pedestrian at the last moment, and then a decoder weight vector is embedded through an LSTM sequence model to generate a preliminary prediction track.
The output of the generator is sent to the discriminator for discrimination: the trajectory prediction information generated in the generator is converted into the current hidden vector by an LSTM sequence, and a multilayer perceptron structure is embedded in the last hidden layer of the decoder, so that the discriminator can classify and score the predicted trajectories and keep those conforming to the interaction prediction conditions. After thousands of iterations the loss function is updated, and the internal weights of the model are updated by back propagation until the optimal model parameters are found, finally yielding k predicted trajectories that satisfy the conditions.
After model visualization, as shown in fig. 3, the model outputs possible future trajectories; the dotted lines represent the predicted future trajectories.

Claims (2)

1. A pedestrian trajectory prediction method based on a generative adversarial network, characterized by comprising the following steps:
A. preprocessing the acquired data;
B. generating a preliminary prediction track by the data through a generator;
b1, inputting the preprocessed data into an encoder for encoding;
packing the coordinates of each pedestrian into an embedded vector and obtaining a fixed-length vector e_t^i through a single-layer MLP structure, wherein i denotes the i-th pedestrian and t the current moment; converting the embedded vector into long short-term memory sequence information through a long short-term memory (LSTM) unit, encoding the pedestrian's historical trajectory information, and finally outputting the hidden state h_t^i, which contains the information of the whole track; the specific formulas are:

e_t^i = φ(x_t^i, y_t^i; W_e),
h_t^i = LSTM(h_{t-1}^i, e_t^i; W_encoder),

wherein φ is the embedding function with tanh nonlinear activation, x_t^i and y_t^i are the coordinates of the i-th pedestrian at time t, W_e is the embedding weight vector, and W_encoder is the LSTM weight shared by all pedestrians,
b2, performing pooling operation on the output of the encoder;
the data encoded by the encoder enters a pooling layer, which shares the hidden information of all pedestrians in motion in the same scene; after the observation time t_obs, the hidden states of all pedestrians are pooled into a tensor P_i, where i denotes the i-th pedestrian, and the final output ĥ_t^i serves as part of the decoder input; the specific formulas are:

P_i = Pool(h_t^1, ..., h_t^n),
ĥ_t^i = γ(P_i, h_t^i; W_c),

wherein γ(.) is a multilayer perceptron using the tanh activation function and W_c is the embedding weight vector,
b3, inputting the pooled data into a decoder for decoding;
the decoder is implemented with an LSTM sequence model, which can be regarded as a generator with input conditions, generating the predicted trajectory through the fully connected layer; the specific formulas are:

e_t^i = φ(x_{t-1}^i, y_{t-1}^i; W_d),
h_t^i = LSTM(γ(P_i, h_{t-1}^i), e_t^i; W_decoder),

wherein W_d is the embedding weight and W_decoder the LSTM weight,
C. inputting the preliminary predicted track generated by the generator into a discriminator for discrimination;
after the original data is processed by the generator, the generated preliminary prediction data is packed into an input vector T and sent to the discriminator for comparison and verification against the real future trajectory; a multilayer perceptron is applied to the last hidden layer of the encoder to obtain a classification score, after which the discriminator ideally learns subtle interaction rules and labels trajectories that fail to comply with them as "false"; the loss function value is updated through continuous iteration; meanwhile, parameters such as the internal weights and bias terms of the model are updated by back propagation, and the error loss after the discriminator is observed over repeated new iterations until a local or even global optimal solution is reached;
Discriminator objective function:
(1) optimize the generator G: min_G V(D, G) = E_{z~p_z(z)}[log(1 - D(G(z)))];
(2) optimize the discriminator D: max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))];
when the discriminator output is 0.5, the discrimination model can no longer tell the real data from the generated prediction data, and the optimal generative model is obtained;
the specific identification formula is as follows:
h_dis = LSTM(T; W_dis),
Y_out = MLP(h_dis; W_out);

wherein γ(.) is the multilayer perceptron using the tanh activation function, the vector T is the encapsulating vector of the preliminary prediction data, W_dis is the discriminator weight vector, and Y_out is the classification score predicting whether the trajectory is true or false;
discriminator loss function: an L2 loss function is adopted; in each scene, k possible predictions are generated, and the loss keeps only the best of them:

L_variety = min_k ||Y_i - Ŷ_i^(k)||_2,

wherein Y_i is the real trajectory of the i-th pedestrian and Ŷ_i^(k) is the k-th predicted trajectory;
D. dividing a training set, a test set and a verification set;
the training set, test set and validation set are divided in a 6:2:2 ratio; during training, the validation set is used to continuously verify the training effect of the model, and after training the test set is used to test its final effect;
E. training the model with the preprocessed data;
inputting 8 observation points to generate the future prediction points of the trajectory; continuously updating the model parameters over tens of thousands of iterations, as shown by the decreasing loss value, until the optimal parameters are found; when the discriminator output is 0.5, the discriminator can no longer distinguish the real track from the model-generated track, the training effect is optimal, and the model weight file is saved;
F. Testing the model;
testing the model effect on the test set using the trained model weight file; setting the number of predicted points, generally to 8 or 12, and observing the average Euclidean distance error (ADE) and the final Euclidean distance error (FDE), in meters (m);
G. visualization;
and carrying out visual operation on the trained, verified and tested model to obtain the trajectory which can be selected by the pedestrian in a period of time in the future.
2. The pedestrian trajectory prediction method based on a generative adversarial network according to claim 1, wherein the motion trajectory of each pedestrian in a given scene, i.e. the pedestrian's world coordinates (x, y) at each moment, is acquired through image processing and video calibration techniques; the frame id and pedestrian number ped id of the current acquisition moment are recorded at the same time; all the acquired pedestrian information is converted into a matrix of shape [number of pedestrians, 4]: column 1 is the acquisition-time frame id, column 2 the pedestrian number ped id, column 3 the abscissa x and column 4 the ordinate y of the pedestrian's world coordinates; the interval between two adjacent acquisition moments is set to 0.4 s by resampling; finally, the rows are arranged from top to bottom with frame ids increasing with acquisition time.
CN202010098815.3A 2020-02-18 2020-02-18 Pedestrian trajectory prediction method based on a generative adversarial network Expired - Fee Related CN111339867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098815.3A CN111339867B (en) 2020-02-18 2020-02-18 Pedestrian trajectory prediction method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010098815.3A CN111339867B (en) 2020-02-18 2020-02-18 Pedestrian trajectory prediction method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN111339867A CN111339867A (en) 2020-06-26
CN111339867B true CN111339867B (en) 2022-05-24

Family

ID=71185181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098815.3A Expired - Fee Related CN111339867B (en) 2020-02-18 2020-02-18 Pedestrian trajectory prediction method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN111339867B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481607B2 (en) * 2020-07-01 2022-10-25 International Business Machines Corporation Forecasting multivariate time series data
CN111860269B (en) * 2020-07-13 2024-04-16 南京航空航天大学 Multi-feature fusion series RNN structure and pedestrian prediction method
CN112069889B (en) * 2020-07-31 2021-08-03 北京信息科技大学 Civil aircraft trajectory prediction method, electronic device and storage medium
CN112101865B (en) * 2020-09-14 2024-06-28 拉扎斯网络科技(上海)有限公司 Latency acquisition method, apparatus, computer device, and readable storage medium
CN112215193B (en) * 2020-10-23 2023-07-18 深圳大学 Pedestrian track prediction method and system
CN112541449A (en) * 2020-12-18 2021-03-23 天津大学 Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle
CN112766561B (en) * 2021-01-15 2023-11-17 东南大学 Generative adversarial trajectory prediction method based on an attention mechanism
CN112907088B (en) * 2021-03-03 2024-03-08 杭州诚智天扬科技有限公司 Parameter adjustment method and system for score-clearing model
CN114065870A (en) * 2021-11-24 2022-02-18 中国科学技术大学 Vehicle track generation method and device
CN113985897B (en) * 2021-12-15 2024-05-31 北京工业大学 Mobile robot path planning method based on pedestrian track prediction and social constraint
CN114445777A (en) * 2022-01-30 2022-05-06 重庆长安汽车股份有限公司 LSTM neural network pedestrian trajectory prediction method based on group behavior optimization
CN115309164B (en) * 2022-08-26 2023-06-27 苏州大学 Path planning method for human-robot coexisting mobile robots based on a generative adversarial network
CN116203971A (en) * 2023-05-04 2023-06-02 安徽中科星驰自动驾驶技术有限公司 Obstacle avoidance method for autonomous driving based on generative adversarial network collaborative prediction
CN117475090B (en) * 2023-12-27 2024-06-11 粤港澳大湾区数字经济研究院(福田) Track generation model, track generation method, track generation device, terminal and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564129A (en) * 2018-04-24 2018-09-21 电子科技大学 Trajectory data classification method based on generative adversarial networks
CN109635745A (en) * 2018-12-13 2019-04-16 广东工业大学 Method for generating multi-angle face images based on a generative adversarial network model
CN109872346A (en) * 2019-03-11 2019-06-11 南京邮电大学 Target tracking method supporting recurrent neural network adversarial learning
CN110781838A (en) * 2019-10-28 2020-02-11 大连海事大学 Multi-modal trajectory prediction method for pedestrians in complex scenes
CN110796080A (en) * 2019-10-29 2020-02-14 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generative adversarial networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163439A (en) * 2019-05-24 2019-08-23 长安大学 City-scale taxi trajectory prediction method based on an attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Intent-Aware Conditional Generative Adversarial Network for Pedestrian Path Prediction; Yasheng Sun et al.; 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA); 17 October 2019; pp. 155-160 *
Generative model for pedestrian trajectory prediction based on an attention mechanism; Sun Yasheng et al.; Journal of Computer Applications; March 2019; Vol. 39, No. 3; pp. 668-674 *

Also Published As

Publication number Publication date
CN111339867A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111339867B (en) Pedestrian trajectory prediction method based on a generative adversarial network
Sadeghian et al. Sophie: An attentive gan for predicting paths compliant to social and physical constraints
Cai et al. Environment-attention network for vehicle trajectory prediction
Fernando et al. Deep inverse reinforcement learning for behavior prediction in autonomous driving: Accurate forecasts of vehicle motion
Zhao et al. A spatial-temporal attention model for human trajectory prediction.
CN112734808B (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
CN110781838A (en) Multi-modal trajectory prediction method for pedestrians in complex scenes
Peng et al. MASS: Multi-attentional semantic segmentation of LiDAR data for dense top-view understanding
Mersch et al. Maneuver-based trajectory prediction for self-driving cars using spatio-temporal convolutional networks
CN115147790B (en) Future track prediction method of vehicle based on graph neural network
Fu et al. Trajectory prediction-based local spatio-temporal navigation map for autonomous driving in dynamic highway environments
CN112347923A (en) Roadside pedestrian trajectory prediction algorithm based on a generative adversarial network
CN112651374B (en) Future trajectory prediction method based on social information and automatic driving system
CN115829171B (en) Pedestrian track prediction method combining space-time information and social interaction characteristics
CN112949597A (en) Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
Zou et al. Multi-modal pedestrian trajectory prediction for edge agents based on spatial-temporal graph
CN114399743A (en) Method for generating future track of obstacle
Kuo et al. Trajectory prediction with linguistic representations
Mukherjee et al. Interacting vehicle trajectory prediction with convolutional recurrent neural networks
CN116595871A (en) Vehicle track prediction modeling method and device based on dynamic space-time interaction diagram
Chen et al. STIGCN: spatial–temporal interaction-aware graph convolution network for pedestrian trajectory prediction
Zernetsch et al. Cyclist Trajectory Forecasts by Incorporation of Multi-View Video Information
CN114723782A (en) Traffic scene moving object perception method based on different-pattern image learning
Xiao et al. Pedestrian trajectory prediction in heterogeneous traffic using facial keypoints-based convolutional encoder-decoder network
Xu et al. Vehicle trajectory prediction considering multi-feature independent encoding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zeng Weiliang; Chen Yihao; Yao Ruoyu; Zhu Mingzhou; Li Xiqi; Zheng Yufan

Inventor before: Chen Yihao; Zeng Weiliang; Yao Ruoyu; Zhu Mingzhou; Li Xiqi; Zheng Yufan

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2022-05-24