
CN115943400B - Track prediction method and device based on time and space learning and computer equipment - Google Patents


Info

Publication number
CN115943400B
Authority
CN
China
Prior art keywords
matrix
obstacle
predicted
feature matrix
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202180050155.4A
Other languages
Chinese (zh)
Other versions
CN115943400A (en)
Inventor
许家妙
何明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DeepRoute AI Ltd
Publication of CN115943400A
Application granted
Publication of CN115943400B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00: Computing arrangements based on specific mathematical models


Abstract

A trajectory prediction method based on temporal and spatial learning, comprising: acquiring preset frame position data of an obstacle to be predicted and map data (202); generating a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted (204); inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix (206); performing spatial information integration on the first feature matrix and the second feature matrix to obtain a spatial feature matrix (208); and inputting the spatial feature matrix into a trajectory prediction model to obtain a target trajectory (210) of the obstacle to be predicted.

Description

Track prediction method and device based on time and space learning and computer equipment
Technical Field
The application relates to a track prediction method, a track prediction device, computer equipment, a storage medium and a vehicle based on time and space learning.
Background
During automatic driving, it is essential to predict the trajectories of obstacles in the surrounding environment over a certain period of time. By predicting an obstacle's future trajectory, the autonomous vehicle can recognize the obstacle's intention earlier and plan its driving route and speed accordingly, thereby avoiding collisions and reducing safety accidents. Currently, trajectory prediction can be performed with deep-learning-based methods, for example by preprocessing an obstacle's historical trajectory information and map data into raster images or vectorized data and then processing these with a deep network.
The historical trajectory information of an obstacle can be referred to as temporal information, and the relationship between that historical trajectory information and the map data can be referred to as spatial information. Although both are particularly important for obstacle trajectory prediction, existing deep-learning-based trajectory prediction methods cannot fully exploit temporal and spatial information at the same time, so prediction accuracy is low.
Disclosure of Invention
According to various embodiments of the present disclosure, a trajectory prediction method, apparatus, computer device, storage medium, and vehicle based on temporal and spatial learning are provided.
A trajectory prediction method based on temporal and spatial learning, comprising:
acquiring preset frame position data of an obstacle to be predicted and map data;
generating a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
integrating spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a trajectory prediction model to obtain a target trajectory of the obstacle to be predicted.
A trajectory prediction device based on temporal and spatial learning, comprising:
a data acquisition module, configured to acquire preset frame position data of an obstacle to be predicted and map data;
a matrix generation module, configured to generate a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted;
a time information extraction module, configured to input the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
a spatial information integration module, configured to integrate spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix; and
a trajectory prediction module, configured to input the spatial feature matrix into a trajectory prediction model to obtain a target trajectory of the obstacle to be predicted.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of:
acquiring preset frame position data of an obstacle to be predicted and map data;
generating a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
integrating spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a trajectory prediction model to obtain a target trajectory of the obstacle to be predicted.
One or more computer storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
acquiring preset frame position data of an obstacle to be predicted and map data;
generating a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
integrating spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a trajectory prediction model to obtain a target trajectory of the obstacle to be predicted.
A vehicle configured to perform the steps of the above trajectory prediction method based on temporal and spatial learning.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the application will be apparent from the description and drawings, and from the claims.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a diagram of an application environment for a trajectory prediction method based on temporal and spatial learning in one or more embodiments.
FIG. 2 is a flow diagram of a trajectory prediction method based on temporal and spatial learning in one or more embodiments.
FIG. 3 is a schematic diagram of a lane line graph resulting from searching for associated lane lines in one or more embodiments.
Fig. 4 is a flowchart illustrating a step of inputting a target matrix into a time information model to obtain a first feature matrix corresponding to a position matrix and a second feature matrix corresponding to a map matrix in one or more embodiments.
Fig. 5 is a flowchart illustrating a step of integrating spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix in one or more embodiments.
FIG. 6 is a block diagram of a trajectory prediction device based on temporal and spatial learning in one or more embodiments.
FIG. 7 is a block diagram of a computer device in one or more embodiments.
Detailed Description
To make the technical solutions and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The trajectory prediction method based on temporal and spatial learning provided by the present application can be applied in the environment shown in FIG. 1. The vehicle-mounted sensor 102 communicates with the vehicle-mounted computer device 104 via a network; there may be one or more vehicle-mounted sensors, and the vehicle-mounted computer device may simply be referred to as the computer device. The vehicle-mounted sensor 102 sends acquired point cloud data to the computer device 104. The computer device 104 performs target detection on the point cloud data to obtain preset frame position data of an obstacle to be predicted, and obtains pre-stored map data. It then generates a target matrix from the preset frame position data and the map data, the target matrix comprising a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted. The target matrix is input into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix; spatial information of the two feature matrices is integrated to obtain a spatial feature matrix; and the spatial feature matrix is input into a trajectory prediction model to obtain the target trajectory of the obstacle to be predicted. The vehicle-mounted sensor 102 may be, but is not limited to, a lidar, a laser scanner, or a camera. The vehicle-mounted computer device 104 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device, or may be implemented by a stand-alone server or a cluster of servers.
In one embodiment, as shown in fig. 2, a track prediction method based on time and space learning is provided, and the method is applied to the computer device in fig. 1 for illustration, and includes the following steps:
Step 202, obtaining preset frame position data of an obstacle to be predicted and map data.
The obstacle to be predicted is a dynamic obstacle around the unmanned vehicle during driving, such as a pedestrian or another vehicle. The preset frame position data are the positions of the obstacle to be predicted over a number of consecutive historical frames, including the current frame.
The map data refer to a high-precision map stored in advance in the computer device. A high-precision map contains rich, fine-grained road traffic information: in addition to high-precision coordinates and accurate road shapes, it includes the gradient, curvature, heading, elevation, and roll of each lane, and it depicts not only the roads themselves but also the lane lines on each road.
During driving of the unmanned vehicle, the vehicle-mounted sensor installed on the vehicle sends the collected point cloud data to the computer device. The computer device stores the point cloud data frame by frame and records information such as the acquisition time of each frame. The vehicle-mounted sensor may be a lidar, a laser scanner, a camera, or the like. To perform trajectory prediction in real time, the computer device acquires a preset number of frames of point cloud data, including the current frame, performs target detection on them, and determines the position of the obstacle to be predicted in the world coordinate system in each frame, thereby obtaining the preset frame position data. The preset frame number may be set in advance; similarly, the predicted frame number, i.e., the number of frames in the target trajectory obtained by prediction, may also be set in advance. For example, if the lidar frequency is 10 Hz and the target trajectory over the next 3 s must be predicted from the obstacle's trajectory data over the past 2 s, the preset frame number is 2 × 10 = 20 frames and the predicted frame number is 3 × 10 = 30 frames. In each frame, the position of the obstacle in the world coordinate system may be represented as (x, y), so the preset frame position data include data in the abscissa (x) direction and data in the ordinate (y) direction.
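The frame-count arithmetic in the example above can be sketched as follows (the helper name is illustrative, not from the patent):

```python
# Hypothetical helper: convert the sensor frequency and the history/prediction
# horizons into the preset frame number M and the predicted frame number N.
def frame_counts(frequency_hz: float, history_s: float, horizon_s: float):
    """Return (preset frame number M, predicted frame number N)."""
    preset = int(frequency_hz * history_s)      # M: historical frames, current frame included
    predicted = int(frequency_hz * horizon_s)   # N: future frames to predict
    return preset, predicted

# 10 Hz lidar, 2 s of history, 3 s prediction horizon, as in the text.
M, N = frame_counts(10, 2, 3)
```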
Step 204, generating a target matrix according to the preset frame position data and the map data, wherein the target matrix comprises a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted.
The target matrix is a matrix obtained by integrating the preset frame position data and the map data.
Specifically, the computer device may first convert the preset frame position data into a corresponding position matrix, which contains the position of the obstacle to be predicted in each frame. It then searches the map data, according to the preset frame position data, for the associated lane lines of the obstacle to be predicted, i.e., the lanes the obstacle may travel along in the future, and derives from them the map matrix corresponding to the obstacle. The map matrix contains several associated lane lines for the obstacle and the positions of the lane-line points on each associated lane line. Combining the position matrix and the map matrix into a single matrix yields the target matrix.
Step 206, inputting the target matrix into the time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix.
The target matrix comprises the position matrix corresponding to the preset frame position data and the map matrix corresponding to the obstacle to be predicted. The first feature matrix is a matrix containing the temporal information hidden in the position matrix, i.e., the temporal information embodied in the obstacle's position data. The second feature matrix is a matrix containing the temporal information hidden in the map matrix, i.e., the temporal information embodied in the obstacle's associated lane lines.
A time information model trained on a large amount of sample data is pre-stored in the computer device; it may, for example, be a convolutional neural network model. The target matrix is input into the time information model, which processes the frame-number channels of the position matrix and the map matrix and learns the temporal information hidden in those channels, i.e., it extracts temporal features from the position matrix and the map matrix, and then produces the first feature matrix for the position matrix and the second feature matrix for the map matrix from the extracted features.
Step 208, integrating the spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix.
To capture the relationship between the obstacle's preset frame position data and the lane-line information, spatial information integration may be performed on the first feature matrix corresponding to the position matrix and the second feature matrix corresponding to the map matrix. Specifically, the computer device may compute the similarity between the first feature matrix and the second feature matrix, compute a new map feature from that similarity and the second feature matrix, and take the new map feature as a third feature matrix. The third feature matrix is then combined with the first feature matrix, connecting the new map feature with the obstacle's position feature and yielding the spatial feature matrix. The spatial feature matrix represents the relationship between the obstacle's preset frame position data and the lane-line information.
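One plausible reading of the similarity-weighted integration described above is an attention-style combination. The sketch below is a minimal assumption: the function name, the per-coordinate vector shapes, and the use of a softmax normalization are illustrative, since the patent specifies only "similarity" and "combination".

```python
import numpy as np

def integrate_spatial(position_feat, lane_feats):
    # position_feat: (C,)  first feature matrix, one coordinate direction
    # lane_feats:    (K, C) second feature matrix, one row per associated lane line
    sim = lane_feats @ position_feat                 # (K,) similarity scores
    w = np.exp(sim - sim.max())
    w /= w.sum()                                     # normalized weights (assumed softmax)
    new_map_feat = w @ lane_feats                    # (C,) the "third feature matrix"
    # Combine the new map feature with the position feature.
    return np.concatenate([position_feat, new_map_feat])   # (2C,) spatial feature

C, K = 4, 3
rng = np.random.default_rng(0)
spatial = integrate_spatial(rng.normal(size=C), rng.normal(size=(K, C)))
```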
Step 210, inputting the spatial feature matrix into a track prediction model to obtain a target track of the obstacle to be predicted.
A trajectory prediction model is pre-stored in the computer device; it may be trained on the same sample data as the time information model. For example, the trajectory prediction model may be an encoder-decoder network model, in particular a convolutional neural network model. Trajectory prediction is performed on the spatial feature matrix by the trajectory prediction model, which outputs the target trajectory. The target trajectory corresponds to the predicted frame number, i.e., the number of frames in the future time period that the computer device needs to predict. For example, if the target trajectory within the next 3 s must be predicted, the predicted frame number is 3 × 10 = 30 frames.
In one embodiment, when training the time information model and the trajectory prediction model on sample data, a back-propagation algorithm may be used with an optimization method such as SGD (stochastic gradient descent) or Adam (adaptive moment estimation) to obtain the model parameters. The time information model and the trajectory prediction model are stored together with their corresponding parameters, yielding the trained time information model and trained trajectory prediction model.
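To make the SGD update rule mentioned above concrete, the sketch below takes a single gradient step on a stand-in linear model; the real models are convolutional networks, and all numbers here are made up for illustration.

```python
import numpy as np

W = np.ones((2, 4))                       # stand-in model parameters
x = np.array([0.5, -1.0, 0.2, 0.3])       # one input feature vector
y_true = np.array([1.0, -1.0])            # ground-truth future position
lr = 0.1                                  # learning rate

y_pred = W @ x
grad = np.outer(y_pred - y_true, x)       # gradient of 0.5 * ||W x - y||^2 w.r.t. W
W_new = W - lr * grad                     # one SGD parameter update

loss_before = 0.5 * np.sum((W @ x - y_true) ** 2)
loss_after = 0.5 * np.sum((W_new @ x - y_true) ** 2)
```

For this quadratic loss and small learning rate the single step strictly reduces the loss, which is the behavior the training loop relies on.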
In this embodiment, the preset frame position data of the obstacle to be predicted and the map data are obtained, and a target matrix is generated from them, comprising a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle. This produces data that satisfy the model's input requirements, reduces the number of matrices, and facilitates the subsequent integration of temporal and spatial information. The target matrix is input into the time information model to obtain the first feature matrix corresponding to the position matrix and the second feature matrix corresponding to the map matrix; spatial information of the two is integrated into a spatial feature matrix; and the spatial feature matrix is input into the trajectory prediction model to obtain the obstacle's target trajectory. The method thus makes full use of both the temporal and the spatial information of the obstacle over the preset frames, improving the accuracy of the trajectory prediction result.
In one embodiment, the position matrix is a matrix marked with the abscissa and ordinate directions corresponding to the preset frame position data; the map matrix is a matrix marked with the abscissa and ordinate directions of the associated lane lines corresponding to the preset frame position data; and the time information model and the trajectory prediction model are both one-dimensional convolutional neural network models, which may be of the same type or of different types.
The computer device may mark, in the position matrix, the abscissa and ordinate directions corresponding to the preset frame position data, and, in the map matrix, the abscissa and ordinate directions of the associated lane lines corresponding to the preset frame position data. Specifically, the abscissa of the obstacle's position in each frame is marked in the abscissa direction and the ordinate in the ordinate direction, so that data in the two directions are kept distinct. When the preset frame number is M, the position matrix can be represented as an M×2 matrix [[x_{-M+1}, y_{-M+1}], [x_{-M+2}, y_{-M+2}], …, [x_{0}, y_{0}]], where [x_{-M+1}, y_{-M+1}] is the obstacle's position in the world coordinate system at the historical (M-1)-th frame, [x_{0}, y_{0}] is its position at the current frame, and 2 denotes the x and y coordinate directions. The map matrix contains several associated lane lines for the obstacle and the positions of the lane-line points on each of them; the abscissas of the lane-line points are likewise marked in the abscissa direction and the ordinates in the ordinate direction. The map matrix is a K×(N+M)×2 matrix, where K is the number of associated lane lines, N is the predicted frame number, M is the preset frame number, and (N+M)×2 gives the positions of the N+M points of one associated lane line in the world coordinate system.
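The matrix shapes just described can be sketched with hypothetical data (the straight-line history is purely illustrative):

```python
import numpy as np

M, N, K = 20, 30, 3                       # preset frames, predicted frames, lane lines

# Position matrix: M x 2, rows run from [x_{-M+1}, y_{-M+1}] to [x_0, y_0];
# column 0 holds abscissa (x) data, column 1 holds ordinate (y) data.
frames = np.arange(-M + 1, 1)             # historical frame indices ..., -1, 0
positions = np.stack([frames * 0.5, np.zeros(M)], axis=1)

# Map matrix: K x (N+M) x 2, each associated lane line sampled into N+M points.
map_matrix = np.zeros((K, N + M, 2))
```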
The time information model and the trajectory prediction model may be convolutional neural network models with a convolution kernel size of 1 and a stride of 1, so that the abscissa-direction data and the ordinate-direction data are processed independently without interference; this reduces the amount of computation and speeds up calculation.
In one embodiment, generating the target matrix from the preset frame position data and the map data includes: converting preset frame position data of an obstacle to be predicted into a position matrix; determining a map matrix corresponding to the obstacle to be predicted according to the preset frame position data and the map data; and combining the position matrix with the map matrix to obtain a target matrix.
The preset frame position data comprise the position coordinates of the obstacle to be predicted in each frame. The position matrix may be a matrix marked with the abscissa and ordinate directions corresponding to the preset frame position data. The associated lane lines of the obstacle, i.e., the lane lines along which the obstacle may travel after its initial frame position, are searched for in the map data according to the preset frame position data. After the associated lane lines are found, each may be sampled into a number of points to obtain a lane-line point set, which is converted into the map matrix. The map matrix is marked with the abscissa and ordinate directions of the associated lane lines corresponding to the preset frame position data. Combining the position matrix with the map matrix yields the target matrix.
In this embodiment, converting the preset frame position data of the obstacle into a position matrix and determining the corresponding map matrix from the preset frame position data and the map data yields matrices that distinguish abscissa-direction data from ordinate-direction data, which facilitates processing the two directions separately, reducing computation and increasing calculation speed. Combining the position matrix with the map matrix into a target matrix reduces the number of matrices and facilitates the subsequent integration of temporal and spatial information.
Further, determining a map matrix corresponding to the obstacle to be predicted according to the preset frame position data and the map data includes: searching associated lane lines of the obstacle to be predicted in the map data according to the preset frame position data; sampling the related lane lines to obtain a lane line point set; and converting the lane line point set into a map matrix corresponding to the obstacle to be predicted.
The lane-line point closest to the initial frame position of the preset frame position data is searched for in the map data; denote it by O. For example, the KNN (k-nearest-neighbor) method may be used to find O. Starting from O, lane lines are searched along the driving direction of the obstacle to be predicted, and a lane-line graph is generated from the associated lane lines found. The length of each searched associated lane line may be V×T×(N+M), where V is the average speed of the obstacle over the preset frames, which can be computed from the preset frame position data; T is the time interval between adjacent frames, for example 100 ms; N is the predicted frame number; and M is the preset frame number. FIG. 3 shows a schematic diagram of a lane-line graph obtained by searching for associated lane lines in one embodiment; it contains three associated lane lines A-C, A-B, and A-D, where point A is the starting point of the lane lines and has the same meaning as point O. Each associated lane line is uniformly sampled into N+M points, i.e., each associated lane line is represented by its uniformly sampled points, yielding the lane-line point set. The point set is converted into the map matrix corresponding to the obstacle, a K×(N+M)×2 matrix, where K is the number of associated lane lines, N the predicted frame number, M the preset frame number, and (N+M)×2 the positions of the N+M points of one associated lane line in the world coordinate system.
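The uniform-sampling step above can be sketched as resampling a lane-line polyline by arc length into N+M evenly spaced points. The function and variable names are illustrative, not from the patent.

```python
import numpy as np

def resample_lane_line(polyline, num_points):
    """Uniformly resample a polyline of (x, y) vertices into num_points points."""
    polyline = np.asarray(polyline, dtype=float)          # (P, 2) vertices
    seg = np.diff(polyline, axis=0)
    # Cumulative arc length at each vertex, starting from 0.
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.linspace(0.0, dist[-1], num_points)      # evenly spaced arc lengths
    x = np.interp(targets, dist, polyline[:, 0])
    y = np.interp(targets, dist, polyline[:, 1])
    return np.stack([x, y], axis=1)                       # (num_points, 2)

N, M = 30, 20
lane = resample_lane_line([(0, 0), (10, 0), (10, 5)], N + M)   # one associated lane line
```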
In this embodiment, the associated lane lines of the obstacle to be predicted are searched for in the map data according to the preset frame position data, sampled to obtain a lane-line point set, and converted into the obstacle's map matrix. The resulting map matrix distinguishes abscissa-direction from ordinate-direction data, which facilitates processing the two directions separately, reducing computation and improving calculation efficiency.
In one embodiment, as shown in fig. 4, the step of inputting the target matrix into the time information model to obtain the first feature matrix corresponding to the position matrix and the second feature matrix corresponding to the map matrix includes:
step 402, inputting the target matrix into a one-dimensional convolutional neural network model, and extracting features of multiple direction dimensions from the position matrix and the map matrix of the target matrix through the one-dimensional convolutional neural network model.
Step 404, obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
The time information model may be a one-dimensional convolutional neural network model, specifically one whose convolution kernel size (kernel size) is 1 and whose step size (stride) is 1, so that the abscissa data and the ordinate data are processed independently and do not interfere with each other.
The one-dimensional convolutional neural network model extracts features in multiple direction dimensions from the position matrix and the map matrix of the target matrix, respectively. Both the position matrix and the map matrix contain abscissa-direction data and ordinate-direction data, and the multiple direction dimensions refer to the abscissa direction and the ordinate direction. The first feature matrix corresponding to the position matrix is obtained from the features extracted from the position matrix, and the second feature matrix corresponding to the map matrix is obtained from the features extracted from the map matrix. The number of channels of the features extracted from the position matrix is C, so the first feature matrix is a C×2-dimensional matrix, where 2 represents the x-direction and y-direction dimensions. The number of channels of the features extracted from the map matrix is also C, so the second feature matrix is a K×C×2-dimensional matrix, where K is the number of lane lines in the map matrix and 2 represents the x-direction and y-direction dimensions. Further, C may be the number of channels of the last convolutional layer of the one-dimensional convolutional neural network model.
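To make the independence concrete, here is a minimal numpy sketch under one plausible arrangement of the data (frames as input channels, the two coordinate directions as the length axis); the layer sizes and random weights are illustrative, not taken from the patent. A kernel-size-1, stride-1 convolution is a pointwise linear map over channels, so the x and y columns never mix:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_k1(x, w, b):
    # x: (C_in, L); w: (C_out, C_in); b: (C_out,).
    # A kernel-size-1, stride-1 convolution is a pointwise linear map over
    # channels: every position along the length axis L is transformed independently.
    return w @ x + b[:, None]

n_plus_m, c_out = 8, 16                       # illustrative sizes
pos = rng.normal(size=(n_plus_m, 2))          # [:, 0] = x data, [:, 1] = y data
w = rng.normal(size=(c_out, n_plus_m)) * 0.1  # random stand-in for learned weights
b = np.zeros(c_out)

feat = conv1d_k1(pos, w, b)                   # C x 2 "first feature matrix"

# Perturb only the ordinate (y) column: the abscissa features stay unchanged.
pos2 = pos.copy()
pos2[:, 1] += 1.0
feat2 = conv1d_k1(pos2, w, b)
```

The final assertion-style comparison (x-column features equal, y-column features changed) is exactly the non-interference property the kernel-size-1 choice buys.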
In this embodiment, temporal information is extracted from the target matrix by a one-dimensional convolutional neural network model. The network structure of such a model is small, and it can process the temporal information in each coordinate direction independently, which effectively reduces the computational cost of the model and improves the efficiency of temporal information extraction.
In one embodiment, as shown in fig. 5, the step of integrating the spatial information of the first feature matrix and the second feature matrix to obtain the spatial feature matrix includes:
step 502, comparing the first feature matrix with the second feature matrix to obtain a similarity.
And step 504, calculating a third feature matrix corresponding to the second feature matrix according to the similarity.
Step 506, combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix.
The first feature matrix carries the temporal information embodied in the position data of the obstacle to be predicted. The second feature matrix carries the temporal information embodied in the associated lane lines of the obstacle to be predicted. The similarity refers to the similarity between the trajectory generated from the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix.
To calculate the similarity between the first feature matrix and the second feature matrix, the computer device may multiply the first feature matrix by the transpose of the second feature matrix to obtain similarity vectors, one for each associated lane line in the second feature matrix. All similarity vectors are then normalized with a softmax function to obtain a probability vector, whose entries are the similarities between the trajectory generated from the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix. Taking FIG. 3 as an example, the probability vector [0.1, 0.3, 0.6] indicates that the similarities between the trajectory generated from the preset frame position data of the obstacle to be predicted and the associated lane lines A-C, A-B, and A-D are 0.1, 0.3, and 0.6, respectively.
Each associated lane line in the second feature matrix is then multiplied by its corresponding probability value, and the products are summed to obtain the third feature matrix corresponding to the second feature matrix. The third feature matrix is a new map feature and is a C×2-dimensional matrix. The third feature matrix is combined with the first feature matrix, i.e., the two are concatenated, to obtain the spatial feature matrix, which is a 2C×2-dimensional matrix.
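This compare-normalize-weight-concatenate step works like a small attention module. A minimal numpy sketch under the dimensions given above (C channels, K associated lane lines); the function names and random inputs are illustrative:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # subtract the max for numerical stability
    return e / e.sum()

def integrate_spatial(first, second):
    # first: C x 2 trajectory feature; second: K x C x 2 lane features.
    k = second.shape[0]
    # Similarity of the trajectory feature with each lane feature (a dot
    # product, i.e. multiplying with the transpose and summing).
    sim = np.array([np.sum(first * second[i]) for i in range(k)])
    prob = softmax(sim)                            # probability vector, sums to 1
    third = np.tensordot(prob, second, axes=1)     # C x 2 probability-weighted map feature
    return np.concatenate([third, first], axis=0)  # 2C x 2 spatial feature matrix

rng = np.random.default_rng(1)
first = rng.normal(size=(4, 2))        # C = 4
second = rng.normal(size=(3, 4, 2))    # K = 3 associated lane lines
spatial = integrate_spatial(first, second)
```

Lanes whose features resemble the history trajectory receive larger probabilities and therefore dominate the weighted map feature that is concatenated with the trajectory feature.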
In this embodiment, the first feature matrix is compared with the second feature matrix to obtain the similarity, the third feature matrix corresponding to the second feature matrix is calculated from the similarity, and the third feature matrix is combined with the first feature matrix to obtain the spatial feature matrix. This captures the relationship between the preset frame position data of the obstacle to be predicted and the associated lane lines, i.e., the spatial information, which can further improve the accuracy of trajectory prediction.
In one embodiment, obtaining preset frame position data of an obstacle to be predicted includes: acquiring preset frame point cloud data of an obstacle to be predicted; and respectively inputting the point cloud data of the preset frames into a target detection model, positioning the position information of the obstacle to be predicted corresponding to each frame, and obtaining the position data of the preset frames of the obstacle to be predicted according to the position information of the obstacle to be predicted corresponding to the preset frames.
The preset frame point cloud data refers to historically consecutive multiple frames of point cloud data, including the current frame. Point cloud data is the surrounding environment information scanned and recorded by a sensor in point cloud form; the surrounding environment contains the obstacles to be predicted around the vehicle, and there may be more than one such obstacle. Point cloud data may specifically include the three-dimensional coordinates of points, laser reflection intensity, color information, and the like. The three-dimensional coordinates represent the position of the surface of an obstacle to be predicted in the surrounding environment.
The computer device inputs the preset frame point cloud data into the target detection model frame by frame and determines the three-dimensional bounding box of the obstacle to be predicted in each frame, thereby obtaining the preset frame position data of the obstacle to be predicted. The target detection model may be any target detection model, such as PointNet, PointPillar, PolarNet, or a semantic segmentation model. The three-dimensional bounding box includes the center-point coordinates, size, orientation, and the like of each obstacle to be predicted; the center-point coordinates represent the position of the obstacle to be predicted.
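A schematic Python sketch of this step. `centroid_detector` is a hypothetical stand-in for a real detector such as PointPillar, and the bounding-box dictionary format is an assumption for illustration:

```python
import numpy as np

def centroid_detector(cloud):
    # Hypothetical stand-in detector: returns one "3D bounding box" whose
    # center is the centroid of the cloud (a real model, e.g. PointPillar,
    # would output learned boxes with size and orientation as well).
    return [{"center": tuple(cloud[:, :3].mean(axis=0))}]

def track_positions(frames, detect):
    # frames: list of M point clouds (each an array of points with x, y, z, ...).
    # detect: per-frame object detection model returning 3D bounding boxes;
    # only the box center's (x, y) is kept as the obstacle's position.
    positions = []
    for cloud in frames:
        boxes = detect(cloud)
        cx, cy, _ = boxes[0]["center"]   # single tracked obstacle, for brevity
        positions.append((cx, cy))
    return np.array(positions)           # M x 2 preset frame position data
```

Running the detector over M consecutive frames yields the M×2 position matrix consumed by the later steps.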
In this embodiment, performing target detection on the preset frame point cloud data with the target detection model allows the position of the obstacle to be predicted in each frame to be obtained accurately and quickly, which improves the accuracy of subsequent trajectory prediction.
In one embodiment, as shown in fig. 6, there is provided a trajectory prediction apparatus based on temporal and spatial learning, including: a data acquisition module 602, a matrix generation module 604, a temporal information extraction module 606, a spatial information integration module 608, and a trajectory prediction module 610, wherein:
the data acquisition module 602 is configured to acquire preset frame position data of an obstacle to be predicted, and map data.
The matrix generating module 604 is configured to generate a target matrix according to the preset frame position data and the map data, where the target matrix includes a position matrix corresponding to the preset frame position data and a map matrix corresponding to the obstacle to be predicted.
The time information extraction module 606 is configured to input the target matrix into the time information model, and obtain a first feature matrix corresponding to the location matrix and a second feature matrix corresponding to the map matrix.
The spatial information integration module 608 is configured to integrate the spatial information of the first feature matrix and the second feature matrix to obtain a spatial feature matrix.
The track prediction module 610 is configured to input the spatial feature matrix into a track prediction model, so as to obtain a target track of the obstacle to be predicted.
In one embodiment, the time information extraction module 606 is further configured to input the target matrix into a one-dimensional convolutional neural network model, and perform feature extraction in multiple coordinate directions on the location matrix and the map matrix in the target matrix through the one-dimensional convolutional neural network model; and obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
In one embodiment, the spatial information integration module 608 is further configured to compare the first feature matrix with the second feature matrix to obtain a similarity; calculating a third feature matrix corresponding to the second feature matrix according to the similarity; and combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix.
In one embodiment, the matrix generation module 604 is further configured to convert the preset frame position data of the obstacle to be predicted into a position matrix; determining a map matrix corresponding to the obstacle to be predicted according to the preset frame position data and the map data; and combining the position matrix with the map matrix to obtain a target matrix.
In one embodiment, the matrix generating module 604 is further configured to search the map data for an associated lane line of the obstacle to be predicted according to the preset frame position data; sampling the related lane lines to obtain a lane line point set; and converting the lane line point set into a map matrix corresponding to the obstacle to be predicted.
In one embodiment, the data acquisition module 602 is further configured to acquire preset frame point cloud data of an obstacle to be predicted; and respectively inputting the preset frame point cloud data into a target detection model, and positioning the position information of the obstacle to be predicted corresponding to each frame to obtain the preset frame position data of the obstacle to be predicted.
For specific limitations of the trajectory prediction apparatus based on time and space learning, reference may be made to the above limitations of the trajectory prediction method based on time and space learning, which are not repeated here. All or part of the modules in the above trajectory prediction apparatus may be implemented by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor of the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 7. The computer device includes a processor, a memory, a communication interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the execution of an operating system and computer-readable instructions in a non-volatile storage medium. The database of the computer device is used for storing data of a track prediction method based on time and space learning. The communication interface of the computer device is used for connecting and communicating with an external terminal. The computer readable instructions, when executed by a processor, implement a trajectory prediction method based on temporal and spatial learning.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
A computer device comprising a memory and one or more processors, the memory having stored thereon computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
One or more computer storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the various method embodiments described above.
The computer storage medium is a readable storage medium, and the readable storage medium may be nonvolatile or volatile.
In one embodiment, a vehicle is provided. The vehicle may be an autonomous vehicle; it includes the above-described computer device and can perform the steps of the above embodiments of the trajectory prediction method based on time and space learning.
Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by computer-readable instructions instructing the relevant hardware; the instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of technical features, it should be considered within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (20)

1. A trajectory prediction method based on temporal and spatial learning, comprising:
acquiring preset frame position data of an obstacle to be predicted, and map data;
converting the preset frame position data of the obstacle to be predicted into a position matrix;
searching the map data for associated lane lines of the obstacle to be predicted according to the preset frame position data, wherein an associated lane line refers to a lane line along which the obstacle to be predicted is predicted to travel;
obtaining a map matrix corresponding to the obstacle to be predicted according to the associated lane lines;
combining the position matrix with the map matrix to obtain a target matrix;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
comparing the first feature matrix with the second feature matrix to obtain a similarity;
calculating a third feature matrix corresponding to the second feature matrix according to the similarity;
combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a track prediction model to obtain a target track of the obstacle to be predicted.
2. The method of claim 1, wherein the inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the location matrix and a second feature matrix corresponding to the map matrix comprises:
Inputting the target matrix into a one-dimensional convolutional neural network model, and extracting features of a plurality of coordinate directions from a position matrix and a map matrix in the target matrix through the one-dimensional convolutional neural network model; and
And obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
3. The method of claim 1, wherein the comparing the first feature matrix with the second feature matrix to obtain a similarity, and calculating a third feature matrix corresponding to the second feature matrix according to the similarity comprises:
calculating a similarity vector of a track generated by the position data of the preset frame of the obstacle to be predicted and each associated lane line in the second feature matrix;
normalizing all the similarity vectors by using a normalization function to obtain probability vectors; the probability vector comprises the similarity between the track generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; and
Multiplying each associated lane line in the second feature matrix with a corresponding probability value, and adding to obtain a third feature matrix corresponding to the second feature matrix.
4. The method of claim 1, wherein the obtaining the preset frame position data of the obstacle to be predicted comprises:
acquiring preset frame point cloud data of an obstacle to be predicted; and
And respectively inputting the preset frame point cloud data into a target detection model, and positioning the position information of the obstacle to be predicted corresponding to each frame to obtain the preset frame position data of the obstacle to be predicted.
5. The method of claim 1, wherein the searching the map data for the associated lane line of the obstacle to be predicted based on the preset frame position data comprises:
Searching lane line points closest to the map data according to the initial frame position data of the preset frame position data; and
And continuously searching the associated lane line of the obstacle to be predicted from the driving direction of the obstacle to be predicted by taking the lane line point as a starting point.
6. The method of claim 1, wherein the obtaining a map matrix corresponding to the obstacle to be predicted from the associated lane lines comprises:
sampling the related lane lines to obtain a lane line point set; and
And converting the lane line point set into a map matrix corresponding to the obstacle to be predicted.
7. A trajectory prediction device based on temporal and spatial learning, comprising:
the data acquisition module is used for acquiring preset frame position data of the obstacle to be predicted and map data;
The matrix generation module is used for converting the preset frame position data of the obstacle to be predicted into a position matrix; searching the map data for the associated lane lines of the obstacle to be predicted according to the preset frame position data, wherein an associated lane line refers to a lane line along which the obstacle to be predicted is predicted to travel; obtaining a map matrix corresponding to the obstacle to be predicted according to the associated lane lines; and combining the position matrix with the map matrix to obtain a target matrix;
The time information extraction module is used for inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
The spatial information integration module is used for comparing the first feature matrix with the second feature matrix to obtain similarity, and calculating a third feature matrix corresponding to the second feature matrix according to the similarity; combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix; the similarity refers to the similarity between the track generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; and
And the track prediction module is used for inputting the space feature matrix into a track prediction model to obtain the target track of the obstacle to be predicted.
8. The apparatus of claim 7, wherein the time information extraction module is further configured to input the target matrix into a one-dimensional convolutional neural network model, and perform feature extraction in a plurality of coordinate directions on a location matrix and a map matrix in the target matrix through the one-dimensional convolutional neural network model, respectively; and obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
9. The apparatus of claim 7, wherein the spatial information integration module is further configured to calculate a similarity vector of a trajectory generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; normalizing all the similarity vectors by using a normalization function to obtain probability vectors; the probability vector comprises the similarity between the track generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; multiplying each associated lane line in the second feature matrix with a corresponding probability value, and adding to obtain a third feature matrix corresponding to the second feature matrix.
10. The apparatus of claim 7, wherein the matrix generation module is further configured to search map data for a lane line point closest to the map data by initial frame position data of the preset frame position data; and continuously searching the associated lane line of the obstacle to be predicted from the driving direction of the obstacle to be predicted by taking the lane line point as a starting point.
11. A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of:
acquiring preset frame position data of an obstacle to be predicted, and map data;
converting the preset frame position data of the obstacle to be predicted into a position matrix;
searching the map data for associated lane lines of the obstacle to be predicted according to the preset frame position data, wherein an associated lane line refers to a lane line along which the obstacle to be predicted is predicted to travel;
obtaining a map matrix corresponding to the obstacle to be predicted according to the associated lane lines;
combining the position matrix with the map matrix to obtain a target matrix;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
comparing the first feature matrix with the second feature matrix to obtain a similarity;
calculating a third feature matrix corresponding to the second feature matrix according to the similarity;
combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a track prediction model to obtain a target track of the obstacle to be predicted.
12. The computer device of claim 11, wherein the processor when executing the computer readable instructions further performs the steps of: inputting the target matrix into a one-dimensional convolutional neural network model, and extracting features of a plurality of coordinate directions from a position matrix and a map matrix in the target matrix through the one-dimensional convolutional neural network model; and obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
13. The computer device of claim 11, wherein the processor when executing the computer readable instructions further performs the steps of: calculating a similarity vector of a track generated by the position data of the preset frame of the obstacle to be predicted and each associated lane line in the second feature matrix; normalizing all the similarity vectors by using a normalization function to obtain probability vectors; the probability vector comprises the similarity between the track generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; multiplying each associated lane line in the second feature matrix with a corresponding probability value, and adding to obtain a third feature matrix corresponding to the second feature matrix.
14. The computer device of claim 11, wherein the processor when executing the computer readable instructions further performs the steps of: searching lane line points closest to the map data according to the initial frame position data of the preset frame position data; and continuously searching the associated lane line of the obstacle to be predicted from the driving direction of the obstacle to be predicted by taking the lane line point as a starting point.
15. The computer device of claim 14, wherein the processor, when executing the computer readable instructions, further performs the steps of: sampling the related lane lines to obtain a lane line point set; and converting the lane line point set into a map matrix corresponding to the obstacle to be predicted.
16. One or more computer storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
acquiring preset frame position data of an obstacle to be predicted, and map data;
converting the preset frame position data of the obstacle to be predicted into a position matrix;
searching the map data for associated lane lines of the obstacle to be predicted according to the preset frame position data, wherein an associated lane line refers to a lane line along which the obstacle to be predicted is predicted to travel;
obtaining a map matrix corresponding to the obstacle to be predicted according to the associated lane lines;
combining the position matrix with the map matrix to obtain a target matrix;
inputting the target matrix into a time information model to obtain a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix;
comparing the first feature matrix with the second feature matrix to obtain a similarity;
calculating a third feature matrix corresponding to the second feature matrix according to the similarity;
combining the third feature matrix with the first feature matrix to obtain a spatial feature matrix; and
inputting the spatial feature matrix into a track prediction model to obtain a target track of the obstacle to be predicted.
17. The storage medium of claim 16, wherein the computer readable instructions, when executed by the processor, further perform the steps of: inputting the target matrix into a one-dimensional convolutional neural network model, and extracting features of a plurality of coordinate directions from a position matrix and a map matrix in the target matrix through the one-dimensional convolutional neural network model; and obtaining a first feature matrix corresponding to the position matrix and a second feature matrix corresponding to the map matrix according to the extracted features.
18. The storage medium of claim 16, wherein the computer readable instructions, when executed by the processor, further perform the steps of:
calculating a similarity vector of a track generated by the position data of the preset frame of the obstacle to be predicted and each associated lane line in the second feature matrix;
normalizing all the similarity vectors by using a normalization function to obtain probability vectors; the probability vector comprises the similarity between the track generated by the preset frame position data of the obstacle to be predicted and each associated lane line in the second feature matrix; and
Multiplying each associated lane line in the second feature matrix with a corresponding probability value, and adding to obtain a third feature matrix corresponding to the second feature matrix.
19. The storage medium of claim 16, wherein the computer readable instructions, when executed by the processor, further perform the steps of: searching lane line points closest to the map data according to the initial frame position data of the preset frame position data; and continuously searching the associated lane line of the obstacle to be predicted from the driving direction of the obstacle to be predicted by taking the lane line point as a starting point.
20. A vehicle configured to perform the trajectory prediction method of any one of claims 1-6.
CN202180050155.4A 2021-04-28 2021-04-28 Track prediction method and device based on time and space learning and computer equipment Active CN115943400B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/090552 WO2022226837A1 (en) 2021-04-28 2021-04-28 Time and space learning-based method and apparatus for predicting trajectory, and computer device

Publications (2)

Publication Number Publication Date
CN115943400A CN115943400A (en) 2023-04-07
CN115943400B true CN115943400B (en) 2024-06-18

Family

ID=83847689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180050155.4A Active CN115943400B (en) 2021-04-28 2021-04-28 Track prediction method and device based on time and space learning and computer equipment

Country Status (2)

Country Link
CN (1) CN115943400B (en)
WO (1) WO2022226837A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015847A (en) * 2020-10-19 2020-12-01 北京三快在线科技有限公司 Obstacle trajectory prediction method and device, storage medium and electronic equipment

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP2016175549A (en) * 2015-03-20 2016-10-06 株式会社デンソー Safety confirmation support device, safety confirmation support method
CN109583151B (en) * 2019-02-20 2023-07-21 阿波罗智能技术(北京)有限公司 Method and device for predicting running track of vehicle
CN109885066B (en) * 2019-03-26 2021-08-24 北京经纬恒润科技股份有限公司 Motion trail prediction method and device
US11679764B2 (en) * 2019-06-28 2023-06-20 Baidu Usa Llc Method for autonomously driving a vehicle based on moving trails of obstacles surrounding the vehicle
CN112364997B (en) * 2020-12-08 2021-06-04 北京三快在线科技有限公司 Method and device for predicting track of obstacle
CN112651557A (en) * 2020-12-25 2021-04-13 际络科技(上海)有限公司 Trajectory prediction system and method, electronic device and readable storage medium
CN112651990B (en) * 2020-12-25 2022-12-16 际络科技(上海)有限公司 Motion trajectory prediction method and system, electronic device and readable storage medium
CN112348293A (en) * 2021-01-07 2021-02-09 北京三快在线科技有限公司 Method and device for predicting track of obstacle


Also Published As

Publication number Publication date
WO2022226837A1 (en) 2022-11-03
CN115943400A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
WO2022222095A1 (en) Trajectory prediction method and apparatus, and computer device and storage medium
WO2021134296A1 (en) Obstacle detection method and apparatus, and computer device and storage medium
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
Mahaur et al. Road object detection: a comparative study of deep learning-based algorithms
CN112288770A (en) Video real-time multi-target detection and tracking method and device based on deep learning
JP7556142B2 (en) Efficient 3D object detection from point clouds
CN111144304A (en) Vehicle target detection model generation method, vehicle target detection method and device
CN113239719B (en) Trajectory prediction method and device based on abnormal information identification and computer equipment
CN112200830A (en) Target tracking method and device
CN111191533A (en) Pedestrian re-identification processing method and device, computer equipment and storage medium
CN113383283B (en) Perceptual information processing method, apparatus, computer device, and storage medium
CN113587944B (en) Quasi-real-time vehicle driving route generation method, system and equipment
CN113189989B (en) Vehicle intention prediction method, device, equipment and storage medium
CN114998736A (en) Infrared weak and small target detection method and device, computer equipment and storage medium
CN115294539A (en) Multitask detection method and device, storage medium and terminal
US20220155096A1 (en) Processing sparse top-down input representations of an environment using neural networks
US20210383213A1 (en) Prediction device, prediction method, computer program product, and vehicle control system
CN115943400B (en) Track prediction method and device based on time and space learning and computer equipment
US20220057992A1 (en) Information processing system, information processing method, computer program product, and vehicle control system
CN115053277B (en) Method, system, computer device and storage medium for lane change classification of surrounding moving object
US20230059370A1 (en) Gaze and awareness prediction using a neural network model
CN114945961B (en) Lane changing prediction regression model training method, lane changing prediction method and apparatus
CN111813131B (en) Guide point marking method and device for visual navigation and computer equipment
CN114119757A (en) Image processing method, apparatus, device, medium, and computer program product
CN117706942B (en) Environment sensing and self-adaptive driving auxiliary electronic control method and system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant