
WO2019023628A1 - Neural networks for vehicle trajectory planning - Google Patents

Neural networks for vehicle trajectory planning Download PDF

Info

Publication number
WO2019023628A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
neural network
training
data
training data
Prior art date
Application number
PCT/US2018/044172
Other languages
French (fr)
Inventor
Abhijit Ogale
Mayank Bansal
Alexander KRIZHEVSKY
Original Assignee
Waymo Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/662,031 external-priority patent/US11256983B2/en
Priority claimed from US15/662,007 external-priority patent/US10883844B2/en
Application filed by Waymo Llc filed Critical Waymo Llc
Priority to US16/628,086 priority Critical patent/US20200174490A1/en
Publication of WO2019023628A1 publication Critical patent/WO2019023628A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10Path keeping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18Propelling the vehicle
    • B60W30/18009Propelling the vehicle related to particular drive situations
    • B60W30/18154Approaching an intersection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04Traffic conditions
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0062Adapting control system settings
    • B60W2050/0075Automatic parameter input, automatic initialising or calibrating means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0062Adapting control system settings
    • B60W2050/0075Automatic parameter input, automatic initialising or calibrating means
    • B60W2050/0083Setting, resetting, calibration
    • B60W2050/0088Adaptive recalibration
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2555/00Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60Traffic rules, e.g. speed limits or right of way
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/10Historical data
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/45External transmission of data to or from the vehicle
    • B60W2556/50External transmission of data to or from the vehicle of positioning data, e.g. GPS [Global Positioning System] data

Definitions

  • This specification describes a computer-implemented neural network system configured to plan a trajectory for a vehicle.
  • Neural networks are machine-learning models that employ multiple layers of operations to predict one or more outputs from one or more inputs.
  • Neural networks typically include one or more hidden layers situated between an input layer and an output layer. The output of each layer is used as input to another layer in the network, e.g., the next hidden layer or the output layer.
  • Each layer of a neural network specifies one or more transformation operations to be performed on inputs to the layer.
  • the transformation operations can be characterized by values of internal parameters of the neural network.
  • Some neural network layers have operations that are referred to as neurons.
  • Each neuron receives one or more inputs and generates an output that is received by another neural network layer. Often, each neuron receives inputs from other neurons, and each neuron provides an output to one or more other neurons.
  • An architecture of a neural network specifies what layers are included in the network and their properties, as well as how the neurons of each layer of the network are connected. In other words, the architecture may specify which layers provide their output as input to which other layers and how the output is provided.
  • the operations of each layer of a neural network are performed by one or more computers at one or more locations that are configured to implement the transformation operations of the layer.
  • a layer being described as performing operations means that the computers implementing the transformation operations of the layer perform the operations.
  • This specification describes neural network systems that are configured to plan a trajectory for a vehicle.
  • such systems are deployed on autonomous or semi-autonomous vehicles in order to guide movement of the vehicle as it travels toward a goal location or along an intended route.
  • Autonomous and semi-autonomous vehicles use computing systems to make driving decisions and to at least partially effect control of the vehicle.
  • a fully autonomous vehicle can include computer-based control systems that make fully autonomous driving decisions to effect fully autonomous control independent of a human driver, whereas a semi-autonomous vehicle can include computer control systems that make semi-autonomous driving decisions to effect semi-autonomous control that aids a human driver.
  • the autonomous or semi-autonomous vehicle is an automobile, e.g., a sedan, a lorry, a pickup truck, a van, a sport utility vehicle, or a motorcycle.
  • the vehicle is a watercraft, e.g., a boat, or an aircraft, e.g., an airplane or helicopter.
  • Autonomous and semi-autonomous vehicles may include one or more environmental sensing systems that monitor the environment of a vehicle.
  • a light detection and ranging (LIDAR) system may continuously sweep an area surrounding the vehicle on which the sensing systems are installed, e.g., a vicinity of the vehicle.
  • the sensing systems generate sensor data from the sweeps that characterize aspects of the current environment of the vehicle.
  • the vehicle's computing systems are configured to process sensor data from one or more sensing systems in real-time and to project the data onto a 2D-space to form an image.
  • the image may represent the results of sweeps by one or more sensing systems.
  • the computing systems of an autonomous or semi-autonomous vehicle may process information derived from sensor data from the vehicle's sensing systems. For instance, information about a vehicle's environment can be processed, along with navigation data and information about previous locations of the vehicle, to determine a planned trajectory of the vehicle.
  • the planned trajectory can indicate a series of waypoints that each represent a proposed location for the vehicle to maneuver to at a time in the near future.
  • the system selects waypoints taking into account an intended route or destination of the vehicle, safety (e.g., collision avoidance), and ride comfort for passengers in the vehicle.
  • Some implementations of the subject matter disclosed herein include a computing system for planning a trajectory of a vehicle.
  • the system can include a memory configured to store waypoint data indicating one or more waypoints, each waypoint representing a previously traveled location of the vehicle or a location in a planned trajectory for the vehicle, one or more computers, and one or more storage devices storing instructions that when executed cause the one or more computers to implement a trajectory planning neural network and a trajectory management system.
  • the trajectory planning neural network can be configured to, at each time step of multiple time steps: obtain a first neural network input and a second neural network input, wherein (i) the first neural network input characterizes a set of waypoints indicated by the waypoint data, and (ii) the second neural network input characterizes (a) environmental data that represents a current state of an environment of the vehicle and (b) navigation data that represents a planned navigation route for the vehicle; and process the first neural network input and the second neural network input to generate a set of output scores, wherein each output score in the set of output scores corresponds to a different location of a set of possible locations in a vicinity of the vehicle and indicates a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route.
  • the trajectory management system can be configured to, at each time step of the multiple time steps: select, based on the set of output scores generated by the trajectory planning neural network at the time step, one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step; and update the waypoint data by writing to the memory an indication of the selected one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step.
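The two bullets above describe a per-time-step loop: the trajectory planning neural network scores candidate locations, and the trajectory management system selects one as the next waypoint and writes it back to memory. The Python sketch below illustrates that loop under stated assumptions; the names (`plan_trajectory`, `get_second_input`, the callable `net`) are illustrative and not from the specification.

```python
import numpy as np

def plan_trajectory(net, waypoint_memory, get_second_input, num_steps):
    """Illustrative per-time-step loop: score candidate locations, select the
    highest-scoring one as the next waypoint, and update the external memory."""
    for t in range(num_steps):
        first_input = np.asarray(waypoint_memory)   # characterizes the stored waypoints
        second_input = get_second_input(t)          # environmental + navigation data
        scores = net(first_input, second_input)     # one score per candidate location
        waypoint = int(np.argmax(scores))           # location judged most likely optimal
        waypoint_memory.append(waypoint)            # write the selection back to memory
    return waypoint_memory
```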
  • the set of waypoints characterized by the first neural network input can include at least one waypoint that represents a previously traveled location of the vehicle at a time that precedes the multiple time steps.
  • the set of waypoints characterized by the first neural network input can include the waypoints that were determined at each preceding time step of the multiple time steps.
  • the set of waypoints characterized by the first neural network input can include: (i) one or more first waypoints that represent previously traveled locations of the vehicle at times that precede the multiple time steps, and (ii) one or more second waypoints that were determined at preceding time steps of the multiple time steps.
  • the trajectory planning neural network can be a feedforward neural network.
  • the second neural network input at each time step of the multiple time steps can characterize the environmental data that represents the current state of the environment of the vehicle at the time step.
  • the second neural network input can characterize multiple channels of environmental data.
  • the multiple channels of environmental data can include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
  • the second neural network input at each time step of the multiple time steps can characterize the navigation data that represents the planned navigation route for the vehicle.
  • the second neural network input at each time step of the multiple time steps can characterize both the environmental data that represents the current state of the environment of the vehicle at the time step and the navigation data that represents the planned navigation route for the vehicle.
  • Each successive pair of time steps in the multiple time steps can represent a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
  • a vehicle control subsystem can be configured to determine control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the multiple time steps.
  • the vehicle can be maneuvered along the planned trajectory as a result of executing at least some of the control actions determined by the vehicle control subsystem, wherein the control actions include at least one of steering, braking, or accelerating the vehicle.
  • Some implementations of the subject matter disclosed herein include a computer-implemented method for planning a trajectory of a vehicle.
  • the method can include, for each time step in a series of time steps: obtaining a first neural network input that characterizes a set of waypoints that each represent a previous location of the vehicle or a location in a planned trajectory for the vehicle; obtaining a second neural network input that characterizes (i) environmental data that represents a current state of an environment of the vehicle and (ii) navigation data that represents a planned navigation route for the vehicle; providing the first neural network input and the second neural network input to a trajectory planning neural network and, in response, obtaining a set of output scores from the trajectory planning neural network, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle and indicating a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route; and selecting, based on the set of output scores, one of the possible locations in the vicinity of the vehicle as the waypoint for the planned trajectory of the vehicle at the time step.
  • the set of waypoints characterized by the first neural network input at the time step can represent the locations of the vehicle that were selected at each preceding time step in the series of time steps.
  • the set of waypoints characterized by the first neural network input at the initial time step can represent locations that the vehicle has traversed at particular times that precede the series of time steps.
  • the second neural network input can characterize multiple channels of environmental data
  • the multiple channels of environmental data can include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
  • the system can write an indication in memory of the selected one of the possible locations in the vicinity of the vehicle as the waypoint for the planned trajectory of the vehicle at the time step.
  • the system can determine control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the series of time steps.
  • Each successive pair of time steps in the series of time steps can represent a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
  • the environmental data can identify, for each of one or more objects detected in the environment of the vehicle, a current occupancy of the object.
  • the current occupancies of the one or more objects detected in the environment can be predicted using a second neural network, e.g., a perception object occupancy prediction neural network.
  • Some implementations of the subject matter disclosed herein include a computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles.
  • the method can include obtaining, by a neural network training system, multiple training data sets.
  • Each training data set can include: (i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps, (ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and (iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps.
  • the neural network training system can train the trajectory planning neural network system on the multiple training data sets, including, for each training data set of the multiple training data sets: processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle; determining an output error using the target output and the set of output scores, and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
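As a concrete illustration of the training step just described, the hedged PyTorch sketch below processes the two training inputs, scores the candidate locations, measures the output error against the target output, and adjusts the parameters by backpropagation. The specification does not name a loss function, so cross-entropy over the location grid is an assumption.

```python
import torch.nn.functional as F

def training_step(model, optimizer, first_input, second_input, target_index):
    """One update on a training data set. `target_index` is the flattened grid
    index of the target waypoint location (an assumed encoding of the target output)."""
    scores = model(first_input, second_input)        # one score per candidate location
    loss = F.cross_entropy(scores, target_index)     # output error vs. the target output
    optimizer.zero_grad()
    loss.backward()                                  # backpropagation of the output error
    optimizer.step()                                 # adjust the current parameter values
    return loss.item()
```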
  • the target output of the training data set can characterize a waypoint that represents an actual location of a human-operated vehicle at a time that corresponds to the second time step.
  • the second training input can characterize navigation data that represents a route that was prescribed for a driver of the human-operated vehicle to follow and that was traversed by the vehicle.
  • the second training input can further characterize environmental data that represents the current state of the environment of the human-operated vehicle.
  • the target output of the training data set can characterize a waypoint that represents a location of a virtual vehicle that was driven in a simulated environment at a time that corresponds to the second time step
  • the second training input can characterize navigation data that represents a route that was prescribed for an automated agent of the virtual vehicle to follow while driving in the simulated environment and that was traversed by the virtual vehicle.
  • the neural network training system can identify that the second subset of the multiple training data sets models driving behavior for one or more specified driving scenarios. In response to identifying that the second subset of the multiple training data sets models behavior for the one or more specified driving scenarios, the neural network training system can select the second subset of the multiple training data sets for inclusion in the multiple training data sets.
  • the one or more specified driving scenarios can include at least one of a lane merge scenario, an unprotected left turn scenario, a lane change scenario, or a collision scenario.
  • Obtaining the multiple training data sets can include: (i) selecting a first subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the first subset of training data sets characterize environmental data from a first set of sensor channels, and (ii) selecting a second subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the second subset of training data sets characterize environmental data from a second set of sensor channels, wherein the second set of sensor channels includes at least one sensor channel that is not included in the first set of sensor channels.
  • the at least one sensor channel that is included in the second set of sensor channels but not in the first set of sensors channels can be a light detection and ranging (LIDAR) sensor channel.
  • Obtaining the multiple training data sets can include oversampling a first subset of training data sets that model driving behavior for one or more specified driving scenarios at a greater frequency than the specified driving scenarios occur in the real-world.
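A minimal sketch of this oversampling idea, assuming each training data set carries a scenario label; the `is_rare_scenario` predicate and the boost factor are hypothetical.

```python
import random

def oversample(training_sets, is_rare_scenario, boost=10):
    """Repeat training sets for specified scenarios (e.g., unprotected left turns)
    so they appear more often than those scenarios occur in the real world."""
    expanded = []
    for data_set in training_sets:
        copies = boost if is_rare_scenario(data_set) else 1
        expanded.extend([data_set] * copies)
    random.shuffle(expanded)
    return expanded
```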
  • Obtaining the multiple training data sets can include: identifying multiple candidate training data sets, filtering the multiple candidate training data sets based on one or more criteria, and selecting to train the trajectory planning neural network system on candidate training data sets that satisfy the one or more criteria, to the exclusion of candidate training data sets that do not satisfy the one or more criteria.
  • Filtering the multiple candidate training data sets based on the one or more criteria can include discarding candidate training data sets that model driving behavior that is determined to violate a legal restriction.
  • the legal restriction is a speed limit.
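The filtering criterion can be sketched in the same way; here the candidate training sets are assumed to expose the recorded speed at each traveled location, and sets that exceed the legal speed limit anywhere are discarded. The field names are hypothetical.

```python
def filter_legal(candidates, speed_limit_at):
    """Keep only candidate training sets whose recorded driving never violates
    the legal restriction (here, the speed limit at each traveled location)."""
    return [
        data_set for data_set in candidates
        if all(speed <= speed_limit_at(location)          # hypothetical structure:
               for location, speed in data_set.speeds)    # (location, speed) pairs
    ]
```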
  • the second training input that characterizes at least one of the environmental data or the navigation data can have been generated by processing at least one of the environmental data or the navigation data using an encoder neural network.
  • the training system can: for each training data set in the group of training data sets, processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set; determining the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
  • the group of training data sets can model driving behavior of a same vehicle over a series of time steps.
  • a neural network system may generate a trajectory for a vehicle that satisfies criteria for vehicle navigation, such as criteria that improve passenger safety and comfort. For example, a planned trajectory for a vehicle may mimic or resemble trajectories that would be taken by human drivers.
  • a trajectory planning neural network system can be used to select waypoints in a planned trajectory for a vehicle.
  • the neural network system may improve the selection of waypoints so that a planned trajectory that results from the selected waypoints meets safety and comfort objectives for passengers in a vehicle.
  • the complexity of the trajectory planning neural network system can be reduced by storing waypoints in a memory that is external to the neural network system.
  • the neural network system can be conditioned on previously selected waypoints in a planned trajectory by processing an input that represents the previously selected waypoints, rather than maintaining such information in internal memory of the neural network system.
  • external memory can thus reduce the size and complexity of the neural network system as compared to other approaches, and may also reduce the complexity of training the neural network system and the computational expense required to determine waypoints for a planned trajectory.
  • An external memory is also beneficial in preventing the neural network system from losing memory of previously selected waypoints over time. Even the most sophisticated and well-trained recurrent neural networks (RNNs), such as LSTM networks, are prone to losing memory over time.
  • neural network systems with external memory tend to generalize better than RNNs to processing longer sequences.
  • Figure 1 is a block diagram of an example computing environment of an autonomous or semi-autonomous vehicle.
  • Figures 2A-2D conceptually illustrate example techniques for selecting waypoints in a planned trajectory for a vehicle.
  • Figure 3 shows a set of images representing example environmental data and navigation data that may be processed by a trajectory planning neural network system.
  • Figure 4 is a flowchart of an example process for using a neural network system to determine waypoints for a planned trajectory of a vehicle.
  • Figure 5 is a flowchart of an example process for updating planned trajectories of a vehicle and transitioning control of the vehicle between planned trajectories.
  • Figure 6 is a block diagram depicting an environment of a neural network system for predicting movements of perception objects in the vicinity of an autonomous or semi-autonomous vehicle.
  • Figures 7A-7B conceptually illustrate example techniques for predicting occupancies (e.g., locations, headings) of perception objects in the vicinity of a vehicle.
  • Figure 8 is a conceptual diagram of an example environment of a neural network training system that trains a trajectory planning neural network system.
  • Figure 9 is a flowchart of an example process for training a trajectory planning neural network system.
  • Figure 10 is a diagram of losses in a training scheme for training a trajectory planning neural network and other neural networks described herein.
  • Figure 11 depicts an example road graph and a corresponding road mask that can be processed by a trajectory planning neural network to determine waypoints for a planned trajectory of a vehicle.
  • Figure 12 conceptually illustrates perturbation of vehicle locations in training data for the trajectory planning neural network.
  • FIG. 1 depicts a block diagram of an example computing environment 100 of an autonomous or semi-autonomous vehicle.
  • the environment 100 includes various systems to provide automated or semi-automated control of a vehicle, such as an automobile (e.g., a street vehicle), a motorcycle, or a watercraft.
  • the vehicle can be a physical vehicle in a real-world environment or a virtual vehicle in a simulated environment.
  • systems 102, 114, 126, and 128 can be implemented as computer programs on one or more computers.
  • the computers for these systems, along with the external memory 120, can be physically installed on a vehicle so that the systems are arranged to travel along with the vehicle and to perform local processing for generating and executing planned trajectories.
  • one or more computers may be located off-board the vehicle, and the vehicle can communicate with these computers over a network (e.g., the Internet).
  • the environment 100 includes a neural network system 102, a trajectory management system 114, an external memory 120, a navigation planning system 126, and a vehicle control system 128.
  • the neural network system 102 is configured to process input data components 108, 110, and 112, to score a set of locations in a vicinity of a vehicle.
  • the score for a given location can represent a likelihood that the location is an optimal location for inclusion as a waypoint in a planned trajectory for the vehicle.
  • the trajectory management system 114 processes the scores from the neural network system 102 to select one of the locations as a waypoint for a given time step in the planned trajectory.
  • the trajectory management system 114 also interfaces with external memory 120, which stores information about previous locations of the vehicle, e.g., locations previously traveled by the vehicle, locations previously selected for a planned trajectory (i.e., planned waypoints), or both. Similar operations can be performed over a series of time steps to iteratively select waypoints that collectively define a planned trajectory for the vehicle.
  • the neural network system 102 and the trajectory management system 114 are configured to interact with a navigation planning system 126, which generates navigation data 112 representing a planned navigation route of the vehicle.
  • the planned navigation route can indicate one or more goal locations of the vehicle's travel and, optionally, a targeted path to arrive at the goal locations.
  • the neural network system 102, in coordination with the trajectory management system 114, is configured to generate a planned trajectory for a vehicle by determining, at each of a series of time steps, a waypoint for the planned trajectory at the time step.
  • the system 102 processes neural network inputs that include waypoint data 108, environmental data 110, and navigation data 112 to generate a set of scores that each correspond to a different location in a vicinity of the vehicle.
  • Each score can indicate a likelihood that its corresponding location is an optimal location for the vehicle to travel to at the current time step of the planned trajectory.
  • each score can represent how well its corresponding location optimizes criteria for vehicle operation, such as adherence to a planned route for the vehicle, passenger safety, and comfort.
  • a waypoint selector 116 of the trajectory management system 114 can then select one of the scored locations in the vicinity of the vehicle as a waypoint for the planned trajectory at the current time step based on the set of scores generated by the neural network system 102, e.g., by selecting the location corresponding to the highest score in the set of scores.
  • the inputs and outputs of the neural network system can be represented by an image, where each pixel in the image corresponds to a respective location of the environment in the vicinity of the vehicle.
  • the image thus frames a window that at least partially surrounds the vehicle, where pixels nearer the outer edges of the image correspond to locations of the environment that are farther from the vehicle and pixels toward a center or other pre-defined location of the image correspond to locations of the environment that are nearer the vehicle.
  • the image can have multiple channels, such that each pixel defines multiple attributes of the environment at its corresponding location. For example, a first channel may represent a road graph, a second channel may represent a route, and a third channel may represent the presence or absence of perception objects.
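A hedged sketch of such a multi-channel input image, with the three example channels just named and an assumed 128×128 grid; in practice the masks would be rendered from the roadgraph, the planned route, and the perception system rather than initialized to zeros.

```python
import numpy as np

H = W = 128                                           # assumed grid; one pixel per location
roadgraph_mask = np.zeros((H, W), dtype=np.float32)   # placeholder channel contents; in
route_mask = np.zeros((H, W), dtype=np.float32)       # practice rendered from map, route,
objects_mask = np.zeros((H, W), dtype=np.float32)     # and perception data

# Stack the channels so each pixel defines multiple attributes of its location.
image = np.stack([roadgraph_mask, route_mask, objects_mask])  # shape (3, H, W)
# The vehicle occupies a pre-defined pixel (e.g., the center); pixels nearer the
# image edges correspond to environment locations farther from the vehicle.
```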
  • the neural network system 102 processes a neural network input that includes waypoint data 108.
  • the waypoint data 108 identifies a set of previous locations of the vehicle before the current time step.
  • the previous locations identified by the waypoint data 108 can be previously traveled locations of the vehicle before the current time step (i.e., actual locations at which the vehicle was recently located), planned locations of the vehicle before the current time step (i.e., waypoints in the planned trajectory that have already been generated (predicted) at time steps before the current time step), or both.
  • the neural network system 102 may take part in generating a planned trajectory for a vehicle that includes 20 waypoints, where each waypoint in the planned trajectory represents a planned location of the vehicle at a respective time step in a series of time steps (i.e., one time step for each waypoint).
  • all of the locations identified by the waypoint data 108 may be previously traveled locations at which the vehicle was actually driven at one or more time steps before t1.
  • the waypoint data 108 may identify each of the waypoints (i.e., planned locations) from t1 through the most recent time step that immediately precedes the current time step. For instance, at time step t9, the waypoint data 108 may identify each of the planned locations of the vehicle from t1 through t8.
  • the waypoint data 108 at a given time step after the initial time step t1 can further include indications of one or more actual locations at which the vehicle was observed before t1.
  • a set of previously traveled locations indicated by the waypoint data 108 at time step t1 can be maintained at each time step following t1.
  • the planned location that was predicted at the immediately preceding time step is added to the waypoint data 108, but none of the traveled locations from time steps before t1 are removed from the waypoint data 108.
  • all or some of the previously traveled locations from before t1 can be removed from the waypoint data 108 for time steps after the initial time step t1 in a series of time steps for a planned trajectory.
  • the systems may adopt a sliding window approach in which traveled locations of the vehicle from time steps before t1 are gradually phased out of the waypoint data 108 by removing the oldest remaining traveled location each time a new planned location is added to the waypoint data 108.
  • the traveled locations indicated by the waypoint data 108 may all be removed from the waypoint data 108 immediately after t1 or at another specified time step.
  • the real-world time interval represented by each successive time step in a series of time steps for a planned trajectory is fixed. For example, if the real-world time interval between successive time steps is 100 milliseconds, then a planned trajectory defined by 20 waypoints would represent planned locations of a vehicle over the next two seconds at each 100 millisecond interval. In some implementations, the real-world time interval between successive time steps in a planned trajectory is in the range 100 milliseconds to 500 milliseconds, and can preferably be in the range 100 milliseconds to 200 milliseconds.
  • the neural network system 102 can further process a neural network input that includes environmental data 110 to generate scores for the possible waypoint locations at each time step.
  • environmental data 110 is data that characterizes a current state of an environment of the vehicle.
  • the environmental data 110 can characterize a wide range of environmental factors, and in some implementations, the data 110 can include multiple data channels that each represent a different environmental factor.
  • the environmental data 110 includes sensor data that characterizes information about the vehicle's current environment captured by one or more sensing subsystems on the vehicle.
  • An autonomous or semi-autonomous vehicle may include multiple sensing subsystems for sensing information about the environment in proximity of the vehicle.
  • a first sensing subsystem may be a light detection and ranging (LIDAR) system that emits and detects reflections of laser light
  • a second sensing subsystem may be a radio detection and ranging (RADAR) system that emits and detects reflections of radio waves.
  • Additional sensing subsystems may also be provided on a vehicle, such as a camera system that detects reflections of visible light.
  • the sensing subsystems can generate sensor data that indicates, for example, a distance of reflected radiation (e.g., laser light, radio waves, or visible light), a direction of the reflected radiation, an intensity of the reflected radiation, or a combination of these.
  • a given sensing subsystem can transmit one or more pulses of electromagnetic radiation in a particular direction and can measure the intensity of any reflections as well as the elapsed time between emitting the radiation and receiving the reflective signal.
  • a distance between an object in the environment and the current position of the vehicle can be determined based on the elapsed time between emitting the radiation and receiving the reflective signal.
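The range computation follows directly from the round trip of the pulse: distance = (speed of light × elapsed time) / 2. A small sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0          # m/s

def range_from_time_of_flight(elapsed_seconds):
    """Distance between the vehicle and a reflecting object, from the elapsed time
    between emitting the radiation and receiving the reflective signal."""
    return SPEED_OF_LIGHT * elapsed_seconds / 2.0   # halve: the pulse travels out and back

# Example: a reflection received 200 nanoseconds after emission is ~30 m away.
print(range_from_time_of_flight(200e-9))            # ≈ 29.98
```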
  • the sensing subsystems can each continually sweep a particular space in angle, azimuth, or both.
  • the environmental data 110 includes a sensor input image, where the input image is a 2D projection of sensor data for a partial sweep, a single sweep, or multiple sweeps of one or more sensing subsystems of the vehicle.
  • a LIDAR input image for example, may be formed by generating a 2D projection of a LIDAR sweep around a vehicle.
  • the sensor input image may characterize sweeps from multiple sensing subsystems.
  • Figure 3 shows an example LIDAR input image 310.
  • the environmental data 110 may further include one or more non-sensor channels, i.e., channels that characterize information about the environment in a vicinity of the vehicle obtained from sources other than the vehicle's sensing systems.
  • the environmental data 110 includes a roadgraph channel for data that describes roads in the vicinity of the vehicle.
  • Figure 3 shows an image 302 representing data from an example roadgraph channel.
  • Roadgraph data can indicate the path of roads in the vicinity of the vehicle, lane boundaries, and other features of roads, parking lots, or other driving surfaces in the vicinity of the vehicle.
  • the environmental data 110 includes a speed limit channel.
  • the speed limit channel indicates the speed limits that apply at each location of the roads in the vicinity of the vehicle (e.g., roads identified by the roadgraph data).
  • the speed limits indicated by the speed limit channel may be a legal speed limit set by a governing authority in the location of the vehicle.
  • the speed limits indicated by the speed limit channel may be modified from the legal speed limit according to certain criteria.
  • the speed limit may be capped based on a passenger preference, road conditions (e.g., slick or icy roads), or based on the road feature at issue (e.g., ramps, intersections, and sharp turns may be assigned speed limits that are lower than a posted limit).
  • Figure 3 shows an example image 304 representing data from an example speed limit channel.
  • the environmental data 110 includes a traffic lights channel.
  • the traffic lights channel indicates the locations of traffic lights in the vicinity of the vehicle (e.g., traffic lights for roads identified by the roadgraph data).
  • the traffic lights channel can further include an indication of the current state of each traffic light, such as green, yellow, or red.
  • Figure 3 shows an example image 308 representing data from an example traffic lights channel. The status of each light can be encoded by varying the intensity and/or color of pixels in the input image corresponding to the roadway (e.g., lane) controlled by that light.
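One plausible realization of the traffic lights channel, consistent with the intensity-coding remark above; the specific intensity values and the lane-pixel representation are assumptions.

```python
import numpy as np

STATE_INTENSITY = {"green": 0.33, "yellow": 0.66, "red": 1.0}   # assumed coding

def paint_traffic_light(channel, lane_pixels, state):
    """Mark the pixels of the lane controlled by a traffic light with an
    intensity that encodes the light's current state."""
    for row, col in lane_pixels:
        channel[row, col] = STATE_INTENSITY[state]
    return channel

lights_channel = np.zeros((128, 128), dtype=np.float32)
paint_traffic_light(lights_channel, [(60, 64), (61, 64)], "red")
```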
  • the environmental data 110 includes one or more perception objects channels.
  • Perception objects are objects that have been detected in the vicinity of the vehicle.
  • a perception objects neural network system (not shown in Figure 1) may process data from one or more sensor channels to identify objects located nearby the vehicle.
  • Perception objects may include other vehicles, pedestrians, vegetation, signposts, buildings, or combinations of these and other types of objects.
  • the perception objects channel includes data that indicates the locations, sizes, orientations, and/or types of perception objects in the vicinity of the vehicle.
  • Information from the perception objects channel can indicate locations to avoid in a planned trajectory of a vehicle for collision risk mitigation.
  • Figure 3 shows an image 312 representing data from an example perception objects channel.
  • the environmental data 110 can include information about the predicted movements (e.g., trajectories) of identified perception objects in the vicinity of an autonomous or semi-autonomous vehicle.
  • the neural network system 102 is further configured to process a neural network input that includes navigation data 112.
  • the navigation data 112 represents a planned navigation route for a vehicle.
  • the planned navigation route may be generated independent of actual conditions of the vehicle or its environment. Accordingly, the planned navigation route is generally insufficient to indicate how the vehicle should be maneuvered along the route.
  • the precise locations and movements for a vehicle traveling a route are instead indicated by the vehicle's planned trajectory.
  • the planned trajectory accounts for the navigation route but also considers information about the vehicle's current environment such as objects located near the vehicle, precise lane boundaries, and traffic controls.
  • the navigation data 112 may indicate that the planned route for a vehicle runs through an intersection, but may not indicate current conditions of the intersection such as the presence of other vehicles, lane closures, or the status of a traffic light in the intersection.
  • the planned trajectory in contrast, may be generated so that the vehicle crosses the intersection in a safe, legal, and comfortable manner.
  • the planned trajectory may indicate precise locations for the vehicle to follow in order to maneuver around an object, to change lanes, and to comply with applicable traffic laws, while maintaining the overall course of the planned navigation route.
  • the navigation data 112 specifies a portion of a navigation route that is within the current context of the vehicle.
  • the current context of the vehicle represents an area surrounding the vehicle (in vicinity of the vehicle) that is represented in the roadgraph and other data channels processed by the neural network system 102.
  • the current context of the vehicle may be an area between 50-200 feet surrounding the vehicle.
  • the navigation data 112 may indicate the route the vehicle should follow within this area surrounding the vehicle. If the vehicle is near its destination, the current context may include the final destination of the planned navigation route.
  • the planned navigation route may traverse lane centers of navigable lanes within the current context of the vehicle. In contrast, the planned trajectory may call for deviations from the lane centers, e.g., to avoid an object or to safely perform a maneuver.
  • the neural network system 102 processes at each time step the neural network inputs that include waypoint data 108, environmental data 110, and navigation data 112 to generate the set of scores for possible waypoint locations in a planned trajectory of the vehicle.
  • the neural network system 102 may include an encoder neural network 106 as a first portion of the neural network system 102 and a trajectory planning neural network 104 as a second portion of the neural network system 102.
  • the encoder neural network 106 receives neural network inputs for the environmental data 110 and navigation data 112, and processes the inputs to generate an encoded representation 107 of the environmental data 110 and the navigation data 112.
  • the encoded representation 107 is a vector of values from a last hidden layer of the encoder neural network 106.
  • the encoder neural network 106 can be a feed-forward neural network.
  • the trajectory planning neural network 104 is configured to receive as inputs waypoint data 108, and the encoded representation 107 of the environmental data 110 and the navigation data 112.
  • the trajectory planning neural network 104 is further configured to generate, using the waypoint data 108 and the encoded representation 107 (or, in other implementations, using the environmental data 110 and navigation data 112 directly rather than the encoded representation 107), and in accordance with trained values of parameters of the neural network 104, a set of scores that each indicate a likelihood of a particular location being a "best" (e.g., most optimal) location for a waypoint along a planned trajectory of the vehicle at a current time step.
  • the trajectory planning neural network 104 can be a feedforward neural network.
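The specification fixes only the broad shape of this architecture: a feed-forward encoder whose last hidden layer yields the encoded representation 107, followed by a feedforward planner that scores one location per pixel. The PyTorch sketch below is one way to realize it; all layer sizes, the convolutional structure, and the channel count are assumptions.

```python
import torch
import torch.nn as nn

class EncoderNet(nn.Module):
    """Feed-forward encoder (cf. encoder neural network 106): maps the stacked
    environment and navigation channels to a vector from its last hidden layer."""
    def __init__(self, in_channels=5, embed_dim=256, grid=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * (grid // 4) ** 2, embed_dim)

    def forward(self, x):                        # x: (batch, in_channels, grid, grid)
        return self.fc(self.conv(x).flatten(1))  # encoded representation

class TrajectoryPlanningNet(nn.Module):
    """Feedforward planner (cf. network 104): waypoint data plus the encoded
    representation in, one score per candidate location out."""
    def __init__(self, embed_dim=256, grid=128):
        super().__init__()
        self.waypoint_fc = nn.Linear(grid * grid, embed_dim)  # flattened waypoint image
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 512), nn.ReLU(),
            nn.Linear(512, grid * grid),                      # score per location/pixel
        )

    def forward(self, waypoint_image, encoded):
        w = self.waypoint_fc(waypoint_image.flatten(1))
        return self.head(torch.cat([w, encoded], dim=1))
```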
  • the trajectory planning neural network 104 can further process an input that identifies a count or index for the prediction (e.g., an indication of the current time step for which the prediction is to be made or of the most recent prediction).
  • an index n can be provided as input for the network 104 to predict the first waypoint
  • an index n+1 can be provided as input for the network 104 to predict the second waypoint
  • an index n+2 can be provided as input for the network 104 to predict the third waypoint, and so on.
  • the inclusion of a count or index for the sequence of waypoints as an input to the network 104 allows the network 104 to account for temporal factors in generating prediction scores for the next waypoint.
  • the location of the vehicle may remain unchanged for several time steps.
  • the count or index provides the network 104 with an indication of how long the vehicle has been stopped, thereby allowing the network 104 to account for this length in determining whether to advance the vehicle at a next time step or to leave the vehicle in place at the next time step.
  • the network 104 may not advance the vehicle at an earlier time step, but may advance the vehicle at a later time step indicated by the count or index after establishing that the vehicle has stopped.
  • the count or index can also be processed by the network 104 or an auxiliary network 1002 to facilitate refinement of a coarse-resolution waypoint to a fine-resolution waypoint. For example, if the predicted time step exiting the stop is not far enough to switch to the next coarse-resolution pixel/location, the vehicle may not move; but if the movement is sufficient to switch to the next fine-resolution pixel/location, the vehicle may move appropriately.
  • Using the index allows the network 104 to selectively predict farther into the future.
  • a planned trajectory for a vehicle typically includes a collection of waypoints representing planned locations for the vehicle at each of a series of time steps
  • the neural network system 102 may not itself include memory to store information about which waypoints have been selected or traveled at preceding time steps.
  • the environment 100 therefore includes a trajectory management system 114 and an external memory 120. These components 114 and 120 are configured to store indications of waypoints from preceding time steps and to update the waypoint data 108 that will be processed by the neural network system 102 at each time step based on the results of previous time steps.
  • the trajectory management system 114 includes a waypoint selector 116 and a memory interface subsystem 118.
  • the waypoint selector 116 receives the set of scores for possible waypoint locations generated by the trajectory planning neural network 104 for the time step.
  • the waypoint selector 116 then applies one or more criteria to select a particular location from the set of possible locations as the waypoint of the planned trajectory for the current time step. For example, if the trajectory planning neural network 104 is trained to generate higher scores for locations having higher likelihoods of being an optimal waypoint for a trajectory, then the waypoint selector 116 may select the location corresponding to the highest score in the set as the waypoint for the current time step.
  • the memory interface subsystem 118 then sends an instruction to the external memory 120 to cause an indication of the selected waypoint location to be written to the external memory 120.
  • the external memory 120 includes one or more computer-readable storage devices that store data indicating locations of a vehicle.
  • the memory 120 can include a first database 122 that stores data indicating previously traveled locations of the vehicle and a second database 124 that stores data indicating waypoints of a planned trajectory of the vehicle.
  • the previously traveled locations of the vehicle can be determined based on global positioning system (GPS) coordinates or other techniques for measuring the actual location of the vehicle while it is driven.
  • the traveled locations stored in the first database 122 can include locations for each of at least n time steps preceding an initial time step at which the neural network system 102 and trajectory management system 114 begin generating a planned trajectory of the vehicle, where n is the number of previously traveled locations represented in the waypoint data 108 at the initial time step.
  • the second database 124 can store data indicating each waypoint selected for each time step of the planned trajectory up to the current time step being processed by the neural network system 102 and the trajectory management system 114.
  • the second database 124 can create a table for a planned trajectory that stores the location of each selected waypoint in the trajectory and sequence information that sequences the waypoints relative to each other (e.g., values for each waypoint that represent the corresponding time step for the waypoint).
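A minimal stand-in for this external memory, with two stores corresponding to databases 122 and 124 and the sequencing value kept alongside each waypoint; the class and field names are illustrative, not from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    time_step: int      # sequences the waypoints relative to each other
    row: int            # grid location selected for this time step
    col: int

@dataclass
class TrajectoryMemory:
    traveled: list = field(default_factory=list)   # cf. first database 122
    planned: list = field(default_factory=list)    # cf. second database 124

    def write_waypoint(self, time_step, row, col):
        """Record the waypoint selected for a time step of the planned trajectory."""
        self.planned.append(Waypoint(time_step, row, col))

    def recent_locations(self, n):
        """Previous locations to characterize in the next first neural network input."""
        return (self.traveled + self.planned)[-n:]
```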
  • the waypoint selector 116 selects a particular location as the waypoint for that time step, and the memory interface subsystem 118 then writes an indication of the selected location as the waypoint for that time step in the second database 124.
  • the trajectory management system 114 generates waypoint data 108 for the neural network system 102 to process at the time step.
  • the system 114 generates waypoint data 108 by using the memory interface subsystem 118 to access the external memory 120 and reading data indicating one or more previous locations of the vehicle for one or more time steps preceding the current time step.
  • the previous locations can include traveled locations from the first database 122, planned waypoints from the second database 124, or a combination of traveled locations and planned waypoints over a recent period of time.
  • the navigation planning system 126 is configured to determine a planned navigation route for a vehicle.
  • the planned navigation route can indicate one or more goal locations of the vehicle's travel and, optionally, a targeted path to arrive at the goal locations.
  • Figure 3 shows an image 306 representing an example planned navigation route for a vehicle.
  • the neural network system 102 can process navigation data 112 that characterizes a planned navigation route generated by the planning system 126.
  • the planning system 126 also controls an operational mode of a vehicle. For example, during a first mode for normal autonomous driving, the planning system 126 activates the trajectory management system 114 to use planned trajectories determined using the neural network system 102 as the basis for maneuvering the vehicle. However, during some rare situations, the planning system 126 may override the trajectory management system 114 and activate a second operational mode of the vehicle that adopts a different strategy for maneuvering the vehicle.
  • the planning system 126 may also allow a human operator to take manual control of the vehicle.
  • control actions can include steering, braking, and accelerating, for example.
  • a vehicle can include a vehicle control system 128 that facilitates an ability of the vehicle to maneuver along a planned trajectory.
  • the vehicle control system 128 can receive data indicating a planned trajectory (including data indicating a set of waypoints for a series of time steps) from trajectory management system 114 directly, or indirectly via the navigation planning system 126.
  • the vehicle control system 128 processes the planned trajectory and determines a set of control actions (e.g., steering, braking, accelerations) to perform over time to cause the vehicle to maneuver along the planned trajectory.
  • the vehicle control system 128 determines control inputs to one or more control modules of the vehicle such as a steering module 130, braking module 132, and acceleration module 134. Based on such inputs, the control modules may carry out the set of control actions determined by the vehicle control system 128 so that the vehicle is maneuvered along the planned trajectory.
  • the waypoints of a planned trajectory may be provided as input to a finite-time Linear Quadratic Regulator (LQR) based optimizer to translate the planned trajectory into actual vehicle control inputs to cause the vehicle to maneuver along the planned trajectory.
  • the optimizer may adhere to one or more additional constraints beyond the waypoints of the planned trajectory in determining control inputs, such as maximum allowable vehicle speed, vehicle turning radius, and/or maximum curvature to generate control inputs for maneuvers that can be legally and feasibly performed by the vehicle.
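The passage above describes translating waypoints into control inputs with a finite-time LQR-based optimizer. The sketch below shows the general shape of a finite-horizon LQR tracker under a deliberately simple assumption: a discrete double-integrator model of a single planar axis. The model, cost weights, and acceleration clamp are illustrative assumptions, not the optimizer the specification actually uses.

    import numpy as np

    def finite_horizon_lqr(A, B, Q, R, horizon):
        """Backward Riccati recursion; returns the time-varying feedback gains."""
        P = Q.copy()
        gains = []
        for _ in range(horizon):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return gains[::-1]

    # Hypothetical double-integrator model of one planar axis: state = [pos, vel].
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt ** 2], [dt]])
    Q = np.diag([10.0, 1.0])  # penalize deviation from the waypoint reference
    R = np.array([[0.1]])     # penalize control effort (acceleration)

    gains = finite_horizon_lqr(A, B, Q, R, horizon=40)
    state = np.array([0.0, 0.0])
    waypoint = np.array([5.0, 0.0])  # target position with zero target velocity
    for K in gains:
        u = -K @ (state - waypoint)   # feedback toward the waypoint reference
        u = np.clip(u, -3.0, 3.0)     # e.g., a maximum-acceleration constraint
        state = A @ state + B @ u

Additional constraints such as maximum speed or curvature would enter as further penalties or clamps like the one shown.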
  • Referring to Figures 2A-2D, an example process is depicted for selecting waypoints in a planned trajectory for a vehicle.
  • Figure 2A illustrates selection of a first waypoint of the planned trajectory at an initial time step t1.
  • the trajectory planning neural network 104 processes a first neural network input 202a and a second neural network input 204a.
  • the first neural network input 202a represents waypoint data, e.g., waypoint data 108, and indicates a set of previous locations of a vehicle.
  • the set of previous locations indicated by the waypoint data is a set of previous locations actually traveled by the vehicle before t1.
  • the previously traveled locations are represented in Figure 2A as circular dots in the image of the first neural network input 202a.
  • the second neural network input represents environmental data, e.g., environmental data 110, and navigation data, e.g., navigation data 112.
  • the second neural network input is an encoded representation of environment and navigation data, e.g., encoded representation 107, which results from an encoder neural network, e.g., encoder neural network 106, processing multiple channels of environment and navigation data.
  • the trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step.
  • the selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle.
  • Image 206a shows the first waypoint of the planned trajectory for time step t1 as the triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
  • Figure 2B illustrates selection of a second waypoint of the planned trajectory at a second time step t2.
  • the first neural network input 202b is updated to represent waypoint data that includes the selected waypoint for the planned trajectory from the preceding time step t1.
  • the first neural network input 202b maintains indications of the set of previously traveled locations of the vehicle from before t1, but in other implementations, one or more (or all) of the previously traveled locations can be removed from the waypoint data at time step t2 and subsequent time steps.
  • the second neural network input 204b represents environment and navigation data.
  • the second neural network input is updated at each time step to reflect any recent changes to the environment or navigation data since the previous time step.
  • the same second neural network input is re-used at each time step, e.g., for efficiency and/or because more updated data may not be available.
  • the planned trajectory for a vehicle may be generated in such a short span of time that the environment and navigation data used at each of the time steps is still deemed current.
  • the trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t2.
  • the selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle.
  • Image 206b shows the second waypoint of the planned trajectory for time step t2 as the second triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
  • Figure 2C illustrates selection of a third waypoint of the planned trajectory at a third time step t3.
  • the first neural network input 202c is updated to represent waypoint data that includes the selected waypoints for the planned trajectory from the preceding time steps t1 and t2.
  • the trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t3.
  • the selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle.
  • Image 206c shows the third waypoint of the planned trajectory for time step t3 as the third triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
  • Figure 2D illustrates selection of a fourth waypoint of the planned trajectory at a fourth time step t4.
  • the first neural network input 202d is updated to represent waypoint data that includes the selected waypoints for the planned trajectory from the preceding time steps t1 through t3.
  • the trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t4.
  • the selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle.
  • Image 206d shows the fourth waypoint of the planned trajectory for time step t4 as the fourth triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
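Figures 2A-2D describe an autoregressive loop: score candidate locations, select a waypoint, append it to the input, and repeat. A compact sketch of that loop follows; the callables and the flat list representation of waypoint data are simplifying assumptions (the actual system renders image-like inputs), and `toy_network` exists only to make the example runnable.

    import numpy as np

    def plan_trajectory(network, encoded_env, initial_locations, num_waypoints=10):
        """`network` stands in for trajectory planning neural network 104: it maps
        (waypoint history, encoded environment) to one score per candidate cell."""
        history = list(initial_locations)  # previously traveled locations
        planned = []
        for _ in range(num_waypoints):     # one time step per iteration
            scores = network(history, encoded_env)  # scores over candidate cells
            best = int(np.argmax(scores))           # waypoint selector 116
            planned.append(best)
            history.append(best)           # feed the selection back in as input
        return planned

    # Toy stand-in network: always scores the next cell along the trail highest.
    def toy_network(history, encoded_env, num_cells=100):
        scores = np.zeros(num_cells)
        scores[min(history[-1] + 1, num_cells - 1)] = 1.0
        return scores

    print(plan_trajectory(toy_network, encoded_env=None, initial_locations=[3, 4, 5]))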
  • Figure 4 is a flowchart of an example process 400 of using a neural network system to determine waypoints for a planned trajectory for a vehicle.
  • a trajectory management system obtains waypoint data, environmental data, and navigation data for a current time step in a series of time steps of a planned trajectory for a vehicle.
  • the waypoint data, e.g., waypoint data 108, identifies a set of previous locations of the vehicle, which may include previously traveled locations of the vehicle, planned locations of the vehicle (i.e., waypoints of the planned trajectory from preceding time steps), or both traveled locations and planned locations of the vehicle.
  • the environmental data, e.g., environmental data 110, represents a current state of the environment of the vehicle such as objects in proximity to the vehicle, a roadgraph, and applicable traffic rules.
  • the navigation data represents a planned navigation route for the vehicle.
  • the trajectory management system generates a first neural network input from the waypoint data.
  • the first neural network input characterizes the waypoint data in a format that is suitable for a neural network system, e.g., neural network system 102, to process.
  • one or more encoder neural networks generate a second neural network input from the environmental data and the navigation data.
  • the trajectory management system may generate inputs to the encoder neural networks that characterize the environmental data and the navigation data in a format that is suitable for processing by the encoder neural networks.
  • the encoder neural networks may process the formatted inputs to generate an encoded representation of the environmental data and the navigation data, e.g., encoded representation 107.
  • the neural network system, e.g., trajectory planning neural network 104, processes the first and second neural network inputs to generate scores for a set of locations to which the vehicle may travel.
  • each score represents a likelihood that a particular location is an optimal location as a waypoint for the vehicle's planned trajectory at the current time step, where each score corresponds to a different location from the set of locations.
  • the total number of scored locations is in the range 1-10, 1-100, or 1-1,000, and the physical distance in the real world between adjacent locations is in the range 1-12 inches, 1-48 inches, or otherwise, depending on the resolution of the input data.
  • a trajectory management system, e.g., waypoint selector 116 of trajectory management system 114, selects a waypoint for the planned trajectory at the current time step.
  • the waypoint can be selected based on the set of scores generated by the trajectory planning neural network.
  • the waypoint selector selects a location as the waypoint for the current time step as a result of the score for the selected location indicating that it is the optimal waypoint location among the set of possible locations (e.g., the location with the highest score).
  • a memory interface of the trajectory management system, e.g., memory interface subsystem 118, writes to memory an indication of the selected location as the waypoint in the planned trajectory of the vehicle for the current time step.
  • the indication of the selected location is recorded in a database of planned waypoints, e.g., database 124 in external memory 120.
  • the process 400 can be repeated for each of a series of time steps until a terminating condition is met.
  • the terminating condition is that a pre-defined number of waypoints have been selected for a planned trajectory.
  • the trajectory management system 114 may be configured to generate planned trajectories of fixed size in terms of the number of waypoints in the planned trajectories.
  • once that number of waypoints has been selected, the trajectory is deemed complete and the systems cease repeating process 400 to determine additional waypoints.
  • the number of waypoints for a planned trajectory is in the range 5 to 15.
  • the number of waypoints for a planned trajectory may be adjusted based on current conditions of the vehicle, vehicle speed, the complexity of the environment in the vicinity of the vehicle, or combinations of these and other factors.
  • the waypoint data is updated to add the most recently selected waypoint, so that each previously selected waypoint since an initial time step is represented in the waypoint data for a current time step.
  • the trajectory management system 114 and neural network system 102 may constantly update the planned trajectory for a vehicle. That is, the systems can begin generating a new trajectory using actual vehicle location information before the most-recently generated planned trajectory has been followed to its conclusion. For example, the systems may generate trajectories that plan the locations for a vehicle for four seconds out from the current time at which a given trajectory is created. The vehicle may follow a first of the four-second trajectories for just a portion of the total length of the trajectory (e.g., 1.5-2 seconds) before it begins following a most recently generated trajectory.
  • the earlier waypoints may be more reliable estimations than waypoints at later time steps in a trajectory. Therefore, by constantly handing control over to a most recently generated trajectory before a previous trajectory has been completely traveled, the vehicle may more frequently be maneuvered according to earlier waypoints in the planned trajectories, while still having additional waypoints available as a buffer if a next trajectory is not yet available.
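The replanning scheme described above amounts to a receding-horizon loop: plan several seconds ahead, follow only the early portion, then hand control to a fresh plan. A minimal sketch, with `plan_fn` and `execute_fn` as hypothetical stand-ins for the planning systems and the vehicle control system:

    def receding_horizon_drive(plan_fn, execute_fn, num_cycles: int,
                               horizon_s: float = 4.0, replan_s: float = 1.5):
        """Each trajectory plans horizon_s seconds ahead; the vehicle follows it
        for only replan_s seconds before switching to a newer trajectory, so the
        later, less reliable waypoints serve mainly as a buffer."""
        trajectory = plan_fn(horizon_s)
        for _ in range(num_cycles):
            execute_fn(trajectory, replan_s)  # follow the early, reliable portion
            trajectory = plan_fn(horizon_s)   # replan from the current state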
  • FIG. 5 is a flowchart of an example process 500 for updating planned trajectories of a vehicle and transitioning control of the vehicle from an earlier generated trajectory to a more recently generated trajectory.
  • at stage 502, the system generates a first set of waypoints that define a first planned trajectory.
  • a vehicle control system on the vehicle determines a set of control actions to maneuver the vehicle according to the first planned trajectory, and the vehicle executes the set of control actions to begin following the first planned trajectory at stage 504.
  • while maneuvering according to the first planned trajectory, at stage 506 the vehicle computing systems generate a second set of waypoints defining a second planned trajectory for the vehicle.
  • the first time step in the second planned trajectory may correspond in real-world time to a particular time step partway through the first planned trajectory.
  • the vehicle may terminate the first planned trajectory before it has traveled to all of its waypoints (stage 508) and begin to maneuver according to the second planned trajectory (stage 510).
  • Process 500 may be continuously repeated to generate updated trajectories for the vehicle and to transition to most recently updated trajectories as they become available.
  • Figure 6 depicts a block diagram of an example computing system 600 that predicts movements (e.g., trajectories) of perception objects within a vicinity of an autonomous or semi-autonomous vehicle.
  • the system 600 and each component thereof, can be implemented on one or more computers in one or more locations. In some instances, the system 600 is fully implemented on the autonomous or semi-autonomous vehicle. By implementing the system 600 locally on the vehicle, the system 600 can minimize latency in generating predictions and providing them to a trajectory planning and control system of the vehicle.
  • the system 600 is configured to process information about perception objects that have been detected near the autonomous or semi-autonomous vehicle, and to predict occupancies of the objects over a set of future time steps.
  • the "occupancy" of an object at a given time step refers to the specific area within the environment of the autonomous or semi-autonomous vehicle that is predicted to be occupied by the object at that time step.
  • an object can be represented by a bounding box (e.g., a rectangular, square, circular, triangular, or oval-shaped box) that approximates the size and shape of the object.
  • the occupancy of the object can be defined by the area of the environment encompassed by the object's bounding box. In some implementations, the occupancy of an object can be represented by the location and heading (e.g., rotation/orientation) of the corresponding bounding box for the object.
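A small data-structure sketch of an occupancy as just described (bounding-box location plus heading) may make the later loss discussion more concrete. The field names and the axis-aligned overlap test are assumptions made for illustration; a production system would use oriented boxes.

    from dataclasses import dataclass

    @dataclass
    class Occupancy:
        """Sketch of one occupancy: where a bounding box sits and how it is
        oriented at a given time step."""
        x: float        # center of the bounding box
        y: float
        length: float
        width: float
        heading: float  # orientation in radians

        def overlaps(self, other: "Occupancy") -> bool:
            """Coarse, heading-ignoring overlap test. Co-occupancy of this kind
            is what the collision loss discussed later penalizes."""
            return (abs(self.x - other.x) * 2 < self.length + other.length and
                    abs(self.y - other.y) * 2 < self.width + other.width)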
  • the system 600 includes a neural network system 602, a trajectory management system 614, and an external memory 620.
  • the operation of the system 600 is similar to that of system 100 (FIG. 1), but rather than generating waypoints for the autonomous or semi-autonomous vehicle itself, the system 600 predicts occupancies for perception objects other than the subject autonomous or semi-autonomous vehicle.
  • the predicted occupancies of the perception objects can then be provided as input to the system 100/trajectory planning neural network 104 (e.g., as a channel of environmental data 110) to facilitate determination of optimal waypoints in a trajectory for the vehicle.
  • the predicted occupancy of a perception object at a given time step can be conditioned on all previous occupancies of the object within a recent window of time steps (e.g., including observed occupancies detected based on sensor data, predicted occupancies, or both), and the current state of the environment.
  • in some implementations, the system 600 predicts occupancies for all or multiple perception objects in the vicinity of the vehicle concurrently. In other implementations, the system 600 predicts occupancies for perception objects one at a time. By way of example, operation of the system 600 is discussed with respect to a single object in the following paragraphs.
  • the neural network system 602 processes occupancy data 608 for an object and environmental data 610 to generate a set of occupancy scores 628.
  • the occupancy scores 628 respectively indicate likelihoods (e.g., probabilities) that, at the time step, the object will have the occupancy represented by the respective score.
  • the occupancy scores 628 include a set of location scores indicating a location of the bounding box for the object and a set of heading scores indicating a heading (e.g., orientation/rotation) of the bounding box.
  • the environmental data 610 can generally include any of the channels of environmental data 110 previously described with respect to Figure 1 such as a roadgraph and sensor data, and is first processed by an encoder neural network 606 to generate an encoded representation 607 of the environmental data 610.
  • the occupancy data 608 indicates previous occupancies of the object. At the initial time step for the first occupancy prediction, the occupancy data 608 indicates observed occupancies of the object over a series of previous time steps. At subsequent time steps, the occupancy data 608 indicates each previously predicted occupancy of the object.
  • the occupancy data 608 at the subsequent time steps can, in some implementations, further include indications of the observed occupancies of the object at the time steps that preceded the predictions.
  • the perception object trajectory prediction neural network 604 processes the occupancy data 608 and the encoded representation 607 of the environmental data 610 to generate occupancy scores 628.
  • the perception object trajectory prediction neural network 604 is a feedforward neural network that is trained using backpropagation and gradient descent, similar to the process 900 described in Figure 9.
  • the trajectory management system 614 includes an occupancy selector 616 and memory interface subsystem 618.
  • the occupancy selector 616 processes the occupancy scores 628 and determines a predicted occupancy for the object based on the scores. In some implementations, the occupancy selector 616 selects a predicted occupancy for the object at the time step as the occupancy corresponding to the most favorable score in the set of occupancy scores 628.
  • the memory interface subsystem 618 records (e.g., stores) the predicted occupancy in the occupancy history database 622 in external memory 620.
  • the memory interface subsystem 618 also accesses data indicating previous occupancies from the occupancy history database 622 and provides them as input occupancy data 608 to the neural network system 602 at each time step.
  • the external memory 620 maintains an index 624 of each perception object being tracked by the system 600, and the occupancy history 622 can include occupancy histories for each object.
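Pulling the pieces of system 600 together, the prediction loop might look roughly like the sketch below; `network`, `selector`, and the `memory` methods are hypothetical stand-ins for neural network 604, occupancy selector 616, and memory interface subsystem 618.

    def predict_object_occupancies(network, selector, memory, object_ids,
                                   encoded_env, num_steps: int):
        """For each tracked object in index 624, read its occupancy history 622,
        score candidate occupancies, select one, and write the prediction back
        so that later time steps condition on it."""
        for _ in range(num_steps):
            for obj_id in object_ids:
                history = memory.read_history(obj_id)    # observed + predicted
                scores = network(history, encoded_env)   # occupancy scores 628
                predicted = selector(scores)             # most favorable score
                memory.append_prediction(obj_id, predicted)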
  • Figures 7A and 7B illustrate an example of perception object trajectory prediction using system 600 (for simplicity, only neural network 604 and occupancy selector 616 are shown).
  • Figure 7A shows the system 600 generating occupancy predictions for a first time step t1.
  • Input 702a represents occupancy data for a pair of perception objects represented by bounding boxes A and B.
  • the perception objects are a pair of vehicles in the vicinity of the autonomous or semi-autonomous vehicle (not shown in input 702a).
  • the input 702a indicates the occupancy (e.g., location and heading) of each object A and B.
  • the initial input 702a can include observed occupancies for each object over multiple previous time steps.
  • the perception object trajectory prediction neural network 604 processes the occupancy data input 702a and encoded environmental data input 704a to generate a set of occupancy scores for each object, and the occupancy selector 616 determines a predicted occupancy for each object based on the occupancy scores.
  • the predicted occupancies of objects A and B for the next time step are shown in output image 706a.
  • FIG. 7B shows the system 600 generating occupancy predictions for objects A and B at the next time step t2.
  • the perception object trajectory prediction neural network 604 processes input occupancy data 702b that indicates the predicted occupancies of the objects A and B from time step t1, and optionally includes the observed occupancies from one or more previous time steps.
  • based on occupancy data 702b and encoded environmental data 704b, the perception object trajectory prediction neural network 604 generates a set of occupancy scores for each object, and the occupancy selector 616 determines a predicted occupancy for each object based on the occupancy scores.
  • the predicted occupancies of objects A and B for time step t2 are shown in output image 706b. As indicated by the output image 706b, object A is predicted to begin a right turn, while object B is predicted to continue traveling toward the bottom of the image.
  • the system 600 can also be trained to predict occupancies of perception objects that have not yet been observed. For example, at time step t2, the system 600 predicts that another vehicle (represented by bounding box object C) is likely to be traveling downward in the same lane as object B. Because object C has not actually been observed, it may never materialize in fact, although the system predicts that another vehicle will follow vehicle B.
  • the autonomous or semi-autonomous vehicle may account for the likelihood of these objects when planning its own trajectory. For example, the autonomous vehicle may avoid crossing object B's lane to make a left turn immediately after object B passes if another vehicle is expected to come down the lane before the autonomous vehicle can complete the turn.
  • FIG. 8 is a conceptual diagram of an example environment 800 for training a trajectory planning neural network system 102.
  • the environment 800 can include a neural network training system 802, a training data repository 804, and the trajectory planning neural network system 102.
  • the neural network training system 802 can include one or more computers in one or more locations.
  • the training system 802 is configured to train the trajectory planning neural network system 102 to score locations to which a vehicle may travel based on how likely each location is to optimize criteria (e.g., minimize loss) for vehicle operation in a planned trajectory for the vehicle.
  • the training system 802 employs supervised machine-learning techniques to train the trajectory planning neural network system 102 using training data from training data repository 804.
  • the training data includes training inputs 808 and target outputs 810. Further details concerning how the trajectory planning neural network system 102 can be trained are described with respect to Figure 9.
  • FIG. 9 depicts a flowchart of an example process 900 for training a trajectory planning neural network system.
  • the neural network system can include an encoder neural network for processing environmental data and navigation data, e.g., encoder neural network 106, and a trajectory planning neural network for scoring locations of possible travel for a vehicle, e.g., trajectory planning neural network 104.
  • the process 900 jointly trains the encoder neural network and the trajectory planning neural network.
  • the process 900 trains the trajectory planning neural network alone, while the encoder neural network is trained in a separate process.
  • the process 900 can be carried out by a neural network training system, e.g., system 802.
  • although the trajectory planning neural network system may be implemented on a vehicle, in some implementations the system is trained offline by a training system that is not located on a vehicle. For example, the neural network system may be trained offline to determine trained values of the parameters of the system. The trained values can then be transmitted to the vehicle to implement a trained neural network system on the vehicle for trajectory planning.
  • the trajectory planning neural network system can be trained by processing many samples of training data using the trajectory planning neural network system and, for each sample, adjusting the values of internal parameters of the network using an error between the predicted output generated by the network and a target output specified in the training sample.
  • the training system obtains a collection of training data sets (e.g., hundreds, thousands, or millions of training data sets).
  • Each training data set includes a first training input 904, second and, optionally, third training inputs 906, and a target output 908.
  • the first training input 904 characterizes waypoint data that represents a set of previous locations of a vehicle.
  • the first training input 904 is a representation of an image that depicts each of the previous locations of the vehicle in a grid of possible locations.
  • the second and, optionally, third training inputs 906 characterize one or more channels of environmental data for the vehicle, navigation data for the vehicle, or both.
  • in some implementations, a second training input characterizes one or more channels of environmental data, and a third training input characterizes navigation data for the vehicle.
  • the second training input 906 is an encoded representation that combines both environmental data and navigation data and that was generated by the encoder neural network at an earlier time.
  • the training target outputs 908 of the training data sets represent the desired output of the trajectory planning neural network system that should result from processing the respective first and second training inputs 904, 906 of the training data sets. For example, if the first training input identifies a set of previous locations for a vehicle at each of time steps 1 through n-1, the training target output 908 can identify a particular location in a set of possible locations as the target planned location (waypoint) for the vehicle at the next time step, n. In some implementations, the training target output 908 is a vector of location scores that includes a first value (e.g., 1) for the target planned location and a second value (e.g., 0) for all other locations.
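To make the target vector concrete, the snippet below builds such a one-hot target over a hypothetical grid of candidate locations; the grid size is an assumption of the example.

    import numpy as np

    def one_hot_target(target_index: int, num_locations: int) -> np.ndarray:
        """Target output 908 as described above: 1 at the target planned
        location, 0 at every other candidate location."""
        target = np.zeros(num_locations, dtype=np.float32)
        target[target_index] = 1.0
        return target

    print(one_hot_target(target_index=42, num_locations=100))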
  • the training system trains the trajectory planning neural network system on the training data sets.
  • the training can include a series of sub-stages 912-918.
  • the training system selects a first training data set from the set of training data sets.
  • the trajectory planning neural network system processes the first training input and the second training input (and, optionally, the third training input) from the training data set to generate a predicted set of output scores.
  • the trajectory planning neural network system processes the training inputs in accordance with current values of internal parameters of the network.
  • the predicted set of output scores can include a respective score for each location in a set of all possible waypoint locations.
  • the training system determines an output error using the predicted set of output scores generated by the trajectory planning neural network system and the target output 908.
  • the training system then adjusts the current values of the parameters of the trajectory planning neural network system using the output error.
  • the training system uses machine-learning techniques to train the neural network system, such as stochastic gradient descent with backpropagation. For example, the training system can backpropagate gradients of a loss function that is based on the determined output error to adjust current values of the parameters of the neural network system to optimize the loss function.
  • the training system selects a next training data set from the collection of training data sets and returns to sub-stage 912 to continue the training process using the selected training data set.
  • the training system completes training of the neural network once a training termination condition is satisfied or no further training data sets are available.
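A compact sketch of the training loop described above follows. PyTorch is used purely for illustration (the specification names no framework), and the structure of `training_sets` is an assumption: each element pairs the two network inputs with the index of the target waypoint location.

    import torch
    import torch.nn.functional as F

    def train(network, optimizer, training_sets, epochs=1):
        """Sketch of the per-sample loop: predict scores, measure the output
        error against the target, and adjust parameters by backpropagation
        with a gradient descent step."""
        for _ in range(epochs):
            for first_input, second_input, target_index in training_sets:
                scores = network(first_input, second_input)   # predicted output scores
                loss = F.cross_entropy(scores, target_index)  # output error vs. target 908
                optimizer.zero_grad()
                loss.backward()                               # backpropagate gradients
                optimizer.step()                              # adjust parameter values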
  • the training system may employ none, one, or more of the following techniques.
  • the training inputs 904 and/or 906, the training target outputs 908, or both, for one or more training data sets are derived from records of human-operated vehicles driven in a real-world environment.
  • the first training input can characterize waypoint data representing a set of locations traversed by a human-operated vehicle at a series of time steps from 1 through n-1.
  • the training target output 908 for the set of training data can then characterize data representing the location actually traversed by the human-operated vehicle at time step n, i.e., the time step that immediately follows the last time step represented by the waypoint data of the first training input.
  • the location traversed at time step n thus represents a planned target location of the vehicle (i.e., a waypoint at time step n).
  • the trajectory planning neural network system can be trained to plan trajectories that mimic trajectories followed by human drivers. Therefore, the trained system may account for similar comfort and safety considerations that would be accounted for by a human driver. Additional training and inference features may also be implemented to minimize the possibility of executing risky driving behaviors that may be represented in some human driving data, and to ensure compliance with applicable legal restrictions such as speed limits.
  • the second or third training inputs 906 can characterize navigation data that represents a planned navigation route that was prescribed for a human driver to follow, and which the human driver was following while driving through the locations indicated by the waypoint data of the first training input, the next location indicated by the training target output, or both. By having a driver follow a prescribed route, that route can then be used to create the navigation data represented in the second or third training inputs 906.
  • the environment data characterized by the second or third training inputs 906 is actual environment data for the vehicle at times when the vehicle drove through one or more of the locations identified by the first training input or the training target output.
  • the training inputs 904 and/or 906, the training target output 908, or both, of one or more training data sets are derived from results of one or more virtual vehicles driven in a simulated environment.
  • the first training input can characterize waypoint data representing a set of locations traversed by a virtual vehicle at a series of time steps from 1 through n-1.
  • the training target output 908 for the set of training data can then characterize data representing the location traversed by the virtual vehicle at time step n, i.e., the time step that immediately follows the last time step represented by the waypoint data of the first training input.
  • the location traversed at time step n thus represents a planned target location of the virtual vehicle (i.e., a waypoint at time step n).
  • Simulated data can sometimes be used to train the trajectory planning neural network system when there is a scarcity of human driving data for particular driving scenarios.
  • simulated training data may be used at least in part (e.g., to supplement real-world driving data) if the quantity of real-world driving data available to generate training data is insufficient.
  • the second or third training inputs 906 can characterize navigation data that represents a planned navigation route that was prescribed for the virtual agent to follow, and which was followed by the virtual vehicle while driving through the locations indicated by the waypoint data of the first training input, the next location indicated by the training target output, or both.
  • the environment data characterized by the second or third training inputs 906 represents the simulated environment of the virtual vehicle at times when the vehicle drove through one or more of the locations identified by the first training input 904 or the training target output 908.
  • in other implementations, the environment data represents a real-world environment of a vehicle that corresponds to the simulated environment driven in by the virtual vehicle.
  • the training system can train the trajectory planning neural network on a collection of training data sets that include some sets derived from records of human-operated vehicles driven in a real-world environment and other sets derived from results of virtual vehicles driven in a simulated environment.
  • the training system specifically selects training data sets to use in training the trajectory planning neural network system that model particular driving scenarios. For example, to ensure that the neural network system is exposed to a sufficient number of training samples for various high-risk or complex driving scenarios, the training system can select to train the neural network system on at least a minimum quantity of training data sets that model one or more particular driving scenarios. In some implementations, the training system oversamples training data sets that model driving behavior for one or more specified driving scenarios at a greater frequency than the specified driving scenarios occur in the real world.
  • the training data sets can be sampled (e.g., selected) from a pool of candidate training data sets that have been made available to the training system, e.g., training data sets stored in training data repository 804. Examples of driving scenarios that may be emphasized during a training session include lane merges, unprotected left turns, lane changes, impending collisions, and post-collision activity.
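One plausible way to oversample the emphasized scenarios is weighted sampling from the candidate pool, as sketched below; the scenario labels and weights are illustrative assumptions.

    import random

    def sample_training_sets(candidates, scenario_weights, k):
        """Draw k training sets, with rare but important scenarios (e.g.,
        unprotected left turns) weighted above their real-world frequency.
        `candidates` is a list of (scenario_label, training_set) pairs."""
        weights = [scenario_weights.get(label, 1.0) for label, _ in candidates]
        return random.choices(candidates, weights=weights, k=k)

    pool = [("lane_merge", "set-1"), ("cruise", "set-2"), ("unprotected_left", "set-3")]
    print(sample_training_sets(pool, {"unprotected_left": 10.0, "lane_merge": 5.0}, k=5))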
  • the training system may select to train the trajectory planning neural network system on training data sets that model different combinations of sensor channels in the environmental data of the training sets.
  • LIDAR is one of the primary sensing components in many autonomous or semi-autonomous vehicles.
  • the vehicle's driving systems may expect that LIDAR inputs will be available during most normal operation of the vehicle. Nonetheless, on occasion, the LIDAR system may malfunction and at least temporarily drop out of service.
  • the trajectory planning neural network system can be trained on training data sets for which the environmental data characterized by the second or third inputs 906 does not include data representing a sensor channel that is normally expected to be present (e.g., LIDAR).
  • the input channel that is not available can be referred to as an absent channel.
  • the trajectory planning neural network system can be trained on a first group of training data sets for which the environmental data includes a particular sensor channel, and further on a second group of training data sets for which the environmental data lacks the particular sensor channel (i.e., to model drop-out of the particular sensor channel such that the particular sensor channel is absent).
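Modeling an absent channel could be as simple as blanking that channel in some training examples, as in the sketch below; whether an absent channel is zero-filled or removed entirely is an implementation choice the specification leaves open, and the channel names are illustrative.

    import numpy as np

    def drop_sensor_channel(env_channels: dict, channel: str) -> dict:
        """Replace one sensor channel (e.g., 'lidar') with zeros so the network
        learns to cope when that input drops out of service."""
        dropped = dict(env_channels)
        dropped[channel] = np.zeros_like(env_channels[channel])  # assumed zero-fill
        return dropped

    channels = {"lidar": np.random.rand(64, 64), "camera": np.random.rand(64, 64)}
    training_view = drop_sensor_channel(channels, "lidar")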
  • the training system selects to train the trajectory planning neural network system on a collection of training data sets that model driving behavior that is determined to meet one or more criteria, to the exclusion of other available training data sets that model driving behavior that does not meet such criteria.
  • the criteria for filtering training data sets and distinguishing sets that are acceptable for use in training the neural network system from sets that are not can be based on legal restrictions and other safe- driving policies defined by the training system.
  • the training system may reject training data sets that model illegal driving behaviors such as exceeding posted speed limits, illegal U-turns, driving the wrong way on a one-way street, reckless driving, etc.
  • the training system may reject some training data sets that model driving behaviors that are legal, but that nonetheless violate a safe-driving policy, such as passing on the right or driving at a speed that is too far below a posted speed limit.
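A filtering pass of the kind described above might look like the following sketch; the metadata fields and thresholds are assumptions made for illustration.

    def filter_training_sets(candidates, speed_limit_mps=29.0, min_fraction=0.5):
        """Keep only training sets whose modeled driving behavior satisfies the
        legal and safe-driving criteria; each candidate is assumed to carry
        simple metadata about the recorded drive."""
        accepted = []
        for sample in candidates:
            if sample.get("max_speed_mps", 0.0) > speed_limit_mps:
                continue  # rejected: exceeded a posted speed limit
            if sample.get("min_speed_mps", speed_limit_mps) < min_fraction * speed_limit_mps:
                continue  # rejected: too far below the limit (safe-driving policy)
            if sample.get("illegal_maneuver", False):
                continue  # rejected: e.g., illegal U-turn or wrong-way driving
            accepted.append(sample)
        return accepted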
  • legal restrictions and safe-driving policies may also be enforced during the inference phase with the trained trajectory planning neural network system, e.g., by modifying locations in a planned trajectory for a vehicle to comply with applicable legal restrictions and safe-driving policies. Enforcement during inference can be performed rather than, or in addition to, filtering training data sets during the training phase.
  • the trajectory planning neural network 104 can be trained to self-correct the trajectory of the vehicle when its travel begins to deviate from a planned trajectory. Deviation from a planned trajectory can occur, for example, when the vehicle controller is unable to guide the vehicle precisely along the locations of the waypoints that have been planned for it, thereby introducing error between the traveled locations and planned locations for the vehicle. These errors may cause the system to generate sub-optimal trajectories. To train the network 104 more robustly against these types of errors, the training system in some implementations can introduce perturbations to the observed and/or target locations of the vehicle in some training samples.
  • the perturbations can introduce an offset between the vehicle and the center of its lane of travel for at least one observed location of the vehicle, while the target waypoints gradually move the vehicle back toward the center of its lane of travel.
  • Figure 12 depicts two images 1202a and 1202b.
  • the first image 1202a shows a first series of waypoints 1204a representing previous locations and target locations for an autonomous or semi-autonomous vehicle over several time steps.
  • the first series of waypoints 1204a is substantially linear because the vehicle follows a trajectory that aligns with the center of its lane of travel.
  • the second image 1202b shows a second series of waypoints 1204b that have been perturbed so that the previous locations of the vehicle no longer fall along the center line.
  • the target locations for the vehicle gradually re-center to correct for the deviation. This can be accomplished, for example, by using an autonomous vehicle planner to re-center the vehicle.
  • the trajectory planning neural network 104 can be trained to determine waypoints for a trajectory that self-corrects and brings the vehicle back to the center of the lane.
  • the training system can enforce rules that provide for a diversity of training data to be used in training the trajectory planning neural network 104. For example, the training system may add perturbations to a pre-defined portion of training samples, or otherwise ensure that the network 104 is trained on a minimum number of samples that provide self-correction following deviation from an ideal trajectory. As another example, the training system may enforce rules that provide for a diversity in the dominant orientation of the roadway or travel direction of the vehicle. Training data may be naturally biased toward over-representation of training samples in which the dominant orientation of the roadway and vehicle travel direction is substantially vertical or horizontal.
  • the system may apply random rotations (e.g., yaw offsets) to a specified portion of the training samples so that the network 104 can learn to plan trajectories for any direction of travel.
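The yaw-offset augmentation could be realized by rotating each training sample's waypoints about the origin, as sketched below; operating on raw (x, y) coordinates rather than rendered images is a simplification for the example.

    import numpy as np

    def rotate_waypoints(waypoints: np.ndarray, max_yaw_rad: float = np.pi) -> np.ndarray:
        """Apply a random yaw offset to a trajectory's (x, y) waypoints so that
        training is not biased toward horizontal or vertical travel directions."""
        theta = np.random.uniform(-max_yaw_rad, max_yaw_rad)
        rotation = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        return waypoints @ rotation.T

    trail = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])  # a vertical trail
    print(rotate_waypoints(trail))

Perturbations of the kind described earlier (lane-center offsets) could be applied in a similar coordinate-level pass before rendering.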
  • the number of previous vehicle locations represented in the waypoint data characterized by the first training inputs 904 can vary among different training data sets.
  • the first training inputs 904 of a first group of training data sets may each indicate fifteen vehicle locations.
  • the first training inputs 904 of a second group of training data sets may each indicate a total of twenty-five previous vehicle locations.
  • the training system trains the neural network system to determine waypoints in a planned trajectory of a vehicle based on varying numbers of previous vehicle locations indicated in the waypoint data.
  • a batch of training data sets can include a collection of related training data sets that represent movements or locations of a particular vehicle at each of a series of consecutive time steps.
  • the system may backpropagate gradients of a loss function that is based on the determined output errors from each training data set in the batch to adjust current values of the parameters of the neural network system in a way that optimizes the loss function.
  • batch training allows the training system to optimize a loss function for a full or partial trajectory of a vehicle, rather than for individual locations.
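Building on the earlier training-loop sketch, trajectory-level optimization could sum the per-time-step losses of one vehicle's consecutive samples before backpropagating, as below; the batch structure is assumed to match the earlier sketch.

    import torch
    import torch.nn.functional as F

    def trajectory_batch_loss(network, batch):
        """Accumulate losses over the consecutive time steps of one trajectory so
        the gradient step optimizes the trajectory as a whole rather than each
        waypoint in isolation."""
        total = torch.zeros(())
        for first_input, second_input, target_index in batch:  # consecutive steps
            scores = network(first_input, second_input)
            total = total + F.cross_entropy(scores, target_index)
        return total  # caller then runs total.backward() and optimizer.step()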
  • the process 900 described in Figure 9 trains the trajectory planning neural network based on an output error that represents a loss between the predicted waypoint for the vehicle (also referred to as a "self-driving car" or "SDC") and a target waypoint. In some implementations, training can be enhanced by accounting for other losses in addition to the waypoint loss.
  • Figure 10 illustrates an example scheme for training a trajectory planning neural network 104, perception object trajectory prediction neural network 604, SDC auxiliary neural network 1002, and encoder neural network 106.
  • networks 104, 106, 604, and 1002 are trained individually. In other implementations, two or more of the networks 104, 106, 604 and 1002 may be trained jointly with each other.
  • the losses referred to in Figure 10 are computed based on a function that compares a predicted metric to a target metric.
  • the predicted metric is generated by a neural network during the training phase (e.g., network 104 or 604), while the target metric represents ground truth and can be derived based on actual driving data or simulation data.
  • a target set of waypoints can represent locations that an SDC actually traveled during a drive in the real world over a period of time or that a virtual SDC traveled during a simulation.
  • the encoder neural network 106 provides encoded representations of environmental data and/or other rendered inputs 1004 to the other networks 104, 604, and 1002.
  • the perception object trajectory prediction neural network 604 can be trained to minimize an observed perception object loss 1028, a non-observed perception object loss 1026, or both.
  • the observed perception object loss 1028 is derived based on comparison of predicted occupancies for observed perception objects 1010 (e.g., the location and heading of bounding boxes for observed objects) and target occupancies for observed perception objects 1012.
  • the non-observed perception object loss 1026 is derived based on comparison of predicted occupancies for non-observed perception objects 1006 and target occupancies for non-observed perception objects 1008.
  • the non-observed perception objects generally refer to perception objects that were not actually detected in the input data as of the current time step, but that are predicted to appear over one or more time steps in the future.
  • the trajectory planning neural network 104 is trained to minimize a coarse SDC waypoint loss 1036, a fine SDC waypoint loss 1048, an SDC speed loss 1046, a collision loss 1038, an SDC box loss 1040, an on-route loss 1042, an on-road loss 1044, or a combination of all or some of these losses.
  • the training system generates the coarse SDC waypoint loss 1036 based on comparison of a coarse SDC waypoint 1016 predicted by trajectory planning neural network 104 (and selected by a waypoint selector based on scores output from the network 104) and a target SDC waypoint 1014.
  • the predicted SDC waypoint 1016 is referred to as "coarse" because its precision may be less than the precision of the target SDC waypoint 1014.
  • the coarse SDC waypoint 1016 may indicate the planned location for the vehicle at a "pixel" level of granularity in an image whose pixels represent corresponding locations in the environment of an SDC, whereas the target SDC waypoint 1014 may contain sufficient information to identify the target location for the vehicle at a "sub-pixel" level of granularity.
  • the training system generates the SDC speed loss 1046 based on comparison of a predicted SDC speed 1030 and a target SDC speed 1032.
  • the predicted SDC speed 1030 can be obtained from an auxiliary neural network 1002 based on encoded inputs (e.g., environmental data) and predicted waypoints from the trajectory planning neural network 104.
  • the training system generates the fine SDC waypoint loss 1048 based on comparison of a fine SDC waypoint 1034 predicted by the auxiliary neural network 1002 and the target SDC waypoint 1014.
  • the auxiliary neural network 1002 may refine the coarse SDC waypoint 1016 provided by trajectory planning neural network 104 based on an indication of the predicted coarse SDC waypoint 1016 and the encoded inputs (e.g., environmental data).
  • the fine SDC waypoint 1034 generally has greater precision than the coarse SDC waypoint. In some implementations, the fine SDC waypoint matches the level of granularity/resolution of the target SDC waypoint 1014.
  • the training system generates a collision loss 1038 based on comparison of the target occupancies for the observed perception objects 1012 and a predicted occupancy of the SDC 1020.
  • the trajectory planning neural network 104 (and system 100) is configured to predict/plan occupancies of the SDC, or occupancies can be derived from predicted waypoints along with information about the configuration (e.g., size and shape) of a corresponding bounding box for the SDC.
  • the training system can impose a high cost for collisions between the SDC and objects in the environment, as indicated by co-occupancy of any region of the environment by the SDC and a perception object.
  • a collision or co-occupancy can be detected if the bounding box for the SDC and the bounding box for the perception object overlap for one or more time steps.
  • the trajectory planning neural network 104 can be trained based on an SDC box loss 1040.
  • the SDC box loss 1040 is generated by the training system based on comparison of (e.g., difference between) the predicted occupancy of the SDC 1020 and the target occupancy of the SDC 1018.
  • the training algorithm can be implemented to minimize the loss resulting from differences in these predicted and target occupancies.
  • the training system generates an on-route loss 1042 for use in training the trajectory planning neural network 104.
  • the on-route loss 1042 is based on comparison of the predicted occupancy of the SDC 1020 and the target route for the SDC 1022. If all or some of the SDC deviates from the target route 1022 (e.g., if all or a portion of the bounding box for the SDC as indicated by the predicted occupancy 1020 is located at least a defined amount off the target route 1022), the training system can generate a loss reflecting the difference.
  • the training system generates an on-road loss 1044.
  • the on-road loss 1044 imposes a cost for the SDC moving outside the physical boundaries of a roadway.
  • the on-road loss 1044 can be generated based on comparison of the predicted occupancy of the SDC 1020 and a target road mask 1024.
  • the target road mask 1024 is a mask that indicates the physical extent of a roadway. For example, image 1104 in Figure 11 shows a road mask for one roadway in the vicinity of an SDC. Pixels in the image have a binary classification as either roadway or non-roadway.
  • Figure 11 also shows an image 1102 for a road graph of the same roadway represented by road mask 1104. Whereas the road graph indicates lane boundaries and/or lane centerlines, the road mask 1104 more readily facilitates determination of the on-road loss 1044 by comparison of the area encompassed by the predicted SDC box to the classification of pixels in the corresponding area of the target road mask 1024.
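Figure 10's losses would ultimately be combined into one training objective; a weighted sum is the simplest plausible combination, sketched below. The weights are illustrative assumptions (the specification gives no particular weighting), with a large collision weight reflecting the high cost the training system can impose on co-occupancy.

    def combined_loss(losses: dict, weights: dict) -> float:
        """Combine the individual losses of Figure 10 (coarse/fine waypoint,
        speed, collision, box, on-route, on-road) into a single scalar
        objective as a weighted sum."""
        return sum(weights.get(name, 1.0) * value for name, value in losses.items())

    example = {"coarse_waypoint": 0.8, "collision": 0.05, "on_route": 0.2, "on_road": 0.1}
    print(combined_loss(example, {"collision": 100.0}))  # collisions cost dearly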
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • the computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. The computer storage medium is not, however, a propagated signal.
  • the term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code.
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • an "engine," or "software engine," refers to a software-implemented input/output system that provides an output that is different from the input.
  • an engine can be an encoded block of functionality, such as a library, a platform, a software development kit ("SDK"), or an object.
  • Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • computers suitable for the execution of a computer program include, by way of example, general purpose or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • to provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Transportation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

Systems, methods, devices, and other techniques for planning a trajectory of a vehicle. A computing system can implement a trajectory planning neural network configured to, at each time step of multiple time steps: obtain a first neural network input and a second neural network input. The first neural network input can characterize a set of waypoints indicated by stored waypoint data, and the second neural network input can characterize (a) environmental data that represents a current state of an environment of the vehicle and (b) navigation data that represents a planned navigation route for the vehicle. The trajectory planning neural network may process the first neural network input and the second neural network input to generate a set of output scores, where each output score in the set of output scores corresponds to a different location of a set of possible locations in a vicinity of the vehicle.

Description

NEURAL NETWORKS FOR VEHICLE TRAJECTORY PLANNING
TECHNICAL FIELD
[0001] This specification describes a computer-implemented neural network system configured to plan a trajectory for a vehicle.
BACKGROUND
[0002] Neural networks are machine-learning models that employ multiple layers of operations to predict one or more outputs from one or more inputs. Neural networks typically include one or more hidden layers situated between an input layer and an output layer. The output of each layer is used as input to another layer in the network, e.g., the next hidden layer or the output layer.
[0003] Each layer of a neural network specifies one or more transformation operations to be performed on inputs to the layer. The transformation operations can be characterized by values of internal parameters of the neural network. Some neural network layers have operations that are referred to as neurons. Each neuron receives one or more inputs and generates an output that is received by another neural network layer. Often, each neuron receives inputs from other neurons, and each neuron provides an output to one or more other neurons.
[0004] An architecture of a neural network specifies what layers are included in the network and their properties, as well as how the neurons of each layer of the network are connected. In other words, the architecture may specify which layers provide their output as input to which other layers and how the output is provided.
[0005] In general, the transformation operations of each layer of a neural network are performed by one or more computers at one or more locations that are configured to implement the transformation operations. Thus, a layer being described as performing operations means that the computers implementing the transformation operations of the layer perform the operations.
SUMMARY
[0006] This specification describes neural network systems that are configured to plan a trajectory for a vehicle. In some implementations, such systems are deployed on autonomous or semi-autonomous vehicles in order to guide movement of the vehicle as it travels toward a goal location or along an intended route.

[0007] Autonomous and semi-autonomous vehicles use computing systems to make driving decisions and to at least partially effect control of the vehicle. A fully autonomous vehicle can include computer-based control systems that make fully autonomous driving decisions to effect fully autonomous control independent of a human driver, whereas a semi-autonomous vehicle can include computer control systems that make semi-autonomous driving decisions to effect semi-autonomous control that aids a human driver. In some implementations, the autonomous or semi-autonomous vehicle is an automobile, e.g., a sedan, a lorry, a pickup truck, a van, a sport utility vehicle, or a motorcycle. In other implementations, the vehicle is a watercraft, e.g., a boat, or an aircraft, e.g., an airplane or helicopter.
[0008] Autonomous and semi-autonomous vehicles may include one or more environmental sensing systems that monitor the environment of a vehicle. For example, a light detection and ranging (LIDAR) system, a radio detection and ranging (RADAR) system, a camera subsystem, or a combination of these and other sensing systems, may continuously sweep an area surrounding the vehicle on which the sensing systems are installed, e.g., a vicinity of the vehicle. The sensing systems generate sensor data from the sweeps that characterize aspects of the current environment of the vehicle. In some implementations, the vehicle's computing systems are configured to process sensor data from one or more sensing systems in real-time and to project the data onto a 2D-space to form an image. The image may represent the results of sweeps by one or more sensing systems.
[0009] In order to make effective driving decisions, the computing systems of an autonomous or semi-autonomous vehicle may process information derived from sensor data from the vehicle's sensing systems. For instance, information about a vehicle's environment can be processed, along with navigation data and information about previous locations of the vehicle, to determine a planned trajectory of the vehicle. The planned trajectory can indicate a series of waypoints that each represent a proposed location for the vehicle to maneuver to at a time in the near future. In some implementations, the system selects waypoints taking into account an intended route or destination of the vehicle, safety (e.g., collision avoidance), and ride comfort for passengers in the vehicle.
[0010] Some implementations of the subject matter disclosed herein include a computing system for planning a trajectory of a vehicle. The system can include a memory configured to store waypoint data indicating one or more waypoints, each waypoint representing a previously traveled location of the vehicle or a location in a planned trajectory for the vehicle, one or more computers, and one or more storage devices storing instructions that when executed cause the one or more computers to implement a trajectory planning neural network and a trajectory management system. The trajectory planning neural network can be configured to, at each time step of multiple time steps: obtain a first neural network input and a second neural network input, wherein (i) the first neural network input characterizes a set of waypoints indicated by the waypoint data, and (ii) the second neural network input characterizes (a) environmental data that represents a current state of an environment of the vehicle and (b) navigation data that represents a planned navigation route for the vehicle; and process the first neural network input and the second neural network input to generate a set of output scores, wherein each output score in the set of output scores corresponds to a different location of a set of possible locations in a vicinity of the vehicle and indicates a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route. The trajectory management system can be configured to, at each time step of the multiple time steps: select, based on the set of output scores generated by the trajectory planning neural network at the time step, one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step; and update the waypoint data by writing to the memory an indication of the selected one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step.
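For illustration only, the per-time-step behavior recited above can be summarized in a short sketch. The Python below is a minimal, hypothetical rendering, not the claimed implementation: `network` and `memory` are assumed stand-ins for the trajectory planning neural network and the external memory, and the score map is assumed to be a 2D grid over the vicinity of the vehicle.

```python
# A minimal sketch of one planning time step. `network` and `memory` are
# hypothetical stand-ins; the H x W score grid is an illustrative assumption.
import numpy as np

def plan_one_step(network, memory, environment_input, route_input):
    # First input: the set of waypoints (previous locations) read from memory.
    waypoint_input = memory.read_recent_waypoints()

    # The network scores every candidate location in the vicinity of the
    # vehicle; a higher score indicates a higher likelihood of being the
    # optimal location for the next waypoint.
    scores = network(waypoint_input, environment_input, route_input)  # (H, W)

    # Select the location with the highest output score as the next waypoint.
    row, col = np.unravel_index(np.argmax(scores), scores.shape)

    # Write the selection back so it conditions the next time step.
    memory.write_waypoint((row, col))
    return (row, col)
```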
[0011] These and other implementations can optionally include one or more of the following features.
[0012] For an initial time step of the multiple time steps, the set of waypoints characterized by the first neural network input can include at least one waypoint that represents a previously traveled location of the vehicle at a time that precedes the multiple time steps.
[0013] For each time step of the multiple time steps after the initial time step, the set of waypoints characterized by the first neural network input can include the waypoints that were determined at each preceding time step of the multiple time steps.
[0014] For at least one time step of the multiple time steps after the initial time step, the set of waypoints characterized by the first neural network input can include: (i) one or more first waypoints that represent previously traveled locations of the vehicle at times that precede the multiple time steps, and (ii) one or more second waypoints that were determined at preceding time steps of the multiple time steps.
[0015] The trajectory planning neural network can be a feedforward neural network.

[0016] The second neural network input at each time step of the multiple time steps can characterize the environmental data that represents the current state of the environment of the vehicle at the time step.
[0017] The second neural network input can characterize multiple channels of environmental data. The multiple channels of environmental data can include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
[0018] The second neural network input at each time step of the multiple time steps can characterize the navigation data that represents the planned navigation route for the vehicle.
[0019] The second neural network input at each time step of the multiple time steps can characterize both the environmental data that represents the current state of the environment of the vehicle at the time step and the navigation data that represents the planned navigation route for the vehicle.
[0020] Each successive pair of time steps in the multiple time steps can represent a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
[0021] A vehicle control subsystem can be configured to determine control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the multiple time steps.
[0022] The vehicle can be maneuvered along the planned trajectory as a result of executing at least some of the control actions determined by the vehicle control subsystem, wherein the control actions include at least one of steering, braking, or accelerating the vehicle.
[0023] Some implementations of the subject matter disclosed herein include a computer-implemented method for planning a trajectory of a vehicle. The method can include, for each time step in a series of time steps: obtaining a first neural network input that characterizes a set of waypoints that each represent a previous location of the vehicle or a location in a planned trajectory for the vehicle; obtaining a second neural network input that characterizes (i) environmental data that represents a current state of an environment of the vehicle and (ii) navigation data that represents a planned navigation route for the vehicle; providing the first neural network input and the second neural network input to a trajectory planning neural network and, in response, obtaining a set of output scores from the trajectory planning neural network, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle and indicating a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route; and selecting, based on the set of output scores, one of the possible locations in the vicinity of the vehicle as a waypoint for the planned trajectory of the vehicle at the time step.
[0024] These and other implementations can optionally include one or more of the following features.
[0025] For each time step in the series of time steps after an initial time step, the set of waypoints characterized by the first neural network input at the time step can represent the locations of the vehicle that were selected at each preceding time step in the series of time steps.
[0026] The set of waypoints characterized by the first neural network input at the initial time step can represent locations that the vehicle has traversed at particular times that precede the series of time steps.
[0027] For at least one of the series of time steps: the second neural network input can characterize multiple channels of environmental data, and the multiple channels of environmental data can include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
[0028] At each time step in the series of time steps, the system can write an indication in memory of the selected one of the possible locations in the vicinity of the vehicle as the waypoint for the planned trajectory of the vehicle at the time step.

[0029] The system can determine control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the series of time steps.
[0030] Each successive pair of time steps in the series of time steps can represent a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
[0031] The environmental data can identify, for each of one or more objects detected in the environment of the vehicle, a current occupancy of the object. For one or more of the plurality of time steps, the current occupancies of the one or more objects detected in the environment can be predicted using a second neural network, e.g., a perception object occupancy prediction neural network.
[0032] Some implementations of the subject matter disclosed herein include a computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles. The method can include obtaining, by a neural network training system, multiple training data sets. Each training data set can include: (i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps, (ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and (iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps. The neural network training system can train the trajectory planning neural network system on the multiple training data sets, including, for each training data set of the multiple training data sets: processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle; determining an output error using the target output and the set of output scores, and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
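As a rough, non-authoritative illustration of this training procedure, the sketch below treats waypoint prediction as classification over grid cells: the set of output scores is flattened, and the output error is taken to be the cross-entropy against the grid index of the target waypoint. The module `net`, the optimizer, and the loss choice are assumptions for illustration.

```python
# A hedged sketch of one training update. `net` is an assumed PyTorch module
# returning a (1, H, W) score map; `target_index` holds the flattened grid
# index of the target waypoint as a tensor of shape (1,).
import torch
import torch.nn.functional as F

def training_step(net, optimizer, waypoint_input, env_route_input, target_index):
    optimizer.zero_grad()
    scores = net(waypoint_input, env_route_input)   # set of output scores
    logits = scores.view(1, -1)                     # flatten grid cells to classes
    loss = F.cross_entropy(logits, target_index)    # output error vs. target output
    loss.backward()                                 # propagate the error
    optimizer.step()                                # adjust current parameter values
    return loss.item()
```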
[0033] These and other implementations of the subject matter disclosed herein can optionally include one or more of the following features.
[0034] For each training data set in at least a subset of the multiple training data sets, the target output of the training data set can characterize a waypoint that represents an actual location of a human-operated vehicle at a time that corresponds to the second time step.

[0035] For each training data set in at least the subset of the multiple training data sets, the second training input can characterize navigation data that represents a route that was prescribed for a driver of the human-operated vehicle to follow and that was traversed by the vehicle.
[0036] For each training data set in at least the subset of the multiple training data sets, the second training input can further characterize environmental data that represents the current state of the environment of the human-operated vehicle.
[0037] For each training data set in at least a subset of the multiple training data sets, (i) the target output of the training data set can characterize a waypoint that represents a location of a virtual vehicle that was driven in a simulated environment at a time that corresponds to the second time step, and (ii) the second training input can characterize navigation data that represents a route that was prescribed for an automated agent of the virtual vehicle to follow while driving in the simulated environment and that was traversed by the virtual vehicle.
[0038] The neural network training system can identify that a second subset of the multiple training data sets models driving behavior for one or more specified driving scenarios. In response to identifying that the second subset of the multiple training data sets models behavior for the one or more specified driving scenarios, the neural network training system can select the second subset of the multiple training data sets for inclusion in the multiple training data sets.
[0039] The one or more specified driving scenarios can include at least one of a lane merge scenario, an unprotected left turn scenario, a lane change scenario, or a collision scenario.
[0040] The total numbers of waypoints characterized by the first training inputs can differ among particular ones of the multiple training data sets.
[0041] Obtaining the multiple training data sets can include: (i) selecting a first subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the first subset of training data sets characterize environmental data from a first set of sensor channels, and (ii) selecting a second subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the second subset of training data sets characterize environmental data from a second set of sensor channels, wherein the second set of sensor channels includes at least one sensor channel that is not included in the first set of sensor channels.

[0042] The at least one sensor channel that is included in the second set of sensor channels but not in the first set of sensor channels can be a light detection and ranging (LIDAR) sensor channel.
[0043] Obtaining the multiple training data sets can include oversampling a first subset of training data sets that model driving behavior for one or more specified driving scenarios at a greater frequency than the specified driving scenarios occur in the real world.
[0044] Obtaining the multiple training data sets can include: identifying multiple candidate training data sets, filtering the multiple candidate training data sets based on one or more criteria, and selecting to train the trajectory planning neural network system on candidate training data sets that satisfy the one or more criteria, to the exclusion of candidate training data sets that do not satisfy the one or more criteria.
[0045] Filtering the multiple candidate training data sets based on the one or more criteria can include discarding candidate training data sets that model driving behavior that is determined to violate a legal restriction. The legal restriction can be, for example, a speed limit.
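A toy sketch of the curation steps above (filtering out examples that violate a speed limit, and oversampling rare scenarios so they appear more often than in real-world driving) follows. Every field name and the oversampling factor are hypothetical.

```python
# Illustrative dataset curation: filter legally non-compliant examples and
# oversample specified rare scenarios. All field names are hypothetical.
RARE_SCENARIOS = {"lane_merge", "unprotected_left_turn", "lane_change"}

def curate(candidates, oversample_factor=5):
    # Discard candidates whose driving behavior violates the speed limit.
    kept = [ex for ex in candidates if ex["observed_speed"] <= ex["speed_limit"]]
    # Repeat rare-scenario examples so the trained system sees them at a
    # greater frequency than they occur in the source driving logs.
    rare = [ex for ex in kept if ex["scenario"] in RARE_SCENARIOS]
    return kept + rare * (oversample_factor - 1)
```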
[0046] For each training data set of the multiple training data sets, the second training input that characterizes at least one of the environmental data or the navigation data can have been generated by processing at least one of the environmental data or the navigation data using an encoder neural network.
[0047] For a group of training data sets selected from the multiple training data sets, the training system can: for each training data set in the group of training data sets, process the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set; determine the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and adjust the current values of the parameters of the trajectory planning neural network system using the output error.
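The grouped update can be sketched by accumulating the per-example losses into a single output error before one parameter adjustment, reusing the assumptions of the earlier training-step sketch:

```python
# Grouped update sketch: one error, one adjustment, for a whole group of
# training data sets (e.g., the same vehicle over a series of time steps).
import torch
import torch.nn.functional as F

def grouped_training_step(net, optimizer, group):
    optimizer.zero_grad()
    total_loss = torch.tensor(0.0)
    for waypoint_input, env_route_input, target_index in group:
        scores = net(waypoint_input, env_route_input)
        total_loss = total_loss + F.cross_entropy(scores.view(1, -1), target_index)
    total_loss.backward()   # single output error for the group
    optimizer.step()        # single adjustment of the parameter values
    return total_loss.item()
```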
[0048] The group of training data sets can model driving behavior of a same vehicle over a series of time steps.
[0049] Some implementations of the subject matter disclosed herein include one or more non-transitory computer-readable media having instructions stored thereon that, when executed by data processing apparatus, cause the data processing apparatus to perform operations of any of the computer-implemented methods disclosed herein. Some implementations further include the data processing apparatus.

[0050] Some implementations of the subject matter described herein can, in certain instances, realize one or more of the following advantages. First, a neural network system may generate a trajectory for a vehicle that satisfies criteria for vehicle navigation, such as criteria that improve passenger safety and comfort. For example, a planned trajectory for a vehicle may mimic or resemble trajectories that would be taken by human drivers. Second, a trajectory planning neural network system can be used to select waypoints in a planned trajectory for a vehicle. The neural network system may improve the selection of waypoints so that a planned trajectory that results from the selected waypoints meets safety and comfort objectives for passengers in a vehicle. Third, the complexity of the trajectory planning neural network system can be reduced by storing waypoints in a memory that is external to the neural network system. The neural network system can be conditioned on previously selected waypoints in a planned trajectory by processing an input that represents the previously selected waypoints, rather than maintaining such information in internal memory of the neural network system. The use of external memory can thus reduce the size and complexity of the neural network system as compared to other approaches, and may also reduce the complexity of training the neural network system and the computational expense required to determine waypoints for a planned trajectory. An external memory is also beneficial in preventing the neural network system from losing memory of previously selected waypoints over time. Even the most sophisticated and well-trained recurrent neural networks (RNNs), such as LSTM networks, are prone to losing memory over time. Moreover, neural network systems with external memory tend to generalize better than RNNs to processing longer sequences.
[0051] Additional features and advantages will be apparent to a skilled artisan in view of the disclosure contained herein.
DESCRIPTION OF DRAWINGS
[0052] Figure 1 is a block diagram of an example computing environment of an autonomous or semi-autonomous vehicle.
[0053] Figures 2A-2D conceptually illustrate example techniques for selecting waypoints in a planned trajectory for a vehicle.
[0054] Figure 3 shows a set of images representing example environmental data and navigation data that may be processed by a trajectory planning neural network system.
[0055] Figure 4 is a flowchart of an example process for using a neural network system to determine waypoints for a planned trajectory of a vehicle.

[0056] Figure 5 is a flowchart of an example process for updating planned trajectories of a vehicle and transitioning control of the vehicle between planned trajectories.
[0057] Figure 6 is a block diagram depicting an environment of a neural network system for predicting movements of perception objects in the vicinity of an autonomous or semi-autonomous vehicle.
[0058] Figures 7A-7B conceptually illustrate example techniques for predicting occupancies (e.g., locations, headings) of perception objects in the vicinity of a vehicle.
[0059] Figure 8 is a conceptual diagram of an example environment of a neural network training system that trains a trajectory planning neural network system.
[0060] Figure 9 is a flowchart of an example process for training a trajectory planning neural network system.
[0061] Figure 10 is a diagram of losses in a training scheme for training a trajectory planning neural network and other neural networks described herein.
[0062] Figure 11 depicts an example road graph and a corresponding road mask that can be processed by a trajectory planning neural network to determine waypoints for a planned trajectory of a vehicle.
[0063] Figure 12 conceptually illustrates perturbation of vehicle locations in training data for the trajectory planning neural network.
DETAILED DESCRIPTION
[0064] Figure 1 depicts a block diagram of an example computing environment 100 of an autonomous or semi-autonomous vehicle. The environment 100 includes various systems to provide automated or semi-automated control of a vehicle, such as an automobile (e.g., a street vehicle), a motorcycle, or a watercraft. The vehicle can be a physical vehicle in a real-world environment or a virtual vehicle in a simulated environment. In some implementations, systems 102, 114, 126, and 128 can be implemented as computer programs on one or more computers. The computers for these systems, along with the external memory 120, can be physically installed on a vehicle so that the systems are arranged to travel along with the vehicle and to perform local processing for generating and executing planned trajectories. In other implementations, one or more computers may be located off-board the vehicle, and the vehicle can communicate with these computers over a network (e.g., the Internet).
[0065] The environment 100 includes a neural network system 102, a trajectory management system 114, an external memory 120, a navigation planning system 126, and a vehicle control system 128. In general, the neural network system 102 is configured to process input data components 108, 110, and 112, to score a set of locations in a vicinity of a vehicle. The score for a given location can represent a likelihood that the location is an optimal location for inclusion as a waypoint in a planned trajectory for the vehicle. The trajectory management system 114 processes the scores from the neural network system 102 to select one of the locations as a waypoint for a given time step in the planned trajectory. The trajectory management system 114 also interfaces with external memory 120, which stores information about previous locations of the vehicle, e.g., locations previously traveled by the vehicle, locations previously selected for a planned trajectory (i.e., planned waypoints), or both. Similar operations can be performed over a series of time steps to iteratively select waypoints that collectively define a planned trajectory for the vehicle.
[0066] In some implementations, the neural network system 102 and the trajectory management system 114 are configured to interact with a navigation planning system 126, which generates navigation data 112 representing a planned navigation route of the vehicle. The planned navigation route can indicate one or more goal locations of the vehicle's travel and, optionally, a targeted path to arrive at the goal locations.
[0067] In further detail, the neural network system 102, in coordination with the trajectory management system 114, is configured to generate a planned trajectory for a vehicle by determining, at each of a series of time steps, a waypoint for the planned trajectory at the time step. At each time step, the system 102 processes neural network inputs that include waypoint data 108, environmental data 110, and navigation data 112 to generate a set of scores that each correspond to a different location in a vicinity of the vehicle. Each score can indicate a likelihood that its corresponding location is an optimal location for the vehicle to travel to at the current time step of the planned trajectory. For example, each score can represent how well its corresponding location optimizes criteria for vehicle operation such as adherence to a planned route for the vehicle, passenger safety and comfort. A waypoint selector 116 of the trajectory management system 114 can then select one of the scored locations in the vicinity of the vehicle as a waypoint for the planned trajectory at the current time step based on the set of scores generated by the neural network system 102, e.g., by selecting the location corresponding to the highest score in the set of scores.
[0068] In some implementations, the inputs and outputs of the neural network system can be represented by an image, where each pixel in the image corresponds to a respective location of the environment in the vicinity of the vehicle. The image thus frames a window that at least partially surrounds the vehicle, where pixels nearer the outer edges of the image correspond to locations of the environment that are farther from the vehicle and pixels toward a center or other pre-defined location of the image correspond to locations of the environment that are nearer the vehicle. The image can have multiple channels, such that each pixel defines multiple attributes of the environment at its corresponding location. For example, a first channel may represent a road graph, a second channel may represent a route, and a third channel may represent the presence or absence of perception objects.
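To make the image representation concrete, the sketch below builds a small multi-channel tensor in which each pixel corresponds to a location near the vehicle and each channel encodes one attribute of that location. The window size and channel assignments are illustrative assumptions, not values from the specification.

```python
# Illustrative multi-channel input image: each pixel is a location in the
# vicinity of the vehicle; each channel is one attribute of that location.
import numpy as np

H, W = 128, 128                      # window framing the vicinity (assumed size)
image = np.zeros((3, H, W), dtype=np.float32)

ROADGRAPH, ROUTE, OBJECTS = 0, 1, 2  # hypothetical channel assignments
image[ROADGRAPH, 60:68, :] = 1.0     # pixels covered by a road
image[ROUTE, 63:65, :] = 1.0         # pixels on the planned navigation route
image[OBJECTS, 62:66, 90:100] = 1.0  # pixels occupied by a detected object
```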
[0069] At each time step, the neural network system 102 processes a neural network input that includes waypoint data 108. The waypoint data 108 identifies a set of previous locations of the vehicle before the current time step. The previous locations identified by the waypoint data 108 can be previously traveled locations of the vehicle before the current time step (i.e., actual locations at which the vehicle was recently located), planned locations of the vehicle before the current time step (i.e., waypoints in the planned trajectory that have already been generated (predicted) at time steps before the current time step), or both. For example, the neural network system 102 may take part in generating a planned trajectory for a vehicle that includes 20 waypoints, where each waypoint in the planned trajectory represents a planned location of the vehicle at a respective time step in a series of time steps (i.e., one time step for each waypoint). At the first time step, t1, all of the locations identified by the waypoint data 108 may be previously traveled locations at which the vehicle was actually driven at one or more time steps before t1. After the first time step (e.g., at time steps t2 through t20), the waypoint data 108 may identify each of the waypoints (i.e., planned locations) from t1 through the most recent time step that immediately precedes the current time step. For instance, at time step t9, the waypoint data 108 may identify each of the planned locations of the vehicle from t1 through t8.
[0070] Optionally, the waypoint data 108 at a given time step after the initial time step t1 can further include indications of one or more actual locations at which the vehicle was observed before t1. For example, a set of previously traveled locations indicated by the waypoint data 108 at time step t1 can be maintained at each time step following t1. Thus, at each time step after t1, the planned location that was predicted at the immediately preceding time step is added to the waypoint data 108, but none of the traveled locations from time steps before t1 are removed from the waypoint data 108. Alternatively, all or some of the previously traveled locations from before t1 can be removed from the waypoint data 108 for time steps after the initial time step t1 in a series of time steps for a planned trajectory. For example, the systems may adopt a sliding window approach in which traveled locations of the vehicle from time steps before t1 are gradually phased out of the waypoint data 108 by removing the oldest remaining traveled location each time a new planned location is added to the waypoint data 108. In other implementations, the traveled locations indicated by the waypoint data 108 may all be removed from the waypoint data 108 immediately after t1 or at another specified time step.
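The sliding-window option described above can be illustrated with a fixed-length buffer: appending a newly planned waypoint automatically phases out the oldest remaining traveled location. A `deque` with a maximum length is one simple way to get this behavior.

```python
# Sliding-window waypoint buffer sketch. The coordinates are made up; the
# buffer length is an illustrative assumption.
from collections import deque

# At t1 the buffer holds only previously traveled locations of the vehicle.
buffer = deque([(0, 0), (0, 1), (0, 2), (0, 3)], maxlen=4)

buffer.append((0, 4))  # planned waypoint selected at t1; (0, 0) is phased out
buffer.append((1, 5))  # planned waypoint selected at t2; (0, 1) is phased out
print(list(buffer))    # [(0, 2), (0, 3), (0, 4), (1, 5)]
```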
[0071] In some implementations, the real-world time interval represented by each successive time step in a series of time steps for a planned trajectory is fixed. For example, if the real-world time interval between successive time steps is 100 milliseconds, then a planned trajectory defined by 20 waypoints would represent planned locations of a vehicle over the next two seconds at each 100 millisecond interval. In some implementations, the real-world time interval between successive time steps in a planned trajectory is in the range 100 milliseconds to 500 milliseconds, and can preferably be in the range 100 milliseconds to 200 milliseconds.
[0072] The neural network system 102 can further process a neural network input that includes environmental data 110 to generate scores for the possible waypoint locations at each time step. In general, environmental data 110 is data that characterizes a current state of an environment of the vehicle. The environmental data 110 can characterize a wide range of environmental factors, and in some implementations, the data 110 can include multiple data channels that each represent a different environmental factor.
[0073] In some implementations, the environmental data 110 includes sensor data that characterizes information about the vehicle's current environment captured by one or more sensing subsystems on the vehicle. An autonomous or semi-autonomous vehicle may include multiple sensing subsystems for sensing information about the environment in proximity of the vehicle. For example, a first sensing subsystem may be a light detection and ranging (LIDAR) system that emits and detects reflections of laser light, and a second sensing subsystem may be a radio detection and ranging (RADAR) system that emits and detects reflections of radio waves. Additional sensing subsystems may also be provided on a vehicle, such as a camera system that detects reflections of visible light.
[0074] The sensing subsystems can generate sensor data that indicates, for example, a distance of reflected radiation (e.g., laser light, radio waves, or visible light), a direction of the reflected radiation, an intensity of the reflected radiation, or a combination of these. A given sensing subsystem can transmit one or more pulses of electromagnetic radiation in a particular direction and can measure the intensity of any reflections as well as the elapsed time between emitting the radiation and receiving the reflective signal. A distance between an object in the environment and the current position of the vehicle can be determined based on the elapsed time between emitting the radiation and receiving the reflective signal. The sensing subsystems can each continually sweep a particular space in angle, azimuth, or both. Sweeping in azimuth, for example, can allow a sensing subsystem to detect multiple objects along a same line of sight. In some implementations, the environmental data 110 includes a sensor input image, where the input image is a 2D projection of sensor data for a partial sweep, a single sweep, or multiple sweeps of one or more sensing subsystems of the vehicle. A LIDAR input image, for example, may be formed by generating a 2D projection of a LIDAR sweep around a vehicle. In some implementations, the sensor input image may characterize sweeps from multiple sensing subsystems. Figure 3 shows an example LIDAR input image 310.
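The range computation implied above follows directly from the round trip of the emitted radiation: the distance is half the elapsed time multiplied by the propagation speed. A worked example:

```python
# Worked example of the range computation described above: the emitted pulse
# travels to the object and back, so the one-way distance is half the
# round-trip time multiplied by the propagation speed of the radiation.
C = 299_792_458.0  # speed of light in meters per second

def range_from_elapsed_time(elapsed_seconds):
    return C * elapsed_seconds / 2.0

# A reflection received 1 microsecond after emission implies an object
# roughly 150 meters from the sensor.
print(range_from_elapsed_time(1e-6))  # ~149.9 m
```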
[0075] The environmental data 110 may further include one or more non-sensor channels, i.e., channels that characterize information about the environment in a vicinity of the vehicle obtained from sources other than the vehicle's sensing systems. In some implementations, the environmental data 110 includes a roadgraph channel for data that describes roads in the vicinity of the vehicle. For example, Figure 3 shows an image 302 representing data from an example roadgraph channel. Roadgraph data can indicate the path of roads in the vicinity of the vehicle, lane boundaries, and other features of roads, parking lots, or other driving surfaces in the vicinity of the vehicle.
[0076] In some implementations, the environmental data 110 includes a speed limit channel. The speed limit channel indicates the speed limits that apply at each location of the roads in the vicinity of the vehicle (e.g., roads identified by the roadgraph data). The speed limits indicated by the speed limit channel may be a legal speed limit set by a governing authority in the location of the vehicle. Alternatively, the speed limits indicated by the speed limit channel may be modified from the legal speed limit according to certain criteria. For example, the speed limit may be capped based on a passenger preference, road conditions (e.g., slick or icy roads), or based on the road feature at issue (e.g., ramps, intersections, and sharp turns may be assigned speed limits that are lower than a posted limit). Figure 3 shows an example image 304 representing data from an example speed limit channel.
[0077] In some implementations, the environmental data 110 includes a traffic lights channel. The traffic lights channel indicates the locations of traffic lights in the vicinity of the vehicle (e.g., traffic lights for roads identified by the roadgraph data). The traffic lights channel can further include an indication of the current state of each traffic light, such as green, yellow, or red. Figure 3 shows an example image 308 representing data from an example traffic lights channel. The status of each light can be encoded by varying the intensity and/or color of pixels in the input image corresponding to the roadway (e.g., lane) controlled by that light.
[0078] In some implementations, the environmental data 110 includes one or more perception objects channels. Perception objects are objects that have been detected in the vicinity of the vehicle. For example, a perception objects neural network system (not shown in Figure 1) may process data from one or more sensor channels to identify objects located nearby the vehicle. Perception objects may include other vehicles, pedestrians, vegetation, signposts, buildings, or combinations of these and other types of objects. In some implementations, the perception objects channel includes data that indicates the locations, sizes, orientations, and/or types of perception objects in the vicinity of the vehicle. Information from the perception objects channel can indicate locations to avoid in a planned trajectory of a vehicle for collision risk mitigation. Figure 3 shows an image 312 representing data from an example perception objects channel. In some cases, the environmental data 110 can include information about the predicted movements (e.g., trajectories) of identified perception objects in the vicinity of an autonomous or semi-autonomous vehicle. Figures 6 and 7, for example, illustrate a perception object trajectory prediction neural network 604 (also referred to as a perception object occupancy prediction neural network 604) that operates in a similar manner to trajectory planning neural network 104, but that predicts locations of perception objects nearby the autonomous vehicle rather than determining waypoints for a trajectory of the vehicle as performed by the trajectory planning neural network 104.
[0079] The neural network system 102 is further configured to process a neural network input that includes navigation data 112. The navigation data 112 represents a planned navigation route for a vehicle. The planned navigation route may be generated independent of actual conditions of the vehicle or its environment. Accordingly, the planned navigation route is generally insufficient to indicate how the vehicle should be maneuvered along the route. The precise locations and movements for a vehicle traveling a route are instead indicated by the vehicle's planned trajectory. The planned trajectory accounts for the navigation route but also considers information about the vehicle's current environment such as objects located near the vehicle, precise lane boundaries, and traffic controls. For example, the navigation data 112 may indicate that the planned route for a vehicle runs through an intersection, but may not indicate current conditions of the intersection such as the presence of other vehicles, lane closures, or the status of a traffic light in the intersection. The planned trajectory, in contrast, may be generated so that the vehicle crosses the intersection in a safe, legal, and comfortable manner. For instance, the planned trajectory may indicate precise locations for the vehicle to follow in order to maneuver around an object, to change lanes, and to comply with applicable traffic laws, while maintaining the overall course of the planned navigation route.
[0080] In some implementations, the navigation data 112 specifies a portion of a navigation route that is within the current context of the vehicle. The current context of the vehicle represents an area surrounding the vehicle (in the vicinity of the vehicle) that is represented in the roadgraph and other data channels processed by the neural network system 102. For example, the current context of the vehicle may be an area between 50-200 feet surrounding the vehicle. The navigation data 112 may indicate the route the vehicle should follow within this area surrounding the vehicle. If the vehicle is near its destination, the current context may include the final destination of the planned navigation route. The planned navigation route may traverse lane centers of navigable lanes within the current context of the vehicle. In contrast, the planned trajectory may call for deviations from the lane centers, e.g., to avoid an object or to safely perform a maneuver.
[0081] The neural network system 102 processes at each time step the neural network inputs that include waypoint data 108, environmental data 110, and navigation data 112 to generate the set of scores for possible waypoint locations in a planned trajectory of the vehicle. In some implementations, the neural network system 102 may include an encoder neural network 106 as a first portion of the neural network system 102 and a trajectory planning neural network 104 as a second portion of the neural network system 102. The encoder neural network 106 receives neural network inputs for the environmental data 110 and navigation data 112, and processes the inputs to generate an encoded representation 107 of the environmental data 110 and the navigation data 112. In some implementations, the encoded representation 107 is a vector of values from a last hidden layer of the encoder neural network 106. The encoder neural network 106 can be a feed-forward neural network.
[0082] The trajectory planning neural network 104 is configured to receive as inputs waypoint data 108, and the encoded representation 107 of the environmental data 110 and the navigation data 112. The trajectory planning neural network 104 is further configured to generate, using the waypoint data 108 and the encoded representation 107 (or, in other implementations, using the environmental data 110 and navigation data 112 directly rather than the encoded representation 107), and in accordance with trained values of parameters of the neural network 104, a set of scores that each indicate a likelihood of a particular location being a "best" (e.g., most optimal) location for a waypoint along a planned trajectory of the vehicle at a current time step. The trajectory planning neural network 104 can be a feedforward neural network.
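The division of labor between the encoder neural network 106 and the trajectory planning neural network 104 might be arranged as in the following sketch. Layer sizes, channel counts, and the 128x128 grid are illustrative assumptions; only the overall data flow (environment and navigation channels to an encoded representation, then waypoint input plus encoding to a score map) mirrors the description.

```python
# A hedged sketch of the two-stage arrangement: a feed-forward encoder
# compresses the environment/navigation channels into a vector, and the
# planner combines that vector with the waypoint input to produce one score
# per candidate location. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_channels=8, embed_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.fc = nn.Linear(64 * 32 * 32, embed_dim)  # assumes 128x128 inputs

    def forward(self, env_and_route):                 # (B, 8, 128, 128)
        h = self.conv(env_and_route).flatten(1)
        return self.fc(h)                             # encoded representation

class TrajectoryPlanner(nn.Module):
    def __init__(self, embed_dim=256, grid=128 * 128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(embed_dim + grid, 512), nn.ReLU(),
            nn.Linear(512, grid))                     # one score per location

    def forward(self, waypoint_image, encoding):      # (B, 1, 128, 128), (B, 256)
        x = torch.cat([waypoint_image.flatten(1), encoding], dim=1)
        return self.fc(x).view(-1, 128, 128)          # score map over locations
```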
[0083] In some implementations, the trajectory planning neural network 104 can further process an input that identifies a count or index for the prediction (e.g., an indication of the current time step for which the prediction is to be made or of the most recent prediction). For example, an index n can be provided as input for the network 104 to predict the first waypoint, an index n+1 can be provided as input for the network 104 to predict the second waypoint, an index n+2 can be provided as input for the network 104 to predict the third waypoint, and so on. The inclusion of a count or index for the sequence of waypoints as an input to the network 104 allows the network 104 to account for temporal factors in generating prediction scores for the next waypoint. For example, if the vehicle has stopped at a stop sign, the location of the vehicle may remain unchanged for several time steps. The count or index provides the network 104 with an indication of how long the vehicle has been stopped, thereby allowing the network 104 to account for this length in determining whether to advance the vehicle at a next time step or to leave the vehicle in place at the next time step. The network 104 may not advance the vehicle at an earlier time step, but may advance the vehicle at a later time step indicated by the count or index after establishing that the vehicle has stopped.
[0084] In some implementations, the count or index can also be processed by the network 104 or an auxiliary network 1002 to facilitate refinement of a coarse-resolution waypoint to a fine-resolution waypoint. For example, if the predicted time-step exiting the stop is not far enough to switch to the next coarse-resolution pixel/location, the vehicle may not move. But if the movement is sufficient to switch to the next fine-resolution pixel/location, the vehicle may move appropriately. Using the index allows the network to selectively predict farther into the future.
[0085] Although a planned trajectory for a vehicle typically includes a collection of waypoints representing planned locations for the vehicle at each of a series of time steps, the neural network system 102 may not itself include memory to store information about which waypoints have been selected or traveled at preceding time steps. In order to condition the neural network system 102 at a given time step on the results of one or more preceding time steps, the environment 100 therefore includes a trajectory management system 114 and an external memory 120. These components 114 and 120 are configured to store indications of waypoints from preceding time steps and to update the waypoint data 108 that will be processed by the neural network system 102 at each time step based on the results of previous time steps.
[0086] The trajectory management system 114 includes a waypoint selector 116 and a memory interface subsystem 118. At each time step, the waypoint selector 116 receives the set of scores for possible waypoint locations generated by the trajectory planning neural network 104 for the time step. The waypoint selector 116 then applies one or more criteria to select a particular location from the set of possible locations as the waypoint of the planned trajectory for the current time step. For example, if the trajectory planning neural network 104 is trained to generate higher scores for locations having higher likelihoods of being an optimal waypoint for a trajectory, then the waypoint selector 116 may select the location corresponding to the highest score in the set as the waypoint for the current time step. The memory interface subsystem 118 then sends an instruction to the external memory 120 to cause an indication of the selected waypoint location to be written to the external memory 120.
[0087] The external memory 120 includes one or more computer-readable storage devices that store data indicating locations of a vehicle. In particular, the memory 120 can include a first database 122 that stores data indicating previously traveled locations of the vehicle and a second database 124 that stores data indicating waypoints of a planned trajectory of the vehicle. The previously traveled locations of the vehicle can be determined based on global positioning system (GPS) coordinates or other techniques for measuring the actual location of the vehicle while it is driven. The traveled locations stored in the first database 122 can include locations for each of at least n time steps preceding an initial time step at which the neural network system 102 and trajectory management system 114 begin generating a planned trajectory of the vehicle, where n is the number of previously traveled locations represented in the waypoint data 108 at the initial time step. The second database 124 can store data indicating each waypoint selected for each time step of the planned trajectory up to the current time step being processed by the neural network system 102 and the trajectory management system 114. For example, the second database 124 can create a table for a planned trajectory that stores the location of each selected waypoint in the trajectory and sequence information that sequences the waypoints relative to each other (e.g., values for each waypoint that represent the corresponding time step for the waypoint). At each time step, the waypoint selector 116 selects a particular location as the waypoint for that time step, and the memory interface subsystem 118 then writes an indication of the selected location as the waypoint for that time step in the second database 124. Moreover, at each time step, the trajectory management system 114 generates waypoint data 108 for the neural network system 102 to process at the time step. The system 114 generates waypoint data 108 by using the memory interface subsystem 118 to access the external memory 120 and reading data indicating one or more previous locations of the vehicle for one or more time steps preceding the current time step. The previous locations can include traveled locations from the first database 122, planned waypoints from the second database 124, or a combination of traveled locations and planned waypoints over a recent period of time.
[0088] The navigation planning system 126 is configured to determine a planned navigation route for a vehicle. The planned navigation route can indicate one or more goal locations of the vehicle's travel and, optionally, a targeted path to arrive at the goal locations. Figure 3 shows an image 306 representing an example planned navigation route for a vehicle. The neural network system 102 can process navigation data 112 that characterizes a planned navigation route generated by the planning system 126. In some implementations, the planning system 126 also controls an operational mode of a vehicle. For example, during a first mode for normal autonomous driving, the planning system 126 activates the trajectory management system 114 to use planned trajectories determined using the neural network system 102 as the basis for maneuvering the vehicle. However, during some rare circumstances (e.g., impending collisions), the planning system 126 may override the trajectory management system 114 and activate a second operational mode of the vehicle that adopts a different strategy for maneuvering the vehicle. The planning system 126 may also allow a human operator to take manual control of the vehicle.
[0089] In order for a vehicle to maneuver along a planned trajectory, a sequence of control actions can be performed to cause the vehicle to follow the trajectory. The control actions can include steering, braking, and accelerating, for example. In some implementations, a vehicle can include a vehicle control system 128 that facilitates an ability of the vehicle to maneuver along a planned trajectory. The vehicle control system 128 can receive data indicating a planned trajectory (including data indicating a set of waypoints for a series of time steps) from trajectory management system 114 directly, or indirectly via the navigation planning system 126. The vehicle control system 128 processes the planned trajectory and determines a set of control actions (e.g., steering, braking, accelerations) to perform over time to cause the vehicle to maneuver along the planned trajectory. As the vehicle maneuvers along a planned trajectory, it may arrive at each of the waypoints identified in the planned trajectory at an appropriate time based on the real-world time interval between successive time steps. In some implementations, the vehicle control system 128 determines control inputs to one or more control modules of the vehicle such as a steering module 130, braking module 132, and acceleration module 134. Based on such inputs, the control modules may carry out the set of control actions determined by the vehicle control system 128 so that the vehicle is maneuvered along the planned trajectory. For example, the waypoints of a planned trajectory may be provided as input to a finite-time Linear Quadratic Regulator (LQR) based optimizer to translate the planned trajectory into actual vehicle control inputs to cause the vehicle to maneuver along the planned trajectory. In some implementations, the optimizer may adhere to one or more additional constraints beyond the waypoints of the planned trajectory in determining control inputs, such as maximum allowable vehicle speed, vehicle turning radius, and/or maximum curvature to generate control inputs for maneuvers that can be legally and feasibly performed by the vehicle.
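The specification points to a finite-time LQR-based optimizer for this translation; the sketch below is a deliberately cruder stand-in that only exploits the fixed time interval between waypoints: first differences of the waypoint sequence give target velocities, and second differences give target accelerations, which a downstream module could map to steering, braking, and acceleration commands.

```python
# Crude waypoint-to-kinematics sketch (NOT an LQR): with a fixed interval dt
# between waypoints, finite differences yield target velocities, headings,
# and accelerations for a controller stage.
import numpy as np

def waypoints_to_kinematics(waypoints, dt=0.1):
    """waypoints: (N, 2) array of planned (x, y) positions; dt in seconds."""
    wp = np.asarray(waypoints, dtype=float)
    velocities = np.diff(wp, axis=0) / dt                       # (N-1, 2), m/s
    accelerations = np.diff(velocities, axis=0) / dt            # (N-2, 2), m/s^2
    headings = np.arctan2(velocities[:, 1], velocities[:, 0])   # steering targets
    return velocities, accelerations, headings
```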
[0090] Turning to Figures 2A-2D, an example process is depicted for selecting waypoints in a planned trajectory for a vehicle.
[0091] Figure 2A illustrates selection of a first waypoint of the planned trajectory at an initial time step t1. At this time step, the trajectory planning neural network 104 processes a first neural network input 202a and a second neural network input 204a. The first neural network input 202 represents waypoint data, e.g., waypoint data 108, and indicates a set of previous locations of a vehicle. In some implementations, because no planned locations for the trajectory have been predicted yet by the initial time step t1, the set of previous locations indicated by the waypoint data are a set of previous locations actually traveled by the vehicle before t1. The previously traveled locations are represented in Figure 2A as circular dots in the image of the first neural network input 202. Because the time interval between successive time steps is fixed, a greater distance between dots in the image indicates that the vehicle will require a greater average speed to travel between the locations corresponding to the dots than if the dots were more closely spaced. The second neural network input represents environmental data, e.g., environmental data 110, and navigation data, e.g., navigation data 112. In some implementations, the second neural network input is an encoded representation of environment and navigation data, e.g., encoded representation 107, which results from an encoder neural network, e.g., encoder neural network 106, processing multiple channels of environment and navigation data. The trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step. The selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle. Image 206a shows the first waypoint of the planned trajectory for time step t1 as the triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
[0092] Figure 2B illustrates selection of a second waypoint of the planned trajectory at a second time step t2. At this time step, the first neural network input 202b is updated to represent waypoint data that includes the selected waypoint for the planned trajectory from the preceding time step t1. In the example shown in Figure 2B, the first neural network input 202b maintains indications of the set of previously traveled locations of the vehicle from before t1, but in other implementations, one or more (or all) of the previously traveled locations can be removed from the waypoint data at time step t2 and subsequent time steps. The second neural network input 204b represents environment and navigation data. In some implementations, the second neural network input is updated at each time step to reflect any recent changes to the environment or navigation data since the previous time step. In other implementations, the same second neural network input is re-used at each time step, e.g., for efficiency and/or because more updated data may not be available. For example, the planned trajectory for a vehicle may be generated in such a short span of time that the environment and navigation data used at each of the time steps is still deemed current. The trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t2. The selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle. Image 206b shows the second waypoint of the planned trajectory for time step t2 as the second triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
[0093] The process continues similarly to add additional waypoints to the planned trajectory for one or more additional time steps. Figure 2C illustrates selection of a third waypoint of the planned trajectory at a third time step t3. At this time step, the first neural network input 202c is updated to represent waypoint data that includes the selected waypoints for the planned trajectory from the preceding time steps t1 and t2. The trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t3. The selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle. Image 206c shows the third waypoint of the planned trajectory for time step t3 as the third triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.

[0094] Figure 2D illustrates selection of a fourth waypoint of the planned trajectory at a fourth time step t4. At this time step, the first neural network input 202d is updated to represent waypoint data that includes the selected waypoints for the planned trajectory from the preceding time steps t1 through t3. The trajectory planning neural network 104 generates a set of scores for possible waypoint locations, and the waypoint selector 116 selects one of the locations as the waypoint for the current time step t4. The selected waypoint is then stored in memory and added to a set of waypoints that define the planned trajectory of the vehicle. Image 206d shows the fourth waypoint of the planned trajectory for time step t4 as the fourth triangular dot that follows the trail of circular dots representing previously traveled locations of the vehicle.
[0095] Figure 4 is a flowchart of an example process 400 of using a neural network system to determine waypoints for a planned trajectory for a vehicle.
[0096] At stage 402, a trajectory management system, e.g., trajectory management system 114, obtains waypoint data, environmental data, and navigation data for a current time step in a series of time steps of a planned trajectory for a vehicle. The waypoint data, e.g., waypoint data 108, identifies a set of previous locations of the vehicle, which may include previously traveled locations of the vehicle, planned locations of the vehicle (i.e., waypoints of the planned trajectory from preceding time steps), or both traveled locations and planned locations of the vehicle. The environmental data, e.g., environmental data 110, represents a current state of the environment of the vehicle, such as objects in proximity to the vehicle, a roadgraph, and applicable traffic rules. The navigation data represents a planned navigation route for the vehicle.
[0097] At stage 404, the trajectory management system generates a first neural network input from the waypoint data. The first neural network input characterizes the waypoint data in a format that is suitable for a neural network system, e.g., neural network system 102, to process.
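As one concrete possibility, the waypoint data could be rasterized into a top-down grid image whose marked cells correspond to previous vehicle locations, matching the dot images described for Figures 2A-2D. This Python sketch assumes a vehicle-centric frame and an illustrative grid resolution; the names and parameters are assumptions, not taken from the specification.

    import numpy as np

    def render_waypoint_image(locations, grid_size=200, cell_m=0.5, origin=(100, 100)):
        # Rasterize vehicle locations (meters, vehicle-centric frame) onto a
        # single-channel grid; each marked cell corresponds to one location.
        image = np.zeros((grid_size, grid_size), dtype=np.float32)
        for x_m, y_m in locations:
            row = origin[0] + int(round(y_m / cell_m))
            col = origin[1] + int(round(x_m / cell_m))
            if 0 <= row < grid_size and 0 <= col < grid_size:
                image[row, col] = 1.0
        return image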
[0098] At stage 406, one or more encoder neural networks generate a second neural network input from the environmental data and the navigation data. First, the trajectory management system may generate inputs to the encoder neural networks that characterize the environmental data and the navigation data in a format that is suitable for processing by the encoder neural networks. Second, the encoder neural networks may process the formatted inputs to generate an encoded representation of the environmental data and the navigation data, e.g., encoded representation 107.

[0099] At stage 408, the neural network system, e.g., trajectory planning neural network 104, processes the first and second neural network inputs to generate scores for a set of locations to which the vehicle may travel. In some implementations, each score represents a likelihood that a particular location is an optimal location as a waypoint for the vehicle's planned trajectory at the current time step, where each score corresponds to a different location from the set of locations. In some implementations, the total number of scored locations is in the range 1-10, 1-100, or 1-1,000, and the physical distance in the real world between adjacent locations is in the range 1-12 inches, 1-48 inches, or otherwise, depending on the resolution of the input data.
[0100] At stage 410, a trajectory management system, e.g., waypoint selector 116 of trajectory management system 114, selects a waypoint for the planned trajectory at the current time step. The waypoint can be selected based on the set of scores generated by the trajectory planning neural network. In some implementations, the waypoint selector selects a location as the waypoint for the current time step as a result of the score for the selected location indicating that it is the optimal waypoint location among the set of possible locations (e.g., the location with the highest score).
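A minimal sketch of this selection step, reusing the grid conventions assumed in the rasterization sketch above (the conversion back to vehicle-frame coordinates is likewise an illustrative assumption):

    import numpy as np

    def select_waypoint(scores, grid_size=200, cell_m=0.5, origin=(100, 100)):
        # scores: flat array with one entry per candidate grid cell.
        best = int(np.argmax(scores))  # highest score = most favorable location
        row, col = divmod(best, grid_size)
        # Convert the chosen cell back to vehicle-frame coordinates in meters.
        x_m = (col - origin[1]) * cell_m
        y_m = (row - origin[0]) * cell_m
        return (x_m, y_m), float(scores[best])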
[0101] At stage 412, a memory interface of the trajectory management system, e.g., memory interface subsystem 118, writes to memory an indication of the selected location as the waypoint in the planned trajectory of the vehicle for the current time step. In some implementations, the indication of the selected location is recorded in a database of planned waypoints, e.g., database 124 in external memory 120.
[0102] The process 400 can be repeated for each of a series of time steps until a terminating condition is met. In some implementations, the terminating condition is that a pre-defined number of waypoints have been selected for a planned trajectory. For example, the trajectory management system 114 may be configured to generate planned trajectories of fixed size in terms of the number of waypoints in the planned trajectories. After the pre-defined number of waypoints have been determined, the trajectory is deemed complete and the systems cease repeating process 400 to determine additional waypoints. In some implementations, the number of waypoints for a planned trajectory is in the range 5 to 15. In some implementations, the number of waypoints for a planned trajectory may be adjusted based on current conditions of the vehicle, vehicle speed, the complexity of the environment in the vicinity of the vehicle, or combinations of these and other factors. In each iteration of the process 400, the waypoint data is updated to add the most recently selected waypoint, so that each previously selected waypoint since an initial time step is represented in the waypoint data for a current time step.
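Taken together, stages 402-412 amount to an autoregressive rollout. The sketch below composes the illustrative helpers from the earlier sketches; model stands in for the trajectory planning neural network, and its call signature is an assumption.

    def plan_trajectory(model, waypoint_data, encoded_env, num_waypoints=10):
        # Autoregressive rollout: each selected waypoint is appended to the
        # waypoint data before candidates are scored for the next time step.
        planned = []
        for _ in range(num_waypoints):
            first_input = render_waypoint_image(waypoint_data)
            scores = model(first_input, encoded_env)
            waypoint, _ = select_waypoint(scores)
            planned.append(waypoint)
            waypoint_data = list(waypoint_data) + [waypoint]
        return planned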
[0103] In some implementations, the trajectory management system 114 and neural network system 102 may constantly update the planned trajectory for a vehicle. That is, the systems can begin generating a new trajectory using actual vehicle location information before the most recently generated planned trajectory has been followed to its conclusion. For example, the systems may generate trajectories that plan the locations for a vehicle for four seconds out from the current time at which a given trajectory is created. The vehicle may follow a first of the four-second trajectories for just a portion of the total length of the trajectory (e.g., 1.5-2 seconds) before it begins following a most recently generated trajectory. Because the waypoints selected at earlier time steps in each trajectory have shorter-range dependencies, the earlier waypoints may be more reliable estimations than waypoints at later time steps in a trajectory. Therefore, by constantly handing control over to a most recently generated trajectory before a previous trajectory has been completely traveled, the vehicle may more frequently be maneuvered according to earlier waypoints in the planned trajectories, while still having additional waypoints available as a buffer if a next trajectory is not yet available.
[0104] Figure 5 is a flowchart of an example process 500 for updating planned trajectories of a vehicle and transitioning control of the vehicle from an earlier generated trajectory to a more recently generated trajectory. At stage 502, the system generates a first set of waypoints that define a first planned trajectory. A vehicle control system on the vehicle determines a set of control actions to maneuver the vehicle according to the first planned trajectory, and the vehicle executes the set of control actions to begin following the first planned trajectory at stage 504. While maneuvering according to the first planned trajectory, at stage 506 the vehicle computing systems generate a second set of waypoints defining a second planned trajectory for the vehicle. The first time step in the second planned trajectory may correspond in real-world time to a particular time step partway through the first planned trajectory. At the particular time step in the first planned trajectory, the vehicle may terminate the first planned trajectory before it has traveled to all of its waypoints (stage 508) and begin to maneuver according to the second planned trajectory (stage 510). Process 500 may be continuously repeated to generate updated trajectories for the vehicle and to transition to most recently updated trajectories as they become available.
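A receding-horizon loop of this kind can be sketched in a few lines; the handoff point and function names are illustrative assumptions, not the control flow mandated by process 500.

    def replanning_loop(plan_fn, follow_fn, handoff_steps=6):
        # Follow only the earlier, more reliable waypoints of each trajectory
        # (e.g., ~1.5-2 s of a 4 s plan), then hand over to a fresh plan.
        trajectory = plan_fn()
        while trajectory:
            follow_fn(trajectory[:handoff_steps])
            trajectory = plan_fn()  # replan from the vehicle's new state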
[0105] Figure 6 depicts a block diagram of an example computing system 600 that predicts movements (e.g., trajectories) of perception objects within a vicinity of an autonomous or semi-autonomous vehicle. The system 600, and each component thereof, can be implemented on one or more computers in one or more locations. In some instances, the system 600 is fully implemented on the autonomous or semi-autonomous vehicle. By implementing the system 600 locally on the vehicle, the system 600 can minimize latency in generating predictions and providing them to a trajectory planning and control system of the vehicle.
[0106] In general, the system 600 is configured to process information about perception objects that have been detected near the autonomous or semi-autonomous vehicle, and to predict occupancies of the objects over a set of future time steps. The "occupancy" of an object at a given time step refers to the specific area within the environment of the autonomous or semi-autonomous vehicle that is predicted to be occupied by the object at that time step. In some examples, an object can be represented by a bounding box (e.g., a rectangular, square, circular, triangular, or oval-shaped box) that approximates the size and shape of the object. Other vehicles that share the roadway with the autonomous or semi-autonomous vehicle, for example, may be represented by rectangular bounding boxes, whereas trees and shrubs may be represented by circular bounding boxes. The occupancy of the object can be defined by the area of the environment encompassed by the object's bounding box. In some implementations, the occupancy of an object can be represented by the location and heading (e.g., rotation/orientation) of the corresponding bounding box for the object.
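One plausible in-memory representation of such a bounding-box occupancy, offered only as an illustrative sketch (the field names and the rectangular-box assumption are not drawn from the specification):

    import math
    from dataclasses import dataclass

    @dataclass
    class Occupancy:
        # Rectangular bounding-box occupancy: center location plus heading.
        x: float        # box center, meters
        y: float
        heading: float  # radians
        length: float
        width: float

        def corners(self):
            # The area of the environment covered by the box at a time step.
            c, s = math.cos(self.heading), math.sin(self.heading)
            half = [(self.length / 2, self.width / 2),
                    (self.length / 2, -self.width / 2),
                    (-self.length / 2, -self.width / 2),
                    (-self.length / 2, self.width / 2)]
            return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                    for dx, dy in half]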
[0107] The system 600 includes a neural network system 602, a trajectory management system 614, and an external memory 620. The operation of the system 600 is similar to that of system 100 (FIG. 1), but rather than generating waypoints for the autonomous or semi-autonomous vehicle itself, the system 600 predicts occupancies for perception objects other than the subject autonomous or semi-autonomous vehicle. The predicted occupancies of the perception objects can then be provided as input to the system 100/trajectory planning neural network 104 (e.g., as a channel of environmental data 110) to facilitate determination of optimal waypoints in a trajectory for the vehicle. The predicted occupancy of a perception object at a given time step can be conditioned on all previous occupancies of the object within a recent window of time steps (e.g., including observed occupancies detected based on sensor data, predicted occupancies, or both), and the current state of the environment.
[0108] In some implementations, the system 600 predicts occupancies for all or multiple perception objects in the vicinity of the vehicle concurrently. In other implementations, the system 600 predicts occupancies for perception objects one at a time. By way of example, operation of the system 600 is discussed with respect to a single object in the following paragraphs.
[0109] The neural network system 602 processes occupancy data 608 for an object and environmental data 610 to generate a set of occupancy scores 628. Each occupancy score 628 indicates a likelihood (e.g., a probability) that, at the time step, the object will have the occupancy to which that score corresponds. In some implementations, the occupancy scores 628 include a set of location scores indicating a location of the bounding box for the object and a set of heading scores indicating a heading (e.g., orientation/rotation) of the bounding box. The environmental data 610 can generally include any of the channels of environmental data 110 previously described with respect to Figure 1, such as a roadgraph and sensor data, and is first processed by an encoder neural network 606 to generate an encoded representation 607 of the environmental data 610. The occupancy data 608 indicates previous occupancies of the object. At the initial time step for the first occupancy prediction, the occupancy data 608 indicates observed occupancies of the object over a series of previous time steps. At subsequent time steps, the occupancy data 608 indicates each previously predicted occupancy of the object. The occupancy data 608 at the subsequent time steps can, in some implementations, further include indications of the observed occupancies of the object at the time steps that preceded the predictions. The perception object trajectory prediction neural network 604 processes the occupancy data 608 and the encoded representation 607 of the environmental data 610 to generate the occupancy scores 628. In some implementations, the perception object trajectory prediction neural network 604 is a feedforward neural network that is trained using backpropagation and gradient descent, similar to the process 900 described in Figure 9.
[0110] The trajectory management system 614 includes an occupancy selector 616 and a memory interface subsystem 618. The occupancy selector 616 processes the occupancy scores 628 and determines a predicted occupancy for the object based on the scores. In some implementations, the occupancy selector 616 selects the predicted occupancy for the object at the time step as the occupancy corresponding to the most favorable score in the set of occupancy scores 628. The memory interface subsystem 618 records (e.g., stores) the predicted occupancy in the occupancy history database 622 in external memory 620. The memory interface subsystem 618 also accesses data indicating previous occupancies from the occupancy history database 622 and provides them as input occupancy data 608 to the neural network system 602 at each time step. The external memory 620 maintains an index 624 of each perception object being tracked by the system 600, and the occupancy history database 622 can include occupancy histories for each object.
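Assuming the factored location/heading scoring described above, the selection performed by a component like occupancy selector 616 could look like the following sketch (all names are illustrative):

    import numpy as np

    def select_occupancy(location_scores, heading_scores, heading_bins):
        # Pick the most favorable cell and the most favorable discretized
        # heading independently, then combine them into one prediction.
        cell = int(np.argmax(location_scores))
        heading = heading_bins[int(np.argmax(heading_scores))]
        return cell, heading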
[0111] Figures 7A and 7B illustrate an example of perception object trajectory prediction using system 600 (for simplicity, only neural network 604 and occupancy selector 616 are shown). Figure 7A shows the system 600 generating occupancy predictions for a first time step t1. Input 702a represents occupancy data for a pair of perception objects represented by bounding boxes A and B. In this example, the perception objects are a pair of vehicles in the vicinity of the autonomous or semi-autonomous vehicle (not shown in input 702a). The input 702a indicates the occupancy (e.g., location and heading) of each object A and B. Although only one observed occupancy is shown for each object A and B, in other examples the initial input 702a can include observed occupancies for each object over multiple previous time steps. The perception object trajectory prediction neural network 604 processes the occupancy data input 702a and encoded environmental data input 704a to generate a set of occupancy scores for each object, and the occupancy selector 616 determines a predicted occupancy for each object based on the occupancy scores. The predicted occupancies of objects A and B for the next time step are shown in output image 706a.
[0112] Figure 7B shows the system 600 generating occupancy predictions for objects A and B at the next time step t2. For this prediction, the perception object trajectory prediction neural network 604 processes input occupancy data 702b that indicates the predicted occupancies of the objects A and B from time step t1, and optionally includes the observed occupancies from one or more previous time steps. Based on occupancy data 702b and encoded environmental data 704b, the perception object trajectory prediction neural network 604 generates a set of occupancy scores for each object, and the occupancy selector 616 determines a predicted occupancy for each object based on the occupancy scores. The predicted occupancies of objects A and B for time step t2 are shown in output image 706b. As indicated by the output image 706b, object A is predicted to begin a right turn, while object B is predicted to continue traveling toward the bottom of the image.
[0113] In some implementations, the system 600 is trained to predict occupancies of perception objects that have not yet been observed. For example, at time step t2, the system 600 predicts that another vehicle (represented by bounding box object C) is likely to be traveling downward in the same lane as object B. Because object C has not actually been observed, it may never materialize, even though the system predicts that another vehicle will follow vehicle B. By providing occupancy predictions for non-observed objects, the autonomous or semi-autonomous vehicle may account for the likelihood of these objects when planning its own trajectory. For example, the autonomous vehicle may avoid crossing object B's lane to make a left turn immediately after object B passes if another vehicle is expected to come down the lane before the autonomous vehicle can complete the turn.
[0114] Figure 8 is a conceptual diagram of an example environment 800 for training a trajectory planning neural network system 102. The environment 800 can include a neural network training system 802, a training data repository 804, and the trajectory planning neural network system 102. The neural network training system 802 can include one or more computers in one or more locations. The training system 802 is configured to train the trajectory planning neural network system 102 to score locations to which a vehicle may travel based on how likely each location is to optimize criteria (e.g., minimize loss) for vehicle operation in a planned trajectory for the vehicle. In some implementations, the training system 802 employs supervised machine-learning techniques to train the trajectory planning neural network system 102 using training data from the training data repository 804. The training data includes training inputs 808 and target outputs 810. Further details concerning how the trajectory planning neural network system 102 can be trained are described with respect to Figure 9.
[0115] Figure 9 depicts a flowchart of an example process 900 for training a trajectory planning neural network system. The neural network system can include an encoder neural network for processing environmental data and navigation data, e.g., encoder neural network 106, and a trajectory planning neural network for scoring locations of possible travel for a vehicle, e.g., trajectory planning neural network 104. In some implementations, the process 900 jointly trains the encoder neural network and the trajectory planning neural network. In other implementations, the process 900 trains the trajectory planning neural network alone, while the encoder neural network is trained in a separate process. The process 900 can be carried out by a neural network training system, e.g., system 802. Although the trajectory planning neural network system may be implemented on a vehicle, in some implementations, the system is trained offline by a training system that is not located on a vehicle. For example, the neural network system may be trained offline to determine trained values of the parameters of the system. The trained values can then be transmitted to the vehicle to implement a trained neural network system on the vehicle for trajectory planning.
[0116] In general, the trajectory planning neural network system can be trained by processing many samples of training data using the trajectory planning neural network system and, for each sample, adjusting the values of internal parameters of the network using an error between the predicted output generated by the network and a target output specified in the training sample.
[0117] At stage 902, the training system obtains a collection of training data sets (e.g., hundreds, thousands, or millions of training data sets). Each training data set includes a first training input 904, second and, optionally, third training inputs 906, and a target output 908.
[0118] The first training input 904 characterizes waypoint data that represents a set of previous locations of a vehicle. In some implementations, the first training input 904 is a representation of an image that depicts each of the previous locations of the vehicle in a grid of possible locations.
[0119] The second and, optionally, third training inputs 906 characterize one or more channels of environmental data for the vehicle, navigation data for the vehicle, or both. In some implementations, such as if the encoder neural network and trajectory planning neural network are trained jointly, a second training input characterizes one or more channels of environmental data and a third training input characterizes navigation data for the vehicle. In other implementations, such as if the trajectory planning neural network is trained separately, the second training input 906 is an encoded representation that combines both environmental data and navigation data and that was generated by the encoder neural network at an earlier time.
[0120] The training target outputs 908 of the training data sets represent the desired output of the trajectory planning neural network system that should result from processing the respective first and second training inputs 904, 906 of the training data sets. For example, if the first training input identifies a set of previous locations for a vehicle at each of time steps 1 through n-1, the training target output 908 can identify a particular location in a set of possible locations as the target planned location (waypoint) for the vehicle at the next time step, n. In some implementations, the training target output 908 is a vector of location scores that includes a first value (e.g., 1) for the target planned location and a second value (e.g., 0) for all other locations.
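Such a target vector is simply a one-hot encoding over the candidate locations, as in this small sketch (names are illustrative):

    import numpy as np

    def one_hot_target(target_cell, num_cells):
        # 1 at the target planned location, 0 at all other locations.
        target = np.zeros(num_cells, dtype=np.float32)
        target[target_cell] = 1.0
        return target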
[0121] At stage 910, the training system trains the trajectory planning neural network system on the training data sets. The training can include a series of sub-stages 912-918.
[0122] At sub-stage 912, the training system selects a first training data set from the set of training data sets. At sub-stage 914, the trajectory planning neural network system processes the first training input and the second training input (and, optionally, the third training input) from the training data set to generate a predicted set of output scores. The trajectory planning neural network system processes the training inputs in accordance with current values of internal parameters of the network. The predicted set of output scores can include a respective score for each location in a set of all possible waypoint locations.
[0123] At sub-stage 916, the training system determines an output error using the predicted set of output scores generated by the trajectory planning neural network system and the target output 908. At sub-stage 918, the training system then adjusts the current values of the parameters of the trajectory planning neural network system using the output error. In some implementations, the training system uses machine-learning techniques to train the neural network system, such as stochastic gradient descent with backpropagation. For example, the training system can backpropagate gradients of a loss function that is based on the determined output error to adjust current values of the parameters of the neural network system to optimize the loss function.
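A minimal sketch of one such update, written here with PyTorch as an assumed framework and cross-entropy as one plausible loss over the one-hot location targets (the specification does not name a specific framework or loss function):

    import torch
    import torch.nn.functional as F

    def training_step(model, optimizer, first_input, second_input, target_cell):
        # One pass through sub-stages 914-918: forward pass, output error,
        # then a gradient update of the network's internal parameters.
        optimizer.zero_grad()
        logits = model(first_input, second_input)    # scores for all candidate cells
        loss = F.cross_entropy(logits, target_cell)  # error vs. target output
        loss.backward()                              # backpropagate gradients
        optimizer.step()                             # adjust current parameter values
        return loss.item()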
[0124] After adjusting the current values of the parameters of the trajectory planning neural network system, the training system selects a next training data set from the collection of training data sets and returns to sub-stage 912 to continue the training process using the selected training data set. The training system completes training of the neural network once a training termination condition is satisfied or no further training data sets are available.
[0125] To improve the efficacy of training a trajectory planning neural network system, the training system may employ none, one, or more of the following techniques.
[0126] In some implementations, the training inputs 904 and/or 906, the training target outputs 908, or both, for one or more training data sets, are derived from records of human-operated vehicles driven in a real-world environment. For example, for a given set of training data, the first training input can characterize waypoint data representing a set of locations traversed by a human-operated vehicle at a series of time steps from 1 through n-1. The training target output 908 for the set of training data can then characterize data representing the location actually traversed by the human-operated vehicle at time step n, i.e., the time step that immediately follows the last time step represented by the waypoint data of the first training input. The location traversed at time step n thus represents a planned target location of the vehicle (i.e., a waypoint at time step n). By using training data that represents actual human driving behaviors, the trajectory planning neural network system can be trained to plan trajectories that mimic trajectories followed by human drivers. Therefore, the trained system may account for similar comfort and safety considerations that would be accounted for by a human driver. Additional training and inference features may also be implemented to minimize the possibility of executing risky driving behaviors that may be represented in some human driving data, and to ensure compliance with applicable legal restrictions such as speed limits.
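Deriving such training pairs from a driving log can be as simple as sliding a window over the recorded locations, as in this sketch (the window length and names are illustrative assumptions):

    def make_training_examples(logged_locations, history=15):
        # Locations at steps n-history..n-1 form the waypoint input;
        # the location actually traversed at step n is the target output.
        examples = []
        for n in range(history, len(logged_locations)):
            examples.append((logged_locations[n - history:n], logged_locations[n]))
        return examples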
[0127] Further, for training data sets that are derived from records of human driving activity, the second or third training inputs 906 can characterize navigation data that represents a planned navigation route that was prescribed for a human driver to follow, and which the human driver was following while driving through the locations indicated by the waypoint data of the first training input, the next location indicated by the training target output, or both. By having a driver follow a prescribed route, that route can then be used to create the navigation data represented in the second or third training inputs 906. In some implementations, the environment data characterized by the second or third training inputs 906 is actual environment data for the vehicle at times when the vehicle drove through one or more of the locations identified by the first training input or the training target output.
[0128] In some implementations, the training inputs 904 and/or 906, the training target output 908, or both, of one or more training data sets are derived from results of one or more virtual vehicles driven in a simulated environment. For example, for a given set of training data, the first training input can characterize waypoint data representing a set of locations traversed by a virtual vehicle at a series of time steps from 1 through n-1. The training target output 908 for the set of training data can then characterize data representing the location traversed by the virtual vehicle at time step n, i.e., the time step that immediately follows the last time step represented by the waypoint data of the first training input. The location traversed at time step n thus represents a planned target location of the virtual vehicle (i.e., a waypoint at time step n). Simulated data can sometimes be used to train the trajectory planning neural network system when there is a scarcity of human driving data for particular driving scenarios. For example, in order to train the neural network system to generate appropriate trajectories when faced with an impending collision scenario, simulated training data may be used at least in part (e.g., to supplement real-world driving data) if the quantity of real-world driving data available to generate training data is insufficient.
[0129] Further, for training data sets that are derived from results of an automated (virtual) agent driving a virtual vehicle in a simulated environment, the second or third training inputs 906 can characterize navigation data that represents a planned navigation route that was prescribed for the virtual agent to follow, and which was followed by the virtual vehicle while driving through the locations indicated by the waypoint data of the first training input, the next location indicated by the training target output, or both. In some
implementations, the environment data characterized by the second or third training inputs 906 represents the simulated environment of the virtual vehicle at times when the vehicle drove through one or more of the locations identified by the first training input 904 or the training target output 908. In other implementations, the environment data represents a real-world environment of a vehicle that corresponds to the simulated environment driven in by the virtual vehicle.
[0130] In some implementations, the training system can train the trajectory planning neural network on a collection of training data sets that include some sets derived from records of human-operated vehicles driven in a real-world environment and other sets derived from results of virtual vehicles driven in a simulated environment.
[0131] In some implementations, the training system specifically selects training data sets to use in training the trajectory planning neural network system that model particular driving scenarios. For example, to ensure that the neural network system is exposed to a sufficient number of training samples for various high-risk or complex driving scenarios, the training system can select to train the neural network system on at least a minimum quantity of training data sets that model one or more particular driving scenarios. In some implementations, the training system oversamples training data sets that model driving behavior for one or more specified driving scenarios at a greater frequency than the specified driving scenarios occur in the real world. The training data sets can be sampled (e.g., selected) from a pool of candidate training data sets that have been made available to the training system, e.g., training data sets stored in training data repository 804. Examples of driving scenarios that may be emphasized during a training session include lane merges, unprotected left turns, lane changes, impending collisions, and post-collision activity.
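A weighted sampler is one straightforward way to realize such oversampling; the scenario attribute and weight values below are illustrative assumptions:

    import random

    def sample_training_sets(candidates, scenario_weights, batch_size):
        # Oversample rare but important scenarios (e.g., unprotected left
        # turns) at a higher rate than their real-world frequency.
        weights = [scenario_weights.get(c.scenario, 1.0) for c in candidates]
        return random.choices(candidates, weights=weights, k=batch_size)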
[0132] In some implementations, the training system may select to train the trajectory planning neural network system on training data sets that model different combinations of sensor channels in the environmental data of the training sets. For example, one of the primary sensing components in many autonomous or semi-autonomous vehicles is LIDAR. The vehicle's driving systems may expect that LIDAR inputs will be available during most normal operation of the vehicle. Nonetheless, on occasion, the LIDAR system may malfunction and at least temporarily drop out of service. Therefore, in order to ensure that the trajectory planning neural network system reacts rationally if a sensor channel drops out of service, the trajectory planning neural network system can be trained on training data sets for which the environmental data characterized by the second or third inputs 906 does not include data representing a sensor channel that is normally expected to be present (e.g., LIDAR). The input channel that is not available can be referred to as an absent channel. In some implementations, the trajectory planning neural network system can be trained on a first group of training data sets for which the environmental data includes a particular sensor channel, and further on a second group of training data sets for which the environmental data lacks the particular sensor channel (i.e., to model drop-out of the particular sensor channel such that the particular sensor channel is absent).
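Modeling an absent channel during training could be done by blanking that channel in a fraction of samples, as in this sketch (the channel naming and drop probability are assumptions):

    import random
    import numpy as np

    def maybe_drop_channel(env_channels, channel="lidar", drop_prob=0.1):
        # Occasionally zero out a sensor channel so the network learns to
        # produce rational outputs when that channel is absent at run time.
        if channel in env_channels and random.random() < drop_prob:
            env_channels = dict(env_channels)  # avoid mutating the caller's data
            env_channels[channel] = np.zeros_like(env_channels[channel])
        return env_channels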
[0133] In some implementations, the training system selects to train the trajectory planning neural network system on a collection of training data sets that model driving behavior that is determined to meet one or more criteria, to the exclusion of other available training data sets that model driving behavior that does not meet such criteria. The criteria for filtering training data sets and distinguishing sets that are acceptable for use in training the neural network system from sets that are not can be based on legal restrictions and other safe-driving policies defined by the training system. As an example, the training system may reject training data sets that model illegal driving behaviors such as exceeding posted speed limits, illegal U-turns, driving the wrong way on a one-way street, reckless driving, etc. Similarly, the training system may reject some training data sets that model driving behaviors that are legal, but that nonetheless violate a safe-driving policy, such as passing on the right or driving at a speed that is too far below a posted speed limit. Notably, in some
implementations, legal restrictions and safe-driving policies may also be enforced during the inference phase with the trained trajectory planning neural network system, e.g., by modifying locations in a planned trajectory for a vehicle to comply with applicable legal restrictions and safe-driving policies. Enforcement during inference can be performed rather than, or in addition to, filtering training data sets during the training phase.
[0134] In some implementations, the trajectory planning neural network 104 can be trained to self-correct the trajectory of the vehicle when its travel begins to deviate from a planned trajectory. Deviation from a planned trajectory can occur, for example, when the vehicle controller is unable to guide the vehicle precisely along the locations of the waypoints that have been planned for it, thereby introducing error between the traveled locations and planned locations for the vehicle. These errors may cause the system to generate sub-optimal trajectories. To train the network 104 more robustly against these types of errors, the training system in some implementations can introduce perturbations to the observed and/or target locations of the vehicle in some training samples. The perturbations can introduce an offset between the vehicle and the center of its lane of travel for at least one observed location of the vehicle, while the target waypoints gradually move the vehicle back toward the center of its lane of travel. For example, Figure 12 depicts two images 1202a and 1202b. The first image 1202a shows a first series of waypoints 1204a representing previous locations and target locations for an autonomous or semi-autonomous vehicle over several time steps. The first series of waypoints 1204a is substantially linear because the vehicle follows a trajectory that aligns with the center of its lane of travel. The second image 1202b, in contrast, shows a second series of waypoints 1204b that have been perturbed so that the previous locations of the vehicle no longer fall along the center line. The target locations for the vehicle gradually re-center to correct for the deviation. This can be accomplished, for example, by using an autonomous vehicle planner to re-center the vehicle. By introducing training samples that include perturbations (e.g., deviations from an ideal trajectory and corrections), the trajectory planning neural network 104 can be trained to determine waypoints for a trajectory that self-corrects and brings the vehicle back to the center of the lane.
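The specification re-centers the perturbed trajectory with an autonomous vehicle planner; the sketch below substitutes a simple linear decay of the lateral offset as a stand-in, and assumes the lateral axis is y:

    import numpy as np

    def perturb_laterally(waypoints, max_offset_m=0.5, decay_steps=5):
        # Offset the locations from the lane center, fading the offset to
        # zero so the target waypoints gradually re-center the vehicle.
        offset = np.random.uniform(-max_offset_m, max_offset_m)
        perturbed = []
        for i, (x, y) in enumerate(waypoints):
            scale = max(0.0, 1.0 - i / decay_steps)  # 1 at step 0, 0 after decay_steps
            perturbed.append((x, y + offset * scale))
        return perturbed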
[0135] In some implementations, the training system can enforce rules that provide for a diversity of training data to be used in training the trajectory planning neural network 104. For example, the training system may add perturbations to a pre-defined portion of training samples, or otherwise ensure that the network 104 is trained on a minimum number of samples that provide self-correction following deviation from an ideal trajectory. As another example, the training system may enforce rules that provide for diversity in the dominant orientation of the roadway or travel direction of the vehicle. Training data may be naturally biased toward over-representation of training samples in which the dominant orientation of the roadway and vehicle travel direction is substantially vertical or horizontal. However, to expose the network 104 to a sufficient quantity of samples where the orientation of the roadway and travel direction falls between these two cardinal directions, the system may apply random rotations (e.g., yaw offsets) to a specified portion of the training samples so that the network 104 can learn to plan trajectories for any direction of travel.
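A random yaw offset applied to a sample might look like the following sketch; in a full pipeline the same rotation would also have to be applied to every rendered input channel, which is omitted here:

    import math
    import random

    def rotate_waypoints(waypoints, yaw=None):
        # Rotate locations about the origin so training is not biased toward
        # roads aligned with the two cardinal directions.
        yaw = random.uniform(0.0, 2.0 * math.pi) if yaw is None else yaw
        c, s = math.cos(yaw), math.sin(yaw)
        return [(c * x - s * y, s * x + c * y) for x, y in waypoints]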
[0136] The number of previous vehicle locations represented in the waypoint data characterized by the first training inputs 904 can vary among different training data sets. For example, the first training inputs 904 of a first group of training data sets may each indicate fifteen vehicle locations. However, the first training inputs 904 of a second group of training data sets may each indicate a total of twenty-five previous vehicle locations. By varying the number of vehicle locations in the waypoint data during training, the training system trains the neural network system to determine waypoints in a planned trajectory of a vehicle based on varying numbers of previous vehicle locations indicated in the waypoint data.
[0137] Although the training process 900 described with respect to Figure 9 implied that the current values of parameters of the neural network system were adjusted after processing each training data set, in other implementations the training system trains the neural network system on batches of training data sets. A batch of training data sets can include a collection of related training data sets that represent movements or locations of a particular vehicle at each of a series of consecutive time steps. With batch training, the system may backpropagate gradients of a loss function that is based on the determined output errors from each training data set in the batch to adjust current values of the parameters of the neural network system in a way that optimizes the loss function. In some implementations, batch training allows the training system to optimize a loss function for a full or partial trajectory of a vehicle, rather than for individual locations.
[0138] The process 900 described in Figure 9 trains the trajectory planning neural network based on an output error that represents a loss between the predicted waypoint for the vehicle (also referred to as a "self-driving car" or "SDC") and a target waypoint. In some implementations, training can be enhanced by accounting for other losses in addition to the waypoint loss. Figure 10 illustrates an example scheme for training a trajectory planning neural network 104, perception object trajectory prediction neural network 604, SDC auxiliary neural network 1002, and encoder neural network 106. In some implementations, the networks 104, 106, 604, and 1002 are trained individually. In other implementations, two or more of the networks 104, 106, 604, and 1002 may be trained jointly with each other.
Generally, the losses referred to in Figure 10 are computed based on a function that compares a predicted metric to a target metric. The predicted metric is generated by a neural network during the training phase (e.g., network 104 or 604), while the target metric represents ground truth and can be derived based on actual driving data or simulation data. For example, a target set of waypoints can represent locations that an SDC actually traveled during a drive in the real world over a period of time or that a virtual SDC traveled during a simulation. The encoder neural network 106 provides encoded representations of environmental data and/or other rendered inputs 1004 to the other networks 104, 604, and 1002.
[0139] In some implementations, the perception object trajectory prediction neural network 604 can be trained to minimize an observed perception object loss 1028, a non-observed perception object loss 1026, or both. The observed perception object loss 1028 is derived based on comparison of predicted occupancies for observed perception objects 1010 (e.g., the location and heading of bounding boxes for observed objects) and target occupancies for observed perception objects 1012. The non-observed perception object loss 1026 is derived based on comparison of predicted occupancies for non-observed perception objects 1006 and target occupancies for non-observed perception objects 1008. The non-observed perception objects generally refer to perception objects that were not actually detected in the input data as of the current time step, but that are predicted to appear over one or more time steps in the future.
[0140] In some implementations, the trajectory planning neural network 104 is trained to minimize a coarse SDC waypoint loss 1036, a fine SDC waypoint loss 1048, an SDC speed loss 1046, a collision loss 1038, an SDC box loss 1040, an on-route loss 1042, an on-road loss 1044, or a combination of all or some of these losses.
[0141] The training system generates the coarse SDC waypoint loss 1036 based on comparison of a coarse SDC waypoint 1016 predicted by trajectory planning neural network 104 (and selected by a waypoint selector based on scores output from the network 104) and a target SDC waypoint 1014. The predicted SDC waypoint 1016 is referred to as "coarse" because its precision may be less than the precision of the target SDC waypoint 1014. For example, the coarse SDC waypoint 1016 may indicate the planned location for the vehicle at a "pixel" level of granularity in an image whose pixels represent corresponding locations in the environment of an SDC, whereas the target SDC waypoint 1014 may contain sufficient information to identify the target location for the vehicle at a "sub-pixel" level of granularity.
[0142] The training system generates the SDC speed loss 1046 based on comparison of a predicted SDC speed 1030 and a target SDC speed 1032. The predicted SDC speed 1030 can be obtained from an auxiliary neural network 1002 based on encoded inputs (e.g., environmental data) and predicted waypoints from the trajectory planning neural network 104.
[0143] The training system generates the fine SDC waypoint loss 1048 based on comparison of a fine SDC waypoint 1034 predicted by the auxiliary neural network 1002 and the target SDC waypoint 1014. The auxiliary neural network 1002 may refine the coarse SDC waypoint 1016 provided by the trajectory planning neural network 104 based on an indication of the predicted coarse SDC waypoint 1016 and the encoded inputs (e.g., environmental data). The fine SDC waypoint 1034 generally has greater precision than the coarse SDC waypoint. In some implementations, the fine SDC waypoint matches the level of granularity/resolution of the target SDC waypoint 1014.
[0144] The training system generates a collision loss 1038 based on comparison of the target occupancies for the observed perception objects 1012 and a predicted occupancy of the SDC 1020. In some implementations, the trajectory planning neural network 104 (and system 100) is configured to predict/plan occupancies of the SDC, or occupancies can be derived from predicted waypoints along with information about the configuration (e.g., size and shape) of a corresponding bounding box for the SDC. The training system can impose a high cost for collisions between the SDC and objects in the environment, as indicated by co-occupancy of any region of the environment by the SDC and a perception object. In some implementations, a collision or co-occupancy can be detected if the bounding box for the SDC and the bounding box for the perception object overlap for one or more time steps.
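One way to penalize co-occupancy is to weight the overlapping area of the two boxes, for example with the shapely geometry library (an assumed dependency; the corner lists could come from a representation like the Occupancy sketch above, and the weight is illustrative):

    from shapely.geometry import Polygon

    def collision_loss(sdc_corners, object_corners, weight=100.0):
        # Impose a high cost whenever the SDC box and a perception-object
        # box co-occupy any region of the environment at the same time step.
        overlap = Polygon(sdc_corners).intersection(Polygon(object_corners)).area
        return weight * overlap  # zero when the boxes do not overlap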
[0145] Further, the trajectory planning neural network 104 can be trained based on an SDC box loss 1040. The SDC box loss 1040 is generated by the training system based on comparison of (e.g., difference between) the predicted occupancy of the SDC 1020 and the target occupancy of the SDC 1018. The training algorithm can be implemented to minimize the loss resulting from differences in these predicted and target occupancies.
[0146] In some implementations, the training system generates an on-route loss 1042 for use in training the trajectory planning neural network 104. The on-route loss 1042 is based on comparison of the predicted occupancy of the SDC 1020 and the target route for the SDC 1022. If all or some of the SDC deviates from the target route 1022 (e.g., if all or a portion of the bounding box for the SDC as indicated by the predicted occupancy 1020 is located at least a defined amount off the target route 1022), the training system can generate a loss reflecting the difference.
[0147] In some implementations, the training system generates an on-road loss 1044. The on-road loss 1044 imposes a cost for the SDC moving outside the physical boundaries of a roadway. The on-road loss 1044 can be generated based on comparison of the predicted occupancy of the SDC 1020 and a target road mask 1024. The target road mask 1024 is a mask that indicates the physical extent of a roadway. For example, image 1104 in Figure 11 shows a road mask for one roadway in the vicinity of an SDC. Pixels in the image have a binary classification as either roadway or non-roadway. If all or a portion of the bounding box for the SDC (e.g., as indicated by the predicted SDC occupancy 1020) overlaps non-roadway pixels in the road mask, the training system generates a loss to encourage the trajectory planning neural network 104 to learn to predict waypoints that cause the SDC to travel on the roadway. Notably, Figure 11 also shows an image 1102 of a road graph of the same roadway represented by road mask 1104. Whereas the road graph indicates lane boundaries and lane centerlines, the road mask 1104 more readily facilitates determination of the on-road loss 1044 by comparison of the area encompassed by the predicted SDC box to the classification of pixels in the corresponding area of the target road mask 1024.

[0148] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. The computer storage medium is not, however, a propagated signal.
[0149] The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
[0150] A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0151] As used in this specification, an "engine," or "software engine," refers to a software implemented input/output system that provides an output that is different from the input. An engine can be an encoded block of functionality, such as a library, a platform, a software development kit ("SDK"), or an object. Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
[0152] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
[0153] Computers suitable for the execution of a computer program can be based, by way of example, on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
[0154] Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0155] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
[0156] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.
[0157] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[0158] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[0159] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
[0160] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

Claims

What is claimed is:
1. A computing system for planning a trajectory of a vehicle, the system comprising:
a memory configured to store waypoint data indicating one or more waypoints, each waypoint representing a previously traveled location of the vehicle or a location in a planned trajectory for the vehicle; and
one or more computers and one or more storage devices storing instructions that when executed cause the one or more computers to implement:
a trajectory planning neural network configured to, at each time step of a plurality of time steps:
obtain a first neural network input and a second neural network input, wherein
(i) the first neural network input characterizes a set of waypoints indicated by the waypoint data, and
(ii) the second neural network input characterizes (a) environmental data that represents a current state of an environment of the vehicle and (b) navigation data that represents a planned navigation route for the vehicle; and
process the first neural network input and the second neural network input to generate a set of output scores, wherein each output score in the set of output scores corresponds to a different location of a set of possible locations in a vicinity of the vehicle and indicates a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route; and
a trajectory management system configured to, at each time step of the plurality of time steps:
select, based on the set of output scores generated by the trajectory planning neural network at the time step, one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step; and
update the waypoint data by writing to the memory an indication of the selected one of the set of possible locations as the waypoint for the planned trajectory of the vehicle at the time step.
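For illustration only, the per-time-step loop recited in claim 1 can be sketched in Python as follows. This is a minimal sketch with assumed names (planner_net, candidate_locations, waypoint_memory) and a random stand-in for the network; it is not an implementation drawn from the specification:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed 5x5 grid of possible next-waypoint locations around the vehicle.
    candidate_locations = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)]

    def planner_net(first_input, second_input):
        # Stand-in for the trajectory planning neural network: returns one
        # score per candidate location (a softmax over random logits here).
        logits = rng.normal(size=len(candidate_locations))
        e = np.exp(logits - logits.max())
        return e / e.sum()

    waypoint_memory = [(0.0, 0.0)]  # a previously traveled location of the vehicle

    for _ in range(5):  # the plurality of time steps
        first_input = np.asarray(waypoint_memory, dtype=float).ravel()
        second_input = rng.normal(size=16)  # stand-in for environment + route data
        scores = planner_net(first_input, second_input)
        best = candidate_locations[int(np.argmax(scores))]  # trajectory management: select
        waypoint_memory.append(best)  # update the waypoint data in memory

    print(waypoint_memory)

Under these assumptions, the location selected at each time step is written back to the waypoint data, so it conditions the first neural network input at the next time step.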
2. The computing system of claim 1, wherein for an initial time step of the plurality of time steps, the set of waypoints characterized by the first neural network input includes at least one waypoint that represents a previously traveled location of the vehicle at a time that precedes the plurality of time steps.
3. The computing system of claim 2, wherein for each time step of the plurality of time steps after the initial time step, the set of waypoints characterized by the first neural network input includes the waypoints that were determined at each preceding time step of the plurality of time steps.
4. The computing system of any of claims 2-3, wherein for at least one time step of the plurality of time steps after the initial time step, the set of waypoints characterized by the first neural network input includes:
(i) one or more first waypoints that represent previously traveled locations of the vehicle at times that precede the plurality of time steps, and
(ii) one or more second waypoints that were determined at preceding time steps of the plurality of time steps.
5. The computing system of any of claims 1-4, wherein the trajectory planning neural network is a feedforward neural network.
6. The computing system of any of claims 1-5, wherein the second neural network input at each time step of the plurality of time steps characterizes the environmental data that represents the current state of the environment of the vehicle at the time step.
7. The computing system of claim 6, wherein:
the second neural network input characterizes multiple channels of environmental data, and
the multiple channels of environmental data include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
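The following is a hedged sketch of how the multiple channels recited in claim 7 might be assembled into a single second neural network input; the grid resolution and the particular channel set are assumptions made for illustration:

    import numpy as np

    H, W = 64, 64  # assumed top-down grid resolution around the vehicle
    roadgraph   = np.zeros((H, W))  # rendered roads and lanes
    objects     = np.zeros((H, W))  # locations of detected objects
    speed_limit = np.zeros((H, W))  # speed limit associated with each cell
    lidar       = np.zeros((H, W))  # projected LIDAR returns

    # One channel per data source, analogous to the color channels of an image.
    second_input = np.stack([roadgraph, objects, speed_limit, lidar], axis=0)
    print(second_input.shape)  # (4, 64, 64)

Stacking the channels this way lets a convolutional network consume roadgraph, perception, speed limit, and LIDAR data jointly, the way it would consume a multi-channel image.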
8. The computing system of any of claims 1-7, wherein the second neural network input at each time step of the plurality of time steps characterizes the navigation data that represents the planned navigation route for the vehicle.
9. The computing system of any of claims 1-8, wherein the second neural network input at each time step of the plurality of time steps characterizes both the environmental data that represents the current state of the environment of the vehicle at the time step and the navigation data that represents the planned navigation route for the vehicle.
10. The computing system of any of claims 1-9, wherein each successive pair of time steps in the plurality of time steps represents a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
11. The computing system of any of claims 1-10, further comprising a vehicle control subsystem configured to determine control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the plurality of time steps.
12. The computing system of claim 11, further comprising maneuvering the vehicle along the planned trajectory as a result of executing at least some of the control actions determined by the vehicle control subsystem, wherein the control actions include at least one of steering, braking, or accelerating the vehicle.
13. A computer-implemented method for planning a trajectory of a vehicle, the method comprising:
for each time step in a series of time steps:
obtaining a first neural network input that characterizes a set of waypoints that each represent a previous location of the vehicle or a location in a planned trajectory for the vehicle;
obtaining a second neural network input that characterizes (i) environmental data that represents a current state of an environment of the vehicle and (ii) navigation data that represents a planned navigation route for the vehicle;
providing the first neural network input and the second neural network input to a trajectory planning neural network and, in response, obtaining a set of output scores from the trajectory planning neural network, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle and indicating a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route; and
selecting, based on the set of output scores, one of the possible locations in the vicinity of the vehicle as a waypoint for the planned trajectory of the vehicle at the time step.
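As a small illustration of the selection step, with assumed candidate locations and scores, greedy argmax selection is shown below; sampling in proportion to the scores would be an equally valid reading of "selecting, based on the set of output scores":

    import numpy as np

    candidate_locations = [(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]  # assumed candidates
    scores = np.array([0.1, 0.7, 0.2])  # output scores from the planning network
    waypoint = candidate_locations[int(np.argmax(scores))]  # greedy selection
    print(waypoint)  # (0.0, 1.0)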
14. The method of claim 13, wherein, for each time step in the series of time steps after an initial time step, the set of waypoints characterized by the first neural network input at the time step represents the locations of the vehicle that were selected at each preceding time step in the series of time steps.
15. The method of claim 14, wherein the set of waypoints characterized by the first neural network input at the initial time step represents locations that the vehicle has traversed at particular times that precede the series of time steps.
16. The method of any of claims 13-15, wherein for at least one of the series of time steps:
the second neural network input characterizes multiple channels of environmental data, and
the multiple channels of environmental data include two or more of roadgraph data representing one or more roads in the vicinity of the vehicle, perception object data representing locations of objects that have been detected as being in the vicinity of the vehicle, speed limit data representing speed limits associated with the one or more roads in the vicinity of the vehicle, light detection and ranging (LIDAR) data representing a LIDAR image of the vicinity of the vehicle, radio detection and ranging (RADAR) data representing a RADAR image of the vicinity of the vehicle, camera data representing an optical image of the vicinity of the vehicle, or traffic artifacts data representing identified traffic artifacts in the vicinity of the vehicle.
17. The method of any of claims 13-16, further comprising, at each time step in the series of time steps, writing an indication in memory of the selected one of the possible locations in the vicinity of the vehicle as the waypoint for the planned trajectory of the vehicle at the time step.
18. The method of any of claims 13-17, further comprising determining control actions for the vehicle to take to cause the vehicle to maneuver along a planned trajectory defined by the waypoints for at least some of the series of time steps.
19. The method of any of claims 13-18, wherein each successive pair of time steps in the series of time steps represents a successive pair of real-world times that are separated by a fixed interval in the range 100 milliseconds to 500 milliseconds.
20. One or more non-transitory computer-readable media having instructions stored thereon that, when executed by data processing apparatus, cause the data processing apparatus to perform operations comprising:
for each time step in a series of time steps:
obtaining a first neural network input that characterizes a set of waypoints that each represent a previous location of the vehicle or a location in a planned trajectory for the vehicle;
obtaining a second neural network input that characterizes at least one of (i) environmental data that represents a current state of an environment of the vehicle or (ii) navigation data that represents a planned navigation route for the vehicle;
providing the first neural network input and the second neural network input to a trajectory planning neural network and, in response, obtaining a set of output scores from the trajectory planning neural network, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle and indicating a likelihood that the respective location is an optimal location for a next waypoint in the planned trajectory for the vehicle to follow the planned navigation route; and
selecting, based on the set of output scores, one of the possible locations in the vicinity of the vehicle as a waypoint for the planned trajectory of the vehicle at the time step.
21. A computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles, the method comprising:
obtaining, by a neural network training system, a plurality of training data sets, wherein each training data set includes:
(i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps,
(ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and
(iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps;
training, by the neural network training system, the trajectory planning neural network system on the plurality of training data sets, including, for each training data set of the plurality of training data sets:
processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle;
determining an output error using the target output and the set of output scores; and
adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
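One natural, though not the only, reading of the output error in claim 21 is a cross-entropy loss between the set of output scores and a one-hot target over the candidate locations, minimized by gradient descent. The sketch below assumes that reading and uses a toy linear model as a stand-in for the trajectory planning neural network system:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    num_locations, input_dim = 25, 8
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.1, size=(num_locations, input_dim))  # current parameters

    def train_step(x, target_index, lr=0.1):
        global W
        scores = softmax(W @ x)               # set of output scores
        target = np.zeros(num_locations)
        target[target_index] = 1.0            # one-hot target waypoint location
        error = scores - target               # gradient of cross-entropy w.r.t. logits
        W -= lr * np.outer(error, x)          # adjust the current parameter values
        return -np.log(scores[target_index])  # the output error

    x = rng.normal(size=input_dim)  # concatenated first and second training inputs
    for _ in range(3):
        print(train_step(x, target_index=12))  # loss should decrease

Because scores - target is the gradient of the cross-entropy error with respect to the logits, the update line adjusts the current values of the parameters using the output error, as recited.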
22. The method of claim 21, wherein, for each training data set in at least a subset of the plurality of training data sets, the target output of the training data set characterizes a waypoint that represents an actual location of a human-operated vehicle at a time that corresponds to the second time step.
23. The method of claim 22, wherein, for each training data set in at least the subset of the plurality of training data sets, the second training input characterizes navigation data that represents a route that was prescribed for a driver of the human-operated vehicle to follow and that was traversed by the vehicle.
24. The method of claim 23, wherein, for each training data set in at least the subset of the plurality of training data sets, the second training input further characterizes environmental data that represents the current state of the environment of the human-operated vehicle.
25. The method of any of claims 21-24, wherein, for each training data set in at least a subset of the plurality of training data sets:
the target output of the training data set characterizes a waypoint that represents a location of a virtual vehicle that was driven in a simulated environment at a time that corresponds to the second time step; and
the second training input characterizes navigation data that represents a route that was prescribed for an automated agent of the virtual vehicle to follow while driving in the simulated environment and that was traversed by the virtual vehicle.
26. The method of any of claims 21-25, wherein:
for each training data set in at least a first subset of the plurality of training data sets, the target output of the training data set characterizes a waypoint that represents an actual location of a human-operated vehicle at a time that corresponds to the second time step; and
for each training data set in at least a second subset of the plurality of training data sets, the target output of the training data set characterizes a waypoint that represents a location of a virtual vehicle that was driven in a simulated environment at another time that corresponds to the second time step.
27. The method of claim 26, further comprising:
identifying that the second subset of the plurality of training data sets models driving behavior for one or more specified driving scenarios; and
in response to identifying that the second subset of the plurality of training data sets models behavior for the one or more specified driving scenarios, selecting the second subset of the plurality of training data sets for inclusion in the plurality of training data sets.
28. The method of claim 27, wherein the one or more specified driving scenarios include at least one of a lane merge scenario, an unprotected left turn scenario, a lane change scenario, or a collision scenario.
29. The method of any of claims 21-28, wherein total numbers of waypoints characterized by the first training inputs among particular ones of the plurality of training data sets are different from each other.
30. The method of any of claims 21-29, wherein obtaining the plurality of training data sets comprises:
selecting a first subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the first subset of training data sets characterize environmental data from a first set of sensor channels; and
selecting a second subset of training data sets with which to train the trajectory planning neural network system based on an indication that the second training inputs of the second subset of training data sets characterize environmental data from a second set of sensor channels, wherein the second set of sensor channels includes at least one sensor channel that is not included in the first set of sensor channels.
31. The method of claim 30, wherein the at least one sensor channel that is included in the second set of sensor channels but not in the first set of sensor channels is a light detection and ranging (LIDAR) sensor channel.
32. The method of any of claims 21-31, wherein obtaining the plurality of training data sets comprises oversampling a first subset of training data sets that model driving behavior for one or more specified driving scenarios at a greater frequency than the specified driving scenarios occur in the real world.
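A minimal sketch of the oversampling in claim 32, with hypothetical scenario labels and an assumed target proportion:

    import random

    random.seed(0)
    common = [f"nominal_{i}" for i in range(95)]        # ~95% of real driving
    rare = [f"unprotected_left_{i}" for i in range(5)]  # ~5% of real driving

    # Draw the rare scenario at ~30% of each training epoch, a far greater
    # frequency than it occurs in the real world.
    epoch = random.choices(common, k=70) + random.choices(rare, k=30)
    random.shuffle(epoch)
    print(sum(s.startswith("unprotected_left") for s in epoch) / len(epoch))  # 0.3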
33. The method of any of claims 21-32, wherein obtaining the plurality of training data sets comprises:
identifying a plurality of candidate training data sets;
filtering the plurality of candidate training data sets based on one or more criteria; and
selecting to train the trajectory planning neural network system on candidate training data sets that satisfy the one or more criteria, to the exclusion of candidate training data sets that do not satisfy the one or more criteria.
34. The method of claim 33, wherein filtering the plurality of candidate training data sets based on the one or more criteria comprises discarding candidate training data sets that model driving behavior that is determined to violate a legal restriction.
35. The method of claim 34, wherein the legal restriction is a speed limit.
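A sketch of the filtering in claims 33-35, assuming a hypothetical record format in which each candidate training data set carries the vehicle's maximum observed speed and the posted speed limit:

    # Hypothetical candidate training data sets.
    candidates = [
        {"id": 1, "max_speed_mph": 63, "speed_limit_mph": 65},
        {"id": 2, "max_speed_mph": 80, "speed_limit_mph": 65},  # violates the limit
    ]

    # Keep only candidates that satisfy the criterion; discard the rest.
    training_sets = [c for c in candidates
                     if c["max_speed_mph"] <= c["speed_limit_mph"]]
    print([c["id"] for c in training_sets])  # [1]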
36. The method of any of claims 21-35, wherein, for each training data set of the plurality of training data sets, the second training input that characterizes at least one of the environmental data or the navigation data was generated by processing at least one of the environmental data or the navigation data using an encoder neural network.
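Claim 36 leaves the encoder architecture open. As an assumed illustration only, a toy linear encoder mapping a raw multi-channel environment grid to a compact second training input might look like this:

    import numpy as np

    rng = np.random.default_rng(3)
    raw = rng.normal(size=(4, 64, 64))               # multi-channel environment grid
    E = rng.normal(scale=0.01, size=(32, raw.size))  # toy linear "encoder network"
    second_input = E @ raw.ravel()                   # compact 32-dimensional encoding
    print(second_input.shape)  # (32,)

In practice the encoder would itself be a trained neural network; the linear map here only illustrates the data flow.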
37. The method of any of claims 21-36, comprising, for a group of training data sets selected from the plurality of training data sets:
for each training data set in the group of training data sets, processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set;
determining the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and
adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
38. The method of claim 37, wherein the group of training data sets model driving behavior of a same vehicle over a series of time steps.
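A sketch of claims 37-38 under the same toy-model assumptions as the training sketch after claim 21: the error is accumulated across a group of training data sets (for example, one vehicle's consecutive time steps) and the parameters are adjusted once for the whole group:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def grouped_step(W, group, lr=0.1):
        # Accumulate the cross-entropy gradient over every training data set
        # in the group, then apply a single parameter update.
        grad = np.zeros_like(W)
        for x, t in group:            # (training input, target location index)
            err = softmax(W @ x)
            err[t] -= 1.0             # output scores minus one-hot target
            grad += np.outer(err, x)
        return W - lr * grad / len(group)

    rng = np.random.default_rng(2)
    W = rng.normal(scale=0.1, size=(25, 8))
    group = [(rng.normal(size=8), int(rng.integers(25))) for _ in range(4)]
    W = grouped_step(W, group)
    print(W.shape)  # (25, 8)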
39. A system comprising:
one or more data processing apparatuses; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more data processing apparatuses, cause the one or more data processing apparatuses to perform operations comprising:
obtaining, by a neural network training system, a plurality of training data sets, wherein each training data set includes:
(i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps,
(ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and
(iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps;
training, by the neural network training system, a trajectory planning neural network system on the plurality of training data sets, including, for each training data set of the plurality of training data sets:
processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle;
determining an output error using the target output and the set of output scores; and
adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
40. One or more non-transitory machine-readable media having instructions stored thereon that, when executed by data processing apparatus, cause the data processing apparatus to perform operations comprising:
obtaining, by a neural network training system, a plurality of training data sets, wherein each training data set includes:
(i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps,
(ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and
(iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps;
training, by the neural network training system, a trajectory planning neural network system on the plurality of training data sets, including, for each training data set of the plurality of training data sets:
processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle;
determining an output error using the target output and the set of output scores; and
adjusting the current values of the parameters of the trajectory planning neural network system using the output error.
41. The computing system of any of claims 1-12, wherein the environmental data identifies, for each of one or more objects detected in the environment of the vehicle, a current occupancy of the object.
42. The computing system of claim 41, wherein for one or more of the plurality of time steps, the current occupancies of the one or more objects detected in the environment were predicted using a perception object occupancy prediction neural network.
43. The method of any of claims 13-19, wherein the environmental data identifies, for each of one or more objects detected in the environment of the vehicle, a current occupancy of the object.
44. The method of claim 43, wherein for one or more of the series of time steps, the current occupancies of the one or more objects detected in the environment were predicted using a perception object occupancy prediction neural network.
PCT/US2018/044172 2017-07-27 2018-07-27 Neural networks for vehicle trajectory planning WO2019023628A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/628,086 US20200174490A1 (en) 2017-07-27 2018-07-27 Neural networks for vehicle trajectory planning

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/662,007 2017-07-27
US15/662,031 US11256983B2 (en) 2017-07-27 2017-07-27 Neural networks for vehicle trajectory planning
US15/662,031 2017-07-27
US15/662,007 US10883844B2 (en) 2017-07-27 2017-07-27 Neural networks for vehicle trajectory planning

Publications (1)

Publication Number Publication Date
WO2019023628A1 true

Family

ID=63165570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/044172 WO2019023628A1 (en) 2017-07-27 2018-07-27 Neural networks for vehicle trajectory planning

Country Status (1)

Country Link
WO (1) WO2019023628A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9663111B2 (en) * 2014-05-30 2017-05-30 Ford Global Technologies, Llc Vehicle speed profile prediction using neural networks
WO2016156236A1 (en) * 2015-03-31 2016-10-06 Sony Corporation Method and electronic device
US20160325753A1 (en) * 2015-05-10 2016-11-10 Mobileye Vision Technologies Ltd. Road profile along a predicted path

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MARIUSZ BOJARSKI ET AL: "End to End Learning for Self-Driving Cars", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 25 April 2016 (2016-04-25), XP080697746 *
TOMAR RANJEET SINGH ET AL: "Neural Network Based Lane Change Trajectory Prediction in Autonomous Vehicles", 31 August 2011, MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2015 : 18TH INTERNATIONAL CONFERENCE, MUNICH, GERMANY, OCTOBER 5-9, 2015; PROCEEDINGS; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CH, ISBN: 978-3-319-24946-9, ISSN: 0302-9743, XP047367124 *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200166896A1 (en) * 2018-11-26 2020-05-28 Uber Technologies, Inc. Deep reinforcement learning based models for hard-exploration problems
US11829870B2 (en) * 2018-11-26 2023-11-28 Uber Technologies, Inc. Deep reinforcement learning based models for hard-exploration problems
CN113474231A (en) * 2019-02-05 2021-10-01 辉达公司 Combined prediction and path planning for autonomous objects using neural networks
CN109583151B (en) * 2019-02-20 2023-07-21 阿波罗智能技术(北京)有限公司 Method and device for predicting running track of vehicle
CN109583151A (en) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 The driving trace prediction technique and device of vehicle
CN109814575A (en) * 2019-02-22 2019-05-28 百度在线网络技术(北京)有限公司 Vehicle lane change route planning method, device and terminal
CN113906271A (en) * 2019-04-12 2022-01-07 辉达公司 Neural network training using ground truth data augmented with map information for autonomous machine applications
US11087494B1 (en) 2019-05-09 2021-08-10 Zoox, Inc. Image-based depth data and localization
US10984543B1 (en) 2019-05-09 2021-04-20 Zoox, Inc. Image-based depth data and relative depth data
US10937178B1 (en) * 2019-05-09 2021-03-02 Zoox, Inc. Image-based depth data and bounding boxes
US11748909B2 (en) 2019-05-09 2023-09-05 Zoox, Inc. Image-based depth data and localization
CN112034833A (en) * 2019-05-15 2020-12-04 百度(美国)有限责任公司 Online agent to plan open space trajectories for autonomous vehicles
CN110163439A (en) * 2019-05-24 2019-08-23 长安大学 A kind of city size taxi trajectory predictions method based on attention mechanism
CN112146680B (en) * 2019-06-28 2024-03-29 百度(美国)有限责任公司 Determining vanishing points based on feature maps
CN112146680A (en) * 2019-06-28 2020-12-29 百度(美国)有限责任公司 Determining vanishing points based on feature maps
WO2021003379A1 (en) * 2019-07-03 2021-01-07 Waymo Llc Agent trajectory prediction using anchor trajectories
US11618481B2 (en) 2019-07-03 2023-04-04 Waymo Llc Agent trajectory prediction using anchor trajectories
US12097889B2 (en) 2019-07-03 2024-09-24 Waymo Llc Agent trajectory prediction using anchor trajectories
US11772654B2 (en) 2019-08-30 2023-10-03 Waymo Llc Occupancy prediction neural networks
US11403853B2 (en) * 2019-08-30 2022-08-02 Waymo Llc Occupancy prediction neural networks
US20220343657A1 (en) * 2019-08-30 2022-10-27 Waymo Llc Occupancy prediction neural networks
CN110610271A (en) * 2019-09-17 2019-12-24 北京理工大学 Multi-vehicle track prediction method based on long and short memory network
CN110610271B (en) * 2019-09-17 2022-05-13 北京理工大学 Multi-vehicle track prediction method based on long and short memory network
WO2021158264A2 (en) 2019-10-04 2021-08-12 Waymo Llc Spatio-temporal pose/object database
EP4022254A4 (en) * 2019-10-04 2023-09-06 Waymo Llc Spatio-temporal pose/object database
US20210108936A1 (en) * 2019-10-09 2021-04-15 Argo AI, LLC Methods and systems for topological planning in autonomous driving
US11754408B2 (en) * 2019-10-09 2023-09-12 Argo AI, LLC Methods and systems for topological planning in autonomous driving
US11694356B2 (en) 2019-11-15 2023-07-04 Argo AI, LLC Methods and systems for joint pose and shape estimation of objects from sensor data
CN111428571A (en) * 2020-02-28 2020-07-17 宁波吉利汽车研究开发有限公司 Vehicle guiding method, device, equipment and storage medium
CN111428571B (en) * 2020-02-28 2024-04-19 宁波吉利汽车研究开发有限公司 Vehicle guiding method, device, equipment and storage medium
CN111858790A (en) * 2020-04-03 2020-10-30 北京嘀嘀无限科技发展有限公司 Detour reminding method and device, electronic equipment and medium
US11755015B2 (en) 2020-09-01 2023-09-12 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
GB2598408A (en) * 2020-09-01 2022-03-02 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
GB2598408B (en) * 2020-09-01 2023-02-08 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
GB2616392B (en) * 2020-09-01 2024-04-10 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
US11203362B1 (en) 2020-09-01 2021-12-21 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
GB2607835B (en) * 2020-09-01 2023-11-15 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
GB2616392A (en) * 2020-09-01 2023-09-06 Motional Ad Llc Scoring autonomous vehicle trajectories using reasonable crowd data
CN114510018A (en) * 2020-10-25 2022-05-17 动态Ad有限责任公司 Metric back propagation for subsystem performance evaluation
CN112389436A (en) * 2020-11-25 2021-02-23 中汽院智能网联科技有限公司 Safety automatic driving track-changing planning method based on improved LSTM neural network
CN112487038A (en) * 2020-12-15 2021-03-12 平安国际智慧城市科技股份有限公司 Water source monitoring track data processing method, device, equipment and storage medium
CN112862155A (en) * 2021-01-05 2021-05-28 东莞理工学院 Urban mobility prediction method and prediction system
CN112721952A (en) * 2021-01-29 2021-04-30 重庆长安汽车股份有限公司 Complex road section driving control method and device of L3-grade automatic driving automobile, automobile and computer storage medium
CN112721952B (en) * 2021-01-29 2022-07-05 重庆长安汽车股份有限公司 Complex road section driving control method and device for L3-grade automatic driving automobile, automobile and computer storage medium
CN113093743A (en) * 2021-03-30 2021-07-09 西北农林科技大学 Navigation control method based on virtual radar model and deep neural network
CN113222122A (en) * 2021-06-01 2021-08-06 重庆大学 High-quality neural network system suitable for singlechip
WO2023089635A1 (en) * 2021-11-17 2023-05-25 Ola Electric Mobility Private Limited Method and system to facilitate navigation planning for vehicles
CN113895461A (en) * 2021-11-17 2022-01-07 中国第一汽车股份有限公司 Vehicle lateral control method, device, vehicle and medium
CN114684197A (en) * 2022-03-31 2022-07-01 广州文远知行科技有限公司 Detection method, device and equipment for obstacle avoidance scene and storage medium
CN115009291A (en) * 2022-05-23 2022-09-06 暨南大学 Automatic driving aid decision-making method and system based on network evolution replay buffer area

Similar Documents

Publication Publication Date Title
US20200174490A1 (en) Neural networks for vehicle trajectory planning
US11256983B2 (en) Neural networks for vehicle trajectory planning
US10883844B2 (en) Neural networks for vehicle trajectory planning
WO2019023628A1 (en) Neural networks for vehicle trajectory planning
US12097889B2 (en) Agent trajectory prediction using anchor trajectories
US11726477B2 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
US11755014B2 (en) Jointly learnable behavior and trajectory planning for autonomous vehicles
CN113366497B (en) Agent Prioritization for Autonomous Vehicles
US10579063B2 (en) Machine learning for predicting locations of objects perceived by autonomous vehicles
US10496099B2 (en) Systems and methods for speed limit context awareness
US11586931B2 (en) Training trajectory scoring neural networks to accurately assign scores
US11829149B1 (en) Determining respective impacts of agents
US12051249B2 (en) Behavior prediction of surrounding agents
US20210200230A1 (en) Conditional behavior prediction for autonomous vehicles
US12030523B2 (en) Agent trajectory planning using neural networks
CN114061581A (en) Ranking agents in proximity to autonomous vehicles by mutual importance
CN115158309A (en) Factor trajectory prediction using context sensitive fusion
US20230082365A1 (en) Generating simulated agent trajectories using parallel beam search
US11753043B2 (en) Predicting crossing behavior of agents in the vicinity of an autonomous vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18753021

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18753021

Country of ref document: EP

Kind code of ref document: A1