US20190064815A1 - Systems and Methods for Prioritizing Object Prediction for Autonomous Vehicles - Google Patents
- Publication number
- US20190064815A1 (application US 15/811,865)
- Authority
- US
- United States
- Prior art keywords
- priority
- objects
- predicted future
- computing system
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G05D1/0088 — Control of position, course, altitude or attitude of land, water, air or space vehicles characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
- G05D1/0212 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221 — Desired trajectory involving a learning process
- G05D1/0231 — Land vehicles using optical position detecting means
- G05D1/0257 — Land vehicles using a radar
- G06N20/00 — Machine learning
- G06N99/005 — (legacy machine learning code)
- G06T7/70 — Image analysis: determining position or orientation of objects or cameras
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- B60W30/095 — Active safety systems: predicting travel path or likelihood of collision
- B60W30/0956 — Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
- B60W40/00 — Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems, e.g. by using mathematical models
- B60W50/06 — Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
- B60W60/00276 — Planning or execution of driving tasks using trajectory prediction for two or more other traffic participants
- B60W2050/065 — Improving the dynamic response by reducing the computational load on the digital processor of the control computer
- B60W2554/00 — Input parameters relating to objects
- B60W2554/402 — Dynamic objects, e.g. animals, windblown objects: type
- G08G1/0112 — Measuring and analyzing of parameters relative to traffic conditions based on data from the vehicle, e.g. floating car data [FCD]
- G08G1/0129 — Traffic data processing for creating historical data or processing based on historical data
- G08G1/0133 — Traffic data processing for classifying traffic situation
Definitions
- the present disclosure relates generally to autonomous vehicles. More particularly, the present disclosure relates to systems and methods for determining a priority classification for objects that are perceived by autonomous vehicles and predicting a future location for the objects based at least in part on the respective priority classification for each object.
- An autonomous vehicle is a vehicle that is capable of sensing its environment and navigating with minimal or no human input.
- an autonomous vehicle can observe its surrounding environment using a variety of sensors and can attempt to comprehend the environment by performing various processing techniques on data collected by the sensors. Given knowledge of its surrounding environment, the autonomous vehicle can identify an appropriate motion path through such surrounding environment.
- the method can include obtaining, by a computing system comprising one or more processors, state data descriptive of at least a current or past state of a plurality of objects that are perceived by an autonomous vehicle.
- the method can further include determining, by the computing system, a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object.
- the method can further include determining, by the computing system, an order at which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object.
- the method can further include determining, by the computing system, the predicted future state for each object based at least in part on the determined order.
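The four method steps above can be sketched as a minimal loop. All names below are hypothetical, and the distance-threshold classifier and constant-velocity predictor are placeholder assumptions; the patent does not prescribe an implementation:

```python
# Hypothetical sketch: classify objects by priority, derive an order,
# then determine predicted future states in that order.

def classify_priority(state):
    # Assumed heuristic: nearer objects are higher priority.
    return "high" if state["distance_m"] < 30.0 else "low"

def predict_future_state(state):
    # Placeholder constant-velocity prediction over one time step.
    return {"x": state["x"] + state["vx"], "y": state["y"] + state["vy"]}

def prioritized_prediction(objects):
    # Steps 1-2: obtain state data and classify each object.
    classified = {oid: classify_priority(s) for oid, s in objects.items()}
    # Step 3: determine the order (high-priority objects first).
    order = sorted(objects, key=lambda oid: classified[oid] != "high")
    # Step 4: determine the predicted future states in that order.
    return [(oid, predict_future_state(objects[oid])) for oid in order]

objects = {
    "car_1": {"x": 0.0, "y": 5.0, "vx": 2.0, "vy": 0.0, "distance_m": 5.0},
    "ped_1": {"x": 50.0, "y": 0.0, "vx": 0.0, "vy": 1.0, "distance_m": 50.0},
}
print(prioritized_prediction(objects))  # car_1 is predicted before ped_1
```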
- the computing system can include a perception system comprising one or more processors.
- the perception system can be configured to generate, for each of a plurality of consecutive time frames, state data descriptive of at least a current state of each of a plurality of objects that are perceived by an autonomous vehicle.
- the computing system can further include a priority classification system comprising one or more processors.
- the priority classification system can be configured to, for each of the plurality of consecutive time frames, classify each object in the plurality of objects as either high-priority or low-priority based at least in part on the respective state data for each object.
- the computing system can further include a prediction system comprising one or more processors.
- the prediction system can be configured to, for each of the plurality of consecutive time frames: receive the priority classification for each respective object; determine, for the current time frame, a predicted future state for each object classified as high-priority; and provide the predicted future state for each object classified as high-priority for the current time frame to a motion planning system implemented by the one or more processors.
- the autonomous vehicle can include one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the one or more processors to perform operations.
- the operations can include obtaining state data descriptive of at least a current or past state of a plurality of objects that are perceived by the autonomous vehicle.
- the operations can further include determining a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object.
- the operations can further include determining an order at which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object.
- the operations can further include determining the predicted future state for each object based at least in part on the determined order.
- FIG. 1 depicts a block diagram of an example autonomous vehicle according to example aspects of the present disclosure.
- FIG. 2 depicts an example perception system according to example aspects of the present disclosure.
- FIG. 3 depicts an example prediction system according to example aspects of the present disclosure.
- FIG. 4 depicts a block diagram of an example object prediction process according to example aspects of the present disclosure.
- FIG. 5 depicts a block diagram of an example computing system according to example aspects of the present disclosure.
- FIG. 6 depicts a flow chart diagram of an example method to determine a motion plan for an autonomous vehicle according to example aspects of the present disclosure.
- an autonomous vehicle can include or otherwise use a prediction system to predict the future locations of the objects such as, for example, other vehicles, pedestrians, bicyclists, etc. based at least in part on perception information that describes current and/or past states of the objects and/or the surrounding environment.
- the autonomous vehicle can include or otherwise use a priority classification system to classify a respective priority of each object perceived by the perception system. For example, in some implementations, each object can be classified as either high-priority or low-priority.
- the prediction system can determine a predicted future state for each object based at least in part on the priority classification for each object. For example, in some implementations, the order at which the computing system determines the predicted future state for each object can be determined based on the priority classification for each object, such as, for example, by determining the predicted future state for objects classified as high-priority before predicted future states are determined for objects classified as low-priority.
- the prediction system can determine in a current time frame the predicted future state for each object classified as high-priority in the current time frame. However, the prediction system can wait and determine (e.g., in a subsequent time frame or at least subsequent to providing the predicted future states for each object classified as high-priority to a motion planning system) the predicted future state for each object classified as low-priority in the current time frame.
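One way to read this two-phase schedule, as a sketch rather than the patent's implementation: within each frame, high-priority predictions are computed and handed to the motion planner before low-priority predictions are even started. The function names and toy classifier/predictor below are assumptions for illustration:

```python
def process_frame(objects, classify, predict, plan_motion):
    """Predict high-priority objects first, hand off, then fill in low-priority."""
    high_ids = [oid for oid, s in objects.items() if classify(s) == "high"]
    low_ids = [oid for oid in objects if oid not in high_ids]

    # Phase 1: high-priority predictions go to the motion planner immediately.
    high_predictions = {oid: predict(objects[oid]) for oid in high_ids}
    plan_motion(high_predictions)

    # Phase 2: low-priority predictions are computed only after the hand-off
    # (or, per the patent, possibly deferred to a subsequent time frame).
    low_predictions = {oid: predict(objects[oid]) for oid in low_ids}
    return {**high_predictions, **low_predictions}

calls = []
preds = process_frame(
    {"a": {"d": 10}, "b": {"d": 90}},
    classify=lambda s: "high" if s["d"] < 30 else "low",
    predict=lambda s: s["d"] + 1,
    plan_motion=calls.append,
)
print(calls)  # the planner received only the high-priority prediction
print(preds)
```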
- predicted future states for high-priority objects can be delivered to the motion planning system early (e.g., “ahead of schedule”), thereby allowing the motion planning system additional time to determine a motion plan relative to the high-priority objects and the vehicle additional time to implement the determined motion plan.
- the autonomous vehicle can be controlled to react more quickly to objects classified as high-priority. For example, the additional time gained by advancing the predicted future states through the prediction system can enable the vehicle to come to a stop more quickly or otherwise make improved maneuvers that enhance passenger and vehicle safety.
- the type of prediction system used for determining the predicted future state for each object can be determined based on the priority classification for each object. For example, in some implementations a high-fidelity prediction system can be used for objects classified as high-priority, whereas a low-fidelity prediction system can be used for objects classified as low-priority.
- the priority classification system described herein can include or leverage one or more machine-learned models that assist in classifying each object perceived by the autonomous vehicle.
- the priority classification system can include a machine-learned object classifier configured to classify each perceived object, such as by classifying each object as high-priority or low-priority.
- the use of machine-learned models can improve the speed, quality, and/or accuracy of object priority classification.
- the improved ability to classify objects according to priority can allow for more efficient use of prediction system resources by, for example, allowing for future states of higher priority objects to be predicted before lower priority objects. Further, this can allow for the predicted future states for higher priority objects to be provided to a motion planning system sooner, reducing overall latency for determining a motion plan, thereby reducing autonomous vehicle response times and enhancing passenger safety and vehicle efficiency.
- an autonomous vehicle can include a computing system that assists in controlling the autonomous vehicle.
- the autonomous vehicle can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.), an air-based autonomous vehicle (e.g., airplane, drone, helicopter, or other aircraft), or other types of vehicles (e.g., watercraft).
- the computing system can include a perception system, a prediction system, and a motion planning system that cooperate to perceive the surrounding environment of the autonomous vehicle and determine a motion plan for controlling the motion of the autonomous vehicle accordingly.
- the perception system can perceive one or more objects that are proximate to an autonomous vehicle, and provide state data indicative of the one or more objects to the prediction system.
- the prediction system can then determine a predicted future state for each object perceived by the perception system.
- the motion planning system can then determine a motion plan for the autonomous vehicle based on the predicted future states for the objects. In this way, an autonomous vehicle can perceive objects proximate to the autonomous vehicle, and, in response, control the autonomous vehicle accordingly.
- an autonomous vehicle can perform each of the perception, prediction, and motion planning steps sequentially using data obtained in a plurality of consecutive time frames.
- the perception system can receive sensor data for time frame N; the perception system can concurrently generate and provide state data to the prediction system for one or more objects perceived for time frame N−1; the prediction system can concurrently determine a predicted future state for each object perceived for time frame N−2; and the motion planning system can concurrently determine a motion plan for the autonomous vehicle using predicted future states for time frame N−3.
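The staggered schedule described above can be pictured as a four-stage pipeline. The toy trace below is purely illustrative (stage names and the one-frame lag per stage are assumptions) and shows which frame each stage holds at each tick:

```python
def pipeline_trace(num_ticks):
    """For each tick, return the frame index each stage works on (None = idle)."""
    stages = ["sense", "perceive", "predict", "plan"]  # each lags the prior by 1
    trace = []
    for tick in range(num_ticks):
        trace.append({stage: tick - lag if tick - lag >= 0 else None
                      for lag, stage in enumerate(stages)})
    return trace

for row in pipeline_trace(5):
    print(row)
# At tick 3 the sensors hold frame 3 while perception, prediction, and
# motion planning concurrently work on frames 2, 1, and 0 respectively.
```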
- a motion plan for the autonomous vehicle can be iteratively determined using data from each of a plurality of consecutive time frames.
- each of the perception, prediction, and motion planning systems may require the preceding system to complete a respective analysis of data for a time frame before each system can subsequently analyze the data for the time frame.
- the perception system may need to complete an analysis of data obtained from one or more sensors on the autonomous vehicle in order to generate the state data used by the prediction system.
- the prediction system may need to complete an analysis of state data for a time frame to determine a predicted future state for each object before the motion planning system can determine a motion plan for the autonomous vehicle.
- the overall time from when an object is sensed by a sensor until a motion plan is determined in response to the object may be dependent upon each system completing its respective analysis of the object along with all other objects perceived at the same time as the object.
- the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle and determining a predicted future state for each object based at least in part on the priority classification for each object, thereby enabling higher priority objects to be analyzed before lower priority objects.
- the perception system can receive sensor data from one or more sensors that are coupled to or otherwise included within the autonomous vehicle.
- the one or more sensors can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors.
- the sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle.
- the sensor data can be obtained at a plurality of consecutive time frames. Based on sensor data received from the one or more sensors and/or the map data, the perception system can identify one or more objects that are proximate to the autonomous vehicle at each time frame.
- the perception system can segment the sensor data (e.g., the LIDAR data) into discrete object polygons and/or track objects frame-to-frame (e.g., iteratively over a number of consecutive time frames or periods).
- the perception system can generate, for each object, state data that describes a current state of such object (also referred to as one or more features of the object).
- state data for each object can describe an estimate of the object's: location (also referred to as position); speed (also referred to as velocity); acceleration; heading; yaw rate; orientation; size/footprint (e.g., as represented by a bounding polygon or other shape); type/class (e.g., vehicle, pedestrian, bicycle); distance from the autonomous vehicle; minimum path to interaction with the autonomous vehicle; a minimum time duration to interaction with the autonomous vehicle; and/or other state information and/or covariances of the above-described forms of state information.
- certain state data for an object can be used to determine one or more other features for the object. For example, in some implementations, an object's position, speed, acceleration, and/or heading can be used to determine a minimum path to interaction with the autonomous vehicle or a minimum time duration to interaction with the autonomous vehicle.
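As an example of deriving one such feature, a minimal (assumed) computation of time to closest approach from an object's position and velocity, treating the autonomous vehicle as a point at the origin; the patent does not specify this formula:

```python
import math

def time_to_interaction(x, y, vx, vy):
    """Time (s) at which an object's straight-line path is closest to the AV
    at the origin; infinity if the object is static or moving away."""
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return math.inf  # a static object never closes the distance
    # Closest approach of the line (x + t*vx, y + t*vy) to the origin.
    t = -(x * vx + y * vy) / speed_sq
    return t if t > 0.0 else math.inf  # already past closest approach

# An object 40 m ahead, approaching at 10 m/s, is 4 s from closest approach.
print(time_to_interaction(0.0, 40.0, 0.0, -10.0))  # 4.0
```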
- the perception system can provide the state data to the priority classification system and/or the prediction system (e.g., iteratively for each time frame).
- the autonomous vehicle can further include a priority classification system configured to classify each object perceived by the autonomous vehicle.
- the priority classification system can be included in or otherwise incorporated into the perception system.
- the priority classification system can be included in or otherwise incorporated into the prediction system.
- the priority classification system can classify objects perceived by the perception system based on the state data for each object. For example, the priority classification system can classify each object into one of a plurality of priority categories and/or rank each object relative to each other object. The relative priority classification and/or rank for each object can be determined based on the state data for each object.
- the priority classification for each object can be indicative of an importance of the object to the determination of a motion plan for the autonomous vehicle.
- the priority classification assigned to each object can be based on a plurality of factors, such as how likely an object is to interact with the autonomous vehicle, how soon an object is likely to interact with the autonomous vehicle, whether an object is likely to impact a motion plan for the autonomous vehicle, etc. For example, a vehicle traveling at a high rate of speed towards the autonomous vehicle can be classified as a higher priority object than a vehicle traveling away from the autonomous vehicle.
- the priority classification can be based on one or more heuristic processes.
- one or more thresholds can be used to classify objects based on one or more features of the object. For example, a minimum time duration, a minimum path, or a minimum distance to interaction with the autonomous vehicle can be used to classify the objects based on how far away the objects are from the autonomous vehicle or how soon the objects will likely interact with the autonomous vehicle.
- a heading and/or velocity can be used to classify objects. For example objects traveling on headings away from the autonomous vehicle can be classified as lower priority than objects traveling towards the autonomous vehicle, and objects traveling at higher speeds towards the autonomous vehicle can be classified as higher priority than objects traveling at lower speeds towards the autonomous vehicle.
- Other features can be used as well, such as object type (e.g., vehicle, bicycle, pedestrian, etc.), object size, position, or any other feature described herein.
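These heuristics might be combined as follows; the cutoff values and feature names are illustrative assumptions, not thresholds from the patent:

```python
def heuristic_priority(obj):
    """Classify an object as 'high' or 'low' priority with simple thresholds."""
    # Assumed cutoffs: 5 s to interaction, 20 m distance, 15 m/s closing speed.
    if obj["time_to_interaction_s"] < 5.0:
        return "high"  # interaction is imminent
    if obj["distance_m"] < 20.0:
        return "high"  # object is close
    # An object heading toward the AV at speed is riskier than one heading away.
    if obj["closing_speed_mps"] > 15.0:
        return "high"
    return "low"

oncoming = {"time_to_interaction_s": 3.0, "distance_m": 60.0, "closing_speed_mps": 20.0}
departing = {"time_to_interaction_s": 120.0, "distance_m": 60.0, "closing_speed_mps": -5.0}
print(heuristic_priority(oncoming), heuristic_priority(departing))  # high low
```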
- each object can be classified as either high-priority or low-priority.
- the priority classification system can classify each object as either high-priority or low-priority based on the respective state data for each object.
- a predicted future state for each high-priority object can be determined before a predicted future state is determined for any low-priority object.
- the ratio of high-priority objects to low-priority objects can be determined based at least in part on a velocity of the autonomous vehicle. For example, in some implementations, in order to reduce the overall latency for determining a motion plan at higher speeds, fewer objects may be classified as high-priority than at lower speeds. For example, one or more thresholds or ranges can be used to determine a ratio of high-priority objects to low-priority objects based on the velocity of the autonomous vehicle. Each object can then be classified as either high-priority or low-priority based on that ratio.
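A sketch of such a velocity-dependent cap, with speed ranges and ratios chosen purely for illustration: the faster the autonomous vehicle moves, the fewer objects receive the high-priority label.

```python
def high_priority_quota(ego_speed_mps, num_objects):
    """Number of objects allowed to be high-priority; shrinks with ego speed."""
    if ego_speed_mps < 5.0:       # assumed: low speed, generous budget
        ratio = 0.8
    elif ego_speed_mps < 15.0:    # assumed: moderate speed
        ratio = 0.5
    else:                         # assumed: highway speed, tight latency budget
        ratio = 0.25
    return max(1, round(num_objects * ratio))

def classify_by_quota(ranked_object_ids, ego_speed_mps):
    """Objects are ranked most-important-first; the quota takes the top of the list."""
    quota = high_priority_quota(ego_speed_mps, len(ranked_object_ids))
    return {oid: ("high" if i < quota else "low")
            for i, oid in enumerate(ranked_object_ids)}

print(classify_by_quota(["a", "b", "c", "d"], ego_speed_mps=20.0))
# at highway speed only the top-ranked object stays high-priority
```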
- the priority classification systems and methods described herein can include or leverage one or more machine-learned models that assist in classifying the objects.
- the priority classification system can include a machine-learned object priority classifier to classify each object perceived by the autonomous vehicle.
- the machine-learned object priority classifier can classify each object as either high-priority or low-priority.
- the machine-learned models included in or employed by the priority classification systems described herein can be trained using log data collected during actual operation of autonomous vehicles on travelways (e.g., roadways).
- the log data can include sensor data and/or state data for various objects perceived by an autonomous vehicle (e.g., the perception system of an autonomous vehicle) and also the resulting future state for each object that occurred subsequent and/or contemporaneous to collection of the sensor data and/or generation of the state data.
- the log data can include a large number of real-world examples of objects paired with the data collected and/or generated by the autonomous vehicle (e.g., sensor data, map data, perception data, etc.) contemporaneous to such perception, such as whether the object became more or less likely to interact with the autonomous vehicle in the resulting future state of the object.
- Training the machine-learned models on such real-world log data can enable the machine-learned models to determine object classifications which better mirror or mimic real-world object behavior.
- the prediction system can determine a predicted future state for each object based at least in part on the priority classification for each object. For example, the order in which the prediction system determines the predicted future state for each object can be based at least in part on the priority classification assigned to the objects. For example, in some implementations, predicted future states for higher priority objects can be determined before predicted future states for lower priority objects are determined. In some implementations, the predicted future state for each object classified as high-priority can be determined before the predicted future state is determined for any objects classified as low-priority. In some implementations, the predicted future state for each object can be determined based upon an object's relative priority as compared to each other object. For example, each object perceived by the autonomous vehicle can be assigned a relative priority rank (e.g., for Y objects, a rank of 1 to Y), and a predicted future state can be determined based on the priority rank of the objects.
- the prediction system can predict the future locations of the objects based at least in part on perception information (e.g., the state data for each object) received from the perception system, map data, sensor data, and/or any other data that describes the past and/or current state of the objects, the autonomous vehicle, the surrounding environment, and/or relationship(s) therebetween.
- the prediction system can estimate the future motion of actors or other objects over a planning horizon which corresponds to the period of time for which a motion plan for the autonomous vehicle is generated.
- the prediction system can attach probability likelihoods to each predicted motion or other future location of the objects.
- the prediction system can receive the priority classification for each respective object perceived by the autonomous vehicle and the respective state data for each object for a plurality of consecutive time frames.
- the perception system can provide state data for a plurality of objects at a plurality of consecutive time frames
- the priority classification system can provide a respective priority classification for each object for each of the plurality of consecutive time frames.
- the prediction system can determine a predicted future state for each object classified as high-priority for the current time frame.
- the terms “current” or “most recently obtained,” when used in reference to a time frame, refer to the time frame most recently provided to a particular system (e.g., perception system, prediction system). For example, using state data for the current time frame, a predicted future location can be determined for each high-priority object.
- the predicted future states for each object classified as high-priority can then be provided to the motion planning system.
- a motion plan can be determined for the autonomous vehicle.
- the prediction system can determine a predicted future state for the current time frame for each object classified as low-priority.
- each object perceived by an autonomous vehicle can have a predicted future state determined by the prediction system for each time frame.
- the prediction system can provide a predicted future state for a previous sequential time frame for each object classified as low-priority to the motion planning system concurrently with the predicted future states for each object classified as high-priority for the current time frame. For example, as soon as a predicted future state for the current time frame has been determined for each high-priority object, the predicted future states for the current time frame for each high-priority object can be provided to the motion planning system along with the predicted future state for the previous sequential time frame for each low-priority object. For example, a predicted future state for each object classified as low-priority can be determined by the prediction system by selecting, obtaining, or otherwise using a predicted future state for each low-priority object from a previous sequential time frame.
- a full set of predicted future states comprising current predicted future states for high-priority objects and previous sequential predicted future states for low-priority objects can be provided to the motion planning system as soon as the prediction system determines a predicted future state for all high-priority objects. This can reduce the overall latency for determining a motion plan for a vehicle, thereby reducing the response time for an autonomous vehicle and increasing passenger safety.
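The hand-off described above can be sketched as follows; the dictionary-based cache and field names are illustrative assumptions:

```python
def assemble_prediction_set(high_priority_objs, low_priority_objs,
                            predict_fn, prev_predictions):
    """Return a full set of predicted future states for the current frame.

    High-priority objects are predicted now; low-priority objects reuse the
    prediction cached from the previous sequential frame, so the motion
    planner is unblocked as soon as the high-priority work finishes.
    """
    predictions = {}
    for obj in high_priority_objs:
        predictions[obj["id"]] = predict_fn(obj)   # fresh, current-frame prediction
    for obj in low_priority_objs:
        # Fall back to the previous frame's prediction for this object, if any.
        if obj["id"] in prev_predictions:
            predictions[obj["id"]] = prev_predictions[obj["id"]]
    return predictions
```

The low-priority objects' fresh predictions can still be computed afterward and cached for the next frame's hand-off.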
- the prediction system can include a low-fidelity prediction system and a high-fidelity prediction system.
- the terms “low-fidelity” and “high-fidelity” refer to a relative computational intensity of the prediction system or algorithms used by the respective prediction system.
- a high-fidelity prediction system can include or otherwise leverage one or more machine-learned models in order to predict a future location for each object.
- the prediction system can be a goal-oriented prediction system that generates one or more potential goals, selects one or more of the most likely potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals.
- the prediction system can include a scenario generation system that generates and/or scores the one or more goals for an object and a scenario development system that determines the one or more trajectories by which the object can achieve the goals.
- the prediction system can include a machine-learned goal-scoring model, a machine-learned trajectory development model, and/or other machine-learned models.
- a high-fidelity prediction system can be used to determine a predicted future state for objects classified as high-priority.
- a low-fidelity prediction system can include one or more state forward integration models. For example, a low-fidelity prediction system can predict a future state for an object by forward integrating a current state. For example, a low-fidelity prediction system can use a current position, a current velocity, and a current heading of an object to determine a predicted future location for the object at a future time period. In some implementations, a low-fidelity prediction system can be used to determine a predicted future state for objects classified as low-priority.
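A minimal sketch of such forward integration under a constant speed and heading assumption (a real low-fidelity predictor might also account for yaw rate and acceleration):

```python
import math

def forward_integrate(x, y, speed, heading_rad, dt, steps):
    """Predict future (x, y) waypoints by forward integrating a constant
    speed and heading; returns one waypoint per integration step."""
    vx = speed * math.cos(heading_rad)
    vy = speed * math.sin(heading_rad)
    return [(x + vx * dt * i, y + vy * dt * i) for i in range(1, steps + 1)]
```

For example, an object at the origin moving at 2 m/s along the x-axis, integrated at 0.5 s steps, advances 1 m per step.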
- the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle.
- the systems and methods of the present disclosure can determine a respective priority classification for each object perceived by an autonomous vehicle.
- the order in which a predicted future state is determined for each object can then be determined based at least in part upon the respective priority classification for each object.
- the ability to classify objects according to a respective priority can allow for computational resources to be focused on higher-priority objects.
- one technical effect and benefit of the present disclosure is reduced latency for determining a predicted future location for higher priority objects which are more likely to impact a motion plan for an autonomous vehicle than low-priority objects.
- the present disclosure provides techniques that enable a computing system to determine a motion plan for an autonomous vehicle as soon as a predicted future location for all high-priority objects has been determined.
- the present disclosure can allow for a reduction in the time required for an autonomous vehicle to perceive an object and determine a motion plan in response to the object.
- the present disclosure can allow for higher fidelity prediction systems to be used to determine predicted future locations for higher priority objects, and lower fidelity prediction systems to be used to determine predicted future locations for lower priority objects. This can allow for more efficient use of computing resources on board an autonomous vehicle.
- the present disclosure also provides additional technical effects and benefits, including, for example, enhancing passenger safety.
- the systems and methods according to example aspects of the present disclosure can allow for reduced reaction times for determining a motion plan in response to an object perceived by the autonomous vehicle. This can allow an autonomous vehicle to come to a stop more quickly, navigate around the object, or otherwise respond to the object more quickly, thereby reducing the likelihood of an autonomous vehicle colliding with the object.
- FIG. 1 depicts a block diagram of an example autonomous vehicle 10 according to example aspects of the present disclosure.
- the autonomous vehicle 10 can include one or more sensors 101 , a vehicle computing system 102 , and one or more vehicle controls 107 .
- the vehicle computing system 102 can assist in controlling the autonomous vehicle 10 .
- the vehicle computing system 102 can receive sensor data from the one or more sensors 101 , attempt to comprehend the surrounding environment by performing various processing techniques on data collected by the sensors 101 , and generate an appropriate motion path through such surrounding environment.
- the vehicle computing system 102 can control the one or more vehicle controls 107 to operate the autonomous vehicle 10 according to the motion path.
- the vehicle computing system 102 can include one or more processors 112 and a memory 114 .
- the one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a computing device, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
- the memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
- the memory 114 can store data 116 and instructions 118 which can be executed by the processor 112 to cause vehicle computing system 102 to perform operations.
- the vehicle computing system 102 can include a perception system 103 , a prediction system 104 , and a motion planning system 105 that cooperate to perceive the surrounding environment of the autonomous vehicle 10 and determine a motion plan for controlling the motion of the autonomous vehicle 10 accordingly.
- the perception system 103 can receive sensor data from the one or more sensors 101 that are coupled to or otherwise included within the autonomous vehicle 10 .
- the one or more sensors 101 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors.
- the sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle 10 .
- the sensor data can include the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser.
- a LIDAR system can measure distances by measuring the Time of Flight (TOF) that it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light.
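Since the pulse travels to the object and back, range follows from distance = speed of light × round-trip time / 2. A small worked example:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def lidar_range_from_tof(round_trip_seconds):
    """Distance to the reflecting object; the pulse covers the range twice."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m of range.
```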
- the sensor data can include the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected a ranging radio wave.
- radio waves (e.g., pulsed or continuous) transmitted by the RADAR system can reflect off an object and return to a receiver of the RADAR system, giving information about the object's location and speed.
- a RADAR system can provide useful information about the current speed of an object.
- for one or more cameras, various processing techniques (e.g., range imaging techniques such as structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location of points that correspond to objects depicted in imagery captured by the one or more cameras.
- Other sensor systems can identify the location of points that correspond to objects as well.
- the one or more sensors 101 can include a positioning system.
- the positioning system can determine a current position of the vehicle 10 .
- the positioning system can be any device or circuitry for analyzing the position of the vehicle 10 .
- the positioning system can determine position by using one or more of inertial sensors, a satellite positioning system, based on IP address, by using triangulation and/or proximity to network access points or other network components (e.g., cellular towers, WiFi access points, etc.) and/or other suitable techniques.
- the position of the vehicle 10 can be used by various systems of the vehicle computing system 102 .
- the one or more sensors 101 can be used to collect sensor data that includes information that describes the location (e.g., in three-dimensional space relative to the autonomous vehicle 10 ) of points that correspond to objects within the surrounding environment of the autonomous vehicle 10 .
- the sensors 101 can be located at various different locations on the autonomous vehicle 10 .
- one or more cameras and/or LIDAR sensors can be located in a pod or other structure that is mounted on a roof of the autonomous vehicle 10 while one or more RADAR sensors can be located in or behind the front and/or rear bumper(s) or body panel(s) of the autonomous vehicle 10 .
- camera(s) can be located at the front or rear bumper(s) of the vehicle 10 as well. Other locations can be used as well.
- the perception system 103 can retrieve or otherwise obtain map data 126 that provides detailed information about the surrounding environment of the autonomous vehicle 10 .
- the map data 126 can provide information regarding: the identity and location of different travelways (e.g., roadways), road segments, buildings, or other items or objects (e.g., lampposts, crosswalks, curbing, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travelway); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that provides information that assists the vehicle computing system 102 in comprehending and perceiving its surrounding environment and its relationship thereto.
- the perception system 103 can identify one or more objects that are proximate to the autonomous vehicle 10 based on sensor data received from the one or more sensors 101 and/or the map data 126 . In particular, in some implementations, the perception system 103 can determine, for each object, state data that describes a current state of such object (also referred to as features of the object).
- the state data for each object can describe an estimate of the object's: current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/shape/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); type/class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; distance from the autonomous vehicle; minimum path to interaction with the autonomous vehicle; minimum time duration to interaction with the autonomous vehicle; and/or other state information.
- the perception system 103 can determine state data for each object over a number of iterations. In particular, the perception system 103 can update the state data for each object at each iteration. Thus, the perception system 103 can detect and track objects (e.g., vehicles) that are proximate to the autonomous vehicle 10 over time.
- the prediction system 104 can receive the state data from the perception system 103 and predict one or more future locations for each object based on such state data. For example, the prediction system 104 can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, other, more sophisticated prediction techniques or modeling can be used.
- the motion planning system 105 can determine a motion plan for the autonomous vehicle 10 based at least in part on the predicted one or more future locations for the object and/or the state data for the object provided by the perception system 103 . Stated differently, given information about the current locations of objects and/or predicted future locations of proximate objects, the motion planning system 105 can determine a motion plan for the autonomous vehicle 10 that best navigates the autonomous vehicle 10 relative to the objects at such locations. In some implementations, the motion planning system 105 can determine the motion plan for the autonomous vehicle using one or more adjusted vehicle parameters, as described herein.
- the motion planning system 105 can evaluate one or more cost functions and/or one or more reward functions for each of one or more candidate motion plans for the autonomous vehicle 10 .
- the cost function(s) can describe a cost (e.g., over time) of adhering to a particular candidate motion plan while the reward function(s) can describe a reward for adhering to the particular candidate motion plan.
- the reward can be of opposite sign to the cost.
- the motion planning system 105 can determine a total cost (e.g., a sum of the cost(s) and/or reward(s) provided by the cost function(s) and/or reward function(s)) of adhering to a particular candidate pathway.
- the motion planning system 105 can select or determine a motion plan for the autonomous vehicle 10 based at least in part on the cost function(s) and the reward function(s). For example, the motion plan that minimizes the total cost can be selected or otherwise determined.
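Plan selection over candidate motion plans can be sketched as follows, with rewards entering at opposite sign to costs; the cost and reward terms are hypothetical:

```python
def select_motion_plan(candidate_plans, cost_fns, reward_fns):
    """Pick the candidate motion plan with the lowest total cost, where
    rewards subtract from the summed costs."""
    def total_cost(plan):
        cost = sum(fn(plan) for fn in cost_fns)
        reward = sum(fn(plan) for fn in reward_fns)
        return cost - reward
    return min(candidate_plans, key=total_cost)
```

For instance, with a jerk penalty as the only cost and forward progress as the only reward, the smoother, more productive candidate is selected.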
- the motion planning system 105 can provide the selected motion plan to a vehicle controller 106 that controls one or more vehicle controls 107 (e.g., actuators or other devices that control gas flow, steering, braking, etc.) to execute the selected motion plan.
- the vehicle computing system 102 can also include a priority classification system 150 configured to classify one or more objects perceived by the autonomous vehicle 10 .
- the priority classification system 150 can receive state data descriptive of one or more objects perceived by the autonomous vehicle 10 from the perception system 103 . The priority classification system 150 can then classify each object based at least in part on the respective state data for each object.
- the priority classification for each object can be based on an object's position, velocity, and/or heading. For example, objects that are closer to the autonomous vehicle can be given a higher priority classification. Similarly, objects that are traveling in a direction towards the autonomous vehicle and/or towards a position at which the autonomous vehicle will be at a forthcoming time period can be given a higher priority classification. In some implementations, objects that are traveling at higher speeds, such as objects traveling at higher speeds towards the autonomous vehicle, can be given a higher priority classification than objects traveling at lower speeds.
- the priority classification can be based on a likelihood that an object will interact with the autonomous vehicle or otherwise be of importance to determining a motion plan for the autonomous vehicle. For example, objects traveling in an opposite direction as the autonomous vehicle can be given a lower priority classification than objects traveling in a direction that will interact with a motion path of the autonomous vehicle.
- the priority classification can be based on an object type. For example, in some implementations, pedestrians can be assigned a higher priority than other objects, such as a static (i.e., non-moving) vehicle. Similarly, other object types and/or classes can be used to determine a priority classification for each object.
- the priority classification for each object can be based on a minimum path to interaction with the autonomous vehicle or a minimum time duration to interaction with the autonomous vehicle.
- a minimum path to interaction with the autonomous vehicle can correspond to a distance along one or more travelways that the object would have to traverse in order to interact with the autonomous vehicle.
- a vehicle traveling along a highway in an opposite direction as the autonomous vehicle may need to exit the highway, turn around, re-enter the highway, and overtake the autonomous vehicle in order to interact with the autonomous vehicle.
- the vehicle is likely to have a long minimum path to interaction and/or minimum time duration to interaction with the autonomous vehicle.
- a vehicle approaching an intersection at a perpendicular path of travel to the autonomous vehicle is likely to have a shorter minimum path to interaction and/or minimum time duration to interaction with the autonomous vehicle.
- the vehicle approaching the intersection can be given a higher priority classification than the vehicle traveling in the opposite direction.
- the priority classification system 150 can classify each object as high-priority or low-priority. For example, each object can be classified according to a binary classification in which each object is either a high-priority or low-priority object. For example, objects which have a minimum path to interaction and/or minimum time duration to interaction with the autonomous vehicle that is less than a threshold can be classified as high-priority objects. Similarly, objects which have a minimum path to interaction and/or minimum time duration to interaction that exceeds the threshold can be classified as low-priority objects. In some implementations, objects of a particular type (e.g., pedestrians) can always be classified as high-priority objects. In some implementations, objects which have been determined to be unlikely to interact with the autonomous vehicle or to affect the determination of a motion plan for the autonomous vehicle can be classified as low-priority objects.
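One way such a binary classification could be realized, with illustrative thresholds and field names:

```python
HIGH_PRIORITY_TYPES = {"pedestrian"}       # types always treated as high-priority (assumed)
TIME_TO_INTERACTION_THRESHOLD_S = 10.0     # illustrative threshold, not from the disclosure

def classify_object(obj):
    """Binary high/low priority from state data.

    `obj` carries a 'type', a 'min_time_to_interaction_s' estimate, and an
    optional flag marking objects already judged unlikely to affect the plan.
    """
    if obj["type"] in HIGH_PRIORITY_TYPES:
        return "high"
    if obj.get("unlikely_to_interact", False):
        return "low"
    if obj["min_time_to_interaction_s"] < TIME_TO_INTERACTION_THRESHOLD_S:
        return "high"
    return "low"
```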
- the priority classification system 150 can classify each object relative to other objects perceived by the autonomous vehicle. For example, in some implementations, each object can be assigned a relative priority in relation to each other object perceived by the autonomous vehicle. For example, each object can be assigned a relative priority rank based on the respective priority of the object. For example, if an autonomous vehicle perceives Y objects within a surrounding environment of the autonomous vehicle, each object can be assigned a relative rank on a scale of 1 to Y. In this way, each object can be assigned a priority classification relative to each other object perceived by the autonomous vehicle.
- the priority classification system 150 can classify each object based on a velocity of the autonomous vehicle.
- a ratio of high-priority objects to low-priority objects can be determined based on a velocity of the vehicle. For example, at higher velocities, it may be preferable to limit the number of high-priority objects for which the prediction system 104 must determine a predicted future state in a current time frame, in order to reduce and/or minimize the latency for determining a motion plan in response to such objects. In such a case, fewer objects may be classified as high-priority objects than at lower velocities.
- the ratio of high-priority objects to low-priority objects can be determined based on one or more threshold velocities. For example, for a first velocity range of 1 to X, a ratio of 1 high-priority object to Y low-priority objects can be used, whereas for a second velocity range of X to 2X, a ratio of 1 high-priority object to 2Y low-priority objects can be used. In other implementations, other pre-determined ratios can be used.
- each object can be classified as either high-priority or low-priority such that the ratio of high-priority objects to low-priority objects generally conforms to the pre-determined ratio (i.e., the ratio of high-priority to low-priority objects is within a threshold variance of the pre-determined ratio).
- a machine-learned model can be used to determine the priority classification for each object based on the respective state data for each object.
- the machine-learned model can be configured to classify each object as either high-priority or low-priority and provide the priority classification for each object to the prediction system 104 .
- the respective state data for each object can be input into the machine-learned model, and data indicative of a respective priority classification for the object can be received as an output of the machine-learned model.
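As an illustration of this input/output interface, the following sketch uses a hand-rolled logistic scorer; the feature names and weights are hypothetical stand-ins for whatever machine-learned object priority classifier is actually trained:

```python
import math

def featurize(state):
    """Fixed-length feature vector from an object's state data (assumed features)."""
    return [state["distance_m"], state["speed_mps"], state["min_time_to_interaction_s"]]

def priority_score(features, weights, bias):
    """Logistic score in (0, 1); higher means more likely high-priority."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify(state, weights, bias, threshold=0.5):
    """State data in, priority classification out."""
    score = priority_score(featurize(state), weights, bias)
    return "high" if score >= threshold else "low"
```

In practice, the weights would come from training on annotated vehicle data logs rather than being set by hand.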
- the machine-learned model can be trained based at least in part on training data that comprises annotated vehicle data logs that were previously collected during previous autonomous vehicle driving sessions.
- vehicle data logs can be recorded during one or more autonomous vehicle driving sessions, which can include state data for objects perceived by the autonomous vehicle.
- the vehicle data logs can be annotated by a human reviewer in order to help train the machine-learned model.
- objects can be labeled either high-priority or low-priority.
- the machine-learned model can then be trained to determine a priority classification for objects based on the training data.
- the vehicle computing system 102 can determine a predicted future state for each object based at least in part on the priority classification for each object.
- the priority classification system 150 can be configured to provide the priority classification for each object perceived by the perception system 103 to the prediction system 104 .
- the prediction system 104 can then determine a predicted future state for each object based at least in part on the priority classification for each object.
- the order in which the computing system determines the predicted future state for each object can be based at least in part on the priority classification assigned to each object. For example, in some implementations, predicted future states for all objects classified as high-priority can be determined before predicted future states are determined for any low-priority objects. In some implementations, a predicted future state can be determined for each object according to a respective priority rank for each object. For example, for Y objects, each object can be assigned a relative priority rank of 1 to Y, and a predicted future state for each object can be determined based on the relative priority rank for each object (i.e., starting with 1 and ending with Y).
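Ordering prediction by relative priority rank can be sketched as follows; the `rank` field and the prediction callable are illustrative:

```python
def predict_in_priority_order(objects, predict_fn):
    """Run prediction for each object starting at rank 1 and ending at rank Y.

    `objects` is a list of dicts with a 'rank' field (1 = highest priority);
    returns (object id, predicted state) pairs in the order predicted.
    """
    results = []
    for obj in sorted(objects, key=lambda o: o["rank"]):
        results.append((obj["id"], predict_fn(obj)))
    return results
```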
- a future location prediction system can be selected based at least in part on the priority classification for each object. For example, in some implementations, a low-fidelity prediction system can be used to determine a predicted future state for low-priority objects, and a high-fidelity prediction system can be used to determine a predicted future state for high-priority objects.
- Each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can include computer logic utilized to provide desired functionality.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can be implemented in hardware, firmware, and/or software controlling a general purpose processor.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 includes program files stored on a storage device, loaded into a memory and executed by one or more processors.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media.
- Referring now to FIG. 2 , a block diagram depicting an example perception system 103 according to example aspects of the present disclosure is shown. Elements that are the same or similar to those shown in FIG. 1 are referred to with the same reference numerals.
- the priority classification system 150 can be implemented as a subpart of the perception system 103 .
- the perception system 103 can receive sensor data from one or more sensors 101 (as shown in FIG. 1 ) and map data 126 .
- the perception system 103 can generate state data for each object perceived by the autonomous vehicle 10 , perform a priority classification for each object using the priority classification system 150 , and provide the state data and respective priority classification for each object to the prediction system 104 .
- Referring now to FIG. 3 , a block diagram depicting an example prediction system 104 according to example aspects of the present disclosure is shown. Elements that are the same or similar to those shown in FIGS. 1 and 2 are referred to with the same reference numerals.
- the priority classification system 150 can be implemented as a subpart of the prediction system 104 .
- the perception system 103 can receive sensor data from one or more sensors 101 (as shown in FIG. 1 ) and map data 126 .
- the perception system 103 can then provide the state data indicative of one or more objects to the prediction system 104 .
- the prediction system 104 can then determine a priority classification for each object using the priority classification system 150 , and determine a predicted future state for each object based at least in part on the priority classification for each object.
- the prediction system 104 can then provide the predicted future states for each object to the motion planning system 105 .
- the priority classification system 150 can be implemented as a stand-alone priority classification system 150 , or as a sub system of either a perception system 103 or a prediction system 104 .
- a vehicle computing system can iteratively determine a motion plan using data obtained in a plurality of consecutive time frames.
- each of the perception, prediction, and motion planning systems illustrated in FIGS. 1-3 can concurrently perform analysis on data from a plurality of consecutive time frames.
- the perception system can receive sensor data for the time frame N; the perception system can concurrently generate and provide state data to the prediction system for one or more objects perceived by the perception system for a time frame N−1; the prediction system can concurrently determine a predicted future state for each object perceived by the perception system for a time frame N−2; and a motion planning system can concurrently determine a motion plan for the autonomous vehicle using predicted future states for a time frame N−3.
- each of the perception, prediction, and motion planning systems can receive and perform a respective analysis of data received from an upstream system, resulting in the motion planning system determining a motion plan using predicted future states for a time frame N−2. In this way, a motion plan for the autonomous vehicle can be iteratively determined using data from each of a plurality of consecutive time frames.
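The staggered pipeline described above can be sketched as follows. The function and stage names are illustrative assumptions, not the disclosure's API:

```python
# Illustrative sketch of the staggered pipeline: during a single processing
# cycle, each stage of the autonomy stack operates on data from a different
# consecutive time frame.

def pipeline_assignments(n):
    """Return which time frame each stage processes during cycle N = n."""
    return {
        "sensing": n,             # sensors capture data for frame N
        "perception": n - 1,      # state data generated for frame N-1
        "prediction": n - 2,      # future states predicted for frame N-2
        "motion_planning": n - 3, # motion plan computed from frame N-3
    }

assignments = pipeline_assignments(10)
# While sensing frame 10, the motion planner consumes frame 7's predictions.
```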
- block 410 represents analysis by a perception system for data from a frame N.
- the perception system's analysis of data for frame N can include a plurality of objects 411 A-J.
- Each of the objects can have associated state data descriptive of the object generated by the perception system.
- the perception system can generate state data describing a position, velocity, acceleration, heading, size, type, yaw rate, or other state data descriptive of the object as described herein.
- the state data descriptive of the objects 411 A-J generated by the perception system for frame N can be provided to the prediction system once the perception system has completed its analysis.
- the prediction system can also receive a priority classification for each object.
- each object can be classified as either high-priority (“HP”) or low-priority (“LP”).
- the priority classification for each object can be determined based on the respective state data for each object. Further, in some implementations, the priority classification can be determined by a machine-learned model.
- the prediction system can receive the respective priority classifications for each object as well as the respective state data describing each object from the perception system.
- the prediction system can then determine a predicted future state for each object based at least in part on the respective priority classification for each object.
- the prediction system can first determine a predicted future state for each object classified as high-priority.
- the prediction system can first determine a predicted future state for HP objects 431 A-D.
- the prediction system can determine a predicted future state for each object classified as high-priority based at least in part on the state data obtained for the most recent time frame (Frame N).
- the prediction system can provide the predicted future state for each object classified as high-priority for the current timeframe to the motion planning system. For example, as shown by the arrow from the dashed block 440 to the block 450 , once the prediction system has determined a predicted future state for each high-priority object HP 431 A-D, the prediction system can provide the predicted future states for the objects HP 431 A-D to the motion planning system. In this way, the motion planning system can begin determining a motion plan in an advanced fashion (e.g., “ahead of schedule”).
- the prediction system can determine a predicted future state for each object identified as low-priority. For example, after the prediction system has provided the high-priority objects HP 431 A-D to the motion planning system, the prediction system can determine a predicted future state for each low priority object LP 431 E-J. In this way, each object perceived in a particular frame (e.g., frame N) can have a predicted future state determined by the prediction system.
- the prediction system can further be configured to provide a predicted future state for the previous sequential timeframe for each object classified as low-priority to the motion planning system concurrently with the predicted future state for each object classified as high-priority for the current timeframe.
- a predicted future state for a low-priority object can be determined by selecting, obtaining, or otherwise determining a predicted future state for the object based on state data obtained for a previous sequential time frame.
- the prediction system can have previously determined a predicted future state for objects 421 A-J, including high-priority objects HP 421 A-D and low-priority objects LP 421 E-J.
- the prediction system could concurrently determine predicted future states for high-priority objects HP 421 A-D and low-priority objects LP 421 E-J for time frame N−1 in block 420 .
- each high-priority object HP 431 A-D for the time frame N can respectively correspond to a high-priority object HP 421 A-D for the time frame N−1
- each low-priority object LP 431 E-J for the time frame N can respectively correspond to a low-priority object LP 421 E-J for the time frame N−1.
- when the prediction system provides the predicted future states for high-priority objects HP 431 A-D to the motion planning system, the prediction system can be configured to concurrently provide a previously determined predicted future state for each low-priority object (i.e., LP 431 E-J) from the previous sequential time frame (i.e., LP 421 E-J).
- a full set of predicted future states comprising the predicted future states for all high-priority objects (HP 431 A-D) for a current time frame and a previously determined predicted future state for all low-priority objects (LP 421 E-J) for a previous sequential time frame can be concurrently provided to a motion planning system as soon as the prediction system has determined a predicted future state for each object classified as high-priority (HP 431 A-D).
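Assembling such a full set can be sketched as follows. The function name and placeholder prediction values are illustrative assumptions:

```python
def assemble_prediction_set(hp_current, lp_previous):
    """Combine fresh high-priority predictions for the current time frame with
    previously computed low-priority predictions from the prior time frame."""
    full = dict(lp_previous)  # stale but acceptable for low-priority objects
    full.update(hp_current)   # fresh high-priority predictions take precedence
    return full

# Hypothetical identifiers echoing FIG. 4's labels:
hp_n = {"HP431A": "fresh", "HP431B": "fresh"}
lp_n_minus_1 = {"LP421E": "stale", "LP421F": "stale"}
full_set = assemble_prediction_set(hp_n, lp_n_minus_1)
# full_set covers every object and can go to the motion planner immediately,
# without waiting for low-priority predictions for the current frame.
```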
- An advantage provided by the object prediction process depicted in FIG. 4 is that the time required to determine a motion plan for an autonomous vehicle can be reduced.
- the motion planning system can receive a predicted future state for each object much sooner, thereby allowing a motion plan to be determined ahead of schedule.
- the reduction in time for the prediction system to determine a predicted future state for each object can correspond to the ratio of high-priority objects to low-priority objects. For example, as depicted in FIG. 4 , the prediction system would only need to determine a predicted future state for high-priority objects HP 431 A-D (i.e., 4 out of 10 objects) for time frame N before providing the predicted future states for each object 431 A-D and 421 E-J to the motion planning system, allowing for a reduction of approximately 60% of the required processing time.
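The approximate 60% figure follows from simple arithmetic, assuming per-object prediction cost is roughly uniform (an assumption made for illustration):

```python
# Back-of-envelope check of the ~60% reduction: if each object costs about the
# same to predict, deferring the low-priority objects defers that share of the
# per-frame work past the handoff to the motion planner.
total_objects = 10
high_priority = 4                     # HP 431A-D predicted before handoff
deferred = total_objects - high_priority
savings = deferred / total_objects    # fraction of work deferred past handoff
# savings == 0.6, i.e., roughly a 60% reduction in time-to-handoff
```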
- low-priority objects can be classified as such based on their negligible impact on a motion plan
- using a predicted future state for a low-priority object from a previous sequential time frame can allow for a net increase in passenger and autonomous vehicle safety.
- low-priority objects, such as objects positioned far away from an autonomous vehicle and traveling away from the autonomous vehicle, may be unlikely to impact a motion plan for the autonomous vehicle, whereas high-priority objects, such as objects travelling towards the autonomous vehicle or positioned near the autonomous vehicle, may be much more likely to impact a motion plan for the autonomous vehicle.
- the autonomous vehicle can respond to high-priority objects in a quicker fashion, thereby reducing the likelihood of unsafe conditions, such as a collision.
- the example computing system 100 can include a computing system 102 (e.g., a vehicle computing system 102 on an autonomous vehicle 10 ) and a machine learning computing system 130 that are communicatively coupled over one or more communication networks 180 .
- the computing system 102 can include one or more processor(s) 112 and memory 114 .
- the one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
- the memory 114 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof.
- the memory 114 can store information that can be accessed by the one or more processors 112 .
- the memory 114 can store data 116 that can be obtained, received, accessed, written, manipulated, created, and/or stored.
- the memory 114 can also store computer-readable instructions 118 that can be executed by the one or more processors 112 .
- the instructions can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions can be executed in logically and/or virtually separate threads on processor(s) 112 .
- the memory 114 can store instructions that when executed by the one or more processors 112 cause the one or more processors 112 to perform any of the operations and/or functions described herein.
- the computing system 102 can also include a network interface 128 used to communicate with one or more systems or devices, including systems or devices that are remotely located from the computing system 102 .
- the network interface 128 can include any circuits, components, software, etc. for communicating with one or more networks (e.g., 180 ).
- the network interface 128 can include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software and/or hardware for communicating data.
- the computing system 102 can also include a perception system 103 , a prediction system 104 , a motion planning system 105 , a vehicle controller 106 , and a priority classification system 150 , as described herein.
- Each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can include computer logic utilized to provide desired functionality.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can be implemented in hardware, firmware, and/or software controlling a general purpose processor.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can include program files stored on a storage device, loaded into a memory and executed by one or more processors.
- each of the perception system 103 , the prediction system 104 , the motion planning system 105 , the vehicle controller 106 , and the priority classification system 150 can include one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media.
- the prediction system 104 can include a low-fidelity prediction system 122 and a high-fidelity prediction system 124 .
- a high-fidelity prediction system 124 can include or otherwise leverage one or more machine-learned models in order to predict a future location for each object.
- the high-fidelity prediction system 124 can be a goal-oriented prediction system that generates one or more potential goals, selects one or more of the most likely potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals.
- the high-fidelity prediction system 124 can include a scenario generation system that generates and/or scores the one or more goals for an object and a scenario development system that determines the one or more trajectories by which the object can achieve the goals.
- the high-fidelity prediction system 124 can include a machine-learned goal-scoring model, a machine-learned trajectory development model, and/or other machine-learned models.
- a low-fidelity prediction system 122 can include one or more state forward-integration models. For example, a low-fidelity prediction system 122 can predict a future state for an object by forward integrating a current state. For example, a low-fidelity prediction system can use a current position, a current velocity, and a current heading of an object to determine a predicted future location for the object at a future time period.
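A minimal forward-integration step of the kind described above can be sketched as follows; the function name and constant-velocity assumption are illustrative, not a definitive implementation of the disclosure's low-fidelity prediction system 122:

```python
import math

def forward_integrate(x, y, speed, heading_rad, dt):
    """Constant-velocity forward integration of an object's state: project the
    current position along the current heading for dt seconds."""
    return (x + speed * math.cos(heading_rad) * dt,
            y + speed * math.sin(heading_rad) * dt)

# Object at (0, 0) moving 10 m/s along heading 0 rad, predicted 2 s ahead:
fx, fy = forward_integrate(0.0, 0.0, 10.0, 0.0, 2.0)
# fx == 20.0, fy == 0.0: the object is projected 20 m along its heading.
```

A fuller model might also integrate acceleration and yaw rate, both of which the state data described herein can include.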
- the computing system 102 can determine a predicted future state for each object based at least in part on the priority classification for the object by selecting a future location prediction system based at least in part on the priority classification for the object and determining the predicted future state for the object using the selected future location prediction system.
- the low-fidelity prediction system 122 can be used to determine a predicted future state for objects classified as low-priority
- the high-fidelity prediction system 124 can be used to determine a predicted future state for objects classified as high-priority.
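The selection between the two prediction systems can be sketched as a simple dispatch on the priority classification. Names and the two stand-in prediction functions are hypothetical:

```python
def predict_future_state(obj, low_fidelity_fn, high_fidelity_fn):
    """Select a prediction system from the object's priority classification."""
    if obj["priority"] == "high":
        return high_fidelity_fn(obj)  # e.g., goal-oriented prediction
    return low_fidelity_fn(obj)       # e.g., state forward integration

# Stand-in prediction functions used purely to show the dispatch:
chosen = predict_future_state(
    {"id": "ped-1", "priority": "high"},
    low_fidelity_fn=lambda o: "low",
    high_fidelity_fn=lambda o: "high",
)
# A high-priority object is routed to the high-fidelity system.
```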
- An advantage provided by using a low-fidelity prediction system and a high-fidelity prediction system to determine the predicted future state for each object based at least in part on the priority classification for each object is that computing resources can be allocated more efficiently.
- low-priority objects which have been classified as such due to their likely negligible impact on a vehicle motion plan may not require a sophisticated prediction system, such as the high-fidelity prediction system 124 , in order to determine a predicted future state for such objects.
- low-priority objects located far away from an autonomous vehicle and/or travelling in a direction away from the autonomous vehicle may have little to no impact on the motion plan for the autonomous vehicle.
- the granularity provided by a goal-oriented prediction system as described herein may provide little to no benefit over a low-fidelity prediction model 122 , such as a simple state forward-integration model. Accordingly, by first determining a priority classification for each object, computational resources can be more efficiently allocated for determining predicted future states for each object.
- the priority classification system 150 can store or include one or more machine-learned models 120 .
- the machine-learned model 120 can be or can otherwise include various machine-learned models such as decision tree-based models, support vector machines, k-Nearest Neighbor models, neural networks (e.g., deep neural networks), or other multi-layer non-linear models.
- Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), or other forms of neural networks.
- the one or more machine-learned models 120 can include a machine-learned object priority classifier.
- a machine-learned object priority classifier can be configured to classify objects perceived by the perception system 103 as either high-priority or low-priority.
- the machine-learned object priority classifier can be configured to rank objects according to a respective object priority, as described herein.
- the computing system 102 can determine a priority classification for each object using the machine-learned model 120 .
- the computing system 102 can obtain data descriptive of the machine-learned model, input the respective state data for each object perceived by the perception system 103 into the machine-learned model 120 , and receive data indicative of a respective priority classification for each object as an output of the machine-learned model.
- the machine-learned model 120 and/or the priority classification system 150 can be configured to provide the respective priority classification for each object to the prediction system 104 .
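The classifier's interface — state features in, priority label out — can be sketched with a simple heuristic stand-in. The real model is learned; the feature names and thresholds below are invented for illustration only:

```python
# Stand-in for the machine-learned object priority classifier: maps an object's
# state features to a priority label. Thresholds are invented, not learned.

def classify_priority(state):
    """Return 'high' or 'low' from an object's state features."""
    approaching = state["closing_speed_mps"] > 0  # moving toward the vehicle
    nearby = state["distance_m"] < 50.0           # within a proximity radius
    return "high" if (approaching or nearby) else "low"

label = classify_priority({"distance_m": 120.0, "closing_speed_mps": -3.0})
# A distant, receding object is classified low-priority.
```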
- the vehicle computing system 102 can receive the one or more machine-learned models 120 from the machine learning computing system 130 over network 180 and can store the one or more machine-learned models 120 in the memory 114 . The vehicle computing system 102 can then use or otherwise implement the one or more machine-learned models 120 (e.g., by processor(s) 112 ).
- certain operations described herein can be performed by a machine learning computing system 130 that is remotely located to the computing system 102 and in communication with the computing system 102 over one or more wireless networks 180 (e.g., cellular data networks, satellite communication networks, wide area networks, etc.).
- the machine learning computing system 130 can include one or more server computing devices.
- the server computing devices can be arranged according to a parallel computing architecture, a sequential computing architecture, or combinations thereof.
- the machine learning computing system 130 can include one or more processors 132 and a memory 134 .
- the one or more processors 132 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
- the memory 134 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof.
- the memory 134 can store information that can be accessed by the one or more processors 132 .
- the memory 134 can store data 136 that can be obtained, received, accessed, written, manipulated, created, and/or stored.
- the machine learning computing system 130 can obtain data from one or more memory device(s) that are remote from the system 130 .
- the memory 134 can also store computer-readable instructions 138 that can be executed by the one or more processors 132 .
- the instructions 138 can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions 138 can be executed in logically and/or virtually separate threads on processor(s) 132 .
- the memory 134 can store instructions 138 that when executed by the one or more processors 132 cause the one or more processors 132 to perform any of the operations and/or functions described herein.
- the machine learning computing system 130 can also include a network interface 164 used to communicate with one or more systems or devices, including systems or devices that are remotely located from the machine learning computing system 130 .
- the network interface 164 can include any circuits, components, software, etc. for communicating with one or more networks (e.g., 180 ).
- the network interface 164 can include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software and/or hardware for communicating data.
- the machine learning computing system 130 includes one or more server computing devices. If the machine learning computing system 130 includes multiple server computing devices, such server computing devices can operate according to various computing architectures, including, for example, sequential computing architectures, parallel computing architectures, or some combination thereof.
- the machine learning computing system 130 can include one or more machine-learned models 140 .
- the machine learned model(s) 140 can be or can otherwise include various machine-learned models such as decision tree-based models, support vector machines, k-Nearest Neighbor models, neural networks (e.g., deep neural networks), or other multi-layer non-linear models.
- Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), or other forms of neural networks.
- the machine learning computing system 130 can communicate with the computing system 102 according to a client-server relationship.
- the machine learning computing system 130 can implement the machine-learned model(s) 140 to provide a web service to the computing system 102 .
- the web service can provide object priority classifications to the computing system 102 .
- machine-learned models 110 can be located and used at the computing system 102 and/or machine-learned models 140 can be located and used at the machine learning computing system 130 .
- the machine learning computing system 130 and/or the computing system 102 can train the machine-learned models 110 and/or 140 through use of a model trainer 160 .
- the model trainer 160 can train the machine-learned models 110 and/or 140 using one or more training or learning algorithms.
- One example training technique is backwards propagation of errors.
- the model trainer 160 can perform supervised training techniques using a set of labeled training data 162 .
- the model trainer 160 can perform unsupervised training techniques using a set of unlabeled training data 162 .
- the model trainer 160 can perform a number of generalization techniques to improve the generalization capability of the models being trained. Generalization techniques include weight decays, dropouts, or other techniques.
- the model trainer 160 can train a machine-learned model 110 and/or 140 based on a set of training data 162 .
- the training data 162 can include, for example, vehicle data logs from previously completed autonomous vehicle driving sessions.
- the vehicle data logs can include, for example, sensor data obtained by one or more sensors of the autonomous vehicle, state data descriptive of one or more objects perceived by the perception system 103 of the autonomous vehicle, predicted future states for objects perceived by the autonomous vehicle determined by the prediction system 104 , previous motion plans determined by the motion planning system 105 , or other vehicle data as described herein.
- the model trainer 160 can be configured to train the machine-learned models 110 and/or 140 by determining whether objects perceived by the autonomous vehicle impacted a motion plan of the autonomous vehicle.
- the training data 162 can include vehicle data logs with object priority classification labels recorded by a human reviewer, which can be used to train the machine-learned model(s) 110 and/or 140 .
- a human reviewer can review the vehicle data logs and label object priority classifications for objects perceived by the perception system 103 .
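Supervised training on such labeled logs can be sketched with a minimal logistic-regression loop. The feature encoding, learning rate, and data below are toy assumptions standing in for the model trainer 160 's actual algorithms:

```python
import math

def train_priority_model(logs, lr=0.1, epochs=500):
    """Fit weights w and bias b so sigmoid(w.x + b) approximates the labels
    (1 = high-priority, 0 = low-priority) via per-sample gradient steps."""
    n = len(logs[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in logs:
            z = sum(wi * xi for wi, xi in zip(w, features)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - label  # gradient of the log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
            b -= lr * err
    return w, b

def predict(w, b, features):
    """Probability that an object with these features is high-priority."""
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy log entries: [inverse-distance, closing-speed] features, human labels.
logs = [([1.0, 1.0], 1), ([0.1, -1.0], 0), ([0.9, 0.5], 1), ([0.2, -0.5], 0)]
w, b = train_priority_model(logs)
# After training, nearby/approaching objects score above 0.5 (high-priority).
```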
- the model trainer 160 includes computer logic utilized to provide desired functionality, and can be implemented in hardware, firmware, and/or software controlling a general purpose processor.
- the model trainer 160 includes program files stored on a storage device, loaded into a memory and executed by one or more processors.
- the model trainer 160 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media.
- the network(s) 180 can be any type of network or combination of networks that allows for communication between devices.
- the network(s) 180 can include one or more of a local area network, wide area network, the Internet, secure network, cellular network, mesh network, peer-to-peer communication link and/or some combination thereof and can include any number of wired or wireless links. Communication over the network(s) 180 can be accomplished, for instance, via a network interface using any type of protocol, protection scheme, encoding, format, packaging, etc.
- FIG. 5 illustrates one example computing system 100 that can be used to implement the present disclosure.
- the computing system 102 can include the model trainer 160 and the training dataset 162 .
- the machine-learned models 110 can be both trained and used locally at the computing system 102 .
- the computing system 102 is not connected to other computing systems.
- components illustrated and/or discussed as being included in one of the computing systems 102 or 130 can instead be included in another of the computing systems 102 or 130 .
- Such configurations can be implemented without deviating from the scope of the present disclosure.
- the use of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components.
- Computer-implemented operations can be performed on a single component or across multiple components.
- Computer-implemented tasks and/or operations can be performed sequentially or in parallel.
- Data and instructions can be stored in a single memory device or across multiple memory devices.
- Referring now to FIG. 6 , an example method ( 600 ) to determine a predicted future state for objects perceived by an autonomous vehicle based at least in part on a priority classification for the objects according to example aspects of the present disclosure is depicted.
- Although FIG. 6 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement.
- the various steps of method ( 600 ) can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
- the method ( 600 ) can be implemented by a computing system, such as a computing system comprising one or more computing devices.
- the method ( 600 ) can include obtaining, by a computing system, state data descriptive of at least a current or past state of a plurality of objects that are perceived by an autonomous vehicle.
- the state data can include data descriptive of one or more features of an object, such as a position, a velocity, an acceleration, a heading, a yaw rate, a shape, a size, a type, a distance from the autonomous vehicle, a minimum path to interaction with the autonomous vehicle, a minimum time duration to interaction with the autonomous vehicle, any other state data described herein, or any state data descriptive of an object perceived by an autonomous vehicle.
- the state data can be obtained from a perception system of the autonomous vehicle configured to generate the state data based on sensor data obtained from one or more sensors of the autonomous vehicle.
- the method ( 600 ) can include determining, by the computing system, a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object.
- the priority classification for each object can be determined by a priority classification system.
- the priority classification can be either a high-priority or low-priority classification for each object.
- the priority classification can be a respective priority rank for each object relative to each other object perceived by the autonomous vehicle.
- the priority classification for each object can be determined by a machine-learned model.
- the method can include obtaining data descriptive of a machine-learned model.
- data descriptive of the machine-learned model can be obtained from a memory (e.g., non-transitory computer readable media) of the computing system.
- the machine-learned model can be a machine-learned object priority classifier configured to classify each object as either high-priority or low-priority based on the respective state data for each object.
- the method ( 600 ) can include inputting the respective state data for an object into the machine-learned model.
- state data generated by a perception system can be received by a machine-learned model of a priority classification system.
- the respective state data for each object can be input into the machine-learned model in order to determine a priority classification for each respective object.
- the method ( 600 ) can include receiving data indicative of a respective priority classification as an output of the machine-learned model.
- the machine-learned model can be a machine-learned object priority classifier configured to classify each object as either high-priority or low-priority, and the machine-learned model can output a respective high-priority or low-priority classification based on the respective state data for each object.
- the method ( 600 ) can include determining, by the computing system, an order in which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object.
- each object can be classified as either high-priority or low-priority, and the order can be determined such that each high-priority object has a predicted future state determined before a predicted future state is determined for any low-priority objects.
- determining an order in which the computing system determines a predicted future state for each object can be based on the priority rank assigned to each object. For example, the highest ranked object can have a predicted future state determined first, with each successively ranked object determined in turn according to the respective priority rank for each object.
- the method ( 600 ) can include determining, by the computing system, the predicted future state for each object based at least in part on the determined order.
- the prediction system can determine a predicted future state for each object classified as high-priority before determining a predicted future state for each object classified as low-priority.
- the prediction system can be configured to provide the predicted future states for each high-priority object to a motion planning system.
- determining a predicted future state for each object classified as high-priority can include determining a future state for each high-priority object based at least in part on state data obtained for the most recent time frame.
- determining a predicted future state for each object classified as low-priority can include selecting a predicted future state for the object that was previously determined based on state data obtained for a previous sequential time frame.
- the prediction system can provide a previously-determined future predicted state for each low-priority object for a previous sequential time frame to a motion planning system at the same time that the prediction system provides a future predicted state for each high-priority object for a current time frame to the motion planning system.
- determining a predicted future state for each object based at least in part on the determined order can include selecting a future location prediction system based at least in part on the priority classification for the object and determining the predicted future state for the object using the selected future location prediction system.
- a prediction system can include a low-fidelity prediction system and a high-fidelity prediction system.
- the low-fidelity prediction system can be used to determine a predicted future state for each low-priority object.
- the high-fidelity prediction system can be used to determine a predicted future state for each high-priority object.
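A minimal sketch of this fidelity-based dispatch might look as follows (all names are assumptions; the stand-in predictors illustrate only the control flow, not actual prediction logic):

```python
# Sketch of fidelity-based dispatch (all names are assumptions); the stand-in
# predictors illustrate only the control flow, not real prediction logic.
def predict_future_state(obj, priority, high_fidelity, low_fidelity):
    predictor = high_fidelity if priority == "high" else low_fidelity
    return predictor(obj)

high_fidelity = lambda obj: ("detailed_trajectory", obj)  # e.g., goal-oriented model
low_fidelity = lambda obj: ("forward_integrated", obj)    # e.g., state integration

result = predict_future_state("bus_1", "high", high_fidelity, low_fidelity)
# result == ("detailed_trajectory", "bus_1")
```

In a real system the two predictors would be the goal-oriented and state-forward-integration models discussed later in the description.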
- the method ( 600 ) can include determining a motion plan for the autonomous vehicle based at least in part on the predicted future state for at least one of the objects.
- a motion planning system can receive one or more predicted future states for one or more objects perceived by the autonomous vehicle, and can determine a motion plan for the autonomous vehicle based at least in part on the predicted future states for the one or more objects.
- the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle, determining a predicted future state for each object based on the respective priority classification for each object, and determining a motion plan for the autonomous vehicle based at least in part on the predicted future states.
- the technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems.
- the inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components.
- processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination.
- Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
Description
- The present application is based on and claims priority to U.S. Provisional Application 62/549,407 having a filing date of Aug. 23, 2017, which is incorporated by reference herein.
- The present disclosure relates generally to autonomous vehicles. More particularly, the present disclosure relates to systems and methods for determining a priority classification for objects that are perceived by autonomous vehicles and predicting a future location for the objects based at least in part on the respective priority classification for each object.
- An autonomous vehicle is a vehicle that is capable of sensing its environment and navigating with minimal or no human input. In particular, an autonomous vehicle can observe its surrounding environment using a variety of sensors and can attempt to comprehend the environment by performing various processing techniques on data collected by the sensors. Given knowledge of its surrounding environment, the autonomous vehicle can identify an appropriate motion path through such surrounding environment.
- Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.
- One example aspect of the present disclosure is directed to a computer-implemented method. The method can include obtaining, by a computing system comprising one or more processors, state data descriptive of at least a current or past state of a plurality of objects that are perceived by an autonomous vehicle. The method can further include determining, by the computing system, a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object. The method can further include determining, by the computing system, an order at which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object. The method can further include determining, by the computing system, the predicted future state for each object based at least in part on the determined order.
- Another example aspect of the present disclosure is directed to a computing system. The computing system can include a perception system comprising one or more processors. The perception system can be configured to generate, for each of a plurality of consecutive time frames, state data descriptive of at least a current state of each of a plurality of objects that are perceived by an autonomous vehicle. The computing system can further include a priority classification system comprising one or more processors. The priority classification system can be configured to, for each of the plurality of consecutive time frames, classify each object in the plurality of objects as either high-priority or low-priority based at least in part on the respective state data for each object. The computing system can further include a prediction system comprising one or more processors. The prediction system can be configured to, for each of the plurality of consecutive time frames receive the priority classification for each respective object, determine, for the current time frame, a predicted future state for each object classified as high-priority, and provide the predicted future state for each object classified as high-priority for the current time frame to a motion planning system implemented by the one or more processors.
- Another example aspect of the present disclosure is directed to an autonomous vehicle. The autonomous vehicle can include one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations can include obtaining state data descriptive of at least a current or past state of a plurality of objects that are perceived by the autonomous vehicle. The operations can further include determining a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object. The operations can further include determining an order at which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object. The operations can further include determining the predicted future state for each object based at least in part on the determined order.
- Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices.
- These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and, together with the description, serve to explain the related principles.
- Detailed discussion of embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended figures, in which:
- FIG. 1 depicts a block diagram of an example autonomous vehicle according to example aspects of the present disclosure;
- FIG. 2 depicts an example perception system according to example aspects of the present disclosure;
- FIG. 3 depicts an example prediction system according to example aspects of the present disclosure;
- FIG. 4 depicts a block diagram of an example object prediction process according to example aspects of the present disclosure;
- FIG. 5 depicts a block diagram of an example computing system according to example aspects of the present disclosure; and
- FIG. 6 depicts a flow chart diagram of an example method to determine a motion plan for an autonomous vehicle according to example aspects of the present disclosure.
- Generally, the present disclosure is directed to systems and methods for determining a priority classification for objects that are perceived by autonomous vehicles and predicting a future location for the objects based at least in part on the respective priority classification for each object. In particular, an autonomous vehicle can include or otherwise use a prediction system to predict the future locations of the objects such as, for example, other vehicles, pedestrians, bicyclists, etc. based at least in part on perception information that describes current and/or past states of the objects and/or the surrounding environment. In some implementations, the autonomous vehicle can include or otherwise use a priority classification system to classify a respective priority of each object perceived by the perception system. For example, in some implementations, each object can be classified as either high-priority or low-priority. The prediction system can determine a predicted future state for each object based at least in part on the priority classification for each object. For example, in some implementations, the order at which the computing system determines the predicted future state for each object can be determined based on the priority classification for each object, such as, for example, by determining the predicted future state for objects classified as high-priority before predicted future states are determined for objects classified as low-priority.
- As one example, in a system that operates iteratively over a number of time frames, the prediction system can determine in a current time frame the predicted future state for each object classified as high-priority in the current time frame. However, the prediction system can wait and determine (e.g., in a subsequent time frame or at least subsequent to providing the predicted future states for each object classified as high-priority to a motion planning system) the predicted future state for each object classified as low-priority in the current time frame. In such fashion, predicted future states for high-priority objects can be delivered to the motion planning system in an advanced fashion (e.g., “ahead of schedule”), thereby allowing the motion planning system additional time to determine a motion plan relative to the high-priority objects and the vehicle additional time to implement the determined motion plan. As a result, the autonomous vehicle can be controlled to react more quickly relative to objects classified as high-priority. For example, the additional time gained through advancing the predicted future states by the prediction system can enable the vehicle to come to a stop more quickly or otherwise make improved maneuvers which enhance passenger and vehicle safety.
- According to another aspect of the present disclosure, in some implementations, the type of prediction system used for determining the predicted future state for each object can be determined based on the priority classification for each object. For example, in some implementations a high-fidelity prediction system can be used for objects classified as high-priority, whereas a low-fidelity prediction system can be used for objects classified as low-priority.
- According to another aspect of the present disclosure, the priority classification system described herein can include or leverage one or more machine-learned models that assist in classifying each object perceived by the autonomous vehicle. As an example, in some implementations, the priority classification system can include a machine-learned object classifier configured to classify each perceived object, such as by classifying each object as high-priority or low-priority. The use of machine-learned models can improve the speed, quality, and/or accuracy of object priority classification. The improved ability to classify objects according to priority can allow for more efficient use of prediction system resources by, for example, allowing for future states of higher priority objects to be predicted before lower priority objects. Further, this can allow for the predicted future states for higher priority objects to be provided to a motion planning system sooner, reducing overall latency for determining a motion plan, thereby reducing autonomous vehicle response times and enhancing passenger safety and vehicle efficiency.
- More particularly, in some implementations, an autonomous vehicle can include a computing system that assists in controlling the autonomous vehicle. The autonomous vehicle can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.), an air-based autonomous vehicle (e.g., airplane, drone, helicopter, or other aircraft), or other types of vehicles (e.g., watercraft). In some implementations, the computing system can include a perception system, a prediction system, and a motion planning system that cooperate to perceive the surrounding environment of the autonomous vehicle and determine a motion plan for controlling the motion of the autonomous vehicle accordingly. For example, the perception system can perceive one or more objects that are proximate to an autonomous vehicle, and provide state data indicative of the one or more objects to the prediction system. The prediction system can then determine a predicted future state for each object perceived by the perception system. The motion planning system can then determine a motion plan for the autonomous vehicle based on the predicted future states for the objects. In this way, an autonomous vehicle can perceive objects proximate to the autonomous vehicle, and, in response, control the autonomous vehicle accordingly.
- In some implementations, an autonomous vehicle can perform each of the perception, prediction, and motion planning steps sequentially using data obtained in a plurality of consecutive time frames. For example, for a time frame N, the perception system can receive sensor data for the time frame N; the perception system can concurrently generate and provide state data to the prediction system for one or more objects perceived by the perception system for a time frame N−1; the prediction system can concurrently determine a predicted future state for each object perceived by the perception system for a time frame N−2; and a motion planning system can concurrently determine a motion plan for the autonomous vehicle using predicted future states for a time frame N−3. Thus, a motion plan for the autonomous vehicle can be iteratively determined using data from each of a plurality of consecutive time frames.
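Under the staging described above, the frame each system works on at a given tick can be sketched as follows (a simplified illustration with hypothetical names):

```python
# Simplified sketch of the staged pipeline: at tick n each system processes a
# successively older time frame (hypothetical illustration of the text above).
def stage_frames(n):
    """Return the time frame each system processes during tick n."""
    return {
        "sensor_ingest": n,        # sensors deliver data for frame N
        "perception": n - 1,       # perception generates state data for N-1
        "prediction": n - 2,       # prediction runs on state data for N-2
        "motion_planning": n - 3,  # planning consumes predictions for N-3
    }

frames = stage_frames(10)
# frames["motion_planning"] == 7
```

In this sketch the motion plan always lags the newest sensor data by three frames, which is the latency that the priority-based ordering described below aims to reduce for high-priority objects.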
- However, in such an implementation, each of the perception, prediction, and motion planning systems may require the preceding system to complete a respective analysis of data for a time frame before each system can subsequently analyze the data for the time frame. For example, for each consecutive time frame, the perception system may need to complete an analysis of data obtained from one or more sensors on the autonomous vehicle in order to generate the state data used by the prediction system. Similarly, the prediction system may need to complete an analysis of state data for a time frame to determine a predicted future state for each object before the motion planning system can determine a motion plan for the autonomous vehicle. Thus, the overall time from when an object is sensed by a sensor until a motion plan is determined in response to the object may be dependent upon each system completing its respective analysis of the object along with all other objects perceived at the same time as the object.
- In contrast, the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle and determining a predicted future state for each object based at least in part on the priority classification for each object, thereby enabling higher priority objects to be analyzed before lower priority objects.
- In particular, in some implementations, the perception system can receive sensor data from one or more sensors that are coupled to or otherwise included within the autonomous vehicle. As examples, the one or more sensors can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle. In some implementations, the sensor data can be obtained at a plurality of consecutive time frames. Based on sensor data received from the one or more sensors and/or the map data, the perception system can identify one or more objects that are proximate to the autonomous vehicle at each time frame. As an example, in some implementations, the perception system can segment the sensor data (e.g., the LIDAR data) into discrete object polygons and/or track objects frame-to-frame (e.g., iteratively over a number of consecutive time frames or periods).
- In particular, in some implementations, the perception system can generate, for each object, state data that describes a current state of such object (also referred to as one or more features of the object). As examples, the state data for each object can describe an estimate of the object's: location (also referred to as position); speed (also referred to as velocity); acceleration; heading; yaw rate; orientation; size/footprint (e.g., as represented by a bounding polygon or other shape); type/class (e.g., vehicle, pedestrian, bicycle); distance from the autonomous vehicle; minimum path to interaction with the autonomous vehicle; a minimum time duration to interaction with the autonomous vehicle; and/or other state information and/or covariances of the above-described forms of state information. In some implementations, certain state data for an object can be used to determine one or more other features for the object. For example, in some implementations, an object's position, speed, acceleration, and/or heading can be used to determine a minimum path to interaction with the autonomous vehicle or a minimum time duration to interaction with the autonomous vehicle. The perception system can provide the state data to the priority classification system and/or the prediction system (e.g., iteratively for each time frame).
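As one hedged illustration of deriving such a feature, a minimum time duration to interaction could be approximated from relative position and velocity under a constant-closing-speed assumption (the names and the simplification are ours, not the patent's):

```python
import math

# Hedged sketch: approximate a minimum time to interaction from an object's
# position and velocity relative to the autonomous vehicle, assuming the
# closing speed along the line of sight stays constant (a simplification).
def min_time_to_interaction(rel_pos, rel_vel):
    """rel_pos, rel_vel: (x, y) of the object relative to the vehicle."""
    dist = math.hypot(rel_pos[0], rel_pos[1])
    # Closing speed is the negative radial component of relative velocity.
    closing = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / dist
    if closing <= 0.0:
        return math.inf  # the object is not approaching the vehicle
    return dist / closing

# An object 20 m ahead closing at 5 m/s yields a 4 s time to interaction.
t = min_time_to_interaction((20.0, 0.0), (-5.0, 0.0))
```

A feature like `t` could then feed directly into the priority classification described below.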
- According to an aspect of the present disclosure, the autonomous vehicle can further include a priority classification system configured to classify each object perceived by the autonomous vehicle. In some implementations, the priority classification system can be included in or otherwise incorporated into the perception system. In some implementations, the priority classification system can be included in or otherwise incorporated into the prediction system. The priority classification system can classify objects perceived by the perception system based on the state data for each object. For example, the priority classification system can classify each object into one of a plurality of priority categories and/or rank each object relative to each other object. The relative priority classification and/or rank for each object can be determined based on the state data for each object. The priority classification for each object can be indicative of an importance of the object to a determination for a motion plan for the autonomous vehicle. As examples, the priority classification assigned to each object can be based on a plurality of factors, such as how likely an object is to interact with the autonomous vehicle, how soon an object is likely to interact with the autonomous vehicle, whether an object is likely to impact a motion plan for the autonomous vehicle, etc. For example, a vehicle traveling at a high rate of speed towards the autonomous vehicle can be classified as a higher priority object than a vehicle traveling away from the autonomous vehicle.
- In some implementations, the priority classification can be based on one or more heuristic processes. For example, one or more thresholds can be used to classify objects based on one or more features of the object. For example, a minimum time duration, a minimum path, or a minimum distance to interaction with the autonomous vehicle can be used to classify the objects based on how far away the objects are from the autonomous vehicle or how soon the objects will likely interact with the autonomous vehicle. Similarly, a heading and/or velocity can be used to classify objects. For example objects traveling on headings away from the autonomous vehicle can be classified as lower priority than objects traveling towards the autonomous vehicle, and objects traveling at higher speeds towards the autonomous vehicle can be classified as higher priority than objects traveling at lower speeds towards the autonomous vehicle. Other features can be used as well, such as object type (e.g., vehicle, bicycle, pedestrian, etc.), object size, position, or any other feature described herein.
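Such a heuristic classifier could be sketched as follows (the thresholds of 5 s and 10 m/s and the feature names are illustrative assumptions only, not values from the disclosure):

```python
# Illustrative heuristic classifier; the thresholds (5 s, 10 m/s) are invented
# for this sketch and are not values from the disclosure.
def classify_priority(time_to_interaction_s, heading_toward_av, speed_mps):
    """Return "high" or "low" priority from a few object features."""
    if time_to_interaction_s < 5.0:
        return "high"  # likely to interact with the vehicle soon
    if heading_toward_av and speed_mps > 10.0:
        return "high"  # fast-approaching object
    return "low"

# A truck 3 s from interaction is high-priority; a distant parked car is low.
assert classify_priority(3.0, False, 0.0) == "high"
assert classify_priority(60.0, False, 0.0) == "low"
```

Additional features named in the disclosure (object type, size, position) could be folded in as further branches or combined into a score.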
- In some implementations, each object can be classified as either high-priority or low-priority. For example, the priority classification system can classify each object as either high-priority or low-priority based on the respective state data for each object. In some implementations, a predicted future state for each high-priority object can be determined before a predicted future state is determined for any low-priority object.
- In some implementations, the ratio of high-priority objects to low-priority objects can be determined based at least in part on a velocity of the autonomous vehicle. For example, in some implementations, in order to reduce the overall latency for determining a motion plan at higher speeds, fewer objects may be classified as high-priority than at lower speeds. For example, one or more thresholds or ranges can be used to determine a ratio of high-priority objects to low-priority objects based on a velocity of the autonomous vehicle. Each object can then be classified as either high-priority or low-priority based on the ratio of high-priority objects to low-priority objects.
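One possible sketch of such a velocity-dependent cap, assuming objects are already ranked from highest to lowest priority (the speed bands and ratios here are invented for illustration):

```python
# Sketch with invented speed bands and ratios: cap how many objects may be
# classified high-priority as the autonomous vehicle's speed increases, then
# label the top-ranked objects up to that cap.
def high_priority_cap(av_speed_mps, n_objects):
    if av_speed_mps > 20.0:
        ratio = 0.25  # fewer high-priority objects at highway speeds
    elif av_speed_mps > 10.0:
        ratio = 0.5
    else:
        ratio = 0.75
    return max(1, int(n_objects * ratio))

def classify_by_rank(ranked_object_ids, av_speed_mps):
    """ranked_object_ids: ids sorted from highest to lowest priority."""
    cap = high_priority_cap(av_speed_mps, len(ranked_object_ids))
    return {obj: ("high" if i < cap else "low")
            for i, obj in enumerate(ranked_object_ids)}

labels = classify_by_rank(["a", "b", "c", "d"], av_speed_mps=25.0)
# At 25 m/s only the single top-ranked object "a" is labeled high-priority.
```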
- According to another aspect of the present disclosure, the priority classification systems and methods described herein can include or leverage one or more machine-learned models that assist in classifying the objects. As an example, in some implementations, the priority classification system can include a machine-learned object priority classifier to classify each object perceived by the autonomous vehicle. In some implementations, the machine-learned object priority classifier can classify each object as either high-priority or low-priority.
- According to yet another aspect of the present disclosure, the machine-learned models included in or employed by the priority classification systems described herein can be trained using log data collected during actual operation of autonomous vehicles on travelways (e.g., roadways). For example, the log data can include sensor data and/or state data for various objects perceived by an autonomous vehicle (e.g., the perception system of an autonomous vehicle) and also the resulting future state for each object that occurred subsequent and/or contemporaneous to collection of the sensor data and/or generation of the state data. Thus, the log data can include a large number of real-world examples of objects paired with the data collected and/or generated by the autonomous vehicle (e.g., sensor data, map data, perception data, etc.) contemporaneous to such perception, such as whether the object became more or less likely to interact with the autonomous vehicle in the resulting future state of the object. Training the machine-learned models on such real-world log data can enable the machine-learned models to determine object classifications which better mirror or mimic real-world object behavior.
- According to additional aspects of the present disclosure, the prediction system can determine a predicted future state for each object based at least in part on the priority classification for each object. For example, the order at which the prediction system determines the predicted future state for each object can be based at least in part on the priority classification assigned to the objects. For example, in some implementations, predicted future states for higher priority objects can be determined before predicted future states for lower priority objects are determined. In some implementations, the predicted future state for each object classified as high-priority can be determined before the predicted future state is determined for any objects classified as low-priority. In some implementations, the predicted future state for each object can be determined based upon an object's relative priority as compared to each other object. For example, each object perceived by the autonomous vehicle can be assigned a relative priority rank (e.g., for Y objects, a rank of 1 to Y), and a predicted future state can be determined based on the priority rank of the objects.
- The prediction system can predict the future locations of the objects based at least in part on perception information (e.g., the state data for each object) received from the perception system, map data, sensor data, and/or any other data that describes the past and/or current state of the objects, the autonomous vehicle, the surrounding environment, and/or relationship(s) therebetween. For example, the prediction system can estimate the future motion of actors or other objects over a planning horizon which corresponds to the period of time for which a motion plan for the autonomous vehicle is generated. In some implementations, the prediction system can attach probability likelihoods to each predicted motion or other future location of the objects.
- In some implementations, the prediction system can receive the priority classification for each respective object perceived by the autonomous vehicle and the respective state data for each object for a plurality of consecutive time frames. For example, the perception system can provide state data for a plurality of objects at a plurality of consecutive time frames, and the priority classification system can provide a respective priority classification for each object for each of the plurality of consecutive time frames.
- In some implementations, upon receiving the priority classifications and respective state data for the plurality of objects for a current (i.e., most recently obtained) time frame, the prediction system can determine a predicted future state for each object classified as high-priority for the current time frame. As used herein, the terms “current” or “most recently obtained” when used in reference to a time frame refer to the time frame most recently provided to a particular system (e.g., perception system, prediction system). For example, using state data for the current time frame, a predicted future location can be determined for each high-priority object. Once a predicted future state has been determined for each object classified as high-priority for the current time frame, the predicted future states for each object classified as high-priority can then be provided to the motion planning system. Thus, as soon as a predicted future state has been determined for each high-priority object, a motion plan can be determined for the autonomous vehicle.
- Further, in some implementations, after the prediction system has provided the predicted future state for each object classified as high-priority to the motion planning system, the prediction system can determine a predicted future state for the current time frame for each object classified as low-priority. Thus, each object perceived by an autonomous vehicle can have a predicted future state determined by the prediction system for each time frame.
- Additionally, in some implementations, the prediction system can provide a predicted future state for a previous sequential time frame for each object classified as low-priority to the motion planning system concurrently with the predicted future states for each object classified as high-priority for the current time frame. For example, as soon as a predicted future state for the current time frame has been determined for each high-priority object, the predicted future states for the current time frame for each high-priority object can be provided to the motion planning system along with the predicted future state for the previous sequential time frame for each low-priority object. In particular, a predicted future state for each object classified as low-priority can be determined by the prediction system by selecting, obtaining, or otherwise using a predicted future state for each low-priority object from a previous sequential time frame. Thus, rather than waiting until a predicted future state has been determined for each object perceived by an autonomous vehicle in a current time frame, a full set of predicted future states comprising current predicted future states for high-priority objects and previous sequential predicted future states for low-priority objects can be provided to the motion planning system as soon as the prediction system determines a predicted future state for all high-priority objects. This can reduce the overall latency for determining a motion plan for a vehicle, thereby reducing the response time for an autonomous vehicle and increasing passenger safety.
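The assembly of such a full prediction set for the motion planner could be sketched as follows (a hypothetical data layout in which predictions are keyed by object id and trajectories are opaque labels):

```python
# Hypothetical layout: predictions are keyed by object id; trajectory values
# are opaque labels here. Fresh frame-N predictions for high-priority objects
# are merged with reused frame N-1 predictions for low-priority objects.
def assemble_predictions(high_priority_current, low_priority_previous):
    merged = dict(low_priority_previous)  # stale but immediately available
    merged.update(high_priority_current)  # fresh results take precedence
    return merged

preds = assemble_predictions(
    high_priority_current={"car_3": "traj_n"},
    low_priority_previous={"parked_car_9": "traj_n_minus_1"},
)
# The motion planner receives a full set without waiting on low-priority work.
```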
- In some implementations, the prediction system can include a low-fidelity prediction system and a high-fidelity prediction system. As used herein, the terms “low-fidelity” and “high-fidelity” refer to a relative computational intensity of the prediction system or algorithms used by the respective prediction system. For example, in some implementations, a high-fidelity prediction system can include or otherwise leverage one or more machine-learned models in order to predict a future location for each object. For example, in some implementations, the prediction system can be a goal-oriented prediction system that generates one or more potential goals, selects one or more of the most likely potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals. For example, the prediction system can include a scenario generation system that generates and/or scores the one or more goals for an object and a scenario development system that determines the one or more trajectories by which the object can achieve the goals. In some implementations, the prediction system can include a machine-learned goal-scoring model, a machine-learned trajectory development model, and/or other machine-learned models. In some implementations, a high-fidelity prediction system can be used to determine a predicted future state for objects classified as high-priority.
- In some implementations, a low-fidelity prediction system can include one or more state forward integration models. For example, a low-fidelity prediction system can predict a future state for an object by forward integrating a current state. For example, a low-fidelity prediction system can use a current position, a current velocity, and a current heading of an object to determine a predicted future location for the object at a future time period. In some implementations, a low-fidelity prediction system can be used to determine a predicted future state for objects classified as low-priority.
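As a rough sketch of such state forward integration (the function name and the flat-ground, constant-velocity assumptions are ours, not the disclosure's):

```python
import math

def forward_integrate(x, y, speed, heading_rad, dt):
    """Project an object's current position along its current heading,
    assuming constant speed over dt seconds (a low-fidelity prediction)."""
    return (x + speed * math.cos(heading_rad) * dt,
            y + speed * math.sin(heading_rad) * dt)
```

For example, an object at the origin with heading 0 traveling at 10 m/s is predicted to be 20 m along the heading axis after 2 s.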
- In this way, the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle. In particular, by applying one or more heuristic processes and/or using machine-learned models, the systems and methods of the present disclosure can determine a respective priority classification for each object perceived by an autonomous vehicle. The order in which a predicted future state is determined for each object can then be determined based at least in part on the respective priority classification for each object. The ability to classify objects according to a respective priority can allow for computational resources to be focused on higher-priority objects.
- As such, one technical effect and benefit of the present disclosure is reduced latency for determining a predicted future location for higher-priority objects, which are more likely to impact a motion plan for an autonomous vehicle than low-priority objects. In particular, the present disclosure provides techniques that enable a computing system to determine a motion plan for an autonomous vehicle as soon as a predicted future location for all high-priority objects has been determined. Thus, the present disclosure can allow for a reduction in the time required for an autonomous vehicle to perceive an object and determine a motion plan in response to the object. Further, the present disclosure can allow for higher-fidelity prediction systems to be used to determine predicted future locations for higher-priority objects, and lower-fidelity prediction systems to be used to determine predicted future locations for lower-priority objects. This can allow for more efficient use of computing resources on board an autonomous vehicle.
- The present disclosure also provides additional technical effects and benefits, including, for example, enhancing passenger safety. For example, the systems and methods according to example aspects of the present disclosure can allow for reduced reaction times for determining a motion plan in response to an object perceived by the autonomous vehicle. This can allow an autonomous vehicle to come to a stop more quickly, navigate around the object, or otherwise respond to the object more quickly, thereby reducing the likelihood of an autonomous vehicle colliding with the object.
- With reference now to the FIGS., example aspects of the present disclosure will be discussed in further detail.
FIG. 1 depicts a block diagram of an example autonomous vehicle 10 according to example aspects of the present disclosure. The autonomous vehicle 10 can include one or more sensors 101, a vehicle computing system 102, and one or more vehicle controls 107. The vehicle computing system 102 can assist in controlling the autonomous vehicle 10. In particular, the vehicle computing system 102 can receive sensor data from the one or more sensors 101, attempt to comprehend the surrounding environment by performing various processing techniques on data collected by the sensors 101, and generate an appropriate motion path through such surrounding environment. The vehicle computing system 102 can control the one or more vehicle controls 107 to operate the autonomous vehicle 10 according to the motion path. - The
vehicle computing system 102 can include one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a computing device, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which can be executed by the processor 112 to cause the vehicle computing system 102 to perform operations. - As illustrated in
FIG. 1 , the vehicle computing system 102 can include a perception system 103, a prediction system 104, and a motion planning system 105 that cooperate to perceive the surrounding environment of the autonomous vehicle 10 and determine a motion plan for controlling the motion of the autonomous vehicle 10 accordingly. - In particular, in some implementations, the
perception system 103 can receive sensor data from the one or more sensors 101 that are coupled to or otherwise included within the autonomous vehicle 10. As examples, the one or more sensors 101 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle 10. - As one example, for a LIDAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser. For example, a LIDAR system can measure distances by measuring the Time of Flight (TOF) that it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light.
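The time-of-flight relationship mentioned above is simply distance = (speed of light × round-trip time) / 2. A minimal sketch (the function name is ours, not from the disclosure):

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def lidar_range_m(round_trip_time_s):
    """One-way distance to the reflecting object: the pulse covers the
    sensor-to-object distance twice during the measured time of flight."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0
```

A 1 µs round trip corresponds to roughly 150 m of range.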
- As another example, for a RADAR system, the sensor data can include the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected a ranging radio wave. For example, radio waves (e.g., pulsed or continuous) transmitted by the RADAR system can reflect off an object and return to a receiver of the RADAR system, giving information about the object's location and speed. Thus, a RADAR system can provide useful information about the current speed of an object.
- As yet another example, for one or more cameras, various processing techniques (e.g., range imaging techniques such as, for example, structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location (e.g., in three-dimensional space relative to the one or more cameras) of a number of points that correspond to objects that are depicted in imagery captured by the one or more cameras. Other sensor systems can identify the location of points that correspond to objects as well.
- As another example, the one or
more sensors 101 can include a positioning system. The positioning system can determine a current position of the vehicle 10. The positioning system can be any device or circuitry for analyzing the position of the vehicle 10. For example, the positioning system can determine position by using one or more of inertial sensors, a satellite positioning system, based on IP address, by using triangulation and/or proximity to network access points or other network components (e.g., cellular towers, WiFi access points, etc.) and/or other suitable techniques. The position of the vehicle 10 can be used by various systems of the vehicle computing system 102. - Thus, the one or
more sensors 101 can be used to collect sensor data that includes information that describes the location (e.g., in three-dimensional space relative to the autonomous vehicle 10) of points that correspond to objects within the surrounding environment of the autonomous vehicle 10. In some implementations, the sensors 101 can be located at various different locations on the autonomous vehicle 10. As an example, in some implementations, one or more cameras and/or LIDAR sensors can be located in a pod or other structure that is mounted on a roof of the autonomous vehicle 10 while one or more RADAR sensors can be located in or behind the front and/or rear bumper(s) or body panel(s) of the autonomous vehicle 10. As another example, camera(s) can be located at the front or rear bumper(s) of the vehicle 10 as well. Other locations can be used as well. - In addition to the sensor data, the
perception system 103 can retrieve or otherwise obtain map data 126 that provides detailed information about the surrounding environment of the autonomous vehicle 10. The map data 126 can provide information regarding: the identity and location of different travelways (e.g., roadways), road segments, buildings, or other items or objects (e.g., lampposts, crosswalks, curbing, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travelway); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that provides information that assists the vehicle computing system 102 in comprehending and perceiving its surrounding environment and its relationship thereto. - The
perception system 103 can identify one or more objects that are proximate to the autonomous vehicle 10 based on sensor data received from the one or more sensors 101 and/or the map data 126. In particular, in some implementations, the perception system 103 can determine, for each object, state data that describes a current state of such object (also referred to as features of the object). As examples, the state data for each object can describe an estimate of the object's: current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/shape/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); type/class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; distance from the autonomous vehicle; minimum path to interaction with the autonomous vehicle; minimum time duration to interaction with the autonomous vehicle; and/or other state information. - In some implementations, the
perception system 103 can determine state data for each object over a number of iterations. In particular, the perception system 103 can update the state data for each object at each iteration. Thus, the perception system 103 can detect and track objects (e.g., vehicles) that are proximate to the autonomous vehicle 10 over time. - The
prediction system 104 can receive the state data from the perception system 103 and predict one or more future locations for each object based on such state data. For example, the prediction system 104 can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, other, more sophisticated prediction techniques or modeling can be used. - The
motion planning system 105 can determine a motion plan for the autonomous vehicle 10 based at least in part on the predicted one or more future locations for the object and/or the state data for the object provided by the perception system 103. Stated differently, given information about the current locations of objects and/or predicted future locations of proximate objects, the motion planning system 105 can determine a motion plan for the autonomous vehicle 10 that best navigates the autonomous vehicle 10 relative to the objects at such locations. In some implementations, the motion planning system 105 can determine the motion plan for the autonomous vehicle using one or more adjusted vehicle parameters, as described herein. - In some implementations, the
motion planning system 105 can evaluate one or more cost functions and/or one or more reward functions for each of one or more candidate motion plans for the autonomous vehicle 10. For example, the cost function(s) can describe a cost (e.g., over time) of adhering to a particular candidate motion plan while the reward function(s) can describe a reward for adhering to the particular candidate motion plan. For example, the reward can be of opposite sign to the cost. - Thus, given information about the current locations and/or predicted future locations of objects, the
motion planning system 105 can determine a total cost (e.g., a sum of the cost(s) and/or reward(s) provided by the cost function(s) and/or reward function(s)) of adhering to a particular candidate pathway. The motion planning system 105 can select or determine a motion plan for the autonomous vehicle 10 based at least in part on the cost function(s) and the reward function(s). For example, the motion plan that minimizes the total cost can be selected or otherwise determined. The motion planning system 105 can provide the selected motion plan to a vehicle controller 106 that controls one or more vehicle controls 107 (e.g., actuators or other devices that control gas flow, steering, braking, etc.) to execute the selected motion plan. - According to example aspects of the present disclosure, the
vehicle computing system 102 can also include a priority classification system 150 configured to classify one or more objects perceived by the autonomous vehicle 10. For example, in some implementations, the priority classification system 150 can receive state data descriptive of one or more objects perceived by the autonomous vehicle 10 from the perception system 103. The priority classification system 150 can then classify each object based at least in part on the respective state data for each object. - For example, in some implementations, the priority classification for each object can be based on an object's position, velocity, and/or heading. For example, objects that are closer to the autonomous vehicle can be given a higher priority classification. Similarly, objects that are traveling in a direction towards the autonomous vehicle and/or towards a position at which the autonomous vehicle will be at a forthcoming time period can be given a higher priority classification. In some implementations, objects that are traveling at higher speeds, such as objects traveling at higher speeds towards the autonomous vehicle, can be given a higher priority classification than objects traveling at lower speeds.
- In some implementations, the priority classification can be based on a likelihood that an object will interact with the autonomous vehicle or otherwise be of importance to determining a motion plan for the autonomous vehicle. For example, objects traveling in an opposite direction as the autonomous vehicle can be given a lower priority classification than objects traveling in a direction that will interact with a motion path of the autonomous vehicle.
- In some implementations, the priority classification can be based on an object type. For example, in some implementations, pedestrians can be assigned a higher priority than other objects, such as a static (i.e., non-moving) vehicle. Similarly, other object types and/or classes can be used to determine a priority classification for each object.
- In some implementations, the priority classification for each object can be based on a minimum path to interaction with the autonomous vehicle or a minimum time duration to interaction with the autonomous vehicle. For example, a minimum path to interaction with the autonomous vehicle can correspond to a distance along one or more travelways that the object would have to traverse in order to interact with the autonomous vehicle. Thus, for example, a vehicle traveling along a highway in an opposite direction as the autonomous vehicle may need to exit the highway, turn around, re-enter the highway, and overtake the autonomous vehicle in order to interact with the autonomous vehicle. In such a case, the vehicle is likely to have a long minimum path to interaction and/or minimum time duration to interaction with the autonomous vehicle. Conversely, a vehicle approaching an intersection at a perpendicular path of travel to the autonomous vehicle is likely to have a shorter minimum path to interaction and/or minimum time duration to interaction with the autonomous vehicle. In such a case, the vehicle approaching the intersection can be given a higher priority classification than the vehicle traveling in the opposite direction.
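One way to make the above concrete: derive a minimum time to interaction from the minimum path distance and speed, then threshold it. This sketch is our illustration only (the threshold value and the pedestrian override are examples consistent with the description, not values from the disclosure):

```python
def min_time_to_interaction_s(path_distance_m, speed_m_per_s):
    """Shortest time for the object to reach the autonomous vehicle along
    its minimum path to interaction; a stationary object never arrives."""
    if speed_m_per_s <= 0.0:
        return float("inf")
    return path_distance_m / speed_m_per_s

def classify_priority(object_type, path_distance_m, speed_m_per_s, threshold_s=5.0):
    """Binary high/low priority from minimum time to interaction."""
    if object_type == "pedestrian":
        return "high"  # certain object types can always be high-priority
    if min_time_to_interaction_s(path_distance_m, speed_m_per_s) < threshold_s:
        return "high"
    return "low"
```

For instance, a car 30 m away on a perpendicular approach at 10 m/s (3 s to interaction) would come out high-priority, while an oncoming highway vehicle whose minimum path to interaction spans hundreds of meters would come out low-priority.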
- In some implementations, the
priority classification system 150 can classify each object as high-priority or low-priority. For example, each object can be classified according to a binary classification in which each object is either a high-priority or low-priority object. For example, objects which have a minimum path to interaction and/or minimum time duration to interaction with an autonomous vehicle that is less than a threshold can be classified as high-priority objects. Similarly, objects which have a minimum path to interaction and/or minimum time duration to interaction that exceeds the threshold can be classified as low-priority objects. In some implementations, objects of a particular type (e.g., pedestrians) can always be classified as high-priority objects. In some implementations, objects which have been determined to be unlikely to interact with the autonomous vehicle or to affect the determination of a motion plan for the autonomous vehicle can be classified as low-priority objects. - In some implementations, the
priority classification system 150 can classify each object relative to other objects perceived by the autonomous vehicle. For example, in some implementations, each object can be assigned a relative priority in relation to each other object perceived by the autonomous vehicle. For example, each object can be assigned a relative priority rank based on the respective priority of the object. For example, if an autonomous vehicle perceives Y objects within a surrounding environment of the autonomous vehicle, each object can be assigned a relative rank on a scale of 1 to Y. In this way, each object can be assigned a priority classification relative to each other object perceived by the autonomous vehicle. - In some implementations, the
priority classification system 150 can classify each object based on a velocity of the autonomous vehicle. In some implementations, a ratio of high-priority objects to low-priority objects can be determined based on a velocity of the vehicle. For example, at higher velocities, it may be preferable to limit the number of high-priority objects in order to reduce and/or minimize the number of high-priority objects for which the prediction system 104 must determine a future predicted state in a current time frame in order to reduce and/or minimize a latency for determining a motion plan in response to such objects. In such a case, fewer objects may be classified as high-priority objects than at lower velocities. - In some implementations, the ratio of high-priority objects to low-priority objects can be determined based on one or more threshold velocities. For example, for a first velocity range of 1 to X, a ratio of 1 high-priority object to Y low-priority objects can be used, whereas for a second velocity range of X to 2X, a ratio of 1 high-priority object to 2Y low-priority objects can be used. In other implementations, other pre-determined ratios can be used. In some implementations, each object can be classified as either high-priority or low-priority such that the ratio of high-priority objects to low-priority objects generally conforms to the pre-determined ratio (i.e., the ratio of high-priority to low-priority objects is within a threshold variance of the pre-determined ratio).
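The velocity-thresholded ratio described above might look like the following sketch (X and Y are the disclosure's own placeholders; the concrete mapping and function name are hypothetical):

```python
def low_per_high_ratio(ego_speed, x, y):
    """Allowed low-priority objects per high-priority object: 1:Y for
    speeds up to X and 1:2Y for speeds between X and 2X, so that fewer
    objects end up classified high-priority at higher speeds."""
    if ego_speed <= x:
        return y
    if ego_speed <= 2 * x:
        return 2 * y
    return 2 * y  # beyond 2X, other pre-determined ratios could apply
```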
- In some implementations, a machine-learned model can be used to determine the priority classification for each object based on the respective state data for each object. For example, in some implementations, the machine-learned model can be configured to classify each object as either high-priority or low-priority and provide the priority classification for each object to the
prediction system 104. In some implementations, the respective state data for each object can be input into the machine-learned model, and data indicative of a respective priority classification for the object can be received as an output of the machine-learned model. - In some implementations, the machine-learned model can be trained based at least in part on training data that comprises annotated vehicle data logs that were previously collected during previous autonomous vehicle driving sessions. For example, vehicle data logs can be recorded during one or more autonomous vehicle driving sessions, which can include state data for objects perceived by the autonomous vehicle. In some implementations, the vehicle data logs can be annotated by a human reviewer in order to help train the machine-learned model. For example, in some implementations, objects can be labeled either high-priority or low-priority. The machine-learned model can then be trained to determine a priority classification for objects based on the training data.
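As a toy illustration of training such a classifier from annotated logs (the disclosure does not specify a model family; here a simple perceptron over a single hand-picked feature stands in for whatever machine-learned model is actually used, and the feature choice is ours):

```python
def train_perceptron(features, labels, epochs=20, lr=0.1):
    """Fit a linear threshold unit on annotated log examples.

    features: feature vectors derived from logged state data
    (e.g., [minimum time duration to interaction in seconds]).
    labels: 1 for objects a reviewer labeled high-priority, else 0.
    """
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # classic perceptron update rule
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict_priority(w, b, x):
    return "high" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "low"
```

Objects with a short time to interaction in the annotated logs end up classified high-priority by the learned weights.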
- According to example aspects of the present disclosure, the
vehicle computing system 102 can determine a predicted future state for each object based at least in part on the priority classification for each object. For example, the priority classification system 150 can be configured to provide the priority classification for each object perceived by the perception system 103 to the prediction system 104. The prediction system 104 can then determine a predicted future state for each object based at least in part on the priority classification for each object. - For example, in some implementations, the order in which the computing system determines the predicted future state for each object can be based at least in part on the priority classification assigned to each object. For example, in some implementations, predicted future states for all objects classified as high-priority can be determined before predicted future states are determined for any low-priority objects. In some implementations, a predicted future state can be determined for each object according to a respective priority rank for each object. For example, for Y objects, each object can be assigned a relative priority rank of 1 to Y, and a predicted future state for each object can be determined based on the relative priority rank for each object (i.e., starting with 1 and ending with Y).
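The rank-driven ordering described above reduces to a sort; a minimal sketch (the names are illustrative, not from the disclosure):

```python
def prediction_order(priority_ranks):
    """Return object ids in the order the prediction system should
    process them, where rank 1 is the highest priority."""
    return sorted(priority_ranks, key=priority_ranks.get)
```

With a binary classification, high-priority objects can simply be assigned lower rank values than all low-priority objects, so their predicted future states are determined first.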
- As will be discussed in greater detail with respect to
FIG. 5 , in some implementations, a future location prediction system can be selected based at least in part on the priority classification for each object. For example, in some implementations, a low-fidelity prediction system can be used to determine a predicted future state for low-priority objects, and a high-fidelity prediction system can be used to determine a predicted future state for high-priority objects. - Each of the
perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can include computer logic utilized to provide desired functionality. In some implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 includes program files stored on a storage device, loaded into a memory and executed by one or more processors. In other implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media. - Referring now to
FIG. 2 , a block diagram depicting an example perception system 103 according to example aspects of the present disclosure is shown. Elements that are the same or similar to those shown in FIG. 1 are referred to with the same reference numerals. - As shown, in some implementations, the
priority classification system 150 can be implemented as a subpart of the perception system 103. For example, the perception system 103 can receive sensor data from one or more sensors 101 (as shown in FIG. 1 ) and map data 126. The perception system 103 can generate state data for each object perceived by the autonomous vehicle 10, perform a priority classification for each object using the priority classification system 150, and provide the state data and respective priority classification for each object to the prediction system 104. - Referring now to
FIG. 3 , a block diagram depicting an example prediction system 104 according to example aspects of the present disclosure is shown. Elements that are the same or similar to those shown in FIGS. 1 and 2 are referred to with the same reference numerals. - As shown, in some implementations, the
priority classification system 150 can be implemented as a subpart of the prediction system 104. For example, the perception system 103 can receive sensor data from one or more sensors 101 (as shown in FIG. 1 ) and map data 126. The perception system 103 can then provide the state data indicative of one or more objects to the prediction system 104. The prediction system 104 can then determine a priority classification for each object using the priority classification system 150, and determine a predicted future state for each object based at least in part on the priority classification for each object. The prediction system 104 can then provide the predicted future states for each object to the motion planning system 105. - Thus, as shown in
FIGS. 1-3 , the priority classification system 150 can be implemented as a stand-alone priority classification system 150, or as a subsystem of either a perception system 103 or a prediction system 104. - Referring now to
FIG. 4 , a diagram of an example object prediction process according to example aspects of the present disclosure is shown. As represented in FIG. 4 , in some implementations, a vehicle computing system can iteratively determine a motion plan using data obtained in a plurality of consecutive time frames. For example, each of the perception, prediction, and motion planning systems illustrated in FIGS. 1-3 can concurrently perform analysis on data from a plurality of consecutive time frames. As an example, for a time frame N, the perception system can receive sensor data for the time frame N; the perception system can concurrently generate and provide state data to the prediction system for one or more objects perceived by the perception system for a time frame N−1; the prediction system can concurrently determine a predicted future state for each object perceived by the perception system for a time frame N−2; and a motion planning system can concurrently determine a motion plan for the autonomous vehicle using predicted future states for a time frame N−3. In a subsequent time frame N+1, each of the perception, prediction, and motion planning systems can receive and perform a respective analysis of data received from an upstream system, resulting in the motion planning system determining a motion plan using predicted future states for a time frame N−2. In this way, a motion plan for the autonomous vehicle can be iteratively determined using data from each of a plurality of consecutive time frames. - For example, as shown in
FIG. 4 , block 410 represents analysis by a perception system for data from a frame N. As shown, the perception system's analysis of data for frame N can include a plurality of objects 411A-J. Each of the objects can have associated state data descriptive of the object generated by the perception system. For example, for each object 411A-J, the perception system can generate state data describing a position, velocity, acceleration, heading, size, type, yaw rate, or other state data descriptive of the object as described herein. - As represented by the arrow from
block 410 to block 430, the state data descriptive of the objects 411A-J generated by the perception system for frame N can be provided to the prediction system once the perception system has completed its analysis. - According to example aspects of the present disclosure, however, the prediction system can also receive a priority classification for each object. For example, in some implementations, each object can be classified as either high-priority (“HP”) or low-priority (“LP”). As described herein, the priority classification for each object can be determined based on the respective state data for each object. Further, in some implementations, the priority classification can be determined by a machine-learned model.
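The staggered, concurrent frame processing described above, with each downstream system working on a progressively older frame, can be summarized in a small sketch (illustrative only; stage names are ours):

```python
def pipeline_frames(n):
    """Which time frame each stage works on while the sensors capture
    frame n, per the staggered pipeline of FIG. 4."""
    return {
        "sensors": n,
        "perception": n - 1,
        "prediction": n - 2,
        "motion_planning": n - 3,
    }
```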
- Thus, as represented by
block 430, the prediction system can receive the respective priority classifications for each object as well as the respective state data describing each object from the perception system. The prediction system can then determine a predicted future state for each object based at least in part on the respective priority classification for each object. For example, in some implementations, the prediction system can first determine a predicted future state for each object classified as high-priority. For example, as shown in FIG. 4 , the prediction system can first determine a predicted future state for HP objects 431A-D. Stated differently, the prediction system can determine a predicted future state for each object classified as high-priority based at least in part on the state data obtained for the most recent time frame (Frame N). - According to additional example aspects of the present disclosure, once the prediction system has determined a predicted future state for each object classified as high-priority, the prediction system can provide the predicted future state for each object classified as high-priority for the current timeframe to the motion planning system. For example, as shown by the arrow from the dashed
block 440 to the block 450, once the prediction system has determined a predicted future state for each high-priority object HP 431A-D, the prediction system can provide the predicted future states for the objects HP 431A-D to the motion planning system. In this way, the motion planning system can begin determining a motion plan in an advanced fashion (e.g., "ahead of schedule"). - According to additional example aspects of the present disclosure, once the prediction system has determined a predicted future state for each object classified as high-priority, the prediction system can determine a predicted future state for each object identified as low-priority. For example, after the prediction system has provided the high-priority objects HP 431A-D to the motion planning system, the prediction system can determine a predicted future state for each low-priority object LP 431E-J. In this way, each object perceived in a particular frame (e.g., frame N) can have a predicted future state determined by the prediction system. - In some implementations, the prediction system can further be configured to provide a predicted future state for the previous sequential timeframe for each object classified as low-priority to the motion planning system concurrently with the predicted future state for each object classified as high-priority for the current timeframe. Stated differently, in some implementations, a predicted future state for a low-priority object can be determined by selecting, obtaining, or otherwise determining a predicted future state for the object based on state data obtained for a previous sequential time frame.
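The ordering just described, fresh frame-N predictions for high-priority objects first and low-priority objects afterward, can be sketched as follows. The `predict` function is a hypothetical placeholder, not the disclosure's prediction model.

```python
# Sketch of priority-ordered prediction for one time frame: high-priority
# objects are predicted first, so their results are available to motion
# planning as early as possible; low-priority objects follow. The "predict"
# function is a hypothetical stand-in for the actual prediction model.

def predict(state):
    # Toy placeholder: project position one step ahead at current velocity.
    return {"future_x": state["x"] + state["vx"]}

def predict_in_priority_order(states, priorities):
    ordered = [oid for oid in states if priorities[oid] == "HP"]
    ordered += [oid for oid in states if priorities[oid] == "LP"]
    results = {}  # dict insertion order records the order of computation
    for oid in ordered:
        results[oid] = predict(states[oid])
    return results

states = {"431E": {"x": 50.0, "vx": -1.0}, "431A": {"x": 0.0, "vx": 2.0}}
priorities = {"431E": "LP", "431A": "HP"}
preds = predict_in_priority_order(states, priorities)
assert list(preds) == ["431A", "431E"]  # high-priority object predicted first
```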
- For example, as shown by
block 420, the prediction system can have previously determined a predicted future state for objects 421A-J, including high-priority objects HP 421A-D and low-priority objects LP 421E-J. For example, as the perception system generated state data for objects 411A-J for time frame N in block 410, the prediction system could concurrently determine predicted future states for high-priority objects HP 421A-D and low-priority objects LP 421E-J for time frame N−1 in block 420. Further, as an example, each high-priority object HP 421A-D for the time frame N−1 can respectively correspond to each high-priority object HP 431A-D, whereas each low-priority object LP 421E-J for the time frame N−1 can respectively correspond to each low-priority object LP 431E-J. - Thus, as represented by the arrow from
block 440 to block 450, when the prediction system provides the predicted future states for high-priority objects HP 431A-D to the motion planning system, the prediction system can be configured to concurrently provide a previously determined predicted future state for each low-priority object (i.e., LP 431E-J) for the previous sequential time frame (i.e., LP 421E-J). In this way, a full set of predicted future states comprising the predicted future states for all high-priority objects (HP 431A-D) for a current time frame and a previously determined predicted future state for all low-priority objects (LP 421E-J) for a previous sequential time frame can be concurrently provided to a motion planning system as soon as the prediction system has determined a predicted future state for each object classified as high-priority (HP 431A-D). - An advantage provided by the object prediction process depicted in
FIG. 4 is that the time required to determine a motion plan for an autonomous vehicle can be reduced. For example, for a vehicle autonomy system such as the sequential vehicle autonomy system described herein, the motion planning system can receive a predicted future state for each object much sooner, thereby allowing a motion plan to be determined ahead of schedule. Further, the reduction in time for the prediction system to determine a predicted future state for each object can correspond to the ratio of high-priority objects to low-priority objects. For example, as depicted in FIG. 4, the prediction system would only need to determine a predicted future state for high-priority objects HP 431A-D (i.e., 4 out of 10 objects) for time frame N before providing the predicted future states for each object 431A-D and 421E-J to the motion planning system, allowing for a reduction of approximately 60% of the required processing time. - Further, because low-priority objects can be classified as such based on their negligible impact on a motion plan, using a predicted future state for a low-priority object from a previous sequential time frame can allow for a net increase in passenger and autonomous vehicle safety. For example, low-priority objects, such as objects positioned far away from an autonomous vehicle and traveling away from the autonomous vehicle, may be unlikely to impact the motion plan for the autonomous vehicle. However, high-priority objects, such as objects travelling towards the autonomous vehicle or positioned near the autonomous vehicle, may be much more likely to impact a motion plan for the autonomous vehicle.
By allowing for such high-priority objects to be sensed by the sensors, perceived by the perception system, predicted by the prediction system, and planned for by the motion planning system in a reduced amount of time, the autonomous vehicle can respond to high-priority objects in a quicker fashion, thereby reducing the likelihood of unsafe conditions, such as a collision.
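One way to picture the hand-off described above is the following sketch, in which the full set provided to motion planning merges current-frame predictions for high-priority objects with predictions cached from frame N−1 for low-priority objects. All names and values are illustrative assumptions.

```python
# Sketch of the full-set hand-off: predictions computed at frame N for
# high-priority objects are merged with predictions cached from frame N-1
# for low-priority objects, then handed to motion planning together.

def assemble_full_set(hp_preds_frame_n, cached_preds_frame_n_minus_1, priorities):
    full_set = dict(hp_preds_frame_n)
    for oid, priority in priorities.items():
        if priority == "LP":
            # Reuse the previously determined prediction for this object.
            full_set[oid] = cached_preds_frame_n_minus_1[oid]
    return full_set

hp_now = {"431A": "pred@N", "431B": "pred@N"}
cache = {"431A": "pred@N-1", "431E": "pred@N-1", "431F": "pred@N-1"}
priorities = {"431A": "HP", "431B": "HP", "431E": "LP", "431F": "LP"}
full = assemble_full_set(hp_now, cache, priorities)
# Only 2 of the 4 objects needed fresh predictions at frame N.
```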
- Referring now to
FIG. 5, a block diagram of an example computing system 100 according to example embodiments of the present disclosure is depicted. Elements that are the same or similar to those in FIGS. 1-3 are referred to with the same reference numerals. As shown, the example computing system 100 can include a computing system 102 (e.g., a vehicle computing system 102 on an autonomous vehicle 10) and a machine learning computing system 130 that are communicatively coupled over one or more communication networks 180. - The
computing system 102 can include one or more processor(s) 112 and memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof. - The
memory 114 can store information that can be accessed by the one or more processors 112. For instance, the memory 114 (e.g., one or more non-transitory computer-readable storage mediums, memory devices) can store data 116 that can be obtained, received, accessed, written, manipulated, created, and/or stored. The memory 114 can also store computer-readable instructions 118 that can be executed by the one or more processors 112. The instructions can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions can be executed in logically and/or virtually separate threads on processor(s) 112. For example, the memory 114 can store instructions that when executed by the one or more processors 112 cause the one or more processors 112 to perform any of the operations and/or functions described herein. - The
computing system 102 can also include a network interface 128 used to communicate with one or more systems or devices, including systems or devices that are remotely located from the computing system 102. The network interface 128 can include any circuits, components, software, etc. for communicating with one or more networks (e.g., 180). In some implementations, the network interface 128 can include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software and/or hardware for communicating data. - The
computing system 102 can also include a perception system 103, a prediction system 104, a motion planning system 105, a vehicle controller 106, and a priority classification system 150, as described herein. Each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can include computer logic utilized to provide desired functionality. In some implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can include program files stored on a storage device, loaded into a memory and executed by one or more processors. In other implementations, each of the perception system 103, the prediction system 104, the motion planning system 105, the vehicle controller 106, and the priority classification system 150 can include one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media. - According to an example aspect of the present disclosure, in some implementations, the
prediction system 104 can include a low-fidelity prediction system 122 and a high-fidelity prediction system 124. For example, in some implementations, a high-fidelity prediction system 124 can include or otherwise leverage one or more machine-learned models in order to predict a future location for each object. For example, in some implementations, the high-fidelity prediction system 124 can be a goal-oriented prediction system that generates one or more potential goals, selects one or more of the most likely potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals. For example, the high-fidelity prediction system 124 can include a scenario generation system that generates and/or scores the one or more goals for an object and a scenario development system that determines the one or more trajectories by which the object can achieve the goals. In some implementations, the high-fidelity prediction system 124 can include a machine-learned goal-scoring model, a machine-learned trajectory development model, and/or other machine-learned models. - In some implementations, a low-
fidelity prediction system 122 can include one or more state forward-integration models. For example, a low-fidelity prediction system 122 can predict a future state for an object by forward integrating a current state. For example, a low-fidelity prediction system can use a current position, a current velocity, and a current heading of an object to determine a predicted future location for the object at a future time period. - In some implementations, the
computing system 102 can determine a predicted future state for each object based at least in part on the priority classification for the object by selecting a future location prediction system based at least in part on the priority classification for the object and determining the predicted future state for the object using the selected future location prediction system. For example, in some implementations, the low-fidelity prediction system 122 can be used to determine a predicted future state for objects classified as low-priority, and the high-fidelity prediction system 124 can be used to determine a predicted future state for objects classified as high-priority. - An advantage provided by using a low-fidelity prediction system and a high-fidelity prediction system to determine the predicted future state for each object based at least in part on the priority classification for each object is that computing resources can be allocated more efficiently. For example, low-priority objects which have been classified as such due to their likely negligible impact on a vehicle motion plan may not require a sophisticated prediction system, such as the high-
fidelity prediction system 124, in order to determine a predicted future state for such objects. For example, low-priority objects located far away from an autonomous vehicle and/or travelling in a direction away from the autonomous vehicle may have little to no impact on the motion plan for the autonomous vehicle. As such, the granularity provided by a goal-oriented prediction system as described herein may provide little to no benefit over a low-fidelity prediction model 122, such as a simple state forward-integration model. Accordingly, by first determining a priority classification for each object, computational resources can be more efficiently allocated for determining predicted future states for each object. - According to another example aspect of the present disclosure, the
priority classification system 150 can store or include one or more machine-learned models 120. For example, the machine-learned model 120 can be or can otherwise include various machine-learned models such as decision tree-based models, support vector machines, k-Nearest Neighbor models, neural networks (e.g., deep neural networks), or other multi-layer non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), or other forms of neural networks. - In some implementations, the one or more machine-learned
models 120 can include a machine-learned object priority classifier. For example, in some implementations, a machine-learned object priority classifier can be configured to classify objects perceived by the perception system 103 as either high-priority or low-priority. In some implementations, the machine-learned object priority classifier can be configured to rank objects according to a respective object priority, as described herein. - In some implementations, the
computing system 102 can determine a priority classification for each object using the machine-learned model 120. For example, the computing system 102 can obtain data descriptive of the machine-learned model, input the respective state data for each object perceived by the perception system 103 into the machine-learned model 120, and receive data indicative of a respective priority classification for each object as an output of the machine-learned model. In some implementations, the machine-learned model 120 and/or the priority classification system 150 can be configured to provide the respective priority classification for each object to the prediction system 104. - In some implementations, the
vehicle computing system 102 can receive the one or more machine-learned models 120 from the machine learning computing system 130 over network 180 and can store the one or more machine-learned models 120 in the memory 114. The vehicle computing system 102 can then use or otherwise implement the one or more machine-learned models 120 (e.g., by processor(s) 112). - In some implementations, certain operations described herein can be performed by a machine
learning computing system 130 that is remotely located from the computing system 102 and in communication with the computing system 102 over one or more wireless networks 180 (e.g., cellular data networks, satellite communication networks, wide area networks, etc.). As an example, the machine learning computing system 130 can include one or more server computing devices. In the event that plural server computing devices are used, the server computing devices can be arranged according to a parallel computing architecture, a sequential computing architecture, or combinations thereof. - The machine
learning computing system 130 can include one or more processors 132 and a memory 134. The one or more processors 132 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 134 can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof. - The
memory 134 can store information that can be accessed by the one or more processors 132. For instance, the memory 134 (e.g., one or more non-transitory computer-readable storage mediums, memory devices) can store data 136 that can be obtained, received, accessed, written, manipulated, created, and/or stored. In some implementations, the machine learning computing system 130 can obtain data from one or more memory device(s) that are remote from the system 130. - The
memory 134 can also store computer-readable instructions 138 that can be executed by the one or more processors 132. The instructions 138 can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions 138 can be executed in logically and/or virtually separate threads on processor(s) 132. For example, the memory 134 can store instructions 138 that when executed by the one or more processors 132 cause the one or more processors 132 to perform any of the operations and/or functions described herein. - The machine
learning computing system 130 can also include a network interface 164 used to communicate with one or more systems or devices, including systems or devices that are remotely located from the machine learning computing system 130. The network interface 164 can include any circuits, components, software, etc. for communicating with one or more networks (e.g., 180). In some implementations, the network interface 164 can include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software and/or hardware for communicating data. - In some implementations, the machine
learning computing system 130 includes one or more server computing devices. If the machine learning computing system 130 includes multiple server computing devices, such server computing devices can operate according to various computing architectures, including, for example, sequential computing architectures, parallel computing architectures, or some combination thereof. - In addition or alternatively to the model(s) 110 at the
computing system 102, the machine learning computing system 130 can include one or more machine-learned models 140. For example, the machine-learned model(s) 140 can be or can otherwise include various machine-learned models such as decision tree-based models, support vector machines, k-Nearest Neighbor models, neural networks (e.g., deep neural networks), or other multi-layer non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), or other forms of neural networks. - As an example, the machine
learning computing system 130 can communicate with the computing system 102 according to a client-server relationship. For example, the machine learning computing system 130 can implement the machine-learned model(s) 140 to provide a web service to the computing system 102. For example, the web service can provide object priority classifications to the computing system 102. - Thus, machine-learned models 110 can be located and used at the
computing system 102 and/or machine-learned models 140 can be located and used at the machine learning computing system 130. - In some implementations, the machine
learning computing system 130 and/or the computing system 102 can train the machine-learned models 110 and/or 140 through use of a model trainer 160. The model trainer 160 can train the machine-learned models 110 and/or 140 using one or more training or learning algorithms. One example training technique is backwards propagation of errors. In some implementations, the model trainer 160 can perform supervised training techniques using a set of labeled training data 162. In other implementations, the model trainer 160 can perform unsupervised training techniques using a set of unlabeled training data 162. The model trainer 160 can perform a number of generalization techniques to improve the generalization capability of the models being trained. Generalization techniques include weight decays, dropouts, or other techniques. - In particular, the
model trainer 160 can train a machine-learned model 110 and/or 140 based on a set of training data 162. The training data 162 can include, for example, vehicle data logs from previously completed autonomous vehicle driving sessions. The vehicle data logs can include, for example, sensor data obtained by one or more sensors of the autonomous vehicle, state data descriptive of one or more objects perceived by the perception system 103 of the autonomous vehicle, predicted future states for objects perceived by the autonomous vehicle determined by the prediction system 104, previous motion plans determined by the motion planning system 105, or other vehicle data as described herein. In some implementations, the model trainer 160 can be configured to train the machine-learned models 110 and/or 140 by determining whether objects perceived by the autonomous vehicle impacted a motion plan of the autonomous vehicle. - According to another aspect of the present disclosure, the
training data 162 can include vehicle data logs that include object priority classification labels recorded by a human reviewer, which can be used to train the machine-learned model(s) 110 and/or 140. In particular, a human reviewer can review the vehicle data logs and label object priority classifications for objects perceived by the perception system 103. - The
model trainer 160 includes computer logic utilized to provide desired functionality, and can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, the model trainer 160 includes program files stored on a storage device, loaded into a memory and executed by one or more processors. In other implementations, the model trainer 160 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media. - The network(s) 180 can be any type of network or combination of networks that allows for communication between devices. In some embodiments, the network(s) 180 can include one or more of a local area network, wide area network, the Internet, secure network, cellular network, mesh network, peer-to-peer communication link and/or some combination thereof and can include any number of wired or wireless links. Communication over the network(s) 180 can be accomplished, for instance, via a network interface using any type of protocol, protection scheme, encoding, format, packaging, etc.
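As a minimal illustration of the supervised training performed by a model trainer such as model trainer 160, the sketch below trains a logistic-regression stand-in by gradient descent (the simplest form of backwards propagation of errors). The toy features and labels are assumptions for demonstration; the disclosure's models may equally be decision trees, neural networks, or other model types, trained on real vehicle data logs.

```python
# Minimal supervised-training sketch for an object priority classifier.
# A logistic-regression stand-in trained by per-sample gradient descent.
# Feature and label formats are assumed; real training data would come
# from labeled vehicle data logs.
import math

def train(samples, labels, lr=0.5, epochs=200):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(high-priority)
            err = p - y                     # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy labeled logs: features [1/distance, closing_speed], label 1 = high-priority.
X = [[0.10, 1.0], [0.08, 0.5], [0.01, -2.0], [0.02, -1.0]]
y = [1, 1, 0, 0]
w, b = train(X, y)
assert [predict(w, b, x) for x in X] == y  # fits the separable toy data
```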
-
FIG. 5 illustrates one example computing system 100 that can be used to implement the present disclosure. Other computing systems can be used as well. For example, in some implementations, the computing system 102 can include the model trainer 160 and the training dataset 162. In such implementations, the machine-learned models 110 can be both trained and used locally at the computing system 102. As another example, in some implementations, the computing system 102 is not connected to other computing systems. - In addition, components illustrated and/or discussed as being included in one of the
computing systems 102 and 130 can instead be included in the other of the computing systems 102 and 130. - Referring now to
FIG. 6, an example method (600) to determine a predicted future state for objects perceived by an autonomous vehicle based at least in part on a priority classification for the objects according to example aspects of the present disclosure is depicted. Although FIG. 6 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of method (600) can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. The method (600) can be implemented by a computing system, such as a computing system comprising one or more computing devices. - At (602), the method (600) can include obtaining, by a computing system, state data descriptive of at least a current or past state of a plurality of objects that are perceived by an autonomous vehicle. For example, the state data can include data descriptive of one or more features of an object, such as a position, a velocity, an acceleration, a heading, a yaw rate, a shape, a size, a type, a distance from the autonomous vehicle, a minimum path to interaction with the autonomous vehicle, a minimum time duration to interaction with the autonomous vehicle, any other state data described herein, or any state data descriptive of an object perceived by an autonomous vehicle. In some implementations, the state data can be obtained from a perception system of the autonomous vehicle configured to generate the state data based on sensor data obtained from one or more sensors of the autonomous vehicle.
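The state features enumerated at (602) can be pictured as a simple record type. The field names and units below are illustrative assumptions rather than a schema from the disclosure.

```python
# Illustrative container for the per-object state data enumerated at (602).
# Field names and units are assumptions for demonstration purposes.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectState:
    position: Tuple[float, float]  # (x, y), meters, in the vehicle frame
    velocity: float                # m/s
    acceleration: float            # m/s^2
    heading: float                 # radians
    yaw_rate: float                # rad/s
    size: Tuple[float, float]      # (length, width), meters
    object_type: str               # e.g. "vehicle", "pedestrian", "bicycle"
    distance_to_av: float          # meters from the autonomous vehicle
    time_to_interaction: float     # seconds (minimum time duration to interaction)

s = ObjectState(position=(10.0, -2.0), velocity=4.2, acceleration=0.1,
                heading=1.57, yaw_rate=0.0, size=(4.5, 1.8),
                object_type="vehicle", distance_to_av=10.2,
                time_to_interaction=3.4)
```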
- At (604) the method (600) can include determining, by the computing system, a priority classification for each object in the plurality of objects based at least in part on the respective state data for each object. For example, in some implementations, the priority classification for each object can be determined by a priority classification system. In some implementations, the priority classification can be either a high-priority or low-priority classification for each object. In some implementations, the priority classification can be a respective priority rank for each object relative to each other object perceived by the autonomous vehicle.
- In some implementations, the priority classification for each object can be determined by a machine-learned model. For example, at (606), the method can include obtaining data descriptive of a machine-learned model. In some implementations, data descriptive of the machine-learned model can be obtained from a memory (e.g., non-transitory computer readable media) of the computing system. In some implementations, the machine-learned model can be a machine-learned object priority classifier configured to classify each object as either high-priority or low-priority based on the respective state data for each object.
- At (608), the method (600) can include inputting the respective state data for an object into the machine-learned model. For example, state data generated by a perception system can be received by a machine-learned model of a priority classification system. The respective state data for each object can be input into the machine-learned model in order to determine a priority classification for each respective object.
- At (610), the method (600) can include receiving data indicative of a respective priority classification as an output of the machine-learned model. For example, in some implementations, the machine-learned model can be a machine-learned object priority classifier configured to classify each object as either high-priority or low-priority, and the machine-learned model can output a respective high-priority or low-priority classification based on the respective state data for each object.
- At (612), the method (600) can include determining, by the computing system, an order at which the computing system determines a predicted future state for each object based at least in part on the priority classification for each object. For example, in some implementations, each object can be classified as either high-priority or low-priority, and the order can be determined such that each high-priority object has a predicted future state determined before a predicted future state is determined for any low-priority objects. In some implementations, determining an order at which the computing system determines a predicted future state for each object can be based on the priority rank assigned to each object. For example, the highest-ranked object can have a predicted future state determined first, with the predicted future state for each successive object then determined according to its respective priority rank.
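When the priority classification takes the form of a relative rank, the ordering at (612) reduces to a sort, as in this sketch (the object identifiers are invented for illustration):

```python
# Sketch of step (612) when each object carries a numeric priority rank:
# the prediction order is simply the objects sorted by rank (1 = highest).

def prediction_order(priority_ranks):
    """Return object ids ordered from rank 1 (highest priority) upward."""
    return sorted(priority_ranks, key=priority_ranks.get)

ranks = {"ped_1": 2, "car_7": 1, "cyclist_3": 3}  # invented identifiers
assert prediction_order(ranks) == ["car_7", "ped_1", "cyclist_3"]
```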
- At (614), the method (600) can include determining, by the computing system, the predicted future state for each object based at least in part on the determined order. For example, in some implementations, the prediction system can determine a predicted future state for each object classified as high-priority before determining a predicted future state for each object classified as low-priority. In some implementations, as soon as the prediction system has determined a predicted future state for each object classified as high-priority, the prediction system can be configured to provide the predicted future states for each high-priority object to a motion planning system. In some implementations, determining a predicted future state for each object classified as high-priority can include determining a future state for each high-priority object based at least in part on state data obtained for the most recent time frame.
- In some implementations, determining a predicted future state for each object classified as low-priority can include obtaining a predicted future state for the object that was previously determined based on state data obtained for a previous sequential time frame. For example, in some implementations, the prediction system can provide a previously-determined predicted future state for each low-priority object for a previous sequential time frame to a motion planning system at the same time that the prediction system provides a predicted future state for each high-priority object for a current time frame to the motion planning system.
- In some implementations, determining a predicted future state for each object based at least in part on the determined order can include selecting a future location prediction system based at least in part on the priority classification for the object and determining the predicted future state for the object using the selected future location prediction system. For example, in some implementations, a prediction system can include a low-fidelity prediction system and a high-fidelity prediction system. In some implementations, the low-fidelity prediction system can be used to determine a predicted future state for each low-priority object, and the high-fidelity prediction system can be used to determine a predicted future state for each high-priority object.
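A minimal sketch of this fidelity dispatch follows, with a constant-velocity forward integration standing in for the low-fidelity prediction system and a stub in place of the high-fidelity, goal-oriented system. Both implementations are assumptions for illustration.

```python
# Sketch: choose a predictor by priority classification. The low-fidelity
# path forward-integrates the current state (constant velocity here); the
# high-fidelity path is a placeholder for a goal-oriented, machine-learned
# trajectory model.
import math

def low_fidelity_predict(state, dt):
    # Forward-integrate position from current position, speed, and heading.
    dx = state["speed"] * math.cos(state["heading"]) * dt
    dy = state["speed"] * math.sin(state["heading"]) * dt
    return (state["x"] + dx, state["y"] + dy)

def high_fidelity_predict(state, dt):
    # Placeholder for a goal-oriented prediction model (stub).
    return (state["x"], state["y"])

def predict_for(state, priority, dt=1.0):
    predictor = high_fidelity_predict if priority == "HP" else low_fidelity_predict
    return predictor(state, dt)

state = {"x": 0.0, "y": 0.0, "speed": 10.0, "heading": 0.0}
assert predict_for(state, "LP") == (10.0, 0.0)  # 10 m/s along +x for 1 s
```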
- At (616), the method (600) can include determining a motion plan for the autonomous vehicle based at least in part on the predicted future state for at least one of the objects. For example, a motion planning system can receive one or more predicted future states for one or more objects perceived by the autonomous vehicle, and can determine a motion plan for the autonomous vehicle based at least in part on the predicted future states for the one or more objects.
- In this way, the systems and methods according to example aspects of the present disclosure can allow for determining a priority classification for objects perceived by an autonomous vehicle, determining a predicted future state for each object based on the respective priority classification for each object, and determining a motion plan for the autonomous vehicle based at least in part on the predicted future states.
- The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination. Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
- While the present subject matter has been described in detail with respect to various specific example embodiments thereof, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure cover such alterations, variations, and equivalents.
Claims (26)
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/811,865 US10216189B1 (en) | 2017-08-23 | 2017-11-14 | Systems and methods for prioritizing object prediction for autonomous vehicles |
PCT/US2018/047032 WO2019040349A1 (en) | 2017-08-23 | 2018-08-20 | Systems and methods for prioritizing object prediction for autonomous vehicles |
CN201880061470.5A CN111133485B (en) | 2017-08-23 | 2018-08-20 | Object prediction prioritization system and method for autonomous vehicles |
EP18782225.9A EP3673470B1 (en) | 2017-08-23 | 2018-08-20 | Systems and methods for prioritizing object prediction for autonomous vehicles |
JP2020510567A JP7199421B2 (en) | 2017-08-23 | 2018-08-20 | Systems and methods for prioritizing object prediction for autonomous vehicles |
US16/211,376 US11099569B2 (en) | 2017-08-23 | 2018-12-06 | Systems and methods for prioritizing object prediction for autonomous vehicles |
US17/408,728 US11710303B2 (en) | 2017-08-23 | 2021-08-23 | Systems and methods for prioritizing object prediction for autonomous vehicles |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762549407P | 2017-08-23 | 2017-08-23 | |
US15/811,865 US10216189B1 (en) | 2017-08-23 | 2017-11-14 | Systems and methods for prioritizing object prediction for autonomous vehicles |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/211,376 Continuation US11099569B2 (en) | 2017-08-23 | 2018-12-06 | Systems and methods for prioritizing object prediction for autonomous vehicles |
Publications (2)
Publication Number | Publication Date |
---|---|
US10216189B1 US10216189B1 (en) | 2019-02-26 |
US20190064815A1 true US20190064815A1 (en) | 2019-02-28 |
Family
ID=65410911
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/811,865 Active US10216189B1 (en) | 2017-08-23 | 2017-11-14 | Systems and methods for prioritizing object prediction for autonomous vehicles |
US16/211,376 Active 2038-06-09 US11099569B2 (en) | 2017-08-23 | 2018-12-06 | Systems and methods for prioritizing object prediction for autonomous vehicles |
US17/408,728 Active US11710303B2 (en) | 2017-08-23 | 2021-08-23 | Systems and methods for prioritizing object prediction for autonomous vehicles |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/211,376 Active 2038-06-09 US11099569B2 (en) | 2017-08-23 | 2018-12-06 | Systems and methods for prioritizing object prediction for autonomous vehicles |
US17/408,728 Active US11710303B2 (en) | 2017-08-23 | 2021-08-23 | Systems and methods for prioritizing object prediction for autonomous vehicles |
Country Status (5)
Country | Link |
---|---|
US (3) | US10216189B1 (en) |
EP (1) | EP3673470B1 (en) |
JP (1) | JP7199421B2 (en) |
CN (1) | CN111133485B (en) |
WO (1) | WO2019040349A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111923928A (en) * | 2019-05-13 | 2020-11-13 | 长城汽车股份有限公司 | Decision making method and system for automatic vehicle |
US10937176B2 (en) * | 2018-05-31 | 2021-03-02 | Toyota Jidosha Kabushiki Kaisha | Object recognition apparatus |
US20210192864A1 (en) * | 2019-12-24 | 2021-06-24 | Gm Cruise Holdings Llc | Using dynamic triggers in dangerous situations to view sensor data for autonomous vehicle passengers |
US20220207883A1 (en) * | 2019-05-09 | 2022-06-30 | Sony Group Corporation | Information processing apparatus, information processing method, and program |
US20220281445A1 (en) * | 2019-09-02 | 2022-09-08 | Volkswagen Aktiengesellschaft | Method for Predicting a Future Driving Situation of a Foreign Object Participating in Road Traffic Device, Vehicle |
US11643115B2 (en) * | 2019-05-31 | 2023-05-09 | Waymo Llc | Tracking vanished objects for autonomous vehicles |
US11919512B2 (en) * | 2017-12-11 | 2024-03-05 | Volvo Car Corporation | Path prediction for a vehicle |
US20240087445A1 (en) * | 2019-10-18 | 2024-03-14 | Robert Bosch Gmbh | Method for providing an object message about an object, which is recognized in the surroundings of a road user, in a communication network for the communication with other road users |
US20240175710A1 (en) * | 2022-11-30 | 2024-05-30 | Argo AI, LLC | Low Latency Vector Map Updates |
Families Citing this family (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10678244B2 (en) | 2017-03-23 | 2020-06-09 | Tesla, Inc. | Data synthesis for autonomous control systems |
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
US11157441B2 (en) | 2017-07-24 | 2021-10-26 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
US10671349B2 (en) | 2017-07-24 | 2020-06-02 | Tesla, Inc. | Accelerated mathematical engine |
US10768626B2 (en) * | 2017-09-30 | 2020-09-08 | Tusimple, Inc. | System and method for providing multiple agents for decision making, trajectory planning, and control for autonomous vehicles |
US11760280B2 (en) | 2018-01-12 | 2023-09-19 | Uatc, Llc | Systems and methods for streaming processing for autonomous vehicles |
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
US10676085B2 (en) | 2018-04-11 | 2020-06-09 | Aurora Innovation, Inc. | Training machine learning model based on training instances with: training instance input based on autonomous vehicle sensor data, and training instance output based on additional vehicle sensor data |
US10990096B2 (en) * | 2018-04-27 | 2021-04-27 | Honda Motor Co., Ltd. | Reinforcement learning on autonomous vehicles |
US11215999B2 (en) | 2018-06-20 | 2022-01-04 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
US11035943B2 (en) * | 2018-07-19 | 2021-06-15 | Aptiv Technologies Limited | Radar based tracking of slow moving objects |
US11361457B2 (en) | 2018-07-20 | 2022-06-14 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
US10909866B2 (en) * | 2018-07-20 | 2021-02-02 | Cybernet Systems Corp. | Autonomous transportation system and methods |
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
SG11202103493QA (en) | 2018-10-11 | 2021-05-28 | Tesla Inc | Systems and methods for training machine models with augmented data |
US11196678B2 (en) | 2018-10-25 | 2021-12-07 | Tesla, Inc. | QOS manager for system on a chip communications |
US11403492B2 (en) | 2018-11-02 | 2022-08-02 | Aurora Operations, Inc. | Generating labeled training instances for autonomous vehicles |
US11256263B2 (en) * | 2018-11-02 | 2022-02-22 | Aurora Operations, Inc. | Generating targeted training instances for autonomous vehicles |
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
US10814870B2 (en) * | 2018-12-04 | 2020-10-27 | GM Global Technology Operations LLC | Multi-headed recurrent neural network (RNN) for multi-class trajectory predictions |
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
CN113165668A (en) | 2018-12-18 | 2021-07-23 | 动态Ad有限责任公司 | Operating a vehicle using motion planning with machine learning |
KR102505300B1 (en) | 2018-12-18 | 2023-03-02 | 모셔널 에이디 엘엘씨 | Behavior of vehicles using multiple motion constraints |
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
US10997461B2 (en) | 2019-02-01 | 2021-05-04 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
US11150664B2 (en) | 2019-02-01 | 2021-10-19 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
US10956755B2 (en) | 2019-02-19 | 2021-03-23 | Tesla, Inc. | Estimating object properties using visual image data |
US10741070B1 (en) * | 2019-03-04 | 2020-08-11 | GM Global Technology Operations LLC | Method to prioritize transmission of sensed objects for cooperative sensor sharing |
US11341846B2 (en) | 2019-04-04 | 2022-05-24 | Geotab Inc. | Traffic analytics system for defining road networks |
US11335191B2 (en) | 2019-04-04 | 2022-05-17 | Geotab Inc. | Intelligent telematics system for defining road networks |
US10699564B1 (en) * | 2019-04-04 | 2020-06-30 | Geotab Inc. | Method for defining intersections using machine learning |
US11403938B2 (en) | 2019-04-04 | 2022-08-02 | Geotab Inc. | Method for determining traffic metrics of a road network |
US11335189B2 (en) | 2019-04-04 | 2022-05-17 | Geotab Inc. | Method for defining road networks |
RU2750152C1 (en) * | 2019-04-25 | 2021-06-22 | Общество с ограниченной ответственностью "Яндекс Беспилотные Технологии" | Methods and systems for determining vehicle trajectory estimation procedure |
US11772643B1 (en) * | 2019-05-20 | 2023-10-03 | Zoox, Inc. | Object relevance determination |
US11526729B2 (en) * | 2019-05-22 | 2022-12-13 | International Business Machines Corporation | Discovering higher-level actions from expert's action demonstration |
US11634162B2 (en) * | 2019-08-16 | 2023-04-25 | Uatc, Llc. | Full uncertainty for motion planning in autonomous vehicles |
US11340622B2 (en) * | 2019-08-30 | 2022-05-24 | Waymo Llc | Determining respective impacts of agents |
US11577722B1 (en) * | 2019-09-30 | 2023-02-14 | Zoox, Inc. | Hyper planning based on object and/or region |
DE102019216025A1 (en) * | 2019-10-17 | 2021-04-22 | Robert Bosch Gmbh | Method and control device for the automatic selection of data records for a method for machine learning |
KR102193776B1 (en) * | 2019-12-26 | 2020-12-22 | 성균관대학교 산학협력단 | Reinforcement lerning based sensor data management method and system |
KR20210114792A (en) * | 2020-03-11 | 2021-09-24 | 현대자동차주식회사 | Apparatus for tracking object based on lidar sensor and method thereof |
EP3913551B1 (en) * | 2020-05-19 | 2024-10-09 | GEOTAB Inc. | Method for defining road networks |
CN111814970B (en) * | 2020-06-28 | 2021-02-23 | 盾钰(上海)互联网科技有限公司 | Real-time physical engine enhanced calculation method, medium and system based on neural network |
US11926343B2 (en) | 2020-07-20 | 2024-03-12 | Tusimple, Inc. | Classification and prioritization of objects for autonomous driving |
CN114500736B (en) * | 2020-10-23 | 2023-12-05 | 广州汽车集团股份有限公司 | Intelligent terminal motion trail decision method and system and storage medium thereof |
US20220169282A1 (en) * | 2020-12-01 | 2022-06-02 | Gm Cruise Holdings Llc | Autonomous vehicle high-priority data offload system |
US11884296B2 (en) | 2020-12-21 | 2024-01-30 | Qualcomm Incorporated | Allocating processing resources to concurrently-executing neural networks |
WO2022183329A1 (en) * | 2021-03-01 | 2022-09-09 | 华为技术有限公司 | Intelligent driving method and apparatus, and storage medium and computer program |
WO2022251769A1 (en) * | 2021-05-26 | 2022-12-01 | Argo AI, LLC | Using relevance of objects to assess performance of an autonomous vehicle perception system |
US12046013B2 (en) | 2021-05-26 | 2024-07-23 | Ford Global Technologies Llc | Using relevance of objects to assess performance of an autonomous vehicle perception system |
US11884304B2 (en) * | 2021-09-08 | 2024-01-30 | Ford Global Technologies, Llc | System, method, and computer program product for trajectory scoring during an autonomous driving operation implemented with constraint independent margins to actors in the roadway |
US20230339394A1 (en) | 2022-04-22 | 2023-10-26 | Velo.Ai, Inc | Artificially intelligent mobility safety system |
WO2023218583A1 (en) * | 2022-05-12 | 2023-11-16 | 三菱電機株式会社 | Allocation result determination device and allocation result determination method |
GB2625324A (en) * | 2022-12-14 | 2024-06-19 | Aptiv Technologies AG | Perception sensor processing method and processing unit for performing the same |
CN116767224B (en) * | 2023-07-03 | 2024-01-23 | 小米汽车科技有限公司 | Method, device, vehicle and storage medium for determining a travelable region |
CN116588125B (en) * | 2023-07-17 | 2023-09-19 | 四川中普盈通科技有限公司 | Vehicle-mounted edge side data processing system |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8255144B2 (en) | 1997-10-22 | 2012-08-28 | Intelligent Technologies International, Inc. | Intra-vehicle information conveyance system and method |
US9373149B2 (en) | 2006-03-17 | 2016-06-21 | Fatdoor, Inc. | Autonomous neighborhood vehicle commerce network and community |
US9302678B2 (en) | 2006-12-29 | 2016-04-05 | Robotic Research, Llc | Robotic driving system |
US8027029B2 (en) * | 2007-11-07 | 2011-09-27 | Magna Electronics Inc. | Object detection and tracking system |
US8605947B2 (en) | 2008-04-24 | 2013-12-10 | GM Global Technology Operations LLC | Method for detecting a clear path of travel for a vehicle enhanced by object detection |
US8917904B2 (en) * | 2008-04-24 | 2014-12-23 | GM Global Technology Operations LLC | Vehicle clear path detection |
US8126642B2 (en) | 2008-10-24 | 2012-02-28 | Gray & Company, Inc. | Control and systems for autonomously driven vehicles |
JP6184124B2 (en) * | 2013-02-18 | 2017-08-23 | 東日本高速道路株式会社 | Road traffic control system using probe data |
US9164511B1 (en) * | 2013-04-17 | 2015-10-20 | Google Inc. | Use of detected objects for image processing |
US9254846B2 (en) | 2013-05-03 | 2016-02-09 | Google Inc. | Predictive reasoning for controlling speed of a vehicle |
US9440647B1 (en) | 2014-09-22 | 2016-09-13 | Google Inc. | Safely navigating crosswalks |
US9934688B2 (en) * | 2015-07-31 | 2018-04-03 | Ford Global Technologies, Llc | Vehicle trajectory determination |
US9804599B2 (en) | 2015-11-04 | 2017-10-31 | Zoox, Inc. | Active lighting control for communicating a state of an autonomous vehicle to entities in a surrounding environment |
US10745003B2 (en) * | 2015-11-04 | 2020-08-18 | Zoox, Inc. | Resilient safety system for a robotic vehicle |
CN108431549B (en) * | 2016-01-05 | 2020-09-04 | 御眼视觉技术有限公司 | Trained system with imposed constraints |
US9707961B1 (en) * | 2016-01-29 | 2017-07-18 | Ford Global Technologies, Llc | Tracking objects within a dynamic environment for improved localization |
CN105679021B (en) * | 2016-02-02 | 2018-11-06 | 招商局重庆交通科研设计院有限公司 | Journey time fusion forecasting and querying method based on traffic big data |
CN106504530B (en) * | 2016-10-31 | 2018-12-11 | 合肥工业大学 | A kind of induction of user's trip route and managing and control system and its method |
- 2017
- 2017-11-14 US US15/811,865 patent/US10216189B1/en active Active
- 2018
- 2018-08-20 WO PCT/US2018/047032 patent/WO2019040349A1/en unknown
- 2018-08-20 EP EP18782225.9A patent/EP3673470B1/en active Active
- 2018-08-20 JP JP2020510567A patent/JP7199421B2/en active Active
- 2018-08-20 CN CN201880061470.5A patent/CN111133485B/en active Active
- 2018-12-06 US US16/211,376 patent/US11099569B2/en active Active
- 2021
- 2021-08-23 US US17/408,728 patent/US11710303B2/en active Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11919512B2 (en) * | 2017-12-11 | 2024-03-05 | Volvo Car Corporation | Path prediction for a vehicle |
US10937176B2 (en) * | 2018-05-31 | 2021-03-02 | Toyota Jidosha Kabushiki Kaisha | Object recognition apparatus |
US20220207883A1 (en) * | 2019-05-09 | 2022-06-30 | Sony Group Corporation | Information processing apparatus, information processing method, and program |
CN111923928A (en) * | 2019-05-13 | 2020-11-13 | 长城汽车股份有限公司 | Decision making method and system for automatic vehicle |
US11643115B2 (en) * | 2019-05-31 | 2023-05-09 | Waymo Llc | Tracking vanished objects for autonomous vehicles |
US12091055B2 (en) | 2019-05-31 | 2024-09-17 | Waymo Llc | Tracking vanished objects for autonomous vehicles |
US20220281445A1 (en) * | 2019-09-02 | 2022-09-08 | Volkswagen Aktiengesellschaft | Method for Predicting a Future Driving Situation of a Foreign Object Participating in Road Traffic Device, Vehicle |
US20240087445A1 (en) * | 2019-10-18 | 2024-03-14 | Robert Bosch Gmbh | Method for providing an object message about an object, which is recognized in the surroundings of a road user, in a communication network for the communication with other road users |
US20210192864A1 (en) * | 2019-12-24 | 2021-06-24 | Gm Cruise Holdings Llc | Using dynamic triggers in dangerous situations to view sensor data for autonomous vehicle passengers |
US11636715B2 (en) * | 2019-12-24 | 2023-04-25 | GM Cruise Holdings LLC. | Using dynamic triggers in dangerous situations to view sensor data for autonomous vehicle passengers |
US20240175710A1 (en) * | 2022-11-30 | 2024-05-30 | Argo AI, LLC | Low Latency Vector Map Updates |
Also Published As
Publication number | Publication date |
---|---|
JP2020531993A (en) | 2020-11-05 |
EP3673470A1 (en) | 2020-07-01 |
EP3673470B1 (en) | 2023-12-06 |
CN111133485B (en) | 2022-06-14 |
US10216189B1 (en) | 2019-02-26 |
US20210382488A1 (en) | 2021-12-09 |
US11710303B2 (en) | 2023-07-25 |
CN111133485A (en) | 2020-05-08 |
JP7199421B2 (en) | 2023-01-05 |
US20190146507A1 (en) | 2019-05-16 |
WO2019040349A1 (en) | 2019-02-28 |
US11099569B2 (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11710303B2 (en) | Systems and methods for prioritizing object prediction for autonomous vehicles | |
US11713006B2 (en) | Systems and methods for streaming processing for autonomous vehicles | |
US11822337B2 (en) | Autonomous vehicles featuring machine-learned yield model | |
US12130624B2 (en) | Discrete decision architecture for motion planning system of an autonomous vehicle | |
US11288963B2 (en) | Autonomous vehicles featuring vehicle intention system | |
US11467586B2 (en) | Gridlock solver for motion planning system of an autonomous vehicle | |
US10768628B2 (en) | Systems and methods for object detection at various ranges using multiple range imagery | |
US20190025843A1 (en) | Systems and Methods for Speed Limit Context Awareness | |
US10654453B2 (en) | Systems and methods for low-latency braking action for an autonomous vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: UBER TECHNOLOGIES, INC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HAYNES, GALEN CLARK;REEL/FRAME:044446/0120 Effective date: 20171214 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: UATC, LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:UBER TECHNOLOGIES, INC.;REEL/FRAME:050353/0884 Effective date: 20190702 |
|
AS | Assignment |
Owner name: UATC, LLC, CALIFORNIA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NATURE OF CONVEYANCE FROM CHANGE OF NAME TO ASSIGNMENT PREVIOUSLY RECORDED ON REEL 050353 FRAME 0884. ASSIGNOR(S) HEREBY CONFIRMS THE CORRECT CONVEYANCE SHOULD BE ASSIGNMENT;ASSIGNOR:UBER TECHNOLOGIES, INC.;REEL/FRAME:051145/0001 Effective date: 20190702 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: AURORA OPERATIONS, INC., PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:UATC, LLC;REEL/FRAME:067733/0001 Effective date: 20240321 |