
CN113874889A - Self-maintenance autonomous vehicle procedure - Google Patents


Info

Publication number
CN113874889A
CN113874889A (Application CN201980094565.1A)
Authority
CN
China
Prior art keywords
autonomous vehicle
operational
maintenance
diagnostic data
instructions
Prior art date
Legal status
Pending
Application number
CN201980094565.1A
Other languages
Chinese (zh)
Inventor
T·巴坎特
N·厄尔曼
J·西博
J·麦克罗斯基
P·加西亚
J·A·科瓦鲁维亚斯
M·马格诺利
Current Assignee
GM Cruise Holdings LLC
Original Assignee
GM Cruise Holdings LLC
Priority date
Filing date
Publication date
Application filed by GM Cruise Holdings LLC
Publication of CN113874889A

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/006 - Indicating maintenance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/20 - Administration of product repair or maintenance
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0088 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/80 - Arrangements for reacting to or preventing system or operator failure
    • G05D1/81 - Handing over between on-board automatic and on-board manual control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 - Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 - Relational databases
    • G06F16/285 - Clustering or classification
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/02 - Registering or indicating driving, working, idle, or waiting time only
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Human Resources & Organizations (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods are provided for enabling autonomous vehicles to automatically and dynamically perform their own monitoring and maintenance. The autonomous vehicle may analyze diagnostic data captured by one or more of its sensors. Based on the analysis of the diagnostic data, the autonomous vehicle may determine that it needs maintenance and, based on the determination, send the analysis of the diagnostic data to a routing service. The autonomous vehicle may then receive instructions from the routing service to dynamically route the autonomous vehicle according to a maintenance action.

Description

Self-maintenance autonomous vehicle procedure
Cross Reference to Related Applications
This application claims priority to U.S. Application No. 16/410,911, entitled "SELF-MAINTAINING AUTONOMOUS VEHICLE PROCEDURE," filed on May 13, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The subject matter of the present disclosure relates generally to the field of ride-sharing vehicles, and more particularly, to a system and method for self-maintenance of an autonomous ride-sharing vehicle.
Background
An autonomous vehicle is a motor vehicle that can be navigated without a human driver. An exemplary autonomous vehicle includes a plurality of sensor systems, such as, but not limited to, a camera sensor system, a lidar sensor system, a radar sensor system, and the like, wherein the autonomous vehicle operates based on sensor signals output by the sensor systems. In particular, the sensor signals are provided to an internal computing system in communication with a plurality of sensor systems, wherein the processor executes instructions to control a mechanical system of the autonomous vehicle, such as a vehicle propulsion system, a braking system, or a steering system, based on the sensor signals.
Currently, human operators are required to constantly monitor fleet vehicles and drive autonomous vehicles to a repair facility when maintenance is deemed necessary. This requires checking both quantitative and qualitative values against acceptable value ranges. Once maintenance is deemed necessary, the human operator must direct the autonomous vehicle to a garage. Because the operation relies on human intervention, it can be a time-consuming and error-prone process, especially as fleets grow significantly in size or their operating areas expand.
Drawings
The above and other advantages and features of the present technology will become apparent by reference to the detailed description illustrated in the accompanying drawings. Those of ordinary skill in the art will understand that these drawings depict only some examples of the present technology and are not intended to limit the scope of the present technology to these examples. Furthermore, those skilled in the art will understand the principles of the present technology as described and illustrated with additional specificity and detail through the use of the accompanying drawings in which:
FIG. 1 illustrates an exemplary schematic diagram of an autonomous vehicle and network environment, according to some embodiments;
FIG. 2 illustrates an exemplary schematic diagram of an autonomous vehicle and network environment in which self-maintenance of the autonomous vehicle may be implemented, according to some embodiments;
FIG. 3A illustrates a flowchart representation of self-maintenance of an autonomous vehicle, according to some embodiments;
FIG. 3B illustrates a flowchart representation of a criticality determination of a detected problem according to some embodiments; and
FIG. 4 illustrates an example of a system for implementing certain aspects of the present technology.
Detailed Description
Various examples of the present technology are discussed in detail below. While specific implementations are discussed, it should be understood that this is for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the technology. In some instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it should be understood that functionality described as being performed by certain system components may be performed by more or fewer components than shown.
The disclosed technology addresses the need in the art for a self-maintenance capability for autonomous vehicles. Currently, human intervention is required to continuously monitor fleet vehicles. When the personnel monitoring the fleet determine that a vehicle needs maintenance, they must direct another service to drive the vehicle to a garage. This means that human intervention is required to check quantitative and qualitative values against acceptable value ranges, and then again to bring the vehicle to a garage for repair. This requires constant attention and introduces the possibility of error or of vehicle problems escalating, since only major problems may be flagged by human operators as requiring service or repair.
To address the above-mentioned problems, systems and methods are disclosed for allowing an autonomous vehicle to automatically and dynamically monitor and maintain itself, which eliminates the need for manual monitoring and intervention. For example, an autonomous vehicle may analyze diagnostic data captured by one or more of its sensors. Based on the analysis of the diagnostic data, the autonomous vehicle may determine that it needs maintenance and, based on the determination, send the analysis of the diagnostic data to a routing service. The autonomous vehicle may then receive instructions from the routing service to dynamically route the autonomous vehicle according to the maintenance action.
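For concreteness, the following Python sketch illustrates the analyze, determine, and route loop described above. It is illustrative only: the identifiers (analyze_diagnostics, StubRoutingService, follow) and the acceptable ranges are assumptions made for this sketch and are not defined by the patent.

```python
# Illustrative sketch of the self-maintenance loop; names, ranges, and the
# routing-service interface are assumptions, not taken from the patent.
from dataclasses import dataclass

ACCEPTABLE_RANGES = {"yaw_deg": (-2.0, 2.0), "tire_psi": (30.0, 38.0)}

@dataclass
class Diagnosis:
    needs_maintenance: bool
    issues: dict

def analyze_diagnostics(sensor_readings: dict) -> Diagnosis:
    """Compare captured diagnostic data against acceptable ranges."""
    issues = {name: value for name, value in sensor_readings.items()
              if not (ACCEPTABLE_RANGES[name][0] <= value <= ACCEPTABLE_RANGES[name][1])}
    return Diagnosis(needs_maintenance=bool(issues), issues=issues)

def follow(instructions: dict) -> None:
    print("executing routing instructions:", instructions)

class StubRoutingService:
    """Stand-in for the routing service the vehicle would contact."""
    def request_route(self, issues: dict) -> dict:
        return {"action": "drive to nearest available maintenance facility",
                "issues": issues}

def self_maintenance_step(sensor_readings: dict, routing_service) -> None:
    diagnosis = analyze_diagnostics(sensor_readings)
    if diagnosis.needs_maintenance:
        # Send the analysis to the routing service and follow its instructions.
        follow(routing_service.request_route(diagnosis.issues))

self_maintenance_step({"yaw_deg": 4.2, "tire_psi": 33.0}, StubRoutingService())
```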
FIG. 1 shows an environment 100 that includes an autonomous vehicle 102 in communication with a remote computing system 150. In some embodiments, the autonomous vehicle 102 may navigate roadways without a human driver based on sensor signals output by sensor systems 104-106 of the autonomous vehicle 102. The autonomous vehicle 102 includes a plurality of sensor systems 104-106 (a first sensor system 104 through an Nth sensor system 106). The sensor systems 104-106 are of different types and are arranged about the autonomous vehicle 102. For example, the first sensor system 104 may be a camera sensor system and the Nth sensor system 106 may be a lidar sensor system. Other exemplary sensor systems include radar sensor systems, Global Positioning System (GPS) sensor systems, Inertial Measurement Units (IMUs), infrared sensor systems, laser sensor systems, sonar sensor systems, and the like.
Autonomous vehicle 102 also includes several mechanical systems for effecting appropriate movement of autonomous vehicle 102. For example, the mechanical systems may include, but are not limited to, a vehicle propulsion system 130, a braking system 132, and a steering system 134. The vehicle propulsion system 130 may include an electric motor, an internal combustion engine, or both. Braking system 132 may include engine brakes, brake pads, actuators, and/or any other suitable components configured to help slow autonomous vehicle 102. The steering system 134 includes suitable components configured to control the direction of movement of the autonomous vehicle 102 during navigation.
The autonomous vehicle 102 also includes a safety system 136, which safety system 136 may include various lights and signal indicators, parking brakes, airbags, and the like. The autonomous vehicle 102 also includes a cabin system 138, which cabin system 138 may include a cabin temperature control system, an in-cabin entertainment system, and the like.
The autonomous vehicle 102 additionally includes an autonomous vehicle (AV) internal computing system 110 in communication with the sensor systems 104-106 and the systems 130, 132, 134, 136, and 138. The AV internal computing system 110 includes at least one processor and at least one memory storing computer-executable instructions that are executed by the processor. The computer-executable instructions may constitute one or more services responsible for controlling the autonomous vehicle 102, communicating with the remote computing system 150, receiving input from passengers or a human co-pilot, recording metrics regarding data collected by the sensor systems 104-106 and human co-pilots, and the like.
The AV internal computing system 110 may include a control service 112 configured to control operation of the vehicle propulsion system 130, the braking system 132, the steering system 134, the safety system 136, and the cabin system 138. The control service 112 receives sensor signals from the sensor systems 104-106 and communicates with other services of the AV internal computing system 110 to effectuate operation of the autonomous vehicle 102. In some embodiments, the control service 112 may operate in conjunction with one or more other systems of the autonomous vehicle 102.
The AV internal computing system 110 may also include a constraint service 114 to facilitate safe propulsion of the autonomous vehicle 102. The constraint service 114 includes instructions for activating constraints according to rule-based constraints while the autonomous vehicle 102 is operating. For example, the constraints may be restrictions on navigation activated according to a protocol configured to avoid occupying the same space as other objects, complying with traffic regulations, avoiding avoidance zones, and the like. In some embodiments, the constraint service may be part of the control service 112.
The AV internal computing system 110 may also include a communication service 116. The communication service 116 may include software and hardware elements for receiving and transmitting signals from/to the remote computing system 150. The communication service 116 is configured to wirelessly communicate information over a network, for example, through an antenna array that provides personal cellular (long term evolution (LTE), 3G, 5G, etc.) communication.
In some embodiments, one or more services of the AV internal computing system 110 are configured to send and receive communications to and from the remote computing system 150 for the following purposes: reporting data used to train and evaluate machine learning algorithms, requesting assistance from the remote computing system 150 or from a human operator through the remote computing system 150, software service updates, ridesharing pickup and drop-off instructions, and the like.
The AV internal computing system 110 may also include a latency service 118. The latency service 118 may utilize timestamps of communications with the remote computing system 150 to determine whether a usable communication has been received from the remote computing system 150 in a timely manner. For example, when a service of the AV internal computing system 110 requests feedback from the remote computing system 150 regarding a time-sensitive process, the latency service 118 may determine whether a response was received from the remote computing system 150 in time, since such information can quickly become stale and unusable. When the latency service 118 determines that a response has not been received within a threshold time, the latency service 118 may enable other systems of the autonomous vehicle 102, or a passenger, to make the necessary decisions or provide the needed feedback.
The AV internal computing system 110 may also include a user interface service 120, which the user interface service 120 may communicate with the cabin system 138 to provide information to or receive information from a human co-driver or human passengers. In some embodiments, a human co-driver or human passenger may be required to evaluate and control the constraints from the constraint service 114, or the human co-driver or human passenger may wish to provide instructions to the autonomous vehicle 102 regarding a destination, requested route, or other requested operation.
As described above, the remote computing system 150 is configured to send and receive signals to and from the autonomous vehicle 102 regarding: reporting data used to train and evaluate machine learning algorithms, requesting assistance from the remote computing system 150 or from a human operator through the remote computing system 150, software service updates, ridesharing pickup and drop-off instructions, and the like.
The remote computing system 150 includes an analysis service 152 configured to receive data from the autonomous vehicle 102 and analyze the data to train or evaluate machine learning algorithms for operating the autonomous vehicle 102. Analysis service 152 may also perform analyses on data associated with one or more errors or constraints reported by autonomous vehicle 102.
Remote computing system 150 may also include user interface services 154, which user interface services 154 are configured to present metrics, videos, pictures, sounds reported from autonomous vehicle 102 to an operator of remote computing system 150. The user interface service 154 may also receive input instructions from the operator, which may be sent to the autonomous vehicle 102.
The remote computing system 150 may also include an instruction service 156 for sending instructions regarding operating the autonomous vehicle 102. For example, in response to the output of analysis service 152 or user interface service 154, instruction service 156 may provide instructions to one or more services of autonomous vehicle 102 or a co-driver or passenger of autonomous vehicle 102.
The remote computing system 150 may also include a ridesharing service 158 configured to interact with a ridesharing application 170 running on (potential) passenger computing devices. The ridesharing service 158 may receive pickup and drop-off requests from the passenger ridesharing application 170 and may dispatch the autonomous vehicle 102 for a trip. The ridesharing service 158 may also act as an intermediary between the ridesharing application 170 and the autonomous vehicle 102, whereby a passenger may provide instructions to the autonomous vehicle 102 to go around an obstacle, change routes, honk the horn, and so on.
FIG. 2 illustrates an exemplary schematic diagram of an autonomous vehicle and network environment capable of implementing self-maintenance of the autonomous vehicle, according to some embodiments. The system 200 may maintain a fully autonomous fleet of vehicles without human intervention and/or monitoring. Conventionally, vehicle maintenance systems require human intervention: the vehicle runs a diagnostic program, and if a problem is flagged by the personnel monitoring the diagnostics, they must contact personnel at a maintenance facility (such as a repair shop) to see whether the shop can repair the vehicle. This introduces human error and inefficiency, which over time can affect the autonomous vehicles in the fleet that need maintenance. The system 200 takes advantage of the increased diagnostic capabilities of autonomous vehicles 202, such as autonomous vehicles 202 within a fleet 226, to eliminate the need for human intervention and/or monitoring. The diagnostic functionality may be integrated into a scheduling algorithm that can dynamically and/or automatically route the autonomous vehicle to an appropriate facility based on the severity of one or more problems. The diagnostic capabilities may be applied to preventative maintenance (e.g., determining that the autonomous vehicle 202 needs to be refueled within 50 miles and, based on that determination, sending the autonomous vehicle 202 to the nearest repair shop before those 50 miles are reached), and/or to diagnosing more critical issues (e.g., a damaged or malfunctioning lidar sensor that requires the vehicle to be sent immediately to a repair shop for repair or replacement of the lidar sensor).
The autonomous vehicle 202 may dynamically maintain itself by analyzing diagnostic data captured by one or more sensors 204 of the autonomous vehicle 202. The autonomous vehicle 202 may include a plurality of sensors 204 within a plurality of sensor systems, including but not limited to camera sensor systems, lidar sensor systems, radar sensor systems, and the like. The multiple sensor systems may work independently or interoperably with one another to navigate and/or capture environmental and operational conditions. For example, the sensors 204 may detect or capture diagnostic data that enables the autonomous vehicle 202 to monitor itself.
Each sensor 204 may store diagnostic data in a data store 208 on the autonomous vehicle 202. In some embodiments, an internal analysis service 210 of the autonomous vehicle 202 may generate one or more models 214 describing the behavior, operation, or environment of the autonomous vehicle 202 based on the diagnostic data within the data store 208. For example, the internal analysis service 210 may determine yaw, acceleration, direction, location, and surroundings (e.g., buildings, people, obstacles, light levels, temperature, sound, etc.) of the autonomous vehicle 202 based on the model 214.
Based on an analysis of the diagnostic data that detects an operational problem, the autonomous vehicle 202 may determine that it needs maintenance. For example, the diagnostic service 206 may diagnose a problem within the diagnostic data and determine whether the autonomous vehicle 202 requires maintenance and how critical the problem is. The diagnostic service 206 may detect operational problems by applying the model 214 to the diagnostic data, for example by checking against expected software versions, valid calibration values, and the like. If, for instance, the yaw of the autonomous vehicle 202 is outside an acceptable value or range of values, the diagnostic service 206 may compare the diagnostic data to the model 214 to diagnose the particular problem. In some embodiments, most of the analysis of the diagnostic data is done on the autonomous vehicle 202 (via the diagnostic service 206) to provide an immediate response to the problem (or so that a response can be provided despite a loss of connection to the remote network). In other embodiments, the analysis of the diagnostic data may be performed on a back-end remote server.
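The following Python sketch illustrates one way such a check could look. The model structure, signal names, thresholds, and version strings are assumptions made for illustration; the patent does not specify them.

```python
# Illustrative only: a toy "model" of acceptable operating ranges and required
# software versions, and a check of diagnostic data against it.
MODEL = {
    "ranges": {"yaw_rate_dps": (-0.5, 0.5), "battery_temp_c": (0.0, 45.0)},
    "required_versions": {"lidar_firmware": "2.1.0"},
}

def detect_operational_problems(diagnostics: dict, versions: dict) -> list[str]:
    problems = []
    # Range checks against the modeled acceptable values.
    for signal, (low, high) in MODEL["ranges"].items():
        value = diagnostics.get(signal)
        if value is not None and not (low <= value <= high):
            problems.append(f"{signal}={value} outside [{low}, {high}]")
    # Software-version checks against the modeled requirements.
    for component, required in MODEL["required_versions"].items():
        if versions.get(component) != required:
            problems.append(f"{component} is {versions.get(component)}, expected {required}")
    return problems

print(detect_operational_problems({"yaw_rate_dps": 0.9}, {"lidar_firmware": "1.5"}))
```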
Based on the determination of the particular issue, the analysis of the diagnostic data may be sent to a back-end remote network, such as network 232, which may route the autonomous vehicle 202 via the routing service 224 according to the determination. For example, the routing service 224 may forward the issue to the maintenance service 222 to determine an appropriate response, such as routing the autonomous vehicle 202 to a particular repair shop, notifying the shop and scheduling maintenance, picking up stranded passengers, and so on.
In some embodiments, the diagnostic service 206 may additionally receive feedback input from passengers within the autonomous vehicle 202. For example, the user interface service 230 may receive input from a passenger at a computer interface (such as a tablet in the autonomous vehicle 202) or through a ridesharing application 234 on the passenger's mobile device. The passenger may indicate at the user interface service 230 that there is a problem with the operation of the autonomous vehicle 202, such as an abrupt stop, a sharp turn, and so forth. In some cases, the passenger may manually control the autonomous vehicle through the user interface service 230, which may communicate with the control service 212 to operate the autonomous vehicle 202 accordingly.
In some embodiments, the model 214 may be continually supplemented and updated with diagnostic data collected across the fleet 226. The diagnostic data may be transmitted to a network 232, which may remotely compile and average the diagnostic data received from all vehicles within the fleet 226 to remotely generate the model 214. For example, the analysis service 216 may analyze diagnostic data from the fleet 226 to generate a detailed, accurate model 214, which may then be pushed to the autonomous vehicle 202 the next time it updates or connects to the network 232. Thus, the model 214 may be continuously trained as each autonomous vehicle 202 within the fleet 226 operates over time. In some embodiments, the model 214 on the network 232 may be supplemented by passenger input through the ridesharing service 228 in communication with the ridesharing application 234.
Determining that the autonomous vehicle 202 requires maintenance based on analysis of diagnostic data from the entire fleet 226 may enable detection of subtle or emerging operational problems of the autonomous vehicle 202. For example, for each sensor 204 that may register a problem, diagnostic data from that sensor 204 may be compared to the smooth trips (or unfavorable trips) of other vehicles having that sensor. This may enable the model 214 to continually learn and improve the level of criticality and other insights, such as optimal charge levels, optimal states of components of the autonomous vehicle 202 to obtain optimal mileage or passenger experience, and so forth.
The diagnostic service 206 may also classify problems flagged by the model 214 into different criticality levels. For example, a criticality level of an operational issue of the autonomous vehicle may be determined based on the model 214 (which, in some embodiments, is continuously updated with diagnostic data from the fleet 226). For example, an operational problem of the autonomous vehicle 202 may be classified at a high criticality level based on the diagnostic data being within or exceeding a range of values indicating that the autonomous vehicle 202 is in imminent danger of malfunction or mishandling. Based on the operational issue being within the high criticality level, the autonomous vehicle 202 may receive instructions from the routing service 224 to stop the autonomous vehicle.
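As a hedged illustration of such a classification, the Python sketch below maps a diagnostic value to a criticality level. The level names, thresholds, and range structure are assumptions chosen for the sketch, not values from the patent.

```python
# Illustrative classification of a detected problem into criticality levels.
def classify_criticality(signal: str, value: float, ranges: dict) -> str:
    low, high = ranges[signal]["acceptable"]
    crit_low, crit_high = ranges[signal]["critical"]
    if value < crit_low or value > crit_high:
        return "high"      # imminent danger: stop the vehicle safely
    if value < low or value > high:
        return "medium"    # finish the current trip, then go for service
    return "low"           # within acceptable range: at most preventative work

RANGES = {"tire_psi": {"acceptable": (30, 38), "critical": (20, 50)}}
print(classify_criticality("tire_psi", 26, RANGES))   # -> "medium"
```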
Further, the network 232 may store and analyze additional details that would not normally be considered urgent or critical to the operation of the autonomous vehicle 202 (and that would take more processing time than desired if analyzed locally). For example, the autonomous vehicle 202 may report minor maintenance issues, such as low tire pressure, low oil level, and other long-standing issues that could lead to urgent problems over the long term. These granular details within the diagnostic data may be retrieved from the data store 208 and/or the internal analysis service 210 and stored and analyzed at the back end (such as by the analysis service 216 on the network 232, and used to modify or update the model 214). The network 232 may determine how important the granular details are to scheduling. For example, low tire pressure may be flagged by the analysis service 216 based on values received in certain diagnostic data. The routing service 224 may allow the current trip to continue, without needing to cancel the passenger's ride. If, however, the analysis service 216 determines that the problem is sufficiently severe, then once the trip is over, the routing service 224 may send instructions to the control service 212 to route the autonomous vehicle 202 to the nearest facility that can correct the tire pressure.
In some embodiments, once the analysis of the diagnostic data is sent to the routing service 224 based on a determination that the autonomous vehicle 202 requires maintenance, the autonomous vehicle 202 may receive instructions from the routing service 224 to dynamically route the autonomous vehicle 202 according to the maintenance action specified by the maintenance service 222. For example, the maintenance service 222 may communicate with one or more maintenance facilities 218 to dynamically and automatically schedule maintenance for the autonomous vehicle 202. The maintenance service 222 may also communicate with the backup service 220 to dispatch a backup vehicle to pick up stranded passengers (e.g., when an autonomous vehicle 202 experiences an urgent, dangerous problem).
For example, the maintenance service 222 may consider the current load at the maintenance facility 218. For example, the total number of charging ports for a given facility may be known, as well as the actual number of charging ports available and used.
The maintenance service 222 may also consider that different maintenance facilities 218 have particular expertise, particular technicians, and/or particular components. For example, some maintenance facilities 218 may be suitable for maintenance, while other maintenance facilities are best suited for charging the autonomous vehicle 202. Thus, when the autonomous vehicle 202 needs to automatically route itself to a maintenance facility 218, the maintenance facility 218 may be selected based on parameters, such as directing the autonomous vehicle 202 to a repair shop with appropriate technicians and appropriate components to service the particular needs of the autonomous vehicle 202. For example, certain maintenance facilities 218 may have technicians specializing in lidar sensor systems. Other maintenance facilities 218 may have technicians specializing in camera sensor systems, radar sensor systems, Global Positioning System (GPS) sensor systems, Inertial Measurement Units (IMUs), infrared sensor systems, laser sensor systems, sonar sensor systems, and/or other sensor systems. Still other maintenance facilities 218 may have technicians specializing in more general vehicle systems, such as the propulsion system, braking system, steering system, safety systems, or cabin systems (e.g., temperature control, lighting, etc.) of the autonomous vehicle 202. Furthermore, even among maintenance facilities 218 used to charge the autonomous vehicle 202, some of the charging stations within certain maintenance facilities 218 may already be occupied (e.g., leaving only a limited number of open charging stations that can service the autonomous vehicle 202). In that case, that maintenance facility 218 may be skipped in favor of another available facility.
In some embodiments, the maintenance service 222 may weight certain actions according to multiple priorities. For example, while some maintenance facilities 218 may be best suited for a particular maintenance action, the maintenance facilities 218 may be fully loaded and unable to service the autonomous vehicle 202, and thus, the maintenance service 222 may communicate with the routing service 224 to route the autonomous vehicle 202 to another available maintenance facility that is not dedicated to the particular maintenance action or is farther away from the current location of the autonomous vehicle 202.
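For illustration only, the facility-selection and weighting logic described in the preceding two paragraphs could be sketched as a simple weighted score over specialty match, current load, and distance. The field names, weights, and facility records below are assumptions, not values from the patent.

```python
# Illustrative scoring of maintenance facilities; weights and fields assumed.
def pick_facility(issue: str, facilities: list[dict]) -> dict | None:
    def score(f: dict) -> float:
        specialty = 1.0 if issue in f["specialties"] else 0.0
        capacity = 1.0 - f["load"]              # 0.0 = full, 1.0 = empty
        proximity = 1.0 / (1.0 + f["distance_km"])
        return 3.0 * specialty + 2.0 * capacity + 1.0 * proximity
    open_facilities = [f for f in facilities if f["load"] < 1.0]  # skip full shops
    return max(open_facilities, key=score, default=None)

facilities = [
    {"name": "Lidar shop", "specialties": {"lidar"}, "load": 1.0, "distance_km": 2},
    {"name": "General garage", "specialties": {"brakes"}, "load": 0.4, "distance_km": 6},
]
# The best-suited (lidar) shop is full, so the vehicle falls back to the
# general garage, mirroring the weighting behavior described above.
print(pick_facility("lidar", facilities))
```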
In some embodiments, autonomous vehicles 202 may be queued based on their priority, which may be related to the criticality level of their problems. For example, the queue may place at the front those autonomous vehicles in the fleet 226 that have high-criticality issues and therefore need to go directly to a maintenance facility 218. Autonomous vehicles in the fleet 226 with low-criticality issues may be pushed to a later position in the queue and dispatched to a maintenance facility 218 when one becomes available.
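A minimal Python sketch of such a criticality-ordered queue follows; the level ordering and vehicle identifiers are assumptions made for the example.

```python
# Illustrative priority queue: higher-criticality vehicles are serviced first.
import heapq

PRIORITY = {"high": 0, "medium": 1, "low": 2}

def build_service_queue(fleet_issues: list[tuple[str, str]]) -> list[str]:
    """fleet_issues: (vehicle_id, criticality) pairs; returns service order."""
    heap = [(PRIORITY[level], i, vid) for i, (vid, level) in enumerate(fleet_issues)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(build_service_queue([("av-7", "low"), ("av-2", "high"), ("av-4", "medium")]))
# -> ['av-2', 'av-4', 'av-7']
```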
FIG. 3A illustrates a flowchart representation of self-maintenance of an autonomous vehicle, according to some embodiments. The method 300 may begin when an autonomous vehicle is started (step 302). In some embodiments, startup may trigger a diagnostic check (step 304), which may check whether there are any problems with the systems within the autonomous vehicle before the autonomous vehicle picks up a passenger or begins operating on the road. Additionally and/or alternatively, in some embodiments, the diagnostic check (step 304) may be run continuously or periodically to monitor the operation of the autonomous vehicle over time. The diagnostic check may determine whether any problems are detected (step 306). If no problems are detected, the diagnostic check may subsequently be run again to re-check for problems.
In some embodiments, the diagnostic check may periodically (e.g., every 30 seconds) or continuously trigger a heartbeat event to check whether a system is still active (e.g., functioning properly). The system being examined may respond to the heartbeat event, and the response may be recorded in a log as a response event. The response event may include diagnostic data for analyzing any problems experienced by the corresponding system.
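A minimal sketch of such a heartbeat loop follows, assuming a simple probe interface that returns an alive flag plus diagnostic data; that interface and the logged fields are assumptions for illustration.

```python
# Illustrative heartbeat loop: each interval, ping monitored systems and log a
# response event; the probe interface is an assumption, not from the patent.
import time, logging

logging.basicConfig(level=logging.INFO)

def heartbeat_loop(systems: dict, interval_s: float = 30.0, cycles: int = 2) -> None:
    for _ in range(cycles):
        for name, probe in systems.items():
            alive, data = probe()  # each probe returns (ok_flag, diagnostics)
            logging.info("response event: system=%s alive=%s data=%s", name, alive, data)
        time.sleep(interval_s)

# Short interval used only so the demo finishes quickly.
heartbeat_loop({"lidar": lambda: (True, {"rpm": 600})}, interval_s=0.01, cycles=1)
```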
In some embodiments, at startup, the diagnostic check may find that a particular firmware version is out of date. For example, if the network has been updated to a new version of the lidar system firmware, a response event from the diagnostic check may detect that an old version is in use and that the lidar system on the autonomous vehicle needs to be updated (and the update may be initiated once a link to the network is established). Alternatively, a response event from the diagnostic check may determine that version 1.5 of the lidar system firmware is corrupted and therefore needs to be patched or updated to resolve the problem. The diagnostic check may contain a list of requirements for the required sensors, versions, autonomous vehicle components, and the like. Thus, the diagnostic check may determine whether the autonomous vehicle has a healthy lidar system, a healthy radar system, or the like by monitoring and comparing the diagnostic data against the list of requirements.
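For illustration, a requirements-list check might be sketched as follows; the component names and version strings are invented for the example and are not taken from the patent.

```python
# Illustrative startup requirements check: compare reported component versions
# against a required list and emit the needed actions.
REQUIREMENTS = {"lidar_firmware": "2.0.0", "radar_firmware": "1.3.2"}

def check_requirements(reported: dict) -> list[str]:
    actions = []
    for component, required in REQUIREMENTS.items():
        current = reported.get(component)
        if current is None:
            actions.append(f"{component}: missing, vehicle not ready")
        elif current != required:
            actions.append(f"{component}: {current} -> update to {required}")
    return actions

print(check_requirements({"lidar_firmware": "1.5.0", "radar_firmware": "1.3.2"}))
```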
In some embodiments, the diagnostic checks may be performed in layers. For example, at a first layer, the diagnostic check runs a particular set of checks for all hardware parts, and if these checks fail, the autonomous vehicle is not allowed, or is inhibited, from launching. After these checks pass (i.e., once it is determined that all of the basic requirements for driving the autonomous vehicle have been met), a second layer can check all of the components of the autonomous vehicle during operation. The second layer may periodically or continuously transmit diagnostic information and data to the back end (e.g., the network) and confirm that all systems are functioning as required (or that a system is experiencing one or more problems). In some embodiments, if the first layer of diagnostic checks fails, the system may flag the availability of the vehicle, providing information as to whether the autonomous vehicle is not available for rides at all, or whether it is available for rides but will soon require maintenance.
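Purely for illustration, the layered structure could be sketched as a pre-launch gate over hardware checks followed by a runtime reporting layer; the check names and statuses below are assumptions.

```python
# Illustrative two-layer diagnostic structure.
def prelaunch_layer(hardware_checks: dict) -> bool:
    """Layer 1: if any basic hardware check fails, launch is not allowed."""
    failed = [name for name, ok in hardware_checks.items() if not ok]
    if failed:
        print("launch inhibited; failed checks:", failed)
        return False
    return True

def runtime_layer(component_status: dict, report) -> None:
    """Layer 2: during operation, report component status to the back end."""
    report({name: status for name, status in component_status.items()})

if prelaunch_layer({"brakes": True, "steering": True, "lidar": True}):
    # 'print' stands in for transmitting diagnostics to the back-end network.
    runtime_layer({"lidar": "ok", "camera_3": "degraded"}, report=print)
```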
In some embodiments, the system may determine the criticality of any detected problem (step 308). For example, diagnostic data may be received from sensors on the autonomous vehicle during the diagnostic check and may include information regarding the operation of the sensors or components of the autonomous vehicle. A model applied to the diagnostic data may analyze it to determine which issues the autonomous vehicle is experiencing or is about to experience (i.e., the analysis may be predictive or purely diagnostic).
If the analysis of the diagnostic data determines that the problem is within a high criticality level (step 310), the autonomous vehicle may be safely stopped (step 312). For example, the diagnostic data may have values above or below a threshold, indicating a critical and/or very urgent problem. If there are passengers in the autonomous vehicle, the system may call the nearest available backup service to pick up those passengers (step 314). In some embodiments, the system may communicate to the passengers (via a tablet in the autonomous vehicle, a notification in the ridesharing application, etc.) that an alternative vehicle is being provided. The system may call a tow truck to retrieve the autonomous vehicle and take it to a maintenance facility (e.g., the nearest maintenance facility with availability for autonomous vehicles) where the autonomous vehicle can be serviced (step 316). A work order may be created in the maintenance system using pre-filled instructions and the diagnostic information (step 318). The autonomous vehicle may then be removed from ridesharing availability (step 320).
For example, a work order may include pre-filled instructions tailored to different maintenance facilities having particular expertise, particular technicians, and/or particular parts. For example, some maintenance facilities may be suitable for maintenance, while others are best suited for charging autonomous vehicles. When the autonomous vehicle needs to be automatically routed to a maintenance facility, the maintenance facility may be selected, and the system may pre-fill instructions in the work order for that facility, based on certain parameters. These parameters may direct the autonomous vehicle to a repair shop with appropriate technicians and appropriate parts to meet the specific needs of the autonomous vehicle. For example, a work order may be directed to a maintenance facility having technicians specializing in lidar sensor systems and may be pre-filled with instructions related to servicing the lidar system of the autonomous vehicle. Other maintenance facilities may have technicians specializing in camera sensor systems, radar sensor systems, Global Positioning System (GPS) sensor systems, Inertial Measurement Units (IMUs), infrared sensor systems, laser sensor systems, sonar sensor systems, and/or other sensor systems, and so the work order may be pre-filled with instructions for servicing the corresponding system of the autonomous vehicle. Still other maintenance facilities may have technicians specializing in more general vehicle systems, such as the propulsion system, braking system, steering system, safety systems, or cabin systems (e.g., temperature control, lighting, etc.) of the autonomous vehicle; the work order may correspondingly be pre-filled with instructions for servicing the particular system in need of service. Some maintenance facilities are used to charge autonomous vehicles, and a number of their charging stations may already be occupied (e.g., there may be only a limited number of open charging stations that can service the autonomous vehicle). In this case, a work order may be generated for a maintenance facility with availability by pre-populating the work order with autonomous vehicle charging instructions. Diagnostic information relating to the repair issue may also be included.
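For illustration, work-order creation with pre-filled instructions might be sketched as below; the template text, facility fields, and diagnostic keys are assumptions for the example.

```python
# Illustrative work-order creation with pre-filled instructions chosen from the
# facility's specialty; all field names and text are assumptions.
def create_work_order(vehicle_id: str, issue: str, facility: dict, diagnostics: dict) -> dict:
    templates = {
        "lidar": "Inspect and service lidar sensor system.",
        "charging": "Charge vehicle to full; verify charge-port health.",
        "general": "Inspect affected vehicle system per attached diagnostics.",
    }
    key = issue if issue in facility.get("specialties", set()) else "general"
    return {
        "vehicle_id": vehicle_id,
        "facility": facility["name"],
        "instructions": templates.get(key, templates["general"]),
        "diagnostics": diagnostics,   # attach the diagnostic information
    }

order = create_work_order("av-2", "lidar",
                          {"name": "Lidar shop", "specialties": {"lidar"}},
                          {"lidar_return_rate": 0.42})
print(order)
```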
If the analysis of the diagnostic data determines that the problem is within a moderate criticality level (step 322), the autonomous vehicle may complete its current operation before being dispatched to an appropriate maintenance facility. A moderate criticality level may be determined when the values of the diagnostic data are within a range indicating that the problem, although not critical or very urgent, should be fixed within a certain period of time. For example, because there are multiple cameras on an autonomous vehicle, the autonomous vehicle may still operate safely when one camera goes out, until the camera is repaired. In this case, the autonomous vehicle may finish dropping off any passengers (step 324), and, before the specific time period has elapsed, the autonomous vehicle may be scheduled to drive autonomously to a maintenance facility that can service the detected problem (e.g., the nearest maintenance facility available to the autonomous vehicle) (step 326). This may be accomplished via a work order created with pre-populated instructions and/or diagnostic information, similar to that discussed above. The autonomous vehicle may then be removed from ridesharing availability (step 320).
If the analysis of the diagnostic data determines that the problem is within a low criticality level (step 328), a work order may be scheduled for the future (e.g., for a time when a maintenance facility is available and the autonomous vehicle is not booked by passengers) using the pre-populated instructions and diagnostic information (step 330). This may be accomplished via a work order created with pre-populated instructions and/or diagnostic information, similar to that discussed above. At the scheduled work order time, the autonomous vehicle may be dispatched to drive autonomously to a maintenance facility that can service the detected problem (e.g., the nearest maintenance facility available to the autonomous vehicle) (step 332). A low criticality level may be determined when the values of the diagnostic data are below a threshold, indicating that the problem is not critical or sufficiently urgent.
In some embodiments, the autonomous vehicle may recalibrate its sensors. For example, a camera may not be able to properly detect or view the environment, which may indicate that it is not properly calibrated. Based on how far the current calibration values deviate from the expected range, the system can determine the criticality of the calibration problem. The autonomous vehicle may then be scheduled to drive to the nearest location where its sensors can be recalibrated. In some embodiments, the autonomous vehicle may be routed to a billboard on the highway used for calibration checks, which enables recalibration of sensors within the autonomous vehicle.
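A hedged sketch of grading a calibration deviation into a criticality level follows; the deviation thresholds are assumptions chosen for illustration, not values specified by the patent.

```python
# Illustrative calibration check: criticality grows with how far the current
# calibration deviates from the expected range.
def calibration_criticality(current: float, expected_low: float, expected_high: float) -> str:
    if expected_low <= current <= expected_high:
        return "none"
    nearest = expected_low if current < expected_low else expected_high
    deviation = abs(current - nearest) / (expected_high - expected_low)
    if deviation > 0.5:
        return "high"
    return "medium" if deviation > 0.1 else "low"

print(calibration_criticality(current=1.3, expected_low=0.0, expected_high=1.0))  # "medium"
```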
FIG. 3B illustrates a flowchart representation of a criticality determination for a detected problem, according to some embodiments. The autonomous vehicle may initiate diagnostics (step 350), such as through a continuous or periodic diagnostic check by a diagnostic service, which may detect one or more problems that the autonomous vehicle is experiencing or is expected to experience. If the diagnostic service determines that there are any problems with starting and moving the physical components of the autonomous vehicle (step 352), the criticality level is classified as high (step 354). If the diagnostic service determines that sensor calibration values are outside of an acceptable range, as determined by a server in the back end (e.g., a remote network) (step 356), the criticality level is also classified as high (step 354).
However, if the diagnostic service does not detect the above but detects that hardware within the autonomous vehicle generates a non-critical warning (step 358), the criticality level is classified as medium (step 360).
If not, and the diagnostic service or a remote back-end server indicates that the autonomous vehicle should undergo preventative maintenance (step 362), the criticality level is classified as low (step 364). In some embodiments, whether preventative maintenance is needed may be based on driving history, sensor data, and/or overall fleet performance and analysis. If no problems are detected (step 366), the diagnosis may end or be deferred to a later time.
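For illustration, the FIG. 3B decision flow could be encoded as a simple ordered set of checks; the predicate names below are assumptions standing in for the checks described in steps 352 through 366.

```python
# Illustrative encoding of the FIG. 3B criticality decision flow.
def determine_criticality(cannot_start_or_move: bool,
                          calibration_out_of_range: bool,
                          noncritical_hardware_warning: bool,
                          preventative_maintenance_due: bool) -> str | None:
    if cannot_start_or_move or calibration_out_of_range:
        return "high"
    if noncritical_hardware_warning:
        return "medium"
    if preventative_maintenance_due:
        return "low"
    return None  # no problem detected; end or defer diagnosis

print(determine_criticality(False, False, True, False))  # -> "medium"
```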
As described herein, one aspect of the present technology is the collection and use of data available from various sources to improve quality and experience. The present invention contemplates that, in some cases, this collected data may include personal information. The present invention contemplates that entities involved with such personal information will honor and respect privacy policies and practices.
FIG. 4 shows an example of a computing system 400, which may be, for example, any computing device making up the internal computing system 110, the remote computing system 150, a (potential) passenger device executing the ridesharing application 170, or any component thereof, in which the components of the system communicate with each other using a connection 405. The connection 405 may be a physical connection via a bus, or a direct connection into the processor 410, such as in a chipset architecture. The connection 405 may also be a virtual connection, a networked connection, or a logical connection.
In some embodiments, computing system 400 is a distributed system in which the functionality described in this disclosure may be distributed within a data center, multiple data centers, a peer-to-peer network, and the like. In some embodiments, one or more of the described system components represent many such components, each performing some or all of the functionality of the described components. In some embodiments, a component may be a physical or virtual device.
The example system 400 includes at least one processing unit (CPU or processor) 410 and a connection 405 that couples various system components, including a system memory 415, such as a read-only memory (ROM) 420 and a random access memory (RAM) 425, to the processor 410. The computing system 400 may include a cache 412 of high-speed memory directly connected to, in close proximity to, or integrated as part of the processor 410.
The processor 410 may include any general purpose processor and a hardware or software service, such as services 432, 434, and 436 stored in storage device 430, configured to control the processor 410, as well as a special purpose processor in which software instructions are incorporated into the actual processor design. The processor 410 may essentially be a completely self-contained computing system containing multiple cores or processors, a bus, a memory controller, a cache, and the like. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 400 includes input device 445, which input device 445 may represent any number of input mechanisms, such as a microphone for speech, a touch sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth. Computing system 400 may also include output device 435, which output device 435 may be one or more of a variety of output mechanisms known to those skilled in the art. In some cases, a multimodal system may enable a user to provide multiple types of input/output to communicate with the computing system 400. Computing system 400 may include a communication interface 440, where communication interface 440 may generally control and manage user inputs and system outputs. There is no limitation to the operation on any particular hardware arrangement, and thus the essential features herein may be readily replaced with modified hardware or firmware arrangements (as they are developed).
Storage device 430 may be a non-volatile storage device and may be a hard disk or other type of computer-readable medium that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, solid state storage devices, digital versatile disks, random access memories (RAMs), read only memories (ROMs), and/or some combination of these devices.
The storage device 430 may include software services, servers, services, etc., which when executed by the processor 410 cause the system to perform functions. In some embodiments, hardware services that perform particular functions may include software components stored in computer-readable media and necessary hardware components, such as processor 410, connections 405, output devices 435, and so forth, to perform the functions.
For clarity of explanation, in some cases the present technology may be presented as including individual functional blocks comprising functional blocks of apparatus, apparatus components, steps or routines in a method embodied in software, or a combination of hardware and software.
Any of the steps, operations, functions or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some embodiments, a service may be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes software associated with the service. In some embodiments, a service is a program or collection of programs that perform a particular function. In some embodiments, a service may be considered a server. The memory may be a non-transitory computer readable medium.
In some embodiments, the computer readable storage device, medium, and memory may comprise a cable or wireless signal containing a bitstream or the like. However, when mentioned, a non-transitory computer-readable storage medium expressly excludes media such as energy, carrier wave signals, electromagnetic waves, and signals per se.
The methods according to the examples described above may be implemented using computer-executable instructions stored or otherwise retrieved from a computer-readable medium. Such instructions may include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Part of the computer resources used may be accessible via a network. The executable computer instructions may be, for example, binaries, intermediate format instructions, such as assembly language, firmware, or source code. Examples of computer readable media that may be used to store instructions, information used, and/or information created during a method according to described examples include magnetic or optical disks, solid state storage devices, flash memory, USB devices equipped with non-volatile memory, network storage devices, and so forth.
Devices implementing methods according to these disclosures may include hardware, firmware, and/or software, and may take any of a variety of physical dimensions. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and the like. The functionality described herein may also be embodied in a peripheral device or add-in card. As a further example, such functionality may also be implemented on a circuit board between different chips or different processes executing in a single device.
Instructions, media for conveying such instructions, computing resources for performing them, and other structures for supporting such computing resources are means for providing the functionality described in these disclosures.
While various examples and other information are used to explain aspects within the scope of the appended claims, no limitations are intended to the claims based on the specific features or arrangements of such examples, as one of ordinary skill would be able to derive various implementations using these examples. Furthermore, although some subject matter may have been described in language specific to examples of structural features and/or methodological steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts. For example, such functionality may be distributed differently or performed in components different than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.

Claims (20)

1. A method of self-maintaining an autonomous vehicle, comprising:
analyzing diagnostic data captured by sensors of the autonomous vehicle;
determining that the autonomous vehicle requires maintenance based on the analysis of the diagnostic data; and
based on the determination, sending the analysis of the diagnostic data to a routing service and receiving instructions from the routing service to dynamically route the autonomous vehicle according to a maintenance action.
2. The method of claim 1, wherein the determining that the autonomous vehicle requires maintenance is based on the analysis of the diagnostic data that detects operational problems with the autonomous vehicle, wherein the analysis is in accordance with an autonomous vehicle operational model generated by a fleet of autonomous vehicles.
3. The method of claim 2, wherein a criticality level of an operational issue of the autonomous vehicle is determined based on the model, and wherein the model is continuously updated based on diagnostic data from the fleet.
4. The method of claim 1, further comprising:
classifying an operational issue of the autonomous vehicle as a first criticality level; and
receiving an instruction from the routing service to stop the autonomous vehicle based on the operational issue being within the first criticality level.
5. The method of claim 4, further comprising:
determining, by the sensor, that a passenger is within the autonomous vehicle; and
based on the determination, sending a request for a backup service.
6. The method of claim 1, further comprising:
classifying an operational problem of the autonomous vehicle as a second criticality level; and
receiving an instruction from the routing service to dynamically drive the autonomous vehicle to a maintenance facility capable of servicing the operational problem after a passenger in the autonomous vehicle has alighted, based on the operational problem being within the second criticality level.
7. The method of claim 1, further comprising:
classifying an operational issue of the autonomous vehicle as a third criticality level;
receiving, from the routing service, based on the operational issue being within the third criticality level, a confirmation that a work order has been scheduled for a predetermined time at a maintenance facility capable of servicing the operational issue; and
receiving instructions from the routing service to dynamically drive the autonomous vehicle to the maintenance facility capable of servicing the operational issue at the predetermined time.
8. A system, comprising:
one or more sensors of the autonomous vehicle; and
a processor to execute instructions stored in a memory, wherein execution of the instructions by the processor performs the following:
analyzing diagnostic data captured by the one or more sensors of the autonomous vehicle;
determining that the autonomous vehicle requires maintenance based on the analysis of the diagnostic data; and
based on the determination, sending the analysis of the diagnostic data to a routing service and receiving instructions from the routing service to dynamically route the autonomous vehicle according to a maintenance action.
9. The system of claim 8, wherein the determination that the autonomous vehicle requires maintenance is based on the analysis of the diagnostic data that detects operational problems with the autonomous vehicle, wherein the analysis is in accordance with an autonomous vehicle operational model generated by a fleet of autonomous vehicles.
10. The system of claim 9, wherein a criticality level of an operational issue of the autonomous vehicle is determined based on the model, and wherein the model is continuously updated based on diagnostic data from the fleet.
11. The system of claim 8, wherein execution of the instructions by the processor further performs the following:
classifying an operational issue of the autonomous vehicle as having a first criticality level; and
receiving an instruction from the routing service to stop the autonomous vehicle based on the operational issue being within the first criticality level.
12. The system of claim 11, wherein execution of the instructions by the processor further performs the following:
determining, by the one or more sensors, that a passenger is within the autonomous vehicle; and
based on the determination, sending a request for backup service.
13. The system of claim 8, wherein execution of the instructions by the processor further performs the following:
classifying an operational issue of the autonomous vehicle as having a second criticality level; and
receiving, based on the operational issue being within the second criticality level, an instruction from the routing service to dynamically drive the autonomous vehicle to a maintenance facility capable of servicing the operational issue once a passenger in the autonomous vehicle has alighted.
14. The system of claim 8, wherein execution of the instructions by the processor further performs the following:
classifying an operational issue of the autonomous vehicle as having a third criticality level;
receiving, based on the operational issue being within the third criticality level, a confirmation from the routing service that a work order has been placed for the autonomous vehicle to arrive at a predetermined time at a maintenance facility capable of servicing the operational issue; and
receiving instructions from the routing service to dynamically drive the autonomous vehicle to the maintenance facility capable of servicing the operational issue at the predetermined time.
15. A non-transitory computer-readable medium comprising instructions that, when executed by a computing system, cause the computing system to:
analyzing diagnostic data captured by sensors of an autonomous vehicle;
determining that the autonomous vehicle requires maintenance based on the analysis of the diagnostic data; and
based on the determination, sending the analysis of the diagnostic data to a routing service and receiving instructions from the routing service to dynamically route the autonomous vehicle according to a maintenance action.
16. The non-transitory computer-readable medium of claim 15, wherein the determining that the autonomous vehicle requires maintenance is based on the analysis of the diagnostic data detecting an operational issue with the autonomous vehicle, and wherein the analysis is performed in accordance with an autonomous vehicle operational model generated by a fleet of autonomous vehicles.
17. The non-transitory computer-readable medium of claim 16, wherein a criticality level of the operational issue of the autonomous vehicle is determined based on the autonomous vehicle operational model, and wherein the model is continuously updated based on diagnostic data from the fleet of autonomous vehicles.
18. The non-transitory computer-readable medium of claim 15, the instructions further causing the computing system to:
classifying an operational issue of the autonomous vehicle as having a first criticality level;
receiving an instruction from the routing service to stop the autonomous vehicle based on the operational issue being within the first criticality level;
determining, by the sensor, that a passenger is within the autonomous vehicle; and
based on the determination, sending a request for backup service.
19. The non-transitory computer-readable medium of claim 15, the instructions further causing the computing system to:
classifying an operational issue of the autonomous vehicle as having a second criticality level; and
receiving, based on the operational issue being within the second criticality level, an instruction from the routing service to dynamically drive the autonomous vehicle to a maintenance facility capable of servicing the operational issue once a passenger in the autonomous vehicle has alighted.
20. The non-transitory computer-readable medium of claim 15, the instructions further causing the computing system to:
classifying an operational issue of the autonomous vehicle as having a third criticality level;
receiving, based on the operational issue being within the third criticality level, a confirmation from the routing service that a work order has been placed for the autonomous vehicle to arrive at a predetermined time at a maintenance facility capable of servicing the operational issue; and
receiving instructions from the routing service to dynamically drive the autonomous vehicle to the maintenance facility capable of servicing the operational issue at the predetermined time.
CN201980094565.1A 2019-05-13 2019-12-23 Self-maintenance autonomous vehicle procedure Pending CN113874889A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/410,911 US10916072B2 (en) 2019-05-13 2019-05-13 Self-maintaining autonomous vehicle procedure
US16/410,911 2019-05-13
PCT/US2019/068322 WO2020231479A1 (en) 2019-05-13 2019-12-23 Self-maintaining autonomous vehicle procedure

Publications (1)

Publication Number Publication Date
CN113874889A true CN113874889A (en) 2021-12-31

Family

ID=69326710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980094565.1A Pending CN113874889A (en) 2019-05-13 2019-12-23 Self-maintenance autonomous vehicle procedure

Country Status (4)

Country Link
US (3) US10916072B2 (en)
EP (1) EP3924902A1 (en)
CN (1) CN113874889A (en)
WO (1) WO2020231479A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10795356B2 (en) * 2017-08-31 2020-10-06 Uatc, Llc Systems and methods for determining when to release control of an autonomous vehicle
CN112912282A (en) * 2018-11-27 2021-06-04 住友电气工业株式会社 Vehicle failure prediction system, monitoring device, vehicle failure prediction method, and vehicle failure prediction program
AU2019464395A1 (en) * 2019-09-04 2022-03-24 Beijing Tusen Zhitu Technology Co., Ltd. Method and system for auto-driving vehicle service
JP7382192B2 (en) * 2019-09-30 2023-11-16 株式会社Subaru vehicle
CN113495547A (en) * 2020-03-20 2021-10-12 北京智行者科技有限公司 Real-time safe unmanned fault diagnosis and protection method and system
US20220414568A1 (en) * 2021-06-24 2022-12-29 Honeywell International Inc Systems and methods for determining vehicle capability for dispatch
US11513814B1 (en) 2021-06-28 2022-11-29 Nvidia Corporation State suspension for optimizing start-up processes of autonomous vehicles
US11941926B2 (en) * 2021-08-04 2024-03-26 Ford Global Technologies, Llc Vehicle variation remediation
EP4141406A1 (en) * 2021-08-31 2023-03-01 Volvo Autonomous Solutions AB Remote perception station as maintenance trigger for autonomous vehicles deployed in autonomous transport solutions

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170278312A1 (en) * 2016-03-22 2017-09-28 GM Global Technology Operations LLC System and method for automatic maintenance
US9811086B1 (en) * 2016-12-14 2017-11-07 Uber Technologies, Inc. Vehicle management system
WO2017205961A1 (en) * 2016-06-04 2017-12-07 Marquardt Matthew J Management and control of driverless vehicles
US20170364869A1 (en) * 2016-06-17 2017-12-21 Toyota Motor Engineering & Manufacturing North America, Inc. Automatic maintenance for autonomous vehicle
US10049505B1 (en) * 2015-02-27 2018-08-14 State Farm Mutual Automobile Insurance Company Systems and methods for maintaining a self-driving vehicle

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140074345A1 (en) * 2012-09-13 2014-03-13 Chanan Gabay Systems, Apparatuses, Methods, Circuits and Associated Computer Executable Code for Monitoring and Assessing Vehicle Health
US9958864B2 (en) * 2015-11-04 2018-05-01 Zoox, Inc. Coordination of dispatching and maintaining fleet of autonomous vehicles
US10401852B2 (en) * 2015-11-04 2019-09-03 Zoox, Inc. Teleoperation system and method for trajectory modification of autonomous vehicles
US10521977B2 (en) * 2017-03-27 2019-12-31 GM Global Technology Operations LLC Methods and systems for integrated vehicle sensor calibration and maintenance
WO2018204253A1 (en) * 2017-05-01 2018-11-08 PiMios, LLC Automotive diagnostics using supervised learning models
US10444023B2 (en) * 2017-05-08 2019-10-15 Arnold Chase Mobile device for autonomous vehicle enhancement system
US10710590B2 (en) * 2017-12-19 2020-07-14 PlusAI Corp Method and system for risk based driving mode switching in hybrid driving
US10726644B2 (en) * 2017-12-22 2020-07-28 Lyft, Inc. Fleet maintenance management for autonomous vehicles
US10670411B2 (en) * 2017-12-29 2020-06-02 Lyft Inc. Efficient matching of service providers and service requests across a fleet of autonomous vehicles
US10579054B2 (en) * 2018-01-29 2020-03-03 Uatc, Llc Systems and methods for on-site recovery of autonomous vehicles
US11614735B1 (en) * 2018-05-18 2023-03-28 Uatc, Llc Systems and methods for remote status detection of autonomous vehicles
US20190378350A1 (en) * 2018-06-07 2019-12-12 Jeffrey Paul DeRouen Method for directing, scheduling, and facilitating maintenance requirements for autonomous vehicle
US11126496B2 (en) * 2018-12-27 2021-09-21 Intel Corporation Technologies for re-programmable hardware in autonomous vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10049505B1 (en) * 2015-02-27 2018-08-14 State Farm Mutual Automobile Insurance Company Systems and methods for maintaining a self-driving vehicle
US20170278312A1 (en) * 2016-03-22 2017-09-28 GM Global Technology Operations LLC System and method for automatic maintenance
WO2017205961A1 (en) * 2016-06-04 2017-12-07 Marquardt Matthew J Management and control of driverless vehicles
US20170364869A1 (en) * 2016-06-17 2017-12-21 Toyota Motor Engineering & Manufacturing North America, Inc. Automatic maintenance for autonomous vehicle
US9811086B1 (en) * 2016-12-14 2017-11-07 Uber Technologies, Inc. Vehicle management system

Also Published As

Publication number Publication date
US10916072B2 (en) 2021-02-09
US10957123B2 (en) 2021-03-23
US11961337B2 (en) 2024-04-16
EP3924902A1 (en) 2021-12-22
US20200364950A1 (en) 2020-11-19
US20210142587A1 (en) 2021-05-13
US20200364949A1 (en) 2020-11-19
WO2020231479A1 (en) 2020-11-19

Similar Documents

Publication Publication Date Title
CN113874889A (en) Self-maintenance autonomous vehicle procedure
US20240071150A1 (en) Vehicle Management System
CN107490488B (en) Vehicle health check via noise and vibration levels
US10571908B2 (en) Autonomous vehicle failure mode management
US9830749B2 (en) Systems and methods for executing custom fleet vehicle management scripts
WO2020223249A1 (en) Vehicle service center dispatch system
US20220236729A1 (en) Self-maintaining autonomous vehicle procedure
US20240010212A1 (en) Automatic testing of autonomous vehicles
US11970189B2 (en) Application-based controls for wheelchair-accessible autonomous vehicle
CA3047095C (en) Vehicle management system
US11636715B2 (en) Using dynamic triggers in dangerous situations to view sensor data for autonomous vehicle passengers
US20230401961A1 (en) System and method for detecting severe road events
US20210375078A1 (en) Automated vehicle body damage detection
US20230040713A1 (en) Simulation method for autonomous vehicle and method for controlling autonomous vehicle
CN117461061A (en) Device health code broadcast over hybrid vehicle communication network
US10703383B1 (en) Systems and methods for detecting software interactions for individual autonomous vehicles
US20230142642A1 (en) Vehicle control system, apparatus, and method
JP2020071594A (en) History storage device and history storage program
US12056670B2 (en) Automated work ticketing and triage
CN113805546B (en) Model deployment method and device, computer equipment and storage medium
US20240135277A1 (en) Operation management apparatus
CN114407910B (en) Fault processing method, device and storage medium for intelligent driving vehicle
JP7476854B2 (en) Information processing device, program, and information processing method
US20240144743A1 (en) Transducer-based suspension health monitoring system
JP2024061506A (en) Remote control supporting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination