CN114154510B - Control method and device for automatic driving vehicle, electronic equipment and storage medium - Google Patents
- Publication number
- CN114154510B (application CN202111439754.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- semantic data
- vehicle
- target semantic
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
Abstract
The embodiments of the application disclose a control method and device for an autonomous vehicle, an electronic device, and a storage medium, relating to the technical field of intelligent driving. The method comprises the following steps: performing data processing on raw data collected by data-collection devices configured on a vehicle to obtain raw semantic data, and grading the raw semantic data according to its characteristics to obtain target semantic data of at least two levels; determining, from the target semantic data of the at least two levels, the target semantic data of the level corresponding to the current driving mode of the vehicle; and controlling the vehicle to drive in the current driving mode using the target semantic data of that level. The technical scheme provided by the embodiments of the application maximizes the utility of the computing power available for autonomous driving and facilitates the realization of higher-level autonomous-driving planning algorithms.
Description
Technical Field
The embodiments of the application relate to the technical field of intelligent driving, and in particular to a control method and device for an autonomous vehicle, an electronic device, and a storage medium.
Background
The increasing intelligence of automobiles has driven a transformation of the electronic and electrical architecture, and a centralized electronic and electrical architecture has become the current trend of development: by integrating and classifying the various control functions onto domain controllers, each function of the automobile is controlled within a specific domain. In the classification proposed by Bosch, which is currently mainstream and widely accepted, the domains comprise the power domain, the chassis domain, the cockpit domain, the vehicle body domain, and the autonomous driving domain.
In the prior art, within the autonomous driving domain, the software architecture of the autonomous-driving domain controller is made complex by the algorithmic complexity of handling multiple types of sensors, and the computing power of the controller is wasted because the multiple types of sensors cooperate poorly and the layering of semantics is fuzzy.
Disclosure of Invention
The embodiments of the application provide a control method and device for an autonomous vehicle, an electronic device, and a storage medium, so as to maximize the utility of the computing power available for autonomous driving and to facilitate the realization of higher-level autonomous-driving planning algorithms.
In a first aspect, an embodiment of the present application provides a control method for an autonomous vehicle, including:
performing data processing on raw data collected by data-collection devices configured on a vehicle to obtain raw semantic data, and grading the raw semantic data according to its characteristics to obtain target semantic data of at least two levels;
determining, from the target semantic data of the at least two levels, the target semantic data of the level corresponding to the current driving mode of the vehicle; and
controlling the vehicle to drive in the current driving mode using the target semantic data of the level corresponding to the current driving mode.
In a second aspect, an embodiment of the present application provides a control apparatus for an autonomous vehicle, the apparatus including:
a grading module, configured to perform data processing on raw data collected by data-collection devices configured on a vehicle to obtain raw semantic data, and to grade the raw semantic data according to its characteristics to obtain target semantic data of at least two levels;
a determining module, configured to determine, from the target semantic data of the at least two levels, the target semantic data of the level corresponding to the current driving mode of the vehicle; and
a control module, configured to control the vehicle to drive in the current driving mode using the target semantic data of the level corresponding to the current driving mode.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors; and
a storage device for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the control method for an autonomous vehicle according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the control method for an autonomous vehicle according to any embodiment of the present application.
The embodiments of the application provide a control method and device for an autonomous vehicle, an electronic device, and a storage medium. The method comprises the following steps: performing data processing on raw data collected by data-collection devices configured on a vehicle to obtain raw semantic data, and grading the raw semantic data according to its characteristics to obtain target semantic data of at least two levels; determining, from the target semantic data of the at least two levels, the target semantic data of the level corresponding to the current driving mode of the vehicle; and controlling the vehicle to drive in the current driving mode using the target semantic data of that level. The application maximizes the utility of the computing power available for autonomous driving and facilitates the realization of higher-level autonomous-driving planning algorithms.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
Fig. 1 is a first schematic flow chart of a control method for an autonomous vehicle according to an embodiment of the present application;
Fig. 2 is a second schematic flow chart of a control method for an autonomous vehicle according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the hierarchy of semantic data according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a control device for an autonomous vehicle according to an embodiment of the present application;
Fig. 5 is a block diagram of an electronic device for implementing a control method for an autonomous vehicle according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings; it is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Example 1
Fig. 1 is a schematic flow chart of a control method for an autonomous vehicle according to an embodiment of the present application. The embodiment is applicable to the case in which, after the raw data collected by different sensors are processed and semantically graded, the corresponding target semantic data are invoked according to the current driving mode to perform unmanned driving. The control method for an autonomous vehicle provided by the embodiment of the application can be executed by the control device for an autonomous vehicle provided by the embodiment of the application; the device can be implemented in software and/or hardware and integrated into the electronic device that executes the method. Preferably, the electronic device in the embodiment of the present application may be a vehicle-mounted terminal.
Referring to fig. 1, the method of the present embodiment includes, but is not limited to, the following steps:
S110, perform data processing on the raw data collected by the data-collection devices configured on the vehicle to obtain raw semantic data, and grade the raw semantic data according to its characteristics to obtain target semantic data of at least two levels.
The data-collection devices include at least one of a vision sensor, a lidar sensor, a millimeter-wave radar sensor, an ultrasonic radar sensor, and an integrated navigation device. The raw data include at least one of static feature information of a vehicle, dynamic feature information of a vehicle, road information, and environmental information of the environment in which the vehicle is currently located. Raw semantic data refers to the data obtained after the raw data have undergone data processing to different degrees.
In the embodiment of the application, by configuring different data-collection devices on the vehicle, such as a vision sensor, a lidar sensor, a millimeter-wave radar sensor, an ultrasonic radar sensor, and an integrated navigation device, raw data related to a vehicle (such as static feature information and dynamic feature information of the vehicle) or related to the current environment of the vehicle (environmental information) can be collected, for example: image information of the vehicle's surroundings collected by a vision sensor (such as a camera), point-cloud data of the surroundings collected by a lidar sensor, and electromagnetic-wave information of the surroundings collected by a millimeter-wave or ultrasonic radar sensor. The static feature information of a vehicle includes its color, category, and size; the dynamic feature information of a vehicle includes its position, attitude, and speed.
Note that, for the static and dynamic feature information in this embodiment, the vehicle in question may be the vehicle configured with the data-collection devices (i.e., the host vehicle), a nearby vehicle in front of, behind, to the left of, or to the right of the host vehicle, or another vehicle.
Specifically, grading the raw semantic data according to its characteristics to obtain target semantic data of at least two levels includes: determining the raw data as first-level target semantic data; processing the first-level target semantic data to obtain semantic data that has physical dimensions but is unrelated to vehicle driving, and determining it as second-level target semantic data; processing the first-level and second-level target semantic data to obtain semantic data related to vehicle driving, and determining it as third-level target semantic data; processing the third-level target semantic data to obtain semantic data identifying vehicle intention, and determining it as fourth-level target semantic data; and combining the fourth-level target semantic data with at least one of the first-level, second-level, and third-level target semantic data to obtain fifth-level target semantic data.
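The five-level grading described above can be sketched as a pipeline in which each level is derived from the levels below it. This is an illustrative sketch only, not the patent's claimed implementation; every function, field, and threshold here is a hypothetical placeholder:

```python
from dataclasses import dataclass, field


def extract_physical_attributes(raw):
    # Level-2 sketch: physical attributes with units/dimensions,
    # not yet related to driving (hypothetical conversion factor).
    return {"speed_mps": raw.get("wheel_ticks", 0) * 0.1}


def derive_driving_semantics(l1, l2):
    # Level-3 sketch: driving-related, generally dimensionless semantics.
    return {"speeding": l2["speed_mps"] > 33.0}


def recognize_intentions(l3):
    # Level-4 sketch: intention recognition / behavior prediction.
    return {"predicted_behavior": "brake" if l3["speeding"] else "keep"}


def combine_levels(l4, l2, l3):
    # Level-5 sketch: combine higher- and lower-level semantics,
    # e.g. drivable area = trajectory prediction + traffic rules.
    return {"drivable": l4["predicted_behavior"] != "brake"}


@dataclass
class SemanticStore:
    """Holds target semantic data keyed by level (1-5)."""
    levels: dict = field(default_factory=dict)


def grade_raw_data(raw_data):
    store = SemanticStore()
    store.levels[1] = raw_data  # level 1: raw data, stored unprocessed
    store.levels[2] = extract_physical_attributes(raw_data)
    store.levels[3] = derive_driving_semantics(store.levels[1], store.levels[2])
    store.levels[4] = recognize_intentions(store.levels[3])
    store.levels[5] = combine_levels(store.levels[4], store.levels[2], store.levels[3])
    return store
```

Each level is stored separately (here in one dict, in the patent in per-level memories) so that lower levels remain available for reuse by higher-level algorithms.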
In the embodiment of the application, the target semantic data of at least two levels comprise low-level and high-level target semantic data: the first-level and second-level target semantic data can serve as the low-level target semantic data, and the third-level, fourth-level, and fifth-level target semantic data can serve as the high-level target semantic data. The low-level target semantic data can be processed or combined to obtain the high-level target semantic data.
In the embodiment of the application, the raw data collected by all the data-collection devices are used directly, without data processing, as the first-level target semantic data and are stored in the memory corresponding to that level. The advantage of this is that the raw data remain available as a basis for later servicing of the vehicle or for other uses. The first-level target semantic data are identical to the raw data and include at least one of static feature information of a vehicle, dynamic feature information of a vehicle, road information, and environmental information of the environment in which the vehicle is currently located.
In the embodiment of the application, the second-level target semantic data are attribute features calculated from the first-level target semantic data. They are biased toward physical attributes: their main characteristic is that they carry a unit of measurement or a dimension but do not yet carry semantic information related to vehicle driving. Examples include the speed and attitude of the host vehicle or the preceding vehicle, the position of the host vehicle or the preceding vehicle, the appearance features (color, category, or size) of the preceding vehicle, the physical attributes of the road (curvature, gradient, width, lane-line attributes), and polygon-labeling information. The second-level target semantic data may be stored in a corresponding memory.
Exemplarily, the speed of the preceding vehicle in the second-level target semantic data is determined as follows: from the point-cloud data of the vehicle's surroundings collected by the lidar sensor, combined with the distances measured across multiple frames, the speed, attitude, and acceleration of the preceding vehicle and its speed relative to the host vehicle are judged. The appearance features of the preceding vehicle in the second-level target semantic data are obtained by processing the image information collected by the vision sensor together with the point-cloud data collected by the lidar sensor. Automobiles, pedestrians, or obstacles in the image information collected by the vision sensor can also be labeled with polygons manually, semi-automatically, or automatically, for later use by the high-level target semantic data. The vehicle position in the second-level target semantic data can be determined by the integrated navigation device, which may combine a differential Global Positioning System (GPS) with an Inertial Measurement Unit (IMU); the positioning accuracy can reach the meter level. The physical attributes of the road are mainly provided by a high-precision map.
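As a rough illustration of how a preceding vehicle's relative speed might be estimated from per-frame lidar distances: the patent does not specify the algorithm, so the function names, the averaging scheme, and the constant-frame-rate assumption below are ours:

```python
def relative_speed(distances, frame_dt):
    """Estimate the preceding vehicle's relative speed (m/s) from
    per-frame lidar gap distances (m) sampled every frame_dt seconds.
    Positive means the gap is widening."""
    if len(distances) < 2:
        raise ValueError("need at least two frames")
    # Average frame-to-frame rate of change of the gap.
    deltas = [b - a for a, b in zip(distances, distances[1:])]
    return sum(deltas) / (len(deltas) * frame_dt)


def absolute_speed(ego_speed_mps, distances, frame_dt):
    # Preceding vehicle's speed = ego speed + rate of gap change.
    return ego_speed_mps + relative_speed(distances, frame_dt)
```

A real implementation would of course fuse many more cues (point-cloud registration, tracking filters); this sketch only shows the multi-frame-distance idea named in the text.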
In the embodiment of the application, the third-level target semantic data are semantic data related to vehicle driving. Their main characteristic is that they generally carry no physical dimension, so a geometric drivable area can be judged preliminarily. Examples include the historical track of a vehicle, vehicle behavior, target-lane attribution, sign semantics (e.g., speed limits), lane-line constraint semantics (e.g., parking spaces), lane direction (e.g., a left-turn lane), and legal-regulation constraint semantics. The third-level target semantic data may be stored in a corresponding memory.
The historical track in the third-level target semantic data is obtained by applying a multi-frame position-judgment superposition algorithm to the vehicle positions in the second-level target semantic data and drawing the vehicle's driving route. The vehicle behavior is obtained by comparing data such as the position, attitude, and speed of the vehicle in the second-level target semantic data against the conventional driving patterns in a scene library, so that the current vehicle behavior is judged; it can also be used in the high-level target semantic data (such as the fourth-level target semantic data) to predict the vehicle's future behavior. The target-lane attribution is obtained by combining instantaneous information with the road information. The sign semantics, lane-line constraint semantics, and lane direction are obtained by processing the road information captured by the vision sensor. The legal-regulation constraint semantics are obtained by combining the road information with the up-to-date laws and regulations in a database to determine the current limits on speed, lane changes, and the like.
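The multi-frame superposition of positions into a historical track could be sketched as follows; the patent does not give the algorithm's details, so the duplicate-filtering rule and the buffer length are hypothetical choices:

```python
def build_history_track(position_frames, max_len=50):
    """Superimpose per-frame (x, y) positions of a vehicle into a
    historical track, keeping at most max_len recent points."""
    track = []
    for pos in position_frames:
        # Drop repeated positions produced while the vehicle is stationary,
        # so the track describes the driving route rather than dwell time.
        if track and pos == track[-1]:
            continue
        track.append(pos)
    return track[-max_len:]
```

The resulting track is level-3 data derived purely from level-2 positions, matching the "reuse low-level semantic data" idea of the embodiment.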
In the embodiment of the application, the fourth-level target semantic data are semantic data for vehicle-intention recognition, obtained by further processing on the basis of the third-level target semantic data. They mainly comprise the following types: (1) a combination of two third-level semantics, such as a vehicle being in a certain lane; (2) data with both spatial and temporal dimensions, carrying a historical track and a trajectory prediction at the same time; (3) intention recognition and judgment, such as current behavior and behavior prediction. The fourth-level target semantic data may be stored in a corresponding memory.
In the embodiment of the application, the fifth-level target semantic data are a combination of several lower-level target semantic data; for example, adding the traffic rules (third-level target semantic data) to the behavior and trajectory prediction of a target (fourth-level target semantic data) yields the drivable area. The fifth-level target semantic data may be stored in a corresponding memory.
In the embodiment of the application, the target semantic data of at least two levels refers to the five levels of target semantic data above; the number of levels is not specifically limited in this implementation, and those skilled in the art can divide the target semantic data into any number of levels according to the actual application requirements. This implementation divides the unmanned-driving application-layer software into different functional modules according to the independence of the unmanned-driving software functions and the uniqueness of their interfaces. The partitioning of the target semantic data is performed according to these functional modules, or may be performed in other ways. In this implementation, as the level of the target semantic data goes from shallow to deep, the data evolve gradually from the initial raw data to elementary physical information, elementary driving information, and finally a drivable area. Based on the target semantic data, a linking mechanism between the sensor recognition-and-analysis algorithms and the functional modules can be established, enabling the data of different sensors to be analyzed and invoked at the software layer after transmission.
S120, determine, from the target semantic data of at least two levels, the target semantic data of the level corresponding to the current driving mode of the vehicle.
In the embodiment of the present application, after the target semantic data of at least two levels are determined in step S110, the target semantic data of the level corresponding to the current driving mode of the vehicle must be determined from among them.
Specifically, because the levels of the target semantic data are obtained by grading on data characteristics, the data characteristics of the data required to execute the current driving mode can be determined first, and then the target semantic data of the level corresponding to the current driving mode can be determined from the target semantic data of at least two levels according to those characteristics. Exemplarily, suppose the current driving mode is the adaptive cruise mode. Its greatest advantage over the constant-speed cruise mode is that, besides maintaining the speed preset by the driver, it can adaptively adjust the vehicle's speed according to the state of the preceding vehicle, and can even stop and restart the vehicle automatically at appropriate times. The data features required by this mode are therefore the dynamic feature data of the vehicles, such as the speed, attitude, and position information of the preceding vehicle and the host vehicle, so the second-level target semantic data are determined as corresponding to this mode.
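The mode-to-level lookup described above can be sketched as two small tables: one mapping each driving mode to the data features it needs, and one mapping each feature to the semantic level that provides it. Both tables are invented examples, not the patent's actual assignments:

```python
# Hypothetical per-mode feature requirements.
MODE_FEATURES = {
    "adaptive_cruise": {"speed", "attitude", "position"},
    "auto_parking": {"position", "parking_space_semantics"},
    "auto_emergency_braking": {"speed", "trajectory_prediction"},
}

# Hypothetical feature-to-level assignment, following the grading in the text:
# level 2 = physical attributes, level 3 = driving-related semantics,
# level 4 = intention recognition / prediction.
FEATURE_LEVEL = {
    "speed": 2,
    "attitude": 2,
    "position": 2,
    "parking_space_semantics": 3,
    "trajectory_prediction": 4,
}


def levels_for_mode(mode):
    """Return the set of semantic-data levels the given driving mode must read."""
    return {FEATURE_LEVEL[f] for f in MODE_FEATURES[mode]}
```

Under these assumed tables, adaptive cruise reads only level-2 data, while a mode needing prediction also pulls level-4 data, matching the "one level or several levels" remark below.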
Alternatively, the current driving mode may correspond to target semantic data of one level, or to target semantic data of several levels.
S130, control the vehicle to drive in the current driving mode using the target semantic data of the level corresponding to the current driving mode.
In the embodiment of the application, after determining the one or more levels of target semantic data corresponding to the current driving mode, the vehicle-mounted terminal controls the vehicle to drive in the current driving mode according to the target semantic data of those levels.
According to the technical scheme provided by this embodiment, raw semantic data are obtained by processing the raw data collected by the data-collection devices configured on the vehicle, and the raw semantic data are graded according to their characteristics to obtain target semantic data of at least two levels; the target semantic data of the level corresponding to the current driving mode of the vehicle are determined from the target semantic data of at least two levels; and the vehicle is controlled to drive in the current driving mode using the target semantic data of that level. The application divides the information perception of autonomous driving into clear levels of semantic data, so that the low-level target semantic data can be reused when autonomous-driving planning decisions are made and the high-level target semantic data can be used directly to control the vehicle in the current driving mode, thereby avoiding the waste of computing power caused by repeatedly invoking the raw data and improving the utilization of the computing power available for autonomous driving. In addition, establishing the high-level target semantic data makes the autonomous-driving planning algorithm more reasonable and simpler and makes the division between information perception and planning decisions clearer; since each algorithm works only within its own scope, a higher-level autonomous-driving planning algorithm can be realized.
Example two
Fig. 2 is a second schematic flow chart of a control method for an autonomous vehicle according to an embodiment of the present application; Fig. 3 is a schematic diagram of the hierarchy of semantic data according to an embodiment of the present application. This embodiment is optimized on the basis of the above embodiment, specifically as follows: it explains in detail the process of invoking the general semantic data.
Referring to fig. 2, the method of the present embodiment includes, but is not limited to, the following steps:
S210, perform data processing on the raw data collected by the data-collection devices configured on the vehicle to obtain raw semantic data, and grade the raw semantic data according to its characteristics to obtain target semantic data of at least two levels.
The data-collection devices include at least one of a vision sensor, a lidar sensor, a millimeter-wave radar sensor, an ultrasonic radar sensor, and an integrated navigation device. The raw data include at least one of static feature information of a vehicle, dynamic feature information of a vehicle, road information, and environmental information of the environment in which the vehicle is currently located. Raw semantic data refers to data obtained after the raw data have undergone data processing to different degrees, where the different degrees of processing include data annotation, data analysis (e.g., stream processing, interactive query, batch processing, machine learning, artificial intelligence), and visual data presentation.
Fig. 3 is a schematic diagram of the hierarchy of semantic data: the first-level target semantic data are the raw data, original physical signals without semantics; the second-level target semantic data are biased toward physical attributes but carry no semantics related to vehicle driving; the third-level target semantic data are semantic data related to vehicle driving; the fourth-level target semantic data include further-processed semantic data such as predictions of the preceding vehicle's behavior and trajectory; and the fifth-level target semantic data are the semantic data of the finally obtained drivable area. Each individual sensor has a different perception capability and describes some attribute of the vehicle's internal or external environment. If an ideal, completely accurate model were established for the whole perception system in autonomous driving, each sensor would correspond to a filter of that model, expressing some aspect of it. The perception part of autonomous driving is in fact an approximation of this ideal model achieved with as many sensors, plus algorithms, as possible: the more sensors available and the more accurate the algorithms, the smaller the deviation from the ideal model.
Optionally, in the semantic hierarchy model, the high-level semantic algorithms can access the low-level target semantic data, so that the low-level target semantic data can be used many times, computing power is saved, and a practical realization of the ideal perception model can ultimately be formed.
S220, extract, from the target semantic data of at least two levels, the general semantic data shared among different driving modes.
In an embodiment of the application, the driving modes of the vehicle include automatic parking, adaptive cruise, and automatic emergency braking. The general target semantic data that can be shared between automatic parking and adaptive cruise, between adaptive cruise and automatic emergency braking, and between automatic parking and automatic emergency braking are extracted from the target semantic data as the general semantic data. The advantage of this is that the autonomous-driving domain controller can coordinate and schedule its computing resources so as to maximize the utility of its computing power.
For example, the automatic parking mode on the one hand and the adaptive cruise and automatic emergency braking modes on the other are not started at the same time: once the speed exceeds 20 km/h, the automatic parking mode stops executing and the adaptive cruise and automatic emergency braking modes start, so the computing power otherwise reserved for automatic parking would sit idle. Because the general semantic data can be computed by the same electronic control unit, that computing power is saved.
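A minimal sketch of this sharing, assuming a per-frame cache inside one electronic control unit (the class and field names are illustrative, not from the application):

```python
class SharedSemanticStore:
    """Computes general semantic data once per sensor frame; every driving
    mode (parking, adaptive cruise, emergency braking) reads the cached
    result instead of recomputing it."""

    def __init__(self):
        self._cache = {}
        self.compute_count = 0  # how many expensive perception passes ran

    def general_semantics(self, frame_id: int) -> dict:
        if frame_id not in self._cache:
            self.compute_count += 1  # stand-in for the expensive step
            self._cache[frame_id] = {"obstacles": [], "lanes": []}
        return self._cache[frame_id]

store = SharedSemanticStore()
# Adaptive cruise and emergency braking both read frame 1: one computation.
acc_view = store.general_semantics(frame_id=1)
aeb_view = store.general_semantics(frame_id=1)
```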
S230, acquiring a current running mode of the vehicle, and judging whether the current running mode needs to share general semantic data with other running modes.
Wherein, the current driving mode of the vehicle comprises: automatic parking, adaptive cruise, or automatic emergency braking.
In the embodiment of the application, the vehicle-mounted terminal can acquire the current running mode of the vehicle from the controller corresponding to that mode, or acquire it in other ways. The vehicle-mounted terminal then judges whether the current running mode needs to share general semantic data with other running modes; if so, step S240 is executed; if not, step S250 is executed.
S240, if needed, the general semantic data is called.
In the embodiment of the application, if the current running mode needs to share general semantic data with other running modes, which indicates that shared general semantic data exists between the current running mode and the other running modes, the vehicle-mounted terminal calls the general semantic data and controls the vehicle to run in the current running mode according to it. The advantage of this arrangement is that general semantic data can be effectively shared among the independent controllers, avoiding reinventing the wheel: each controller's functions can reuse the perception results of the other controllers, avoiding wasted resources.
Alternatively, part of the data of the current driving mode may need to call the general semantic data, while another part needs target semantic data of the corresponding level determined from the target semantic data of at least two levels.
S250, if not needed, determining target semantic data of a grade corresponding to the current driving mode of the vehicle from the target semantic data of at least two grades.
In the embodiment of the application, if the current running mode does not need to share general semantic data with other running modes, which indicates that no shared general semantic data exists between the current running mode and the other running modes, the target semantic data of the grade corresponding to the current running mode of the vehicle is determined from the target semantic data of at least two grades.
Specifically, because the grades of the target semantic data are obtained by grading according to data characteristics, the data characteristics of the data required to execute the current running mode can be determined from the current running mode and the current environment of the vehicle, and the target semantic data of the grade corresponding to the current running mode can then be determined from the target semantic data of at least two grades according to those characteristics. For example, when the current driving mode is automatic parking and the required data features are dynamic feature data of the vehicle (such as vehicle position information) and parking-space information, the mode is determined to correspond to the second-level and third-level target semantic data.
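Following the parking example, the feature-to-grade lookup might be sketched as follows (the mapping below is an assumption for illustration, not defined in the application):

```python
# Hypothetical mapping from required data features to semantic-data grades.
FEATURE_TO_LEVEL = {
    "vehicle_dynamic": 2,        # e.g. vehicle position information
    "parking_space": 3,          # driving-related semantics
    "front_vehicle_intent": 4,   # intention/trajectory prediction
    "drivable_area": 5,          # final drivable-area semantics
}

def levels_for_features(required_features) -> set:
    """Return the set of target-semantic-data grades a mode must read."""
    return {FEATURE_TO_LEVEL[f] for f in required_features}

# Automatic parking needs dynamic vehicle data and parking-space info,
# so it maps to the second- and third-level target semantic data.
parking_levels = levels_for_features(["vehicle_dynamic", "parking_space"])
```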
And S260, controlling the vehicle to run in the current running mode through the target semantic data of the corresponding grade of the current running mode.
In the embodiment of the application, after determining one or more levels of target semantic data corresponding to the current running mode, the vehicle-mounted terminal controls the vehicle to run in the current running mode according to the one or more levels of target semantic data.
According to the technical scheme provided by this embodiment, original semantic data are obtained by processing the raw data collected by the data acquisition equipment configured in the vehicle, and are graded according to their characteristics to obtain target semantic data of at least two grades; general semantic data shared between different driving modes are extracted from the target semantic data of at least two grades; the current running mode of the vehicle is acquired, and it is judged whether the current running mode needs to share general semantic data with other running modes; if so, the general semantic data are called; if not, target semantic data of the grade corresponding to the current running mode are determined from the target semantic data of at least two grades; and the vehicle is controlled to run in the current running mode through the target semantic data of that grade. By extracting the general semantic data, the application enables effective sharing of that data, lets the automatic driving domain controller coordinate and schedule computing resources to maximize computing-power utility, and is beneficial to realizing higher-level automatic driving planning algorithms.
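The S210-S260 decision flow summarized above can be sketched as follows; the helper and the mode-to-grade table are placeholders standing in for the application's unspecified processing steps:

```python
def data_features_for(mode: str) -> list:
    """Hypothetical mode-to-grade table (only parking is given in the text)."""
    return {"automatic_parking": [2, 3]}.get(mode, [3])

def select_semantics(mode: str, shares_general: bool,
                     general_data: dict, leveled_data: dict) -> dict:
    if shares_general:
        # S240: the mode shares general semantic data with other modes.
        return general_data
    # S250: otherwise pick the grade(s) matching the mode's data features.
    return {lvl: leveled_data[lvl] for lvl in data_features_for(mode)}
```

The selected data then feeds the controller that drives the vehicle in the current mode (S260).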
Example III
Fig. 4 is a schematic structural diagram of a control device for an automatic driving vehicle according to an embodiment of the present application; as shown in fig. 4, the device 400 may include:
The grading module 410 is configured to perform data processing on original data collected by a data collection device configured in a vehicle to obtain original semantic data, and grade the original semantic data according to features of the original semantic data to obtain target semantic data of at least two grades.
A determining module 420, configured to determine target semantic data of a level corresponding to the current driving mode of the vehicle from the target semantic data of the at least two levels.
The control module 430 is configured to control the vehicle to travel in the current travel mode according to the target semantic data of the level corresponding to the current travel mode.
Further, the grading module 410 may be specifically configured to: determining the original data as first-level target semantic data; performing data processing on the first-level target semantic data to obtain semantic data which has physical dimensions and is irrelevant to vehicle running, and determining the semantic data which has physical dimensions and is irrelevant to vehicle running as second-level target semantic data; carrying out data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle running, and determining the semantic data related to vehicle running as tertiary target semantic data; performing data processing on the three-level target semantic data to obtain semantic data for identifying vehicle intention, and determining the semantic data for identifying the vehicle intention as four-level target semantic data; and combining the four-level target semantic data with at least one of the primary target semantic data, the secondary target semantic data and the tertiary target semantic data to obtain five-level target semantic data.
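The grading module's five-step derivation can be sketched as a pipeline; the process_* helpers below are placeholders for the application's unspecified data-processing operations, not a definitive implementation:

```python
def process_physical(l1):
    return {"physical": l1}       # placeholder: extract physical attributes

def process_driving(l1, l2):
    return {"driving": (l1, l2)}  # placeholder: derive driving-related semantics

def process_intention(l3):
    return {"intent_of": l3}      # placeholder: identify vehicle intention

def grade(raw_data) -> dict:
    l1 = raw_data                          # first level: the raw data itself
    l2 = process_physical(l1)              # second level
    l3 = process_driving(l1, l2)           # third level
    l4 = process_intention(l3)             # fourth level
    l5 = {"intention": l4, "context": l3}  # fifth level: level 4 combined
                                           # with at least one lower level
    return {1: l1, 2: l2, 3: l3, 4: l4, 5: l5}
```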
Further, the control device for an autonomous vehicle may further include: a data calling module;
The data calling module is used for extracting the universal semantic data shared between different driving modes from the target semantic data of at least two levels before determining the target semantic data of the level corresponding to the current driving mode of the vehicle from the target semantic data of at least two levels; acquiring a current running mode of the vehicle; judging whether the current running mode needs to share the general semantic data with other running modes or not; if so, calling the general semantic data; and if not, triggering and executing the step of determining the target semantic data of the level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels.
Further, the determining module 420 may include a feature determining unit and a data determining unit;
The feature determination unit is used for determining data features of data required for executing the current running mode.
The data determining unit is used for determining target semantic data of the grade corresponding to the current running mode from the target semantic data of the at least two grades according to the data characteristics.
Further, the above feature determining unit may be further specifically configured to: and determining the data characteristics of the data required by executing the current running mode according to the current running mode and the current environment of the vehicle.
Optionally, the current driving mode of the vehicle includes: automatic parking, adaptive cruise, or automatic emergency braking.
Optionally, the data acquisition device includes: at least one of a vision sensor, a laser radar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and an integrated navigation device; the raw data includes at least one of static feature information of the vehicle, dynamic feature information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located.
The control device for the automatic driving vehicle provided by the embodiment can be applied to the control method for the automatic driving vehicle provided by any embodiment, and has corresponding functions and beneficial effects.
Example IV
Fig. 5 is a block diagram of an electronic device for implementing a control method of an autonomous vehicle according to an embodiment of the present application, and fig. 5 shows a block diagram of an exemplary electronic device suitable for implementing an embodiment of the present application. The electronic device shown in fig. 5 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present application. The electronic device may typically be a smart phone, a tablet computer, a notebook computer, a vehicle-mounted terminal, a wearable device, etc. Preferably, the electronic device in the embodiment of the present application may be a vehicle-mounted terminal.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of electronic device 500 may include, but are not limited to: one or more processors or processing units 516, a memory 528, a bus 518 that connects the various system components (including the memory 528 and the processing unit 516).
Bus 518 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 500 typically includes many types of computer system readable media. Such media can be any available media that is accessible by electronic device 500 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 530 and/or cache memory 532. Electronic device 500 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 540 having a set (at least one) of program modules 542 may be stored in, for example, memory 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 542 generally perform the functions and/or methods described in the embodiments of the present application.
The electronic device 500 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), one or more devices that enable a user to interact with the electronic device 500, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 500 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 522. Also, the electronic device 500 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through a network adapter 520. As shown in fig. 5, the network adapter 520 communicates with other modules of the electronic device 500 over the bus 518. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with electronic device 500, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 516 executes various functional applications and data processing by running programs stored in the memory 528, for example, to implement the control method of the autonomous vehicle provided by any of the embodiments of the present application.
Example five
A fifth embodiment of the present application also provides a computer-readable storage medium having stored thereon a computer program (or referred to as computer-executable instructions) that, when executed by a processor, is operable to perform the method for controlling an autonomous vehicle according to any of the above embodiments of the present application.
The computer storage media of embodiments of the application may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Claims (8)
1. A control method of an autonomous vehicle, the method comprising:
Performing data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, grading the original semantic data according to characteristics of the original semantic data to obtain target semantic data of at least two grades, wherein the data processing comprises the following steps: determining the original data as first-level target semantic data; performing data processing on the first-level target semantic data to obtain semantic data which has physical dimensions and is irrelevant to vehicle running, and determining the semantic data which has physical dimensions and is irrelevant to vehicle running as second-level target semantic data; carrying out data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle running, and determining the semantic data related to vehicle running as tertiary target semantic data; performing data processing on the three-level target semantic data to obtain semantic data for identifying vehicle intention, and determining the semantic data for identifying the vehicle intention as four-level target semantic data; combining the four-level target semantic data with at least one of the primary target semantic data, the secondary target semantic data and the tertiary target semantic data to obtain five-level target semantic data;
Determining target semantic data of a level corresponding to the current driving mode of the vehicle from the target semantic data of the at least two levels, wherein the target semantic data comprises: determining data characteristics of data required for executing the current driving mode; determining target semantic data of a corresponding level of the current driving mode from the target semantic data of at least two levels according to the data characteristics;
and controlling the vehicle to run in the current running mode through the target semantic data of the corresponding grade of the current running mode.
2. The control method of an automatically driven vehicle according to claim 1, characterized by further comprising, before determining target semantic data of a level corresponding to a current travel pattern of the vehicle from the target semantic data of the at least two levels:
extracting the shared general semantic data among different driving modes from the target semantic data of at least two grades;
Acquiring a current running mode of the vehicle;
judging whether the current running mode needs to share the general semantic data with other running modes or not;
if so, calling the general semantic data;
And if not, triggering and executing the step of determining the target semantic data of the level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels.
3. The control method of an autonomous vehicle according to claim 1, wherein said determining a data characteristic of data required to execute the current travel mode includes:
And determining the data characteristics of the data required by executing the current running mode according to the current running mode and the current environment of the vehicle.
4. The control method of an autonomous vehicle according to claim 2, characterized in that the current running mode of the vehicle includes: automatic parking, adaptive cruise, or automatic emergency braking.
5. The control method of an autonomous vehicle according to claim 1, wherein the data collection device includes: at least one of a vision sensor, a laser radar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and an integrated navigation device;
The raw data includes at least one of static feature information of the vehicle, dynamic feature information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located.
6. A control device for an autonomous vehicle, the device comprising:
The grading module is used for carrying out data processing on the original data acquired by the data acquisition equipment configured in the vehicle to obtain original semantic data, grading the original semantic data according to the characteristics of the original semantic data, and obtaining target semantic data of at least two grades;
the determining module is used for determining target semantic data of a grade corresponding to the current driving mode of the vehicle from the target semantic data of the at least two grades;
the control module is used for controlling the vehicle to run in the current running mode through the target semantic data of the corresponding grade of the current running mode;
The grading module is specifically used for determining the original data as first-level target semantic data; performing data processing on the first-level target semantic data to obtain semantic data which has physical dimensions and is irrelevant to vehicle running, and determining the semantic data which has physical dimensions and is irrelevant to vehicle running as second-level target semantic data; carrying out data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle running, and determining the semantic data related to vehicle running as tertiary target semantic data; performing data processing on the three-level target semantic data to obtain semantic data for identifying vehicle intention, and determining the semantic data for identifying the vehicle intention as four-level target semantic data; combining the four-level target semantic data with at least one of the primary target semantic data, the secondary target semantic data and the tertiary target semantic data to obtain five-level target semantic data;
the determining module comprises a characteristic determining unit and a data determining unit;
the characteristic determining unit is used for determining data characteristics of data required for executing the current running mode;
the data determining unit is used for determining target semantic data of the grade corresponding to the current running mode from the target semantic data of the at least two grades according to the data characteristics.
7. An electronic device, the electronic device comprising:
One or more processors;
a storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method of controlling an autonomous vehicle as claimed in any one of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the control method of an autonomous vehicle as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111439754.3A CN114154510B (en) | 2021-11-30 | 2021-11-30 | Control method and device for automatic driving vehicle, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114154510A CN114154510A (en) | 2022-03-08 |
CN114154510B true CN114154510B (en) | 2024-09-20 |
Family
ID=80455033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111439754.3A Active CN114154510B (en) | 2021-11-30 | 2021-11-30 | Control method and device for automatic driving vehicle, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114154510B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114866586B (en) * | 2022-04-28 | 2023-09-19 | 岚图汽车科技有限公司 | Intelligent driving system, method, equipment and storage medium based on SOA architecture |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488250A (en) * | 2015-11-19 | 2016-04-13 | 上汽大众汽车有限公司 | Auxiliary analysis and detection method used for measurement data of automobile body dimensional deviation |
CN110008848A (en) * | 2019-03-13 | 2019-07-12 | 华南理工大学 | A kind of travelable area recognizing method of the road based on binocular stereo vision |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200080360A (en) * | 2018-12-14 | 2020-07-07 | 현대자동차주식회사 | A terminal for a vehicle, method for providing vehicle data of the terminal, a server and method for restoring vehicle data of the server |
CN110264586A (en) * | 2019-05-28 | 2019-09-20 | 浙江零跑科技有限公司 | L3 grades of automated driving system driving path data acquisitions, analysis and method for uploading |
CN110991523A (en) * | 2019-11-29 | 2020-04-10 | 西安交通大学 | Interpretability evaluation method for unmanned vehicle detection algorithm performance |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |