
CN115186732A - Intelligent driving target fusion method, device and equipment and readable storage medium - Google Patents

Intelligent driving target fusion method, device and equipment and readable storage medium

Info

Publication number
CN115186732A
CN115186732A (application CN202210657615.6A)
Authority
CN
China
Prior art keywords
vehicle
information
self
positioning coordinate
coordinate information
Prior art date
Legal status
Pending
Application number
CN202210657615.6A
Other languages
Chinese (zh)
Inventor
高小龙
桂绍靖
李阳
李兆干
王贝贝
Current Assignee
Dongfeng Trucks Co ltd
Original Assignee
Dongfeng Trucks Co ltd
Priority date
Filing date
Publication date
Application filed by Dongfeng Trucks Co ltd
Priority to CN202210657615.6A
Publication of CN115186732A
Legal status: Pending


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0002Automatic control, details of type of controller or control system architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to an intelligent driving target fusion method, device, equipment and readable storage medium in the technical field of intelligent driving. The method comprises: obtaining standard time service information; performing time service synchronization on a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller based on the standard time service information, so as to reduce the time delay of each controller; and correcting target object information according to the synchronized timestamps of the controllers. In this way, delay errors caused by the inconsistent computation times of the various perception algorithms can be corrected, fusion errors caused by the inconsistent frequencies of different sensors are eliminated, and the accuracy of perception fusion is effectively improved.

Description

Intelligent driving target fusion method, device and equipment and readable storage medium
Technical Field
The application relates to the technical field of intelligent driving, in particular to an intelligent driving target fusion method, device and equipment and a readable storage medium.
Background
Automobile intelligence is the trend of future automobile development, and both SAE (Society of Automotive Engineers) and China's automobile automation classification divide automobile intelligence into six levels, L0 to L5. L3-level intelligent driving is a great leap on the road to automobile intelligence: from L3 onward, the driver's role in driving operation shrinks rapidly, the automated driving system can complete all driving operations when conditions permit, and the driver need take over only when the system fails or the designed operating conditions are exceeded. However, compared with L1 and L2, the number of sensors required by L3 increases markedly and the cooperation requirements between sensors rise further, so the multi-sensor fusion algorithm becomes more complex, the computing power required of the controllers increases greatly, and higher demands than before are placed on the architecture of the intelligent driving system.
Functionally, L3 can be divided into several modules, such as visual perception, lidar perception, millimeter wave radar perception, high-precision positioning, perception fusion, decision planning and vehicle control. To implement this functional logic, two architectural solutions exist in the current industry. One is a distributed system architecture: visual perception, lidar perception and millimeter wave radar perception are each processed by a separate controller that outputs obstacle target information, while high-precision positioning, perception fusion, decision planning and vehicle control can be deployed on one or more controllers. The other is a centralized system architecture, in which all functional modules are deployed on a domain controller (a single controller).
Under a distributed system architecture, because the output frequencies of the vision, lidar and millimeter wave sensors differ and the computation times of their perception algorithms differ, the downstream perception fusion module finds it difficult to perform accurate fusion calculation on obstacle targets, so a correct fusion result cannot be output, creating great hidden dangers for driving safety. In addition, a distributed architecture has more controllers, communication between controllers has larger delay than in a centralized architecture, and there is a risk of data loss in transmission. Therefore, how to realize accurate target fusion under a distributed system architecture while reducing its delay has become a problem that urgently needs to be solved.
Disclosure of Invention
The application provides an intelligent driving target fusion method, device, equipment and readable storage medium, aiming to solve the problems in the related art that targets cannot be accurately fused under a distributed system architecture and that the delay is large.
In a first aspect, an intelligent driving target fusion method is provided, which includes the following steps:
acquiring standard time service information, and performing time service synchronous processing on a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller based on the standard time service information;
acquiring the self-vehicle positioning coordinate information of each moment in the current period based on a positioning sensor, wherein the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronous processing;
acquiring vehicle body state information at each moment in a current period based on a vehicle body controller, and respectively establishing a mapping relation between the vehicle body state information at each moment and self-vehicle positioning coordinate information at a corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set;
acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information, determining a timestamp range based on the first timestamps, and screening a vehicle state subset having the same range as the timestamp range from a vehicle state set;
correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
calculating to obtain delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range;
calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
and performing target fusion processing on the plurality of pieces of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
In some embodiments, the modifying, based on the subset of vehicle states, the own vehicle positioning coordinate information having the same timestamp information as the timestamp range upper limit value to obtain modified own vehicle positioning coordinate information includes:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
In some embodiments, the calculating the delayed own vehicle positioning coordinate information corresponding to the timestamp range upper limit value based on the vehicle state subset and the own vehicle positioning coordinate information having the same timestamp information as the timestamp range lower limit value includes:
calculating the moving distance of the vehicle within the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
In some embodiments, the performing, based on the standard time service information, time service synchronization processing on the sensing controller, the positioning controller, the sensing fusion controller, and the vehicle body controller includes:
and transmitting the standard time service information to a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller through a universal asynchronous receiver-transmitter (UART) so as to realize time service synchronization of the controllers.
In some embodiments, the body state information includes throttle data, brake data, steering data, and vehicle speed.
In a second aspect, an intelligent driving target fusion device is provided, which includes:
the time synchronization unit is used for acquiring standard time service information and carrying out time service synchronization processing on the perception controller, the positioning controller, the perception fusion controller and the vehicle body controller based on the standard time service information;
the data acquisition unit is used for acquiring the self-vehicle positioning coordinate information of each moment in the current period based on the positioning sensor, and the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronization processing; acquiring vehicle body state information of each moment in a current period based on a vehicle body controller; acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information;
the system comprises a relation establishing unit, a vehicle state information acquiring unit and a vehicle positioning coordinate information acquiring unit, wherein the relation establishing unit is used for respectively establishing the mapping relation between the vehicle state information at each moment and the self vehicle positioning coordinate information at the corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set;
the data screening unit is used for determining a timestamp range based on the first timestamp and screening a vehicle state subset with the same range as the timestamp range from the vehicle state set;
the data correction unit is used for correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
the data processing unit is used for calculating delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range; calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
and the target fusion unit is used for carrying out target fusion processing on the plurality of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
In some embodiments, the data modification unit is specifically configured to:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
In some embodiments, the data processing unit is specifically configured to:
calculating the moving distance of the vehicle within the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
In a third aspect, an intelligent driving target fusion device is provided, which includes a memory and a processor, wherein at least one instruction is stored in the memory and is loaded and executed by the processor so as to implement the aforementioned intelligent driving target fusion method.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the aforementioned intelligent driving goal fusion method.
The beneficial effects brought by the technical solution provided by this application include: the method can effectively realize accurate target fusion and reduce the time delay of each controller.
The application provides an intelligent driving target fusion method, device, equipment and readable storage medium, wherein the method comprises the steps of obtaining standard time service information, and carrying out time service synchronous processing on a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller based on the standard time service information; acquiring the self-vehicle positioning coordinate information of each moment in the current period based on a positioning sensor, wherein the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronization processing; acquiring vehicle body state information at each moment in a current period based on a vehicle body controller, and respectively establishing a mapping relation between the vehicle body state information at each moment and self-vehicle positioning coordinate information at the corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set; acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information, determining a timestamp range based on the first timestamps, and screening a vehicle state subset having the same range as the timestamp range from the vehicle state set; correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information; calculating delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range; calculating a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively based on the self-vehicle position error to obtain a plurality of corrected single-frame target object information; and performing target fusion processing on the plurality of pieces of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result. According to the method and the device, time service synchronization of the controllers is achieved so that the time delay of each controller is reduced, and target object information is corrected according to the synchronized timestamps of the controllers, so that delay errors caused by the inconsistent computation times of the various perception algorithms can be corrected, fusion errors caused by the inconsistent frequencies of different sensors are eliminated, and the accuracy of perception fusion is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an intelligent driving target fusion method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a position relationship between a host vehicle and a target vehicle in the prior art;
fig. 3 is a schematic view of a corrected positional relationship between a host vehicle and a target vehicle according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent driving target fusion device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making creative efforts shall fall within the protection scope of the present application.
The embodiment of the application provides an intelligent driving target fusion method, an intelligent driving target fusion device, intelligent driving target fusion equipment and a readable storage medium, and can solve the problems that accurate fusion of targets cannot be achieved under a distributed system architecture in the related technology and delay is large.
Fig. 1 shows an intelligent driving target fusion method provided in an embodiment of the present application; the method includes the following steps:
step S10: acquiring standard time service information, and performing time service synchronous processing on a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller based on the standard time service information;
further, the time service synchronous processing is performed on the perception controller, the positioning controller, the perception fusion controller and the vehicle body controller based on the standard time service information, and the time service synchronous processing method includes:
and sending the standard time service information to a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller through a universal asynchronous receiver-transmitter (UART) so as to realize time service synchronization of all controllers.
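By way of illustration, the broadcast-and-offset idea behind this time service synchronization can be sketched in Python. This is a hypothetical in-process simulation: the controller names, clock biases, and offset model are assumptions, and a real system would carry the reference time over the UART link described above rather than a function call.

```python
# Minimal sketch of time-service synchronization: a reference time is
# broadcast to each controller, and each controller records the offset
# between its local clock and the reference so later timestamps agree.

class Controller:
    def __init__(self, name, local_clock_bias):
        self.name = name
        self.local_clock_bias = local_clock_bias  # drift vs. reference, seconds (assumed)
        self.offset = 0.0                         # correction learned at sync time

    def local_time(self, true_time):
        # What this controller's unsynchronized clock would read.
        return true_time + self.local_clock_bias

    def sync(self, reference_time, true_time):
        # On receiving the broadcast reference time, store the correction.
        self.offset = reference_time - self.local_time(true_time)

    def timestamp(self, true_time):
        # Synchronized timestamp = local clock reading + learned correction.
        return self.local_time(true_time) + self.offset


controllers = [
    Controller("perception", 0.030),
    Controller("positioning", -0.012),
    Controller("fusion", 0.005),
    Controller("body", 0.021),
]

t_sync = 100.0  # true time at which the standard time service is broadcast
for c in controllers:
    c.sync(reference_time=t_sync, true_time=t_sync)

# After synchronization, all controllers agree on the timestamp of one event.
stamps = [round(c.timestamp(101.0), 6) for c in controllers]
```

After the sync step, every controller stamps the same event with the same value, which is the precondition for the timestamp-based corrections in the later steps.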
Step S20: acquiring the self-vehicle positioning coordinate information of each moment in the current period based on a positioning sensor, wherein the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronous processing;
step S30: acquiring vehicle body state information at each moment in a current period based on a vehicle body controller, and respectively establishing a mapping relation between the vehicle body state information at each moment and self-vehicle positioning coordinate information at the corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set; the vehicle body state information comprises throttle data, brake data, steering data and vehicle speed.
Step S40: acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information, determining a timestamp range based on the first timestamps, and screening a vehicle state subset having the same range as the timestamp range from a vehicle state set;
step S50: correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
further, the correcting the own vehicle positioning coordinate information having the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain the corrected own vehicle positioning coordinate information includes:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
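The trajectory simulation in step S50 might look like the following sketch: the body-state samples (speed and heading change) are integrated forward over the timestamp range to predict the own-vehicle coordinate at the range's upper limit, and that prediction is used to correct the measured coordinate. The kinematic model and the simple averaging used as the "correction" are assumptions; the patent does not specify either.

```python
import math

# Dead-reckon the own-vehicle position over the timestamp range from the
# body-state subset, then blend it with the measured coordinate stamped at
# the range's upper limit (blending rule assumed for illustration).

def simulate_position(start_xy, start_heading, subset, dt):
    x, y = start_xy
    heading = start_heading
    for state in subset:
        heading += state["yaw_rate"] * dt          # steering expressed as yaw rate
        x += state["speed"] * math.cos(heading) * dt
        y += state["speed"] * math.sin(heading) * dt
    return x, y

subset = [  # illustrative body-state samples inside the timestamp range
    {"speed": 10.0, "yaw_rate": 0.0},
    {"speed": 10.0, "yaw_rate": 0.0},
]
simulated = simulate_position((0.0, 0.0), 0.0, subset, dt=0.05)

measured_at_upper = (1.1, 0.0)  # positioning sample whose timestamp equals the upper limit
corrected = tuple(
    (s + m) / 2.0 for s, m in zip(simulated, measured_at_upper)
)
```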
Step S60: calculating to obtain delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range;
further, the calculating the delayed own vehicle positioning coordinate information corresponding to the timestamp range upper limit value based on the vehicle state subset and the own vehicle positioning coordinate information having the same timestamp information as the timestamp range lower limit value includes:
calculating the moving distance of the vehicle within the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
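A minimal sketch of step S60, under the assumption of straight-line motion over the short interval: integrate the vehicle speed across the timestamp range to get the own-vehicle moving distance, then push the coordinate stamped at the range's lower limit forward by that distance to obtain the "delayed" coordinate at the upper limit. The sample values and the constant-heading model are illustrative.

```python
import math

# Moving distance = sum of speed * dt over the body-state subset, then
# project the lower-limit coordinate forward along the heading.

subset = [  # body-state samples inside [t_s, t_e] (illustrative)
    {"speed": 10.0, "dt": 0.05},
    {"speed": 12.0, "dt": 0.05},
]
moving_distance = sum(s["speed"] * s["dt"] for s in subset)

coord_at_lower = (0.0, 0.0)  # positioning sample stamped with the lower limit
heading = 0.0                # assumed constant heading over the interval

delayed_coord = (
    coord_at_lower[0] + moving_distance * math.cos(heading),
    coord_at_lower[1] + moving_distance * math.sin(heading),
)
```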
Step S70: calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
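The correction in step S70 can be sketched as follows: the own-vehicle position error is the difference between the corrected and the delayed positioning coordinates, and each single-frame target is shifted by that error. Treating the error as a simple 2-D translation of the targets is an assumption made for illustration; the coordinates carry over from the sketches of the earlier steps.

```python
# Own-vehicle position error, then apply it to each single-frame target.

corrected_coord = (1.05, 0.0)   # from the step-S50 correction
delayed_coord = (1.10, 0.0)     # from the step-S60 calculation
position_error = (
    corrected_coord[0] - delayed_coord[0],
    corrected_coord[1] - delayed_coord[1],
)

single_frame_targets = [  # illustrative targets from one perception frame
    {"id": "A", "x": 20.0, "y": 1.5},
    {"id": "B", "x": 35.0, "y": -0.5},
]
corrected_targets = [
    {"id": t["id"], "x": t["x"] + position_error[0], "y": t["y"] + position_error[1]}
    for t in single_frame_targets
]
```

The corrected targets from all perception controllers are then handed to the perception fusion controller for the final fusion in step S80.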
step S80: and performing target fusion processing on the plurality of pieces of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
Therefore, according to the embodiment, time service synchronization of each controller can be realized, so that time delay of each controller is reduced, target object information is corrected according to the synchronization time stamp of each controller, time delay errors caused by time inconsistency of various sensing algorithm calculation can be corrected, fusion errors caused by frequency inconsistency of different sensors are eliminated, and accuracy of sensing fusion is effectively improved.
Illustratively, both the distributed and the centralized intelligent driving system architectures currently have certain objective defects. Under a distributed system architecture, because the output frequencies of the vision, lidar and millimeter wave sensors differ and the computation times of the perception algorithms differ, the downstream perception fusion module finds it difficult to perform accurate fusion calculation on obstacle targets, so a correct fusion result cannot be output, creating great hidden dangers for driving safety; in addition, a distributed architecture has more controllers, inter-controller communication has larger delay than in a centralized architecture, and there is a risk of data loss in transmission. Under a centralized system architecture, all functional modules are deployed on the domain controller; compared with a distributed architecture this reduces data delay and transmission risk, but a huge and complex software system is difficult to organize and manage efficiently in a centralized way. Once the functional logic has problems, locating them quickly and accurately places very high demands on developers; the architecture struggles to meet the high functional-safety requirements of intelligent driving; complex and changeable functional logic is difficult to extend; and because a change in one part drags the whole along, development and test resources are invested repeatedly and efficiency is low. Therefore, this embodiment aims to provide a method for accurately fusing targets that reduces the delay-uncertainty risk of the traditional distributed system architecture and reduces the software integration complexity of the centralized system architecture.
In the embodiment, standard time service information can be acquired through a UART (universal asynchronous receiver transmitter) interface, and the standard time service information is sent to the distributed sensing controller, the positioning controller, the sensing fusion controller and the vehicle body controller through the UART interface, so that time service synchronization of the controllers is realized, the risk of delay uncertainty caused by the traditional distributed system architecture is reduced, delay errors caused by inconsistent calculation time of various sensing algorithms are corrected, and the accuracy of the sensing fusion function is improved. The sensing controller comprises a visual sensing controller, a laser radar sensing controller and a millimeter wave radar sensing controller; the positioning controller may be a high precision positioning controller.
As shown in fig. 2, in the related art, at time T0 the own vehicle and the two target vehicles (A and B) correspond respectively to L0, A0 with its target position and confidence interval, and B0 with its target position and confidence interval. After the sensors identify and compute the target vehicles, a certain delay elapses before reaching time T1, at which the own vehicle and the two target vehicles correspond respectively to L1, A1 with its target position and confidence interval, and B1 with its target position and confidence interval. If, in the period from T0 to T1, the own vehicle and the two target vehicles all keep driving at constant speed, the distance S0 between the own vehicle and target vehicle B at time T0 equals the distance S1 between them at time T1. In the scenario of fig. 2, however, target vehicle A cuts in ahead and applies emergency braking, so at time T1 the target distance recognized by the sensor is greater than the real distance (S0 > S1). Denote the targets output by the visual perception controller at time T1 as A1C and B1C (actually the visually perceived target states at time T0), and the target output by the millimeter wave perception controller at time T1 as B1R. Fusing the outputs of the visual and millimeter wave perception controllers then most probably identifies A1C and B1R as the same target (target vehicle B); that is, target vehicle A is lost, which could lead to driving safety risks.
Therefore, this embodiment calculates and eliminates the calculation delay of the visual perception controller in image recognition according to the synchronized timestamps of the respective controllers, thereby correcting the target output result of the visual perception controller so that the target result at time T1 is more accurate, which improves the accuracy of perception fusion.
Specifically, in this embodiment, the own-vehicle positioning coordinate information C is obtained by the high-precision positioning controller, and the positioning coordinate information at each time in the current period is stored, e.g. C0 to Cn corresponding to the consecutive times T0 to Tn (Cn being the most recent data). Each piece of positioning coordinate information C includes timestamp information on the unified time base.
Secondly, the vehicle body state information S, which can contain throttle data, brake data, steering data and vehicle speed, is obtained in real time, and the vehicle body state information at each time in the current period is stored. The vehicle body state information acquired each time is associated with the latest own-vehicle positioning coordinate information Cn, thereby generating vehicle state information V0 to Vn in one-to-one correspondence with C0 to Cn, i.e. generating a vehicle state set [V0, Vn]. Since the frequency of the vehicle body state information is inconsistent with that of the positioning coordinate information, when the vehicle body state information has not been updated, the vehicle state entry keeps the vehicle body state information of the previous frame.
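The hold-last-value association between the lower-rate body bus and the higher-rate positioning stream could be sketched as follows (the data layout is an assumption for illustration):

```python
from bisect import bisect_right

def build_vehicle_state_set(positions, body_states):
    """positions: [(t, C)] sorted by t  (high-rate positioning stream C0..Cn).
    body_states: [(t, S)] sorted by t  (lower-rate body bus: throttle, brake,
    steering, speed).  Returns [(t, C, S)] where each position carries the
    latest body state at or before it, i.e. the previous frame is held
    whenever the body bus has not updated yet."""
    times = [t for t, _ in body_states]
    state_set = []
    for t, c in positions:
        i = bisect_right(times, t) - 1
        s = body_states[i][1] if i >= 0 else None
        state_set.append((t, c, s))
    return state_set

vset = build_vehicle_state_set(
    positions=[(0.00, "C0"), (0.05, "C1"), (0.10, "C2"), (0.15, "C3")],
    body_states=[(0.00, "S0"), (0.10, "S1")],   # body bus runs at half rate
)
assert vset == [(0.00, "C0", "S0"), (0.05, "C1", "S0"),
                (0.10, "C2", "S1"), (0.15, "C3", "S1")]
```

The zero-order hold is what the text describes: between body-bus updates, each new positioning frame is simply paired with the previous body state.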
Then, a frame of sensor perception data is obtained, i.e. a plurality of single-frame target object information: the single-frame target object information Dc sent by the visual perception controller (obtained by visual image perception calculation), the single-frame target object information Dl sent by the laser radar perception controller (obtained by laser radar perception calculation), and the single-frame target object information Dr sent by the millimeter wave radar perception controller (obtained by millimeter wave radar perception calculation). The timestamps of the data sent by the different types of controllers are read (these timestamps are on the unified time base), and the timestamp range is determined to be [TS, TE].
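Determining the fusion window [TS, TE] from the per-sensor frame timestamps reduces to a min/max over the unified time base; a trivial sketch (sensor names and values are hypothetical):

```python
def timestamp_range(frame_stamps):
    """frame_stamps: sensor name -> frame timestamp, all on the unified
    time base after time-service synchronization.  The fusion window
    [TS, TE] spans the oldest and newest frame of this fusion cycle."""
    return min(frame_stamps.values()), max(frame_stamps.values())

ts, te = timestamp_range({"camera": 0.93, "lidar": 1.00, "radar": 0.99})
assert (ts, te) == (0.93, 1.00)   # the slowest sensor's frame defines TS
```

This window only makes sense because the earlier time-service step put all three controllers on one clock; otherwise min/max across sensors would compare unrelated time bases.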
Then, a consecutive vehicle state subset [VS, VE] covering the same time range as the timestamp range is obtained from the vehicle state set [V0, Vn]. According to the mapping relation between the own-vehicle positioning coordinate information and the vehicle body state information in the vehicle state set, and the time-service-synchronized timestamp information contained in the positioning coordinate information, the corresponding vehicle body state information is screened out from [VS, VE]. A vehicle trajectory is then simulated from the screened vehicle body state information to calculate an estimate of the own-vehicle position coordinate information at TE, and a filtering calculation is performed on this estimate together with the position coordinate information CE at TE to obtain more accurate own-vehicle positioning coordinate information CE* (see fig. 3).
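A possible sketch of the trajectory simulation and filtering step (the point-mass motion model and the fixed-gain complementary-filter blend are assumptions; the embodiment does not specify the exact filter):

```python
import math

def dead_reckon(c_start, states, dt):
    """Propagate the pose (x, y, heading) across the [TS, TE] window using
    the screened body states, here reduced to (speed, yaw_rate) per step."""
    x, y, heading = c_start
    for v, w in states:
        heading += w * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
    return (x, y, heading)

def fuse(predicted, measured, gain=0.5):
    """Blend the simulated estimate with the raw fix CE to get CE*
    (a fixed-gain complementary filter stands in for the unspecified
    'filtering calculation')."""
    return tuple(p + gain * (m - p) for p, m in zip(predicted, measured))

# Five 100 ms steps at 10 m/s straight ahead, then blend with the raw fix:
predicted = dead_reckon((0.0, 0.0, 0.0), [(10.0, 0.0)] * 5, dt=0.1)
ce_star = fuse(predicted, measured=(5.2, 0.1, 0.0))
```

In practice the gain (or a full Kalman filter in its place) would be tuned against the relative noise of the dead-reckoned trajectory and the positioning fix.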
Further, under the assumption that the own vehicle travels at constant speed, this embodiment can calculate the own-vehicle moving distance within the [TS, TE] time window from the vehicle speed information in VS, and then combine it with the positioning coordinate information CS at TS to calculate the delayed own-vehicle positioning coordinate information CE@ (see fig. 3).
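Under the constant-speed assumption, CE@ is simply CS advanced by the accumulated travel distance; a sketch (straight constant-heading travel is assumed here for illustration):

```python
import math

def delayed_position(c_s, states, dt):
    """CE@ = the fix CS at the window start TS, advanced by the distance
    travelled during [TS, TE]; straight constant-heading travel assumed."""
    x, y, heading = c_s
    distance = sum(v * dt for v, _w in states)   # speed from each VS..VE frame
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)

# CS at (100, 50) heading along x, five 100 ms frames at 10 m/s:
ce_at = delayed_position((100.0, 50.0, 0.0), [(10.0, 0.0)] * 5, dt=0.1)
```

This is the counterpart of the dead-reckoned estimate: it represents where the delayed (stale) perception data implicitly assumes the own vehicle to be.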
Then the own-vehicle position error E is calculated from CE* and CE@, and the target object distance information in Dc, Dl and Dr is corrected through E to obtain the synchronously corrected Dc*, Dl*, Dr* (see fig. 3). Finally, the perception fusion controller performs target fusion processing on Dc*, Dl* and Dr*, thereby obtaining a more accurate target fusion result and eliminating fusion errors caused by inconsistent sensor frequencies.
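The final correction step could be sketched as follows (the 2-D translational error model and its sign convention are assumptions for illustration):

```python
def position_error(ce_star, ce_at):
    """E = CE* - CE@, taken here as a 2-D translation (x, y)."""
    return (ce_star[0] - ce_at[0], ce_star[1] - ce_at[1])

def correct_targets(targets, error):
    """Shift the (longitudinal, lateral) distance of every single-frame
    target by E, producing the synchronously corrected Dc*, Dl*, Dr*."""
    ex, ey = error
    return [(tx - ex, ty - ey) for tx, ty in targets]

# Own vehicle actually travelled 0.5 m further than the stale frame assumed,
# so every relative target distance shrinks by that amount:
e = position_error(ce_star=(105.5, 50.0), ce_at=(105.0, 50.0))
dc_star = correct_targets([(30.0, 0.0), (55.0, -3.5)], e)
assert dc_star == [(29.5, 0.0), (54.5, -3.5)]
```

After this shift, the three corrected target lists refer to a common own-vehicle pose, so the downstream association step no longer mixes positions sampled at different effective times.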
The embodiment of the present application further provides an intelligent driving target fusion device, including:
the time synchronization unit is used for acquiring standard time service information and performing time service synchronization processing on the perception controller, the positioning controller, the perception fusion controller and the vehicle body controller based on the standard time service information;
the data acquisition unit is used for acquiring the self-vehicle positioning coordinate information of each moment in the current period based on the positioning sensor, and the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronization processing; acquiring vehicle body state information of each moment in a current period based on a vehicle body controller; acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information;
the system comprises a relation establishing unit, a vehicle state information acquiring unit and a vehicle positioning coordinate information acquiring unit, wherein the relation establishing unit is used for respectively establishing the mapping relation between the vehicle state information at each moment and the self vehicle positioning coordinate information at the corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set;
the data screening unit is used for determining a timestamp range based on the first timestamp and screening a vehicle state subset with the same range as the timestamp range from the vehicle state set;
the data correction unit is used for correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
the data processing unit is used for calculating delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range; calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
and the target fusion unit is used for carrying out target fusion processing on the plurality of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
The time synchronization unit in this embodiment may be integrated into a time synchronization controller; the data acquisition unit, the relation establishing unit, the data screening unit, the data correction unit and the data processing unit are integrated into an intelligent synchronous correction module; and the intelligent synchronous correction module and the target fusion unit are integrated into the perception fusion controller, thereby forming an improved distributed system architecture. Thus, on the basis of the traditional distributed system architecture, this embodiment adds a time synchronization controller, which is physically connected through UART interfaces to the visual perception controller, the laser radar perception controller, the millimeter wave radar perception controller, the high-precision positioning controller, the perception fusion controller and the vehicle body controller of the intelligent driving system, and then performs time service synchronization of the data through a UART data protocol. An intelligent synchronous correction function module is added to the perception fusion controller to correct the synchronization problem caused by inconsistent sensor working frequencies in L3-level intelligent driving and to reduce the delay risk caused by heavy computation, data transmission and the like.
In summary, the improved distributed system architecture provided in this embodiment can reduce the risk of uncertainty in delay caused by the conventional distributed system architecture, and can also reduce the software integration complexity of the centralized system architecture, thereby meeting the overall high-level functional security requirement of the intelligent system. Therefore, according to the embodiment, time service synchronization of each controller can be realized, so that time delay of each controller is reduced, and target object information is corrected according to the synchronization time stamp of each controller, so that time delay errors caused by time inconsistency of various sensing algorithms can be corrected, fusion errors caused by frequency inconsistency of different sensors are eliminated, and accuracy of sensing fusion is effectively improved.
Further, the data correction unit is specifically configured to:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
Further, the data processing unit is specifically configured to:
calculating the moving distance of the vehicle in the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
Further, the time synchronization unit is specifically configured to:
and sending the standard time service information to a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller through a universal asynchronous receiving and sending transmitter so as to realize time service synchronization of all controllers.
Further, the vehicle body state information comprises throttle data, brake data, steering data and vehicle speed.
It should be noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and the units described above may refer to the corresponding processes in the foregoing embodiment of the intelligent driving target fusion method, and are not described herein again.
The intelligent driving target fusion device provided by the above embodiment can be implemented in the form of a computer program, and the computer program can be run on the intelligent driving target fusion device shown in fig. 4.
The embodiment of the present application further provides an intelligent driving target fusion device, including: the intelligent driving target fusion method comprises a memory, a processor and a network interface which are connected through a system bus, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor so as to realize all steps or part of steps of the intelligent driving target fusion method.
The network interface is used for network communication, such as sending distributed tasks. Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computing devices to which the solution applies; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The processor may be a CPU, another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or any conventional processor. The processor is the control center of the computer device, and the various parts of the whole computer device are connected through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as a video playing function, an image playing function, etc.); the data storage area may store data created according to the use of the device (such as video data, image data, etc.). Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, all steps or part of steps of the intelligent driving target fusion method are realized.
All or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above methods. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or system that comprises the element.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An intelligent driving target fusion method is characterized by comprising the following steps:
acquiring standard time service information, and performing time service synchronous processing on a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller based on the standard time service information;
acquiring the self-vehicle positioning coordinate information of each moment in the current period based on a positioning sensor, wherein the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronous processing;
acquiring vehicle body state information at each moment in a current period based on a vehicle body controller, and respectively establishing a mapping relation between the vehicle body state information at each moment and self-vehicle positioning coordinate information at the corresponding moment to obtain a plurality of vehicle state information and form a vehicle state set;
acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information, determining a timestamp range based on the first timestamps, and screening a vehicle state subset having the same range as the timestamp range from a vehicle state set;
correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
calculating to obtain delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range;
calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
and performing target fusion processing on the plurality of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
2. The intelligent driving target fusion method according to claim 1, wherein the modifying the own vehicle positioning coordinate information having the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain the modified own vehicle positioning coordinate information comprises:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
3. The intelligent driving target fusion method according to claim 1, wherein the calculating of the delayed own vehicle positioning coordinate information corresponding to the timestamp range upper limit value based on the vehicle state subset and the own vehicle positioning coordinate information having the same timestamp information as the timestamp range lower limit value comprises:
calculating the moving distance of the vehicle within the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
4. The intelligent driving target fusion method according to claim 1, wherein the time service synchronization process for the perception controller, the positioning controller, the perception fusion controller and the vehicle body controller based on the standard time service information comprises:
and sending the standard time service information to a perception controller, a positioning controller, a perception fusion controller and a vehicle body controller through a universal asynchronous receiving and sending transmitter so as to realize time service synchronization of all controllers.
5. The intelligent driving target fusion method of claim 1, wherein: the vehicle body state information includes throttle data, brake data, steering data, and vehicle speed.
6. An intelligent driving target fusion device, comprising:
the time synchronization unit is used for acquiring standard time service information and carrying out time service synchronization processing on the perception controller, the positioning controller, the perception fusion controller and the vehicle body controller based on the standard time service information;
the data acquisition unit is used for acquiring the self-vehicle positioning coordinate information of each moment in the current period based on the positioning sensor, and the self-vehicle positioning coordinate information comprises timestamp information subjected to time service synchronous processing; acquiring vehicle body state information of each moment in a current period based on a vehicle body controller; acquiring a plurality of single-frame target object information sent by a perception sensor and a first timestamp corresponding to each single-frame target object information;
the relation establishing unit is used for respectively establishing the mapping relation between the vehicle body state information at each moment and the own-vehicle positioning coordinate information at the corresponding moment, so as to obtain a plurality of vehicle state information and form a vehicle state set;
the data screening unit is used for determining a timestamp range based on the first timestamp and screening a vehicle state subset with the same range as the timestamp range from the vehicle state set;
the data correction unit is used for correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the vehicle state subset to obtain corrected self-vehicle positioning coordinate information;
the data processing unit is used for calculating delayed self-vehicle positioning coordinate information corresponding to the timestamp range upper limit value based on the vehicle state subset and the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range lower limit value; calculating to obtain a self-vehicle position error according to the corrected self-vehicle positioning coordinate information and the delayed self-vehicle positioning coordinate information, and correcting each single-frame target object information respectively on the basis of the self-vehicle position error to obtain a plurality of corrected single-frame target object information;
and the target fusion unit is used for carrying out target fusion processing on the plurality of pieces of corrected single-frame target object information based on the perception fusion controller to obtain a target fusion result.
7. The intelligent driving-target fusion device according to claim 6, wherein the data correction unit is specifically configured to:
simulating a vehicle track according to the vehicle state information in the vehicle state subset and calculating to obtain first vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range;
and correcting the self-vehicle positioning coordinate information with the same timestamp information as the timestamp range upper limit value based on the first self-vehicle positioning coordinate information to obtain the corrected self-vehicle positioning coordinate information.
8. The intelligent driving target fusion device of claim 6, wherein the data processing unit is specifically configured to:
calculating the moving distance of the vehicle within the time stamp range based on the vehicle body state information in the vehicle state subset;
and calculating the delayed self-vehicle positioning coordinate information corresponding to the upper limit value of the timestamp range according to the self-vehicle moving distance and the self-vehicle positioning coordinate information with the same timestamp information as the lower limit value of the timestamp range.
9. An intelligent driving target fusion device, comprising: a memory and a processor, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the intelligent driving objective fusion method of any of claims 1-5.
10. A computer-readable storage medium characterized by: the computer storage medium stores a computer program that, when executed by a processor, implements the intelligent driving goal fusion method of any one of claims 1 to 5.
CN202210657615.6A 2022-06-10 2022-06-10 Intelligent driving target fusion method, device and equipment and readable storage medium Pending CN115186732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210657615.6A CN115186732A (en) 2022-06-10 2022-06-10 Intelligent driving target fusion method, device and equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115186732A true CN115186732A (en) 2022-10-14

Family

ID=83513686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210657615.6A Pending CN115186732A (en) 2022-06-10 2022-06-10 Intelligent driving target fusion method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115186732A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152950A (en) * 2022-12-28 2023-05-23 宁波高新区阶梯科技有限公司 Data processing method, device and system, electronic equipment and storage medium
CN117953459A (en) * 2024-03-25 2024-04-30 安徽蔚来智驾科技有限公司 Perception fusion result acquisition method, readable storage medium and intelligent device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination