
CN112615604A - Filtering method and device of intelligent driving perception system and electronic equipment - Google Patents


Info

Publication number
CN112615604A
CN112615604A (application CN202011461427.3A)
Authority
CN
China
Prior art keywords
target
model
filter
vsimm
model set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011461427.3A
Other languages
Chinese (zh)
Inventor
吴宏升
韩志华
杜一光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhitu Technology Co Ltd
Original Assignee
Suzhou Zhitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhitu Technology Co Ltd filed Critical Suzhou Zhitu Technology Co Ltd
Priority to CN202011461427.3A priority Critical patent/CN112615604A/en
Publication of CN112615604A publication Critical patent/CN112615604A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03HIMPEDANCE NETWORKS, e.g. RESONANT CIRCUITS; RESONATORS
    • H03H21/00Adaptive networks
    • H03H21/0012Digital adaptive filters
    • H03H21/0043Adaptive algorithms

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention provides a filtering method and apparatus for an intelligent driving perception system, and an electronic device. The method comprises the following steps: determining a target motion model set of an obstacle target based on motion models in a priori knowledge base, and judging whether the target motion model set is consistent with the model set of a VSIMM filter at the previous moment; if not, determining the target model set of the VSIMM filter at the current moment based on the target motion model set and the model set of the VSIMM filter at the previous moment; acquiring measurement information of the obstacle target at the current moment, so that the VSIMM filter performs filtering state estimation according to the measurement information and the target model set to obtain the total estimation output information of the VSIMM filter at the current moment; and tracking the obstacle target according to the total estimation output information. The VSIMM filter thereby improves the tracking accuracy of the obstacle target and the performance of the intelligent driving perception system, and has good practical value.

Description

Filtering method and device of intelligent driving perception system and electronic equipment
Technical Field
The invention relates to the technical field of intelligent driving environment perception, in particular to a filtering method and device of an intelligent driving perception system and electronic equipment.
Background
In an intelligent driving perception system, performance depends mainly on the capabilities of obstacle detection and obstacle target tracking, of which target tracking is a very important link. During target tracking, a filter is used to calculate the position, speed, trajectory, number, type and characteristics of surrounding obstacle targets. In the field of intelligent driving, commonly used sensors typically include laser radar, cameras, millimeter-wave radar, ultrasonic radar, and the like.
The computational optimization problem faced by target tracking filtering mainly has two aspects: first, obstacle targets in the intelligent driving environment are highly maneuverable, vary widely, and are difficult to predict; second, the vehicle environment is complex and contains a large amount of noise, the sensors have certain error noise, and fitting is difficult, which leads to problems such as low filter state estimation accuracy and even filtering divergence during target tracking. Existing methods mainly filter through adaptive filtering algorithms. Although such methods can improve obstacle target tracking accuracy to a certain extent, the strong maneuverability and randomness of the obstacle target make its motion model difficult to determine, which affects the filtering effect, reduces the tracking accuracy of the obstacle target, may even lose the target, and degrades the performance of the intelligent driving perception system.
Disclosure of Invention
In view of the above, the present invention provides a filtering method and apparatus for an intelligent driving sensing system, and an electronic device, so as to alleviate the above problems.
In a first aspect, an embodiment of the present invention provides a filtering method for an intelligent driving perception system, where the method is applied to a server of the intelligent driving perception system, the server being provided with a priori knowledge base and a variable-structure interacting multiple model (VSIMM) filter, and the method includes: determining a target motion model set of the obstacle target based on motion models in the priori knowledge base, wherein the motion models comprise at least one of: a constant velocity (CV) model, a constant acceleration (CA) model, a constant turn (CT) model, a Current model and a curvilinear motion (CM) model; judging whether the target motion model set is consistent with the model set of the VSIMM filter at the previous moment, where the model set of the VSIMM filter at the previous moment comprises the motion model corresponding to each filter in the VSIMM filter at the previous moment; if not, determining the target model set of the VSIMM filter at the current moment based on the target motion model set and the model set of the VSIMM filter at the previous moment; acquiring measurement information of the obstacle target at the current moment, so that the VSIMM filter performs filtering state estimation according to the measurement information and the target model set to obtain total estimation output information of the VSIMM filter at the current moment; and tracking the obstacle target according to the total estimation output information.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of determining whether the target motion model set is consistent with a model set at a previous time on the VSIMM filter includes: judging whether each target motion model in the target motion model set is completely the same as the motion model corresponding to each filter in the VSIMM filter at the previous moment; if so, judging that the target motion model set is consistent with a model set at the last moment of the VSIMM filter; if any difference exists, the target motion model set is judged to be inconsistent with the model set at the last moment of the VSIMM filter.
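The set-consistency judgment described above can be sketched as follows (a minimal Python sketch; the patent does not prescribe a data structure, so motion models are represented here as plain strings):

```python
def model_sets_consistent(target_models, previous_models):
    """Return True iff every target motion model matches a motion model of
    the VSIMM filter at the previous moment, and vice versa."""
    return set(target_models) == set(previous_models)

# Order does not matter; only the membership of the two sets is compared.
assert model_sets_consistent(["CV", "CT"], ["CT", "CV"])
assert not model_sets_consistent(["CV", "CT"], ["CV", "CA"])
```

The filter structure only needs to be adjusted when this check returns False.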
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of determining the target model set of the VSIMM filter at the current time based on the target motion model set and the model set of the VSIMM filter at the previous time includes: obtaining a model set of the VSIMM filter at the current moment according to a union set of the target motion model set and a model set of the VSIMM filter at the last moment; acquiring a first model probability of a target motion model set and a second model probability of the model set at the last moment of a VSIMM filter; and determining a target model set of the VSIMM filter at the current moment according to the ratio of the first model probability and the second model probability.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of determining, according to a ratio of the first model probability and the second model probability, a target model set of the VSIMM filter at the current time includes: judging whether the ratio is larger than a first threshold value or not, and if so, determining that a target model set of the VSIMM filter at the current moment is a target motion model set; if not, judging whether the ratio is smaller than a second threshold value, if so, determining that the target model set of the VSIMM filter at the current moment is the model set of the VSIMM filter at the previous moment; wherein the second threshold is less than the first threshold.
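The threshold decision above can be sketched in Python. The threshold values and the behaviour between the two thresholds (keeping the union) are illustrative assumptions; the patent only fixes that the second threshold is smaller than the first:

```python
def select_model_set(target_set, prev_set, p_first, p_second,
                     first_threshold=2.0, second_threshold=0.5):
    """Pick the VSIMM model set for the current moment from the ratio of
    the first (target-set) and second (previous-set) model probabilities."""
    ratio = p_first / p_second
    if ratio > first_threshold:
        return sorted(set(target_set))   # switch to the target motion model set
    if ratio < second_threshold:
        return sorted(set(prev_set))     # keep the previous model set
    return sorted(set(target_set) | set(prev_set))  # otherwise keep the union

assert select_model_set(["CV"], ["CT"], 0.9, 0.1) == ["CV"]
assert select_model_set(["CV"], ["CT"], 0.1, 0.9) == ["CT"]
```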
With reference to the third possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of performing, by the VSIMM filter, filter state estimation according to the measurement information and the target model set includes: determining a corresponding target filter in the VSIMM filter according to each target model in the target model set; carrying out filtering state estimation through each target filter according to the measurement information and the corresponding target model to obtain total estimation output information of the VSIMM filter at the current moment; the total estimation output information comprises a total state estimation value and a total error covariance, the total state estimation value comprises a state estimation value of each target filter at the current moment, and the total error covariance comprises an error covariance of each target filter at the current moment.
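The total estimation output can be illustrated with the standard IMM moment-matched combination (a sketch under the assumption that the total output follows the usual IMM form; the function name is hypothetical):

```python
import numpy as np

def combine_estimates(mus, states, covs):
    """Moment-matched total output of a bank of filters: the total state
    estimate is the probability-weighted mean of the per-filter estimates,
    and the total error covariance adds the spread-of-the-means term."""
    mus = np.asarray(mus, dtype=float)
    states = [np.asarray(x, dtype=float) for x in states]
    x_total = sum(m * x for m, x in zip(mus, states))
    P_total = sum(m * (np.asarray(P, dtype=float)
                       + np.outer(x - x_total, x - x_total))
                  for m, x, P in zip(mus, states, covs))
    return x_total, P_total
```

With two equally weighted one-dimensional filters the total estimate lands midway between them, and the total covariance grows by the disagreement between the filters.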
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where each target model is further configured with a model probability, and the method further includes: updating the model probability of each target model based on the likelihood function of each target model; wherein the likelihood function is calculated by:
$$
\Lambda_j(k) = \frac{1}{\sqrt{\left|2\pi S_j(k)\right|}}
\exp\!\left(-\frac{1}{2}\, v_j^{T}(k)\, S_j^{-1}(k)\, v_j(k)\right)
$$

wherein $\Lambda_j(k)$ represents the likelihood function of the jth target model at time k, $S_j(k)$ represents the measurement (innovation) covariance of the jth target model at time k, and $v_j(k)$ represents the Kalman filter residual of the jth target model at time k.
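The likelihood and the resulting model-probability update can be sketched in Python (standard Gaussian form; function names are illustrative):

```python
import numpy as np

def model_likelihood(residual, S):
    """Gaussian likelihood of one sub-filter from its Kalman residual
    v_j(k) and innovation covariance S_j(k)."""
    v = np.atleast_1d(np.asarray(residual, dtype=float))
    S = np.atleast_2d(np.asarray(S, dtype=float))
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
    return float(np.exp(-0.5 * v @ np.linalg.solve(S, v)) / norm)

def update_model_probabilities(priors, likelihoods):
    """Bayes update of the per-model probabilities (normalized to sum to 1)."""
    w = np.asarray(priors, dtype=float) * np.asarray(likelihoods, dtype=float)
    return w / w.sum()
```

A zero residual with unit covariance yields the Gaussian peak value 1/sqrt(2π), and models whose likelihood is larger gain probability in the update.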
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the step of determining a target motion model set of the obstacle target based on motion models in the a priori knowledge base includes: acquiring acquisition information of an obstacle target; the acquisition information comprises type information, position information and parameter information; determining a first set of motion models for the obstacle objective based on the type information; determining a second set of motion models for the obstacle objective based on the position information; determining a third set of motion models of the obstacle objective based on the parametric information; wherein the parameter information includes at least one of: acquiring device information, obstacle target contour information and meteorological information; and determining a target motion model set of the obstacle target according to the first motion model set, the second motion model set and the third motion model set.
In a second aspect, an embodiment of the present invention further provides a filtering apparatus for an intelligent driving perception system, where the apparatus is applied to a server of the intelligent driving perception system, the server being provided with a priori knowledge base and a variable-structure interacting multiple model (VSIMM) filter, and the apparatus includes: a first determining module, configured to determine a target motion model set of the obstacle target based on motion models in the priori knowledge base, where the motion models include at least one of: a constant velocity (CV) model, a constant acceleration (CA) model, a constant turn (CT) model, a Current model and a curvilinear motion (CM) model; a judging module, configured to judge whether the target motion model set is consistent with the model set of the VSIMM filter at the previous moment, where the model set of the VSIMM filter at the previous moment comprises the motion model corresponding to each filter in the VSIMM filter at the previous moment; a second determining module, configured to determine, if the two sets are inconsistent, the target model set of the VSIMM filter at the current moment based on the target motion model set and the model set of the VSIMM filter at the previous moment; a filtering estimation module, configured to acquire measurement information of the obstacle target at the current moment, so that the VSIMM filter performs filtering state estimation according to the measurement information and the target model set to obtain total estimation output information of the VSIMM filter at the current moment; and a tracking module, configured to track the obstacle target according to the total estimation output information.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the filtering method of the intelligent driving perception system according to the first aspect when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the filtering method of the intelligent driving perception system of the first aspect are executed.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a filtering method, a filtering device and electronic equipment of an intelligent driving perception system, wherein a target motion model of an obstacle target is dynamically determined based on a motion model in a priori knowledge base, and a target model set of a VSIMM filter at the current moment is determined according to the target motion model so as to carry out filtering processing, so that the VSIMM filter can be rapidly converged when the obstacle target maneuvers, the tracking precision of the obstacle target is improved, the performance of the intelligent driving perception system is improved, and the intelligent driving perception system has a good practical value.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of a filtering method of an intelligent driving sensing system according to an embodiment of the present invention;
fig. 2 is a flowchart of a filtering method of an intelligent driving sensing system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a road topography provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a set of object motion models for constructing an obstacle object according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of determining a target model set according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a VSIMM filter according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a filtering apparatus of an intelligent driving sensing system according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the problem that the filtering effect is influenced because the motion model of the existing barrier target is difficult to determine, the embodiment of the invention provides the filtering method and the filtering device of the intelligent driving perception system and the electronic equipment, so that the tracking precision of the barrier target is improved, the performance of the intelligent driving perception system is improved, and the practical value is good.
To facilitate understanding of the present embodiment, first, a filtering method of an intelligent driving sensing system according to an embodiment of the present invention is described in detail below.
The embodiment of the invention provides a filtering method of an intelligent driving perception system, which is applied to the intelligent driving perception system. The intelligent driving perception system comprises a server and a detection unit connected with the server; the detection unit comprises a plurality of detection elements such as sensors, so as to obtain environment perception information for the intelligent driving perception system, where the environment perception information comprises sensor information, electronic map information, other environment information and the like.
As shown in fig. 1, the server constructs a priori knowledge base according to the acquired environment perception information; the priori knowledge base outputs a corresponding filter motion model set, the model set of the VSIMM (Variable Structure Interacting Multiple Model) filter is adjusted according to the filter motion model set and the model probabilities calculated by the VSIMM filter, and filtering state estimation is performed based on the adjusted VSIMM filter according to the input motion data of the obstacle target, so as to track the obstacle target. It should be noted that the VSIMM filter is also referred to as a VSIMM filtering algorithm or VSIMM algorithm.
Based on the server, a filtering method of an intelligent driving sensing system provided by the embodiment of the invention is shown in fig. 2, and the method includes the following steps:
step S202, determining a target motion model set of the obstacle target based on a motion model in a priori knowledge base;
wherein, in the intelligent driving perception system, the motion model of the obstacle target comprises at least one of the following: a CV (Constant Velocity) model, a CA (Constant Acceleration) model, a CT (Constant Turn) model, a Current model, and a CM (Curvilinear Motion) model. It should be noted that, the motion models of other obstacle targets may be set according to actual situations, and this is not limited in the embodiment of the present invention.
For ease of understanding, the various motion models are described herein as follows:
(1) a CV model; wherein the CV model is calculated according to:
$$
\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \\ y_{k+1} \\ \dot{y}_{k+1} \end{bmatrix}
=
\begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{k} \\ \dot{x}_{k} \\ y_{k} \\ \dot{y}_{k} \end{bmatrix}
$$

wherein $x_k$ and $y_k$ represent the displacement of the obstacle target in the x and y directions at time k, $\dot{x}_k$ and $\dot{y}_k$ represent the corresponding velocities at time k, $x_{k+1}$, $\dot{x}_{k+1}$, $y_{k+1}$ and $\dot{y}_{k+1}$ represent the same quantities at time k+1, and T represents the time interval from time k to time k+1.
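A minimal sketch of propagating a state through the CV model (standard form, matching the variable list above):

```python
import numpy as np

def cv_transition(T):
    """CV state transition for state [x, vx, y, vy] over time interval T."""
    return np.array([[1.0, T,   0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, T],
                     [0.0, 0.0, 0.0, 1.0]])

state = np.array([0.0, 2.0, 1.0, -1.0])   # x, vx, y, vy at time k
nxt = cv_transition(0.5) @ state          # position advances by velocity * T
```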
(2) A CA model; wherein the CA model is calculated according to the following formula:
$$
\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \\ \ddot{x}_{k+1} \\ y_{k+1} \\ \dot{y}_{k+1} \\ \ddot{y}_{k+1} \end{bmatrix}
=
\begin{bmatrix}
1 & T & T^2/2 & 0 & 0 & 0 \\
0 & 1 & T & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & T & T^2/2 \\
0 & 0 & 0 & 0 & 1 & T \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_{k} \\ \dot{x}_{k} \\ \ddot{x}_{k} \\ y_{k} \\ \dot{y}_{k} \\ \ddot{y}_{k} \end{bmatrix}
$$

wherein $x_k$, $\dot{x}_k$ and $\ddot{x}_k$ represent the displacement, velocity and acceleration of the obstacle target in the x direction at time k, $y_k$, $\dot{y}_k$ and $\ddot{y}_k$ represent the corresponding quantities in the y direction, the subscript k+1 denotes the same quantities at time k+1, and T represents the time interval.
(3) A CT model; wherein the CT model is calculated according to:
$$
\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \\ y_{k+1} \\ \dot{y}_{k+1} \end{bmatrix}
=
\begin{bmatrix}
1 & \sin\omega T/\omega & 0 & -(1-\cos\omega T)/\omega \\
0 & \cos\omega T & 0 & -\sin\omega T \\
0 & (1-\cos\omega T)/\omega & 1 & \sin\omega T/\omega \\
0 & \sin\omega T & 0 & \cos\omega T
\end{bmatrix}
\begin{bmatrix} x_{k} \\ \dot{x}_{k} \\ y_{k} \\ \dot{y}_{k} \end{bmatrix}
$$

wherein $x_k$ and $y_k$ represent the displacement of the obstacle target in the x and y directions at time k, $\dot{x}_k$ and $\dot{y}_k$ represent the corresponding velocities, the subscript k+1 denotes the same quantities at time k+1, T represents the time interval, and ω represents the angular (turn) velocity of the obstacle target.
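The CT transition can be sketched analogously (standard coordinated-turn form; note that a full turn, ωT = 2π, brings the transition back to the identity):

```python
import numpy as np

def ct_transition(T, omega):
    """Coordinated-turn transition for state [x, vx, y, vy]
    with known turn rate omega."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega],
                     [0.0, c,                 0.0, -s],
                     [0.0, (1.0 - c) / omega, 1.0, s / omega],
                     [0.0, s,                 0.0, c]])
```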
(4) A Current model; wherein the Current model is calculated according to the following formula:
$$
\frac{d}{dt}\begin{bmatrix} x(t) \\ \dot{x}(t) \\ \ddot{x}(t) \end{bmatrix}
=
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\alpha \end{bmatrix}
\begin{bmatrix} x(t) \\ \dot{x}(t) \\ \ddot{x}(t) \end{bmatrix}
+
\begin{bmatrix} 0 \\ 0 \\ \alpha \end{bmatrix} \bar{a}
+
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} w(t)
$$

wherein $x(t)$, $\dot{x}(t)$ and $\ddot{x}(t)$ represent the displacement, velocity and acceleration of the obstacle target in the x direction, $\bar{a}$ represents the mean of the random (maneuvering) acceleration, w(t) represents zero-mean white noise, $\sigma_a^2$ represents the variance of the maneuvering acceleration, and α represents the inverse of the maneuvering-acceleration time constant (the maneuvering frequency). It should be noted that $\bar{a}$ is taken as constant within the sampling interval; the maneuvering frequency α is generally obtained from experience and determined through actual measurement, and can be set according to the actual situation, which is not limited in the embodiment of the present invention.
The discretized equation is as follows:

$$
\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \\ \ddot{x}_{k+1} \end{bmatrix}
=
\begin{bmatrix}
1 & T & (\alpha T - 1 + e^{-\alpha T})/\alpha^2 \\
0 & 1 & (1 - e^{-\alpha T})/\alpha \\
0 & 0 & e^{-\alpha T}
\end{bmatrix}
\begin{bmatrix} x_{k} \\ \dot{x}_{k} \\ \ddot{x}_{k} \end{bmatrix}
+ G(T,\alpha)\,\bar{a} + w_k
$$

wherein $x_k$, $\dot{x}_k$ and $\ddot{x}_k$ represent the displacement, velocity and acceleration of the obstacle target in the x direction at time k, the subscript k+1 denotes the same quantities at time k+1, $\bar{a}$ represents the mean of the random acceleration and $G(T,\alpha)$ the corresponding input vector, α represents the maneuvering frequency, T represents the time interval, and $w_k$ represents the discretized zero-mean Gaussian white noise.
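The discretized transition matrix can be sketched as follows (one axis, state [x, ẋ, ẍ]; the Singer-type discretization is an assumption consistent with the form above, and it reduces to e.g. e^{-α} terms for α = T = 1):

```python
import numpy as np

def current_model_transition(T, alpha):
    """Discrete transition of the Current model for one axis,
    state [x, vx, ax], maneuvering frequency alpha."""
    e = np.exp(-alpha * T)
    return np.array([[1.0, T,   (alpha * T - 1.0 + e) / alpha**2],
                     [0.0, 1.0, (1.0 - e) / alpha],
                     [0.0, 0.0, e]])
```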
(5) A CM model; wherein the CM model is calculated according to:
$$
\begin{aligned}
x_{k+1} &= x_k + T\,\dot{x}_k, & \dot{x}_{k+1} &= \left(v_k + a_t\,\Delta t_k\right)\cos(\theta_k + \Delta\theta),\\
y_{k+1} &= y_k + T\,\dot{y}_k, & \dot{y}_{k+1} &= \left(v_k + a_t\,\Delta t_k\right)\sin(\theta_k + \Delta\theta)
\end{aligned}
$$

wherein $x_k$ and $y_k$ represent the displacement of the obstacle target in the x and y directions at time k, $\dot{x}_k$ and $\dot{y}_k$ represent the corresponding velocities, $v_k = \sqrt{\dot{x}_k^2 + \dot{y}_k^2}$ represents the speed at time k, the subscript k+1 denotes the same quantities at time k+1, T represents the time interval, Δθ represents the amount of change of the velocity direction angle, $a_t$ represents the tangential acceleration of the curvilinear motion, $\Delta t_k$ represents the time difference between time k+1 and time k, and $\theta_k$ represents the velocity direction angle of the obstacle target at time k.

Wherein the amount of change of the velocity direction angle is calculated according to the following equation:

$$\Delta\theta = \theta_{k+1} - \theta_k \qquad (6)$$

where Δθ represents the amount of change of the velocity direction angle, $\theta_{k+1}$ represents the velocity direction angle of the obstacle target at time k+1, and $\theta_k$ represents that at time k.
Therefore, according to the motion model prestored in the prior knowledge base, a target motion model set of the obstacle target can be constructed, wherein one possible construction mode of the target motion model set is as follows:
(1) Acquiring the collected information of the obstacle target, where the collected information includes, but is not limited to, type information, position information and parameter information; it can be set according to the actual situation, which is not limited in the embodiment of the present invention.
(2) Determining a first motion model set of the obstacle target based on the type information. Specifically, the corresponding first motion model set is determined according to the type information of the obstacle target. For example, a large motor vehicle such as an automobile, due to its fixed gears, generally moves at a constant speed, turns at a constant rate and moves along curves; that is, when the type information of the obstacle target is a large motor vehicle, the corresponding first motion model set is {CV model, CT model, CM model}. A pedestrian generally walks or sprints in a straight line, which suits uniform motion and the Current model; that is, when the type information of the obstacle target is a pedestrian, the corresponding first motion model set is {CV model, Current model}. For small vehicles such as bicycles or motorcycles, a certain amount of acceleration and deceleration may exist; that is, when the type information of the obstacle target is a small vehicle, the corresponding first motion model set is {CV model, CA model, Current model}, and so on. Therefore, the first motion model set corresponding to each type of information can be set according to the actual situation.
(3) Determining a second set of motion models for the obstacle objective based on the position information; specifically, the position information of the obstacle target, such as a straight lane, a u-turn lane, and the like, is obtained according to the high-precision electronic map and the position matching of the obstacle target in the electronic map, as shown in fig. 3. For the motor vehicle, traffic rules must be followed, for example, on a straight-ahead lane, the corresponding second set of motion models is { CV model, CA model, Current model }; on the overpass, the corner lane or the U-turn lane, the corresponding second motion model set is a { CT model, a Current model }; in the area to be turned or the variable roads such as the branch road and the ramp, the corresponding second motion model set is a { CM model, a Current model } and the like; the specific setting can be carried out according to the actual situation.
(4) Determining a third set of motion models of the obstacle objective based on the parametric information; wherein the parameter information includes at least one of: acquiring device information, obstacle target contour information and meteorological information; the information of the acquisition device, such as sensor information and the like, is rarely considered in the existing method, so that the third motion model set is defaulted to be a complete set, namely the third motion model set is { a CV model, a CA model, a CT model, a CM model and a Current model }; the other non-corpus conditions can be set according to the actual scene.
(5) And determining a target motion model set of the obstacle target according to the first motion model set, the second motion model set and the third motion model set. Specifically, an intersection of the first motion model set, the second motion model set and the third motion model set is determined, and the intersection is determined as a target motion model set of the obstacle target, so that the target motion model set can accurately simulate the motion of the obstacle target.
For ease of understanding, the sensors are illustrated here. As shown in fig. 4, the collected information includes sensor sensing information, high-precision map information, and other information of the environment. The sensor sensing information comprises sensor information and the obstacle category: a sub-model set M1 of the obstacle target can be determined from the sensor information, and a sub-model set M2 from the obstacle category. The sensor sensing information and the high-precision map information can further be matched to determine the terrain at the obstacle, and a sub-model set M3 of the obstacle target is determined based on this terrain; a sub-model set Mn is determined based on the other sensing information in the other information of the environment, where n is the number of pieces of collected information. Finally, the target motion model set of the obstacle target is determined from the intersection of the sub-model sets M1, M2, M3, …, Mn, i.e. the target motion model set M = M1 ∩ M2 ∩ M3 ∩ … ∩ Mn.
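The intersection M = M1 ∩ M2 ∩ … ∩ Mn can be sketched as follows (Python; the empty-intersection fallback to the union is an added assumption, not stated in the text):

```python
def target_motion_model_set(*sub_model_sets):
    """Intersect all sub-model sets derived from type, position and
    parameter information; fall back to the union if the intersection
    is empty (fallback is an assumption)."""
    sets = [set(m) for m in sub_model_sets]
    common = set.intersection(*sets)
    return sorted(common) if common else sorted(set.union(*sets))

m1 = {"CV", "CT", "CM"}                   # from obstacle type (large vehicle)
m2 = {"CV", "CA", "Current"}              # from road position (straight lane)
m3 = {"CV", "CA", "CT", "CM", "Current"}  # parameter info defaults to full set
assert target_motion_model_set(m1, m2, m3) == ["CV"]
```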
It should be noted that, a target motion model set of a specific obstacle target may be constructed according to an actual application scenario, so as to ensure that the target motion model set can accurately simulate the motion of the obstacle target, which is not limited in the embodiment of the present invention.
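To make the intersection construction concrete, the following is a minimal Python sketch; the model names, the helper name and the treatment of "no constraint" sources as the complete set are illustrative assumptions, not part of the embodiment:

```python
# Minimal sketch (names illustrative): derive the target motion model set
# by intersecting the sub-model sets obtained from each prior-knowledge source.
FULL_SET = {"CV", "CA", "CT", "CM", "Current"}

def target_model_set(*sub_model_sets):
    """Intersect all sub-model sets; an empty sub-set is treated here as
    'no constraint', i.e. the complete set, so it cannot empty the result."""
    result = set(FULL_SET)
    for m in sub_model_sets:
        if m:  # skip sources that impose no constraint
            result &= set(m)
    return result

# Illustrative sub-model sets: obstacle category, terrain matching, sensor info
m1 = {"CV", "CA", "CM", "Current"}
m2 = {"CV", "CT", "CM"}
m3 = set()  # acquisition-device info imposes no constraint -> complete set
print(sorted(target_model_set(m1, m2, m3)))  # -> ['CM', 'CV']
```

An empty intersection would mean the prior-knowledge sources contradict one another; how to handle that case is left to the actual scene.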
In practical applications, the maneuver type, maneuver intensity and maneuver time of the obstacle target are unknown, and the obstacle target may change its movement pattern many times while maneuvering, for example by turning, accelerating or climbing. The motion model is therefore generally difficult to determine: the assumed motion model fails to match the actual motion pattern of the obstacle target, causing the filter to diverge, the tracking accuracy to decline, and even the target to be lost. The embodiment of the invention determines the target motion model set of the obstacle target using prior knowledge such as map environment information and the type information of the obstacle target, thereby alleviating the difficulty of determining the motion model of the obstacle target. By building the prior knowledge base, the target motion model set of the obstacle target is refined so that it better fits the actual motion of the obstacle target, improving the accuracy of obstacle target state estimation.
Step S204, judging whether the target motion model set is consistent with a model set of the VSIMM filter at the previous moment;
the model set of the VSIMM filter at the previous moment comprises the motion model corresponding to each filter in the VSIMM filter at the previous moment. Specifically, it is judged whether each target motion model in the target motion model set is identical to the motion model corresponding to each filter of the VSIMM filter at the previous moment. If so, the target motion model set is judged to be consistent with the model set of the VSIMM filter at the previous moment, i.e. no new filter model needs to be activated. If any difference exists, the target motion model set is judged to be inconsistent with the model set of the VSIMM filter at the previous moment, i.e. compared with the previous moment, a new filter model exists in the model set at the current moment. In this case, the model set of the VSIMM filter at the current moment needs to be determined, i.e. the filter models of the VSIMM filter are adjusted.
Step S206, if not, determining a target model set of the VSIMM filter at the current moment based on the target motion model set and a model set of the VSIMM filter at the last moment;
in practical applications, adaptive filtering algorithms enable the target tracking system to adjust itself according to the target's maneuvering, greatly improving the tracking accuracy. Common adaptive filtering algorithms fall mainly into three types: detection-based adaptive filtering, real-time identification adaptive filtering, and multiple-model methods. The first two modify a model or the noise during filtering, but generally exhibit hysteresis and depend heavily on the model. A multiple-model method such as the IMM (Interacting Multiple Model) filtering algorithm does not need maneuver detection; instead, it switches between modes by adjusting the probability of each model, achieving comprehensive adaptation. However, these algorithms do not change the fixed structure of the IMM itself, and still cannot solve the performance degradation caused by the interference of redundant target motion models in the IMM algorithm. The embodiment of the invention changes the fixed structure of the IMM algorithm using prior knowledge, external sensor information and the like, i.e. the VSIMM algorithm is used to improve filter performance, for example by dynamically adding or removing motion models of the obstacle target, or correcting its estimated position, so that when the obstacle target maneuvers, the tracking filters in the VSIMM filter converge more quickly, higher tracking accuracy is obtained, and the operation speed is improved.
Specifically, since the target motion models in the target motion model set are not all identical to the motion models corresponding to the filters of the VSIMM filter at the previous moment, the filter models are adjusted within the target motion model set determined from the prior knowledge base according to certain rules. First, the model set of the VSIMM filter at the current moment is obtained as the union of the target motion model set and the model set of the VSIMM filter at the previous moment. Then, the first model probability of the target motion model set and the second model probability of the model set of the VSIMM filter at the previous moment are obtained, and their ratio is computed. If the ratio is greater than a first threshold, the target model set of the VSIMM filter at the current moment is determined to be the target motion model set; otherwise, if the ratio is smaller than a second threshold, the target model set of the VSIMM filter at the current moment is determined to be the model set of the VSIMM filter at the previous moment. It should be noted that the specific values of the first and second thresholds may be set according to the actual situation, as long as the second threshold is smaller than the first threshold.
For ease of understanding, the model set M_k of the VSIMM filter at the previous moment, i.e. time k, and the target motion model set M_m are used for illustration. As shown in fig. 5, according to M_k and M_m, it is judged whether the model set of the VSIMM filter at the current moment, i.e. time k+1, needs to be updated. If not, the model set of the VSIMM filter at time k+1 is M_{k+1} = M_k. If there is an update, the model set of the VSIMM filter at time k+1 is determined from the target motion model set M_m and the model set M_k at time k as M_{k+1} = M_k ∪ M_m, and the first model probability u_{M_m}(k) of the target motion model set M_m and the second model probability u_{M_k}(k) of the model set M_k are obtained, wherein the model probability of a model set is calculated according to:

u_M(k) = Σ_{i∈M} u_i(k)

wherein u_M(k) represents the model probability of the model set M, i ranges over the models in the model set M, and u_i(k) represents the probability of each model in the model set.

Therefore, the first model probability u_{M_m}(k) and the second model probability u_{M_k}(k) can be calculated according to the above formula, and the ratio of the first model probability to the second model probability is calculated according to:

t = u_{M_m}(k) / u_{M_k}(k)

wherein t represents the ratio of the first model probability to the second model probability, u_{M_m}(k) represents the first model probability of the target motion model set M_m at time k, and u_{M_k}(k) represents the second model probability of the model set M_k of the VSIMM filter at time k.

After the ratio is obtained, it is judged whether the ratio t is larger than a first threshold t_1; if so, the target model set of the VSIMM filter at time k+1 is determined to be the target motion model set, i.e. M_{k+1} = M_m. If not, it is judged whether the ratio t is smaller than a second threshold t_2; if so, the target model set of the VSIMM filter at time k+1 is determined to be the model set at time k, i.e. M_{k+1} = M_k. In this way, the target model set M_{k+1} at time k+1 is determined from the target motion model set M_m and the model probabilities at time k. If the ratio t is greater than or equal to the second threshold t_2 and less than or equal to the first threshold t_1, then M_{k+1} = M_k ∪ M_m remains the target model set at time k+1.
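The model-set switching rule above can be sketched as follows; the function name, the threshold values for t_1 and t_2, and the dictionary representation of model probabilities are illustrative assumptions:

```python
def update_model_set(M_k, M_m, u, t1=0.9, t2=0.1):
    """Sketch of the model-set switching rule.

    M_k : model set of the VSIMM filter at time k
    M_m : target motion model set from the prior knowledge base
    u   : dict of model name -> model probability u_i(k)
    t1, t2 : first/second thresholds with t2 < t1 (values illustrative)."""
    if set(M_m) == set(M_k):          # sets consistent: no adjustment needed
        return set(M_k)
    u_mm = sum(u.get(i, 0.0) for i in M_m)   # first model probability
    u_mk = sum(u.get(i, 0.0) for i in M_k)   # second model probability
    t = u_mm / u_mk                           # ratio of the two probabilities
    if t > t1:
        return set(M_m)               # prior-knowledge set takes over
    if t < t2:
        return set(M_k)               # keep the previous model set
    return set(M_k) | set(M_m)        # otherwise run the union
```

For example, with u = {"CV": 0.5, "CA": 0.1, "CT": 0.38}, M_k = {"CV", "CA"} and M_m = {"CV", "CT"}, the ratio is about 1.47 > t_1, so the prior-knowledge set M_m is adopted.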
Step S208, obtaining the measurement information of the obstacle target at the current moment, so that the VSIMM filter performs filtering state estimation according to the measurement information and the target model set, obtaining the total estimation output information of the VSIMM filter at the current moment;
the basic idea of the VSIMM filter is as follows: first, a total model set is established by combining a plurality of independent and mutually compatible model set sequences; then, at each moment in the target tracking process, the model set in the total model set that best matches the target's motion state is selected according to prior knowledge and the estimated state of the target. The model sets can be converted into one another according to certain rules, i.e. each filter model in a model set is independent and interchangeable. Therefore, for the determined target model set, the VSIMM filter further allocates a corresponding target model to each of its filters according to an allocation rule, so that each filter performs filtering according to its allocated target model.
Specifically, a corresponding target filter in the VSIMM filter is determined according to each target model in the target model set; then, each target filter performs filtering state estimation according to the measurement information and its corresponding target model, obtaining the total estimation output information of the VSIMM filter at the current moment. The total estimation output information includes a total state estimate and a total error covariance: the total state estimate comprises the state estimate of each target filter at the current moment, and the total error covariance comprises the error covariance of each target filter at the current moment. The measurement information of the obstacle target includes, but is not limited to, position information and speed information of the obstacle target, and may be set according to the actual situation.
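In the usual formulation, each target filter's per-model estimation is one Kalman predict/update cycle. The following hedged sketch shows one such cycle and the residual v and innovation covariance S that later feed the likelihood computation; the matrix names F, Q, H, R follow standard Kalman notation and are assumptions here, not symbols from the embodiment:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a single model-matched filter (sketch).
    Also returns the residual v and innovation covariance S, which feed
    the likelihood used later for the model-probability update."""
    x_pred = F @ x                        # state prediction under this model
    P_pred = F @ P @ F.T + Q              # covariance prediction
    v = z - H @ x_pred                    # measurement residual
    S = H @ P_pred @ H.T + R              # innovation (measurement) covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ v                # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # updated error covariance
    return x_new, P_new, v, S
```

Each target model in the set would supply its own F and Q (e.g. a CV model versus a CT model), while H and R come from the measurement model of the sensor.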
For ease of understanding, the model set M_k of the VSIMM filter at time k and the target motion model set M_m are again used for illustration. If the filter models of the VSIMM filter need to be adjusted at time k+1, then for the determined target model set M_{k+1} at time k+1, an initial model probability may be assigned to the target model corresponding to each target filter and initialized, and the model probability of each target filter may also be adjusted according to a transition probability matrix. The transition probability matrix comprises a plurality of transition probabilities: for each target filter, it holds the transition probabilities between the different target models corresponding to that filter.
For the target model set M_{k+1} of the VSIMM filter at time k+1, the inputs at time k+1 of the target filters corresponding to its r target models comprise: the hybrid state estimates and hybrid error covariances output at time k. The hybrid state estimate at time k may be calculated according to:

x̂_{0j}(k) = Σ_{i∈M_k} x̂_i(k) u_{i|j}(k)

wherein x̂_{0j}(k) represents the hybrid state estimate at time k for target model j, x̂_i(k) represents the total state estimate of target model i at time k, and u_{i|j}(k) represents the mixing probability from target model i to target model j at time k; target model i belongs to the model set M_k of the VSIMM filter at time k, and target model j belongs to the model set M_{k+1} of the VSIMM filter at time k+1.
It should be noted that the above mixing probability is the transition probability from target model i to target model j at time k+1, determined from the model probabilities of target models i and j and their initial transition probabilities. By adjusting the inter-model transition probabilities at each moment, the problem in existing methods, namely that the transition probability matrix is uncertain owing to the uncertainty of the obstacle target's maneuvering, so that switches of the real motion model of the obstacle target cannot be reflected well, is alleviated, and the tracking performance is improved.
And the hybrid error covariance at time k is calculated according to:

P_{0j}(k) = Σ_{i∈M_k} u_{i|j}(k) { P_i(k) + [x̂_i(k) − x̂_{0j}(k)] [x̂_i(k) − x̂_{0j}(k)]^T }

wherein P_{0j}(k) represents the hybrid error covariance at time k for target model j, P_i(k) represents the total error covariance of target model i at time k, x̂_i(k) represents the total state estimate of target model i at time k, x̂_{0j}(k) represents the hybrid state estimate at time k, and u_{i|j}(k) represents the mixing probability from target model i to target model j at time k; target model i belongs to the model set M_k of the VSIMM filter at time k, and target model j belongs to the model set M_{k+1} of the VSIMM filter at time k+1.
Wherein the mixing probability may be calculated according to:

u_{i|j}(k) = p_{ij} u_i(k) / ū_j(k+1)

wherein u_{i|j}(k) represents the mixing probability from target model i to target model j at time k, p_{ij} represents the transition probability from target model i to target model j, u_i(k) represents the model probability of target model i at time k, and ū_j(k+1) represents the predicted probability of target model j.
The predicted probability of target model j is calculated according to:

ū_j(k+1) = Σ_{i∈M_k} p_{ij} u_i(k)

wherein ū_j(k+1) represents the predicted probability of target model j, p_{ij} represents the transition probability from target model i to target model j, and u_i(k) represents the model probability of target model i at time k; target model i belongs to the model set M_k of the VSIMM filter at time k, and target model j belongs to the model set M_{k+1} of the VSIMM filter at time k+1.
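The mixing step described above (predicted probabilities, mixing probabilities, mixed state estimates and covariances) can be sketched with NumPy as follows; the function name and the assumption that all models share one state dimension are illustrative:

```python
import numpy as np

def imm_mix(p, u, xs, Ps):
    """IMM mixing step sketch (all models assumed to share one state space).

    p  : (r, r) transition probability matrix, p[i, j] = p_ij
    u  : (r,)   model probabilities u_i(k)
    xs : (r, n) per-model state estimates at time k
    Ps : (r, n, n) per-model error covariances at time k."""
    u_pred = p.T @ u                          # predicted probability of each model j
    mix = (p * u[:, None]) / u_pred[None, :]  # mixing probabilities u_{i|j}(k)
    x0 = mix.T @ xs                           # hybrid state estimate per model j
    P0 = np.zeros_like(Ps)                    # hybrid error covariances
    for j in range(len(u)):
        for i in range(len(u)):
            d = (xs[i] - x0[j])[:, None]
            P0[j] += mix[i, j] * (Ps[i] + d @ d.T)
    return u_pred, mix, x0, P0
```

The columns of `mix` each sum to one, since the mixing probabilities over source models i form a distribution for every target model j.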
At this point, the target filters of the target model set M_{k+1} perform filtering according to the hybrid state estimates and hybrid error covariances at time k, obtaining the total estimation output information of each target filter at time k+1, namely the total state estimate x̂_j(k+1) and the total error covariance P_j(k+1) of each target filter. The model probability of target model j is then updated based on the likelihood function of target model j, which is calculated according to:

Λ_j(k+1) = |2π S_j(k+1)|^{−1/2} exp{ −(1/2) v_j(k+1)^T S_j(k+1)^{−1} v_j(k+1) }

wherein Λ_j(k+1) represents the likelihood function of the j-th target model, i.e. target model j, at time k+1, S_j(k+1) represents the measurement covariance of target model j at time k+1, and v_j(k+1) represents the Kalman filter residual of target model j at time k+1.
The model probability of target model j at time k+1 is updated according to:

u_j(k+1) = Λ_j(k+1) ū_j(k+1) / C

wherein u_j(k+1) represents the model probability of target model j at time k+1, Λ_j(k+1) represents the likelihood function of target model j at time k+1, ū_j(k+1) represents the predicted probability of target model j, and C represents the model probability normalization constant.
Wherein the model probability normalization constant is calculated according to:

C = Σ_{j∈M_{k+1}} Λ_j(k+1) ū_j(k+1)

wherein C represents the model probability normalization constant, Λ_j(k+1) represents the likelihood function of target model j at time k+1, and ū_j(k+1) represents the predicted probability of target model j; target model j belongs to the model set M_{k+1} of the VSIMM filter at time k+1.
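The likelihood-based model probability update and its normalization can be sketched as follows; the function names are illustrative, and the Gaussian likelihood follows the standard form with residual v and measurement covariance S:

```python
import numpy as np

def gaussian_likelihood(v, S):
    """Likelihood of one target model from its Kalman residual v
    and measurement covariance S."""
    return np.exp(-0.5 * v @ np.linalg.solve(S, v)) / np.sqrt(
        np.linalg.det(2.0 * np.pi * S))

def update_model_probs(likelihoods, u_pred):
    """u_j(k+1) = Lambda_j * u_pred_j / C, with C normalizing the sum to 1."""
    unnorm = likelihoods * u_pred
    return unnorm / unnorm.sum()
```

For a one-dimensional residual of zero with unit covariance, the likelihood reduces to 1/√(2π), and the updated probabilities always sum to one by construction.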
According to the above formulas, the total estimation output information of the target model set M_{k+1} of the VSIMM filter at time k+1 can be calculated. The total state estimate is calculated according to:

x̂(k+1) = Σ_{j∈M_{k+1}} x̂_j(k+1) u_j(k+1)

wherein x̂(k+1) represents the total state estimate of the target model set M_{k+1}, x̂_j(k+1) represents the total state estimate of target model j at time k+1, and u_j(k+1) represents the model probability of target model j at time k+1.
And the total error covariance is calculated according to:

P(k+1) = Σ_{j∈M_{k+1}} u_j(k+1) { P_j(k+1) + [x̂_j(k+1) − x̂(k+1)] [x̂_j(k+1) − x̂(k+1)]^T }

wherein P(k+1) represents the total error covariance of the target model set M_{k+1}, u_j(k+1) represents the model probability of target model j at time k+1, P_j(k+1) represents the total error covariance of target model j at time k+1, x̂_j(k+1) represents the total state estimate of target model j at time k+1, and x̂(k+1) represents the total state estimate of the target model set M_{k+1}.
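The output combination above can be sketched as follows; the function name is illustrative, and the covariance includes the spread-of-the-means term from the formula:

```python
import numpy as np

def imm_output(u, xs, Ps):
    """Combine per-model estimates into the total output (sketch).

    u  : (r,)   updated model probabilities u_j(k+1)
    xs : (r, n) per-model state estimates at time k+1
    Ps : (r, n, n) per-model error covariances at time k+1."""
    x = u @ xs                           # total state estimate
    P = np.zeros_like(Ps[0])
    for j in range(len(u)):
        d = (xs[j] - x)[:, None]
        P += u[j] * (Ps[j] + d @ d.T)    # includes spread-of-the-means term
    return x, P
```

Note that P exceeds the weighted average of the per-model covariances whenever the per-model estimates disagree, which is exactly the disagreement term in the formula above.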
This is illustrated here for ease of understanding. As shown in fig. 6, based on the measurement information at time k+1, in the target model set M_{k+1} of the VSIMM filter at time k+1, the first target filter corresponding to the first target model outputs first estimation output information and a first model probability, the second target filter corresponding to the second target model outputs second estimation output information and a second model probability, and so on up to the Nth target filter corresponding to the Nth target model, which outputs Nth estimation output information and an Nth model probability. The total estimation output information corresponding to the target model set M_{k+1} can then be obtained from the first through Nth estimation output information. In addition, the first through Nth model probabilities can each be updated based on the likelihood function, improving the accuracy of the model probability corresponding to each target filter at every moment and thereby the target tracking performance.
Therefore, by adjusting the transition probabilities through the mixing probabilities of the VSIMM filter and updating the model probability of each target filter through the likelihood function, three problems are alleviated. First, the original method of setting a fixed transition probability matrix in the VSIMM filter cannot reflect well the switching of the target's real motion model, owing to the uncertainty of the obstacle target's maneuvering. Second, the model probabilities of the obstacle target change greatly in different environments because of interference from external environmental information, so that the original setting method struggles to obtain accurate model probabilities and the tracking performance declines. Third, the measurement model of the sensor is difficult to determine while the measurement noise variance changes constantly, so that tracking accuracy can only be ensured by real-time estimation. The target tracking performance is thereby improved.
Step S210, tracking the obstacle target according to the total estimation output information.
Therefore, the filtering method of the intelligent driving perception system makes full use of various kinds of prior knowledge of the system to constrain the motion model of the obstacle target and eliminate wrong motion models, thereby determining the target motion model of the obstacle target. This maximizes the utilization of prior knowledge information, reduces the parallel operation of a large number of motion models, and ensures that the target motion model of the obstacle target better conforms to its actual motion.
In addition, the VSIMM algorithm alleviates the divergence of conventional tracking filters that causes tracking failure, as well as the difficulty, in the generic VSIMM algorithm, of determining the filter's target model set, which reduces tracking accuracy and increases computation time. By determining the target model set of the VSIMM filter at each moment, the computation time of the VSIMM filter is reduced and the state estimation accuracy of its tracking filtering is improved, thereby improving the tracking accuracy of the obstacle target and the performance of the intelligent driving perception system, with good practical value.
On the basis of the above embodiment, the embodiment of the invention also provides a filtering device of the intelligent driving perception system, which is applied to a server of the intelligent driving perception system, wherein the server is provided with a priori knowledge base and a variable-structure interactive multi-model VSIMM filter. As shown in fig. 7, the apparatus includes a first determining module 71, a judging module 72, a second determining module 73, a filtering estimating module 74 and a tracking module 75 connected in sequence; the functions of each module are as follows:
a first determining module 71, configured to determine a target motion model set of the obstacle target based on motion models in the prior knowledge base, where the motion models include at least one of: a constant velocity (CV) model, a constant acceleration (CA) model, a constant turn rate (CT) model, a Current model and a curvilinear motion (CM) model;
a judging module 72, configured to judge whether the target motion model set is consistent with a model set of the VSIMM filter at a previous time; the model set of the VSIMM filter at the previous moment comprises a motion model corresponding to each filter in the VSIMM filter at the previous moment;
a second determining module 73, configured to determine, if the sets are inconsistent, a target model set of the VSIMM filter at the current moment based on the target motion model set and the model set of the VSIMM filter at the previous moment;
the filtering estimation module 74 is configured to obtain measurement information of the obstacle target at the current time, so that the VSIMM filter performs filtering state estimation according to the measurement information and the target model set to obtain total estimation output information of the VSIMM filter at the current time;
a tracking module 75 for tracking the obstacle target according to the total estimated output information.
The filtering device of the intelligent driving sensing system provided by the embodiment of the invention dynamically determines the target motion model of the obstacle target based on the motion model in the priori knowledge base, and determines the target model set of the VSIMM filter at the current moment according to the target motion model so as to carry out filtering processing, so that the VSIMM filter can be rapidly converged when the obstacle target maneuvers, the tracking precision of the obstacle target is improved, the performance of the intelligent driving sensing system is further improved, and the filtering device has better practical value.
In one possible embodiment, the determining module 72 is further configured to: judging whether each target motion model in the target motion model set is completely the same as the motion model corresponding to each filter in the VSIMM filter at the previous moment; if so, judging that the target motion model set is consistent with a model set at the last moment of the VSIMM filter; if any difference exists, the target motion model set is judged to be inconsistent with the model set at the last moment of the VSIMM filter.
In another possible embodiment, the second determining module 73 is further configured to: obtaining a model set of the VSIMM filter at the current moment according to a union set of the target motion model set and a model set of the VSIMM filter at the last moment; acquiring a first model probability of a target motion model set and a second model probability of the model set at the last moment of a VSIMM filter; and determining a target model set of the VSIMM filter at the current moment according to the ratio of the first model probability and the second model probability.
In another possible embodiment, the second determining module 73 is further configured to: judging whether the ratio is larger than a first threshold value or not, and if so, determining that a target model set of the VSIMM filter at the current moment is a target motion model set; if not, judging whether the ratio is smaller than a second threshold value, if so, determining that the target model set of the VSIMM filter at the current moment is the model set of the VSIMM filter at the previous moment; wherein the second threshold is less than the first threshold.
In another possible embodiment, the filter estimation module 74 is further configured to: determining a corresponding target filter in the VSIMM filter according to each target model in the target model set; carrying out filtering state estimation through each target filter according to the measurement information and the corresponding target model to obtain total estimation output information of the VSIMM filter at the current moment; the total estimation output information comprises a total state estimation value and a total error covariance, the total state estimation value comprises a state estimation value of each target filter at the current moment, and the total error covariance comprises an error covariance of each target filter at the current moment.
In another possible embodiment, each target model is further configured with a model probability, and the apparatus is further configured to: update the model probability of each target model based on the likelihood function of each target model, wherein the likelihood function is calculated by:

Λ_j(k) = |2π S_j(k)|^{−1/2} exp{ −(1/2) v_j(k)^T S_j(k)^{−1} v_j(k) }

wherein Λ_j(k) represents the likelihood function of the j-th target model at time k, S_j(k) represents the measurement covariance of the j-th target model at time k, and v_j(k) represents the Kalman filter residual of the j-th target model at time k.
In another possible embodiment, the first determining module 71 is further configured to: acquire collected information of the obstacle target, the collected information comprising type information, position information and parameter information; determine a first set of motion models of the obstacle target based on the type information; determine a second set of motion models of the obstacle target based on the position information; determine a third set of motion models of the obstacle target based on the parameter information, wherein the parameter information includes at least one of: acquisition device information, obstacle target contour information and meteorological information; and determine a target motion model set of the obstacle target according to the first motion model set, the second motion model set and the third motion model set.
The filtering device of the intelligent driving perception system provided by the embodiment of the invention has the same technical characteristics as the filtering method of the intelligent driving perception system provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment of the invention also provides electronic equipment which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the filtering method of the intelligent driving perception system.
Referring to fig. 8, the electronic device includes a processor 80 and a memory 81, the memory 81 stores machine executable instructions capable of being executed by the processor 80, and the processor 80 executes the machine executable instructions to implement the filtering method of the intelligent driving perception system.
Further, the electronic device shown in fig. 8 further includes a bus 82 and a communication interface 83, and the processor 80, the communication interface 83, and the memory 81 are connected through the bus 82.
The Memory 81 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 83 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, etc. may be used. The bus 82 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Enhanced Industry Standard Architecture) bus, or the like. The above-mentioned bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one double-headed arrow is shown in FIG. 8, but that does not indicate only one bus or one type of bus.
The processor 80 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 80. The Processor 80 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory 81, and the processor 80 reads information in the memory 81 and performs the steps of the method of the previous embodiment in combination with hardware thereof.
The present embodiments also provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the filtering method of the smart driving perception system described above.
The computer program product of the filtering method and apparatus for the intelligent driving perception system and of the electronic device provided by the embodiments of the present invention includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the methods described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be understood broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. For those skilled in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A filtering method of an intelligent driving perception system is applied to a server of the intelligent driving perception system, wherein the server is provided with a priori knowledge base and a variable structure interactive multi-model VSIMM filter, and the method comprises the following steps:
determining a target motion model set of an obstacle target based on motion models in the a priori knowledge base, wherein the motion models include at least one of: a constant velocity (CV) model, a constant acceleration (CA) model, a constant turn rate (CT) model, a current statistical (Current) model, and a curvilinear motion (CM) model;
judging whether the target motion model set is consistent with a model set at the last moment of the VSIMM filter or not; the model set of the VSIMM filter at the last moment comprises a motion model corresponding to each filter in the VSIMM filter at the last moment;
if not, determining a target model set of the VSIMM filter at the current moment based on the target motion model set and the model set of the VSIMM filter at the last moment;
acquiring measurement information of the obstacle target at the current moment, so that the VSIMM filter carries out filtering state estimation according to the measurement information and the target model set to obtain total estimation output information of the VSIMM filter at the current moment;
tracking the obstacle target according to the total estimated output information.
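Outside the patent text, the decision flow of claim 1 can be sketched in minimal Python. The helper `select_current_set` is a hypothetical stand-in for the union-and-probability-ratio rule described later in claims 3 and 4:

```python
def adapt_model_set(target_set, prev_set, select_current_set):
    # Claim 1 decision flow: if the target motion model set matches the
    # model set the VSIMM filter used at the previous time step, keep it;
    # otherwise derive a new active model set from both sets.
    if set(target_set) == set(prev_set):
        return sorted(prev_set)
    return select_current_set(target_set, prev_set)

# Example: the sets agree, so the previous structure is kept.
active = adapt_model_set({"CV", "CA"}, {"CA", "CV"},
                         lambda t, p: sorted(set(t) | set(p)))
```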
2. The filtering method of the smart driving perception system according to claim 1, wherein the step of determining whether the set of target motion models is consistent with the set of models at a previous time on the VSIMM filter comprises:
judging whether each target motion model in the target motion model set is identical to a motion model corresponding to each filter in the VSIMM filter at the previous moment or not;
if so, judging that the target motion model set is consistent with a model set at the last moment of the VSIMM filter;
and if any difference exists, judging that the target motion model set is inconsistent with a model set at the last moment of the VSIMM filter.
3. The filtering method of the smart driving perception system according to claim 2, wherein the step of determining the target model set of the VSIMM filter at the current time based on the target motion model set and the model set of the VSIMM filter at the previous time comprises:
obtaining a model set of the VSIMM filter at the current moment according to the union set of the target motion model set and the model set of the VSIMM filter at the last moment;
acquiring a first model probability of the target motion model set and a second model probability of the model set at the last moment of the VSIMM filter;
and determining a target model set of the VSIMM filter at the current moment according to the ratio of the first model probability and the second model probability.
4. The filtering method of the smart driving perception system according to claim 3, wherein the step of determining the target model set of the VSIMM filter at the current moment according to the ratio of the first model probability and the second model probability includes:
judging whether the ratio is larger than a first threshold value or not, and if so, determining that a target model set of the VSIMM filter at the current moment is the target motion model set;
if not, judging whether the ratio is smaller than a second threshold value, if so, determining that the target model set of the VSIMM filter at the current moment is the model set of the VSIMM filter at the last moment; wherein the second threshold is less than the first threshold.
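A minimal sketch of the model-set selection rule of claims 3 and 4, not part of the patent text: the union is formed as the candidate set, then the ratio of the first model probability to the second decides which set becomes active. The threshold values and the in-between fallback to the union are illustrative assumptions, since the claims do not fix them:

```python
def select_current_set(target_set, prev_set, p_target, p_prev,
                       upper=2.0, lower=0.5):
    # Claim 3: candidate set is the union of the two model sets.
    union = sorted(set(target_set) | set(prev_set))
    ratio = p_target / p_prev          # ratio of first to second model probability
    if ratio > upper:                  # claim 4: the target motion model set wins
        return sorted(target_set)
    if ratio < lower:                  # the previous model set wins
        return sorted(prev_set)
    return union                       # in-between case: keep the union (assumption)
```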
5. The filtering method of the intelligent driving perception system according to claim 4, wherein the step of performing, by the VSIMM filter, filtering state estimation according to the measurement information and the target model set comprises:
determining a corresponding target filter in the VSIMM filter according to each target model in the target model set;
carrying out filtering state estimation through each target filter according to the measurement information and the corresponding target model to obtain total estimation output information of the VSIMM filter at the current moment; wherein the total estimated output information includes a total state estimate including a state estimate for each of the target filters at the current time and a total error covariance including an error covariance for each of the target filters at the current time.
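The total estimation output of claim 5 matches the standard IMM output combination, sketched here in Python as an illustration rather than as the patent's implementation: the total state is the model-probability-weighted sum of the per-filter states, and the total covariance adds a spread-of-means term:

```python
import numpy as np

def combine_estimates(states, covariances, model_probs):
    # Total state estimate: probability-weighted sum of per-filter states.
    x_total = sum(p * x for p, x in zip(model_probs, states))
    # Total error covariance: weighted per-filter covariances plus the
    # spread of the per-filter means about the total state.
    P_total = np.zeros_like(covariances[0])
    for p, x, P in zip(model_probs, states, covariances):
        d = (x - x_total).reshape(-1, 1)
        P_total += p * (P + d @ d.T)
    return x_total, P_total
```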
6. The filtering method of the intelligent driving perception system according to claim 5, wherein each of the target models is further configured with a model probability, and the method further comprises:
updating the model probability of each target model based on the likelihood function of each target model; wherein the likelihood function is calculated by:
$$\Lambda_j(k) = \frac{1}{\sqrt{\left|2\pi S_j(k)\right|}} \exp\!\left(-\frac{1}{2} v_j^T(k)\, S_j^{-1}(k)\, v_j(k)\right)$$
wherein Λj(k) represents the likelihood function of the j-th target model at time k, Sj(k) represents the measurement covariance of the j-th target model at time k, and vj(k) represents the Kalman filter residual of the j-th target model at time k.
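The likelihood of claim 6 is the zero-mean Gaussian density of the Kalman residual under the measurement covariance; a self-contained sketch (not the patent's code) of the likelihood and the usual normalized model-probability update is:

```python
import numpy as np

def model_likelihood(v, S):
    # Lambda_j(k) = exp(-0.5 * v' S^-1 v) / sqrt(|2*pi*S|), with v the
    # Kalman residual v_j(k) and S the measurement covariance S_j(k).
    v = np.atleast_1d(np.asarray(v, dtype=float))
    S = np.atleast_2d(np.asarray(S, dtype=float))
    n = v.size
    norm = np.sqrt(((2.0 * np.pi) ** n) * np.linalg.det(S))
    return float(np.exp(-0.5 * (v @ np.linalg.solve(S, v))) / norm)

def update_model_probabilities(priors, likelihoods):
    # Normalized product of each model's prior probability and likelihood.
    weighted = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(weighted)
    return [w / total for w in weighted]
```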
7. The filtering method of the intelligent driving perception system according to claim 1, wherein the step of determining a set of target motion models of the obstacle target based on the motion models in the prior knowledge base includes:
acquiring acquisition information of the obstacle target; the acquisition information comprises type information, position information and parameter information;
determining a first set of motion models for the obstacle objective based on the type information;
determining a second set of motion models for the obstacle target based on the location information;
determining a third set of motion models for the obstacle objective based on the parameter information; wherein the parameter information comprises at least one of: acquiring device information, obstacle target contour information and meteorological information;
and determining a target motion model set of the obstacle target according to the first motion model set, the second motion model set and the third motion model set.
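Claim 7 leaves open how the three candidate sets are combined into the target motion model set. One plausible fusion rule, shown here purely as an illustrative assumption, is to take the models consistent with all three information sources and fall back to the union when no model satisfies all of them:

```python
def fuse_model_sets(set_by_type, set_by_position, set_by_params):
    # Intersection keeps only models supported by the type, position,
    # and parameter information simultaneously (an assumed rule).
    common = set(set_by_type) & set(set_by_position) & set(set_by_params)
    if common:
        return sorted(common)
    # Fallback when the three sources disagree completely: keep them all.
    return sorted(set(set_by_type) | set(set_by_position) | set(set_by_params))
```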
8. A filtering apparatus for a smart driving perception system, the apparatus being applied to a server of the smart driving perception system, wherein the server is provided with a priori knowledge base and a variable structure interactive multi-model VSIMM filter, the apparatus comprising:
a first determining module, configured to determine a target motion model set of an obstacle target based on motion models in the a priori knowledge base, wherein the motion models include at least one of: a constant velocity (CV) model, a constant acceleration (CA) model, a constant turn rate (CT) model, a current statistical (Current) model, and a curvilinear motion (CM) model;
the judging module is used for judging whether the target motion model set is consistent with a model set at the last moment of the VSIMM filter or not; the model set of the VSIMM filter at the last moment comprises a motion model corresponding to each filter in the VSIMM filter at the last moment;
a second determination module, configured to determine, if not, a target model set of the VSIMM filter at a current time based on the target motion model set and a model set of the VSIMM filter at a previous time;
the filtering estimation module is used for acquiring the measurement information of the obstacle target at the current moment, so that the VSIMM filter carries out filtering state estimation according to the measurement information and the target model set to obtain total estimation output information of the VSIMM filter at the current moment;
and the tracking module is used for tracking the obstacle target according to the total estimation output information.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the filtering method of the intelligent driving perception system according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the filtering method of the intelligent driving perception system according to any one of the claims 1-7.
CN202011461427.3A 2020-12-08 2020-12-08 Filtering method and device of intelligent driving perception system and electronic equipment Pending CN112615604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011461427.3A CN112615604A (en) 2020-12-08 2020-12-08 Filtering method and device of intelligent driving perception system and electronic equipment

Publications (1)

Publication Number Publication Date
CN112615604A true CN112615604A (en) 2021-04-06

Family

ID=75234475



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5325098A (en) * 1993-06-01 1994-06-28 The United States Of America As Represented By The Secretary Of The Navy Interacting multiple bias model filter system for tracking maneuvering targets
CN104020466A (en) * 2014-06-17 2014-09-03 西安电子科技大学 Maneuvering target tracking method based on variable structure multiple models
CN107704432A (en) * 2017-07-28 2018-02-16 西安理工大学 A kind of adaptive Interactive Multiple-Model method for tracking target of transition probability
CN109687844A (en) * 2018-08-17 2019-04-26 西安理工大学 A kind of intelligent maneuver method for tracking target
US20190195631A1 (en) * 2017-12-22 2019-06-27 Ubtech Robotics Corp Positioning method, positioning device, and robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王占磊; 张建业; 张鹏; 程洪炳: "An improved variable-structure interacting multiple model passive tracking algorithm", Journal of Air Force Engineering University (Natural Science Edition), no. 04, 25 August 2011 (2011-08-25) *
雷世文; 吴慈伶; 孙伟: "An adaptive highly maneuverable target tracking method based on VSMM", Modern Radar, no. 06, 15 June 2010 (2010-06-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920166A (en) * 2021-10-29 2022-01-11 广州文远知行科技有限公司 Method and device for selecting object motion model, vehicle and storage medium
CN113920166B (en) * 2021-10-29 2024-05-28 广州文远知行科技有限公司 Method, device, vehicle and storage medium for selecting object motion model
CN114572233A (en) * 2022-03-25 2022-06-03 阿波罗智能技术(北京)有限公司 Model set-based prediction method, electronic equipment and automatic driving vehicle
CN114572233B (en) * 2022-03-25 2022-11-29 阿波罗智能技术(北京)有限公司 Model set-based prediction method, electronic equipment and automatic driving vehicle

Similar Documents

Publication Publication Date Title
CN111489585A (en) Vehicle and pedestrian collision avoidance method based on edge calculation
CN112198503A (en) Target track prediction optimization method and device and radar system
US11731649B2 (en) High precision position estimation method through road shape classification-based map matching and autonomous vehicle thereof
US20230228593A1 (en) Method, device and system for perceiving multi-site roadbed network and terminal
CN112615604A (en) Filtering method and device of intelligent driving perception system and electronic equipment
CN112465868A (en) Target detection tracking method and device, storage medium and electronic device
CN113895460A (en) Pedestrian trajectory prediction method, device and storage medium
CN116022163A (en) Automatic driving vehicle scanning matching and radar attitude estimator based on super local subgraph
CN112465193B (en) Parameter optimization method and device for multi-sensor data fusion
CN114973195A (en) Vehicle tracking method, device and system based on multi-information fusion
CN112428991B (en) Vehicle control method, device, medium, equipment and vehicle
CN114419573B (en) Dynamic occupancy grid estimation method and device
CN110632916A (en) Behavior prediction device and automatic driving device
CN108871365A (en) Method for estimating state and system under a kind of constraint of course
IL292906A (en) Method for estimating coverage of the area of traffic scenarios
US20210366274A1 (en) Method and device for predicting the trajectory of a traffic participant, and sensor system
WO2022062019A1 (en) Map matching method and apparatus, and electronic device and storage medium
CN113511194A (en) Longitudinal collision avoidance early warning method and related device
CN113421298A (en) Vehicle distance measuring method, vehicle control device, vehicle and readable storage medium
CN113176562A (en) Multi-target tracking method and device, electronic equipment and readable storage medium
CN115979288A (en) Course angle determining method, electronic equipment and storage medium
CN113470070B (en) Driving scene target tracking method, device, equipment and storage medium
CN112163521A (en) Vehicle driving behavior identification method, device and equipment
CN112477868B (en) Collision time calculation method and device, readable storage medium and computer equipment
Hakobyan et al. Distributionally robust optimization with unscented transform for learning-based motion control in dynamic environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination