CN112477868B - Collision time calculation method and device, readable storage medium and computer equipment - Google Patents
- Publication number
- CN112477868B (application CN202011428028.7A)
- Authority
- CN
- China
- Prior art keywords
- collision time
- vehicle
- target
- safety line
- algorithm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/04—Traffic conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Mathematical Physics (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides a collision time calculation method, a collision time calculation device, a readable storage medium and computer equipment. According to the technical scheme provided by the embodiment of the invention, a target image is processed by a straight line detection algorithm and a perspective transformation algorithm to generate a bird's-eye view; the bird's-eye view is processed by a horizontal integration algorithm to obtain a safety line of the vehicle; target objects around the vehicle are identified by a target detection algorithm; and the ordinate pixel distance between the safety line and a target object is fed into a collision time model to obtain the collision time between the target object and the vehicle. The method requires neither GPS (Global Positioning System) assistance nor manual calibration, supports online calibration when the camera pose changes greatly, saves time and labor, is simple to operate, and is low in cost.
Description
[ Technical Field ]
The present invention relates to the field of data processing, and in particular, to a collision time calculation method and device, a readable storage medium, and a computer device.
[ Background of the Invention ]
Research has found that driver inattention, fatigue driving, and immature driving behavior are the main factors causing traffic accidents; against this background, collision early-warning technology has developed rapidly.
In the related art, there are two main methods for calculating collision time: millimeter-wave-radar-based methods and vision-based methods. The millimeter-wave-radar-based method is highly accurate but expensive. The vision-based method is relatively cheap and generally combines vehicle-distance detection with speed detection to calculate the collision time, the speed usually being provided by a Global Positioning System (GPS). Vehicle-distance detection itself takes one of two forms: the first calculates the distance from the horizontal or vertical pixel ratio of the target object and requires a mapping table between the pixel ratio and the actual distance; the second calculates the distance from a calibrated camera pose and requires dedicated calibration to establish the camera's actual pose before the actual distance between the target object and the vehicle can be computed.
However, both vehicle-distance detection methods require GPS assistance and manual calibration, which is time-consuming and labor-intensive; moreover, once the camera pose changes, the camera pose or the mapping table must be manually recalibrated, making calibration costly. The collision time calculation methods in the related art therefore need GPS assistance and manual calibration, and are time-consuming, labor-intensive, complex to operate, and expensive.
[ Summary of the Invention ]
In view of this, embodiments of the present invention provide a collision time calculation method and apparatus, a readable storage medium, and a computer device that require neither GPS assistance nor manual calibration and that support online calibration when the camera pose changes greatly, thereby saving time and labor while remaining simple to operate and low in cost.
In a first aspect, an embodiment of the present invention provides a collision time calculation method, where the method includes:
calculating the target image through a linear detection algorithm and a perspective transformation algorithm to generate a bird's-eye view;
calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle;
identifying target objects around the vehicle by a target detection algorithm;
and calculating the vertical coordinate pixel distance between the safety line and the target object through a collision time model to obtain the collision time between the target object and the vehicle.
Optionally, before the calculating the target image by the straight line detection algorithm and the perspective transformation algorithm to generate the bird's-eye view, the method further includes:
and acquiring the target image through a vehicle-mounted camera.
Optionally, the target detection algorithm comprises an artificial intelligence algorithm or an image processing algorithm.
Optionally, after the calculating, by the collision time model, the ordinate pixel distance between the safety line and the target object to obtain the collision time between the target object and the vehicle, the method further includes:
judging whether the collision time is greater than a time threshold value;
if the collision time is judged to be larger than the time threshold, continuing to execute the step of acquiring the target image through the vehicle-mounted camera;
and if the collision time is judged to be less than or equal to the time threshold, generating alarm information.
Optionally, the target image comprises at least a first frame image and a second frame image;
the acquiring the target image through the vehicle-mounted camera specifically includes:
and acquiring the first frame image through the vehicle-mounted camera, and acquiring the second frame image after preset time.
Optionally, the time-of-collision model comprises:
Y0 + v0·t = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed.
Optionally, the time-of-collision model comprises:
Y0 + v0·t + (a0·t²)/2 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; and a0 is the initial acceleration of the ordinate pixel movement.
Optionally, the time-of-collision model comprises:
Y0 + v0·t + (a0·t²)/2 + (j0·t³)/6 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; a0 is the initial acceleration of the ordinate pixel movement; and j0 is the jerk of the ordinate pixel movement.
In another aspect, an embodiment of the present invention provides a collision time calculation apparatus, where the apparatus includes:
the aerial view generating module is used for calculating the target image through a linear detection algorithm and a perspective transformation algorithm to generate an aerial view;
the safety line determining module is used for calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle;
a target object identification module for identifying target objects around the vehicle by a target detection algorithm;
and the collision time calculation module is used for calculating the vertical coordinate pixel distance between the safety line and the target object through a collision time model to obtain the collision time between the target object and the vehicle.
Optionally, the acquiring module is configured to acquire the target image through a vehicle-mounted camera.
Optionally, the determining module is configured to determine whether the collision time is greater than a time threshold;
the judging module is further used for continuing to execute the step of acquiring the target image through the vehicle-mounted camera if the collision time is judged to be larger than the time threshold;
and the alarm module is used for generating alarm information if the collision time is judged to be less than or equal to the time threshold.
Optionally, the target image comprises at least a first frame image and a second frame image;
the acquisition module is specifically configured to acquire the first frame image through the vehicle-mounted camera, and acquire the second frame image after a preset time.
Optionally, the time-of-collision model comprises:
Y0 + v0·t = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed.
Optionally, the time-of-collision model comprises:
Y0 + v0·t + (a0·t²)/2 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; and a0 is the initial acceleration of the ordinate pixel movement.
Optionally, the time-of-collision model comprises:
Y0 + v0·t + (a0·t²)/2 + (j0·t³)/6 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; a0 is the initial acceleration of the ordinate pixel movement; and j0 is the jerk of the ordinate pixel movement.
In another aspect, an embodiment of the present invention provides a readable storage medium including a stored program, wherein, when the program runs, a device where the readable storage medium is located is controlled to execute the above collision time calculation method.
In another aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory is used to store information including program instructions, and the processor is used to control execution of the program instructions, where the program instructions are loaded by the processor and executed to implement the steps of the above-mentioned collision time calculation method.
According to the collision time calculation method and device, the readable storage medium and the computer equipment provided by the embodiments of the invention, the target image is processed by a straight line detection algorithm and a perspective transformation algorithm to generate the bird's-eye view; the bird's-eye view is processed by a horizontal integration algorithm to obtain the safety line of the vehicle; target objects around the vehicle are identified by a target detection algorithm; and the ordinate pixel distance between the safety line and a target object is fed into a collision time model to obtain the collision time between the target object and the vehicle. No GPS assistance or manual calibration is needed, online calibration is possible when the camera pose changes greatly, and the scheme saves time and labor, is simple to operate, and is low in cost.
[ Description of the Drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a collision time calculation method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a collision time calculation method according to another embodiment of the present invention;
FIG. 3 is a bird's eye view provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a collision time calculation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a computer device according to an embodiment of the present invention.
[ Detailed Description of the Embodiments ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between related objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1 is a flowchart of a collision time calculation method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
Step 101, calculating the target image through a straight line detection algorithm and a perspective transformation algorithm to generate a bird's-eye view.
Step 102, calculating the bird's-eye view through a horizontal integration algorithm to obtain a safety line of the vehicle.
Step 103, identifying target objects around the vehicle through a target detection algorithm.
Step 104, calculating the ordinate pixel distance between the safety line and the target object through the collision time model to obtain the collision time between the target object and the vehicle.
According to the technical scheme of the collision time calculation method provided by the embodiment of the invention, the target image is processed by a straight line detection algorithm and a perspective transformation algorithm to generate the bird's-eye view; the bird's-eye view is processed by a horizontal integration algorithm to obtain the safety line of the vehicle; target objects around the vehicle are identified by a target detection algorithm; and the ordinate pixel distance between the safety line and a target object is fed into a collision time model to obtain the collision time between the target object and the vehicle. No GPS assistance or manual calibration is needed, online calibration is possible when the camera pose changes greatly, and the scheme saves time and labor, is simple to operate, and is low in cost.
Fig. 2 is a flowchart of a collision time calculation method according to another embodiment of the present invention, which is applied to a vehicle-mounted scene. As shown in fig. 2, the method includes:
Step 201, acquiring the target image through a vehicle-mounted camera.
In the embodiment of the invention, the target image comprises at least a first frame image and a second frame image, so step 201 specifically includes: acquiring the first frame image through the vehicle-mounted camera, and acquiring the second frame image after a preset time.
And step 202, calculating the target image through a straight line detection algorithm and a perspective transformation algorithm to generate the aerial view.
In the embodiment of the invention, when the lane lines are clear, the lane lines are parallel on the real road. The lane lines are detected by a straight line detection algorithm to obtain a perspective transformation matrix, the matrix is stored, and the target image is then perspective-transformed with the stored matrix to convert it into the bird's-eye view.
Specifically, after the lane lines are detected, two points are selected on each of two lane lines, giving four points in total; connecting them forms a trapezoid, and the four trapezoid points are mapped to the four points of a rectangle. The mapping between the four trapezoid points and the four rectangle points is the perspective transformation matrix.
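As an illustrative sketch of the step above (not the patent's implementation): the perspective transformation matrix is the homography that maps the four trapezoid points to the four rectangle points, and it can be recovered by solving the standard eight-equation linear system. The coordinates below are invented for illustration; in practice a library routine such as OpenCV's `cv2.getPerspectiveTransform` would normally be used, and the pure-Python solver here only stands in for it.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_matrix(src, dst):
    # Four correspondences (x, y) -> (u, v) give eight equations in the
    # homography entries h00..h21, with h22 fixed to 1.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    # Apply the homography to one pixel (projective division by w).
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical picks: two points on each lane line form a trapezoid in the
# camera image; in the bird's-eye view they should become a rectangle.
trapezoid = [(200.0, 300.0), (440.0, 300.0), (100.0, 480.0), (540.0, 480.0)]
rectangle = [(100.0, 0.0), (540.0, 0.0), (100.0, 480.0), (540.0, 480.0)]
H = perspective_matrix(trapezoid, rectangle)
```

Warping every pixel of the image with `warp_point` (or a routine such as `cv2.warpPerspective`) then yields the bird's-eye view.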
According to the embodiment of the invention, the perspective transformation matrix is obtained in a lane line detection mode, so that the cost of manually calibrating the camera is saved.
And step 203, calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle.
Fig. 3 is a bird's-eye view provided by an embodiment of the present invention. As shown in fig. 3, the rear of the vehicle is taken as the positive direction, and the horizontal integral projection of the grayscale of the bird's-eye view is computed by the horizontal integration algorithm. The position of the abrupt change closest to the bottom of the projection curve is the vehicle-head position, and the ordinate Ycar of the vehicle-head position is determined as the safety line of the vehicle.
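A minimal sketch of this horizontal-integral step, on a made-up grayscale grid; the jump threshold is an assumed tuning parameter, not something the patent specifies.

```python
def safety_line_row(gray, jump_thresh):
    """Return the row index of the abrupt change in the horizontal integral
    projection that is closest to the bottom of the bird's-eye view.

    gray: 2D list of grayscale values, row 0 at the top.
    """
    proj = [sum(row) for row in gray]          # horizontal integral per row
    for r in range(len(proj) - 1, 0, -1):      # scan upward from the bottom
        if abs(proj[r] - proj[r - 1]) >= jump_thresh:
            return r                           # first big jump = bonnet edge
    return None

# Toy bird's-eye view: dark road (50) above the bright vehicle bonnet (200).
bird_view = [[50] * 4] * 4 + [[200] * 4] * 2
Y_car = safety_line_row(bird_view, jump_thresh=300)   # row 4 = safety line
```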
The safety line in the embodiment of the invention can be set by the user or determined by a suitable algorithm; a user-defined safety line can be adjusted flexibly according to risk preference.
In the embodiment of the invention, because the pose of the vehicle-mounted camera fixed on the vehicle rarely changes, the perspective transformation matrix and the safety line do not need to be computed and updated in real time: the user can set the update frequency, or trigger a manual update when lane lines and a similar scene are available. Updating the perspective transformation matrix and the safety line guards against inaccurate measurement caused by changes in the camera pose, while limiting the update frequency reduces memory usage and speeds up calculation.
And step 204, identifying target objects around the vehicle through a target detection algorithm.
In the embodiment of the invention, the target detection algorithm comprises an artificial intelligence algorithm or an image processing algorithm.
It should be noted that, after the target object is recognized, the ordinate pixel Yob of the target object needs to be determined, and then the ordinate pixel distance Y between the target object and the safety line is calculated according to the ordinate pixel distance formula.
Wherein the ordinate pixel distance formula is:
Ycar - Yob = Y
In the embodiment of the invention, when Ycar = Yob, i.e. Y = Ycar - Yob = 0, the target object collides with the vehicle.
In addition, since the pixel distance on the ordinate and the actual distance are in a proportional relationship after the target image is converted into the bird's eye view, the collision time can be calculated using the pixel on the ordinate instead of the actual distance.
Assume the ordinate Ycar of the safety line remains unchanged and image frames are extracted at equal time intervals for calculation. Let the ordinate of the target object in the first frame image be Yob0; the ordinate pixel distance is then
Y0 = Ycar - Yob0
Similarly, the ordinate pixel distance in the second frame image is
Y1 = Ycar - Yob1
In the embodiment of the present invention, the collision time model includes:
Y0 + v0·t = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed.
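One plausible reading of this model (an assumption on our part, consistent with the collision condition Y = 0 stated earlier): v0 is estimated from the two frames, which are a known interval apart, and the collision time is the remaining time until the ordinate pixel distance shrinks to zero.

```python
def ttc_constant_velocity(y0, y1, dt):
    """Time to collision under the constant-pixel-speed model.

    y0, y1: ordinate pixel distances to the safety line in two frames
    captured dt seconds apart; collision is when the distance reaches 0.
    """
    v = (y1 - y0) / dt       # pixel movement speed (negative while closing)
    if v >= 0:
        return None          # target holding distance or receding
    return -y1 / v           # time from the second frame until Y = 0
```

For example, distances of 100 px and then 80 px in frames 0.1 s apart give a pixel speed of -200 px/s and a collision time of 0.4 s.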
Further, in order to improve the accuracy of the collision time model, the collision time model may further include:
Y0 + v0·t + (a0·t²)/2 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; and a0 is the initial acceleration of the ordinate pixel movement.
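Under the same reading, the constant-acceleration model can be fitted from three equally spaced frames with finite differences and then solved with the quadratic formula. This is a sketch under those assumptions, not the patent's implementation, and the frame values used below are synthetic.

```python
import math

def ttc_constant_acceleration(y0, y1, y2, dt):
    """Time to collision from three ordinate pixel distances y0, y1, y2
    sampled dt seconds apart, assuming constant pixel acceleration."""
    a = (y0 - 2 * y1 + y2) / dt ** 2       # second finite difference
    v = (y2 - y0) / (2 * dt) + a * dt      # pixel speed at the last frame
    if abs(a) < 1e-12:
        return -y2 / v if v < 0 else None  # falls back to constant speed
    # Solve (a/2) t^2 + v t + y2 = 0 for the smallest positive root.
    disc = v * v - 2 * a * y2
    if disc < 0:
        return None                        # never reaches the safety line
    roots = [(-v - math.sqrt(disc)) / a, (-v + math.sqrt(disc)) / a]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None
```

For a synthetic trajectory y(t) = 100 - 50·t - 10·t², frames at t = 0, 0.1, 0.2 give distances 100, 94.9, 89.6, and the estimated collision time measured from the last frame is about 1.331 s.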
Generally, this collision time model can already estimate the collision time accurately.
Further, in order to improve the accuracy of the collision time model, the collision time model may further include:
Y0 + v0·t + (a0·t²)/2 + (j0·t³)/6 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; a0 is the initial acceleration of the ordinate pixel movement; and j0 is the jerk of the ordinate pixel movement.
The embodiment of the invention provides the above three collision time models; in practical application, a suitable collision time model can be selected according to actual requirements to obtain the corresponding precision.
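For the third-order (jerk) model a closed-form root is awkward, so one workable sketch scans for the first sign change of the remaining distance and refines it by bisection. Here v0, a0 and j0 are taken as given rather than estimated from frames, which is an assumption of this illustration.

```python
def ttc_jerk_model(y, v, a, j, t_max=10.0, step=1e-3):
    """First t in (0, t_max] where y + v t + a t^2/2 + j t^3/6 reaches 0.

    Assumes the initial distance y is positive; returns None if the
    distance never reaches zero within t_max.
    """
    f = lambda t: y + v * t + a * t * t / 2 + j * t ** 3 / 6
    t_prev, f_prev = 0.0, f(0.0)
    t = step
    while t <= t_max:
        f_now = f(t)
        if f_prev > 0 >= f_now:        # sign change: root bracketed
            lo, hi = t_prev, t
            for _ in range(60):        # bisection refinement
                mid = (lo + hi) / 2
                if f(mid) > 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2
        t_prev, f_prev = t, f_now
        t += step
    return None
```

With a0 = j0 = 0 this reduces to the constant-speed case: for example, y = 100 px and v = -50 px/s give a collision time of 2.0 s.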
Step 205, judging whether the collision time is greater than a time threshold.
Step 206, if the collision time is greater than the time threshold, returning to step 201 to continue acquiring the target image.
Step 207, generating alarm information.
In practical application, a grading threshold value can be set according to the degree of urgency to realize collision time early warning of different degrees.
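The graded thresholds mentioned here might look like the following sketch; the tier boundaries are invented for illustration and would be tuned in practice.

```python
def warning_level(ttc, thresholds=(1.0, 2.0, 3.0)):
    """Map a collision time (seconds) to a warning tier, 0 being the most
    urgent; returns None when ttc exceeds every threshold (no warning).

    The default threshold values are hypothetical, not from the patent.
    """
    for level, limit in enumerate(thresholds):
        if ttc <= limit:
            return level
    return None
```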
According to the embodiment of the invention, the collision time between the target object and the vehicle can be calculated with only the vehicle-mounted camera and the associated algorithms, without other auxiliary equipment, which is simple and convenient to implement and low in cost. The scheme does not depend on the pose of the vehicle-mounted camera and can detect accurately even when the camera pose changes greatly.
According to the technical scheme of the collision time calculation method provided by the embodiment of the invention, the target image is processed by a straight line detection algorithm and a perspective transformation algorithm to generate the bird's-eye view; the bird's-eye view is processed by a horizontal integration algorithm to obtain the safety line of the vehicle; target objects around the vehicle are identified by a target detection algorithm; and the ordinate pixel distance between the safety line and a target object is fed into a collision time model to obtain the collision time between the target object and the vehicle. No GPS assistance or manual calibration is needed, online calibration is possible when the camera pose changes greatly, and the scheme saves time and labor, is simple to operate, and is low in cost.
Fig. 4 is a schematic structural diagram of an apparatus for calculating a collision time according to an embodiment of the present invention, as shown in fig. 4, the apparatus includes: the bird's eye view generation module 31, the safety line determination module 32, the target object recognition module 33, and the collision time calculation module 34.
And the bird's-eye view generating module 31 is used for calculating the target image through a straight line detection algorithm and a perspective transformation algorithm to generate the bird's-eye view.
In the embodiment of the invention, the target image at least comprises the first frame image and the second frame image.
And the safety line determining module 32 is used for calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle.
And a target object recognition module 33 for recognizing target objects around the vehicle through a target detection algorithm.
And the collision time calculation module 34 is used for calculating the vertical coordinate pixel distance between the safety line and the target object through a collision time model to obtain the collision time between the target object and the vehicle.
In the embodiment of the present invention, the collision time model includes:
Y0 + v0·t = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed.
Further, in order to improve the accuracy of the collision time model, the collision time model may further include:
Y0 + v0·t + (a0·t²)/2 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; and a0 is the initial acceleration of the ordinate pixel movement.
Further, in order to improve the accuracy of the collision time model, the collision time model may further include:
Y0 + v0·t + (a0·t²)/2 + (j0·t³)/6 = Y1
where Y0 is the ordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the ordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the ordinate pixel movement; a0 is the initial acceleration of the ordinate pixel movement; and j0 is the jerk of the ordinate pixel movement.
In the embodiment of the present invention, the apparatus further includes: an acquisition module 30.
And the acquisition module 30 is used for acquiring the target image through the vehicle-mounted camera.
In the embodiment of the present invention, the obtaining module 30 is specifically configured to obtain a first frame image through a vehicle-mounted camera, and obtain a second frame image after a preset time.
In the embodiment of the present invention, the apparatus further includes: a judgment module 35 and an alarm module 36.
A judging module 35, configured to judge whether the collision time is greater than a time threshold;
the judging module 35 is further configured to continue to perform the operation of obtaining the target image through the vehicle-mounted camera if the judging module 35 judges that the collision time is greater than the time threshold;
and the alarm module 36 is configured to generate alarm information if the judgment module 35 judges that the collision time is less than or equal to the time threshold.
The collision time calculation device provided by the embodiment of the present invention can be used to implement the collision time calculation method in fig. 1 to 2, and for specific description, reference may be made to the embodiment of the collision time calculation method described above, and a description thereof is not repeated here.
According to the technical scheme of the collision time calculation device provided by the embodiment of the invention, the target image is processed by a straight line detection algorithm and a perspective transformation algorithm to generate the bird's-eye view; the bird's-eye view is processed by a horizontal integration algorithm to obtain the safety line of the vehicle; target objects around the vehicle are identified by a target detection algorithm; and the ordinate pixel distance between the safety line and a target object is fed into a collision time model to obtain the collision time between the target object and the vehicle. No GPS assistance or manual calibration is needed, online calibration is possible when the camera pose changes greatly, and the scheme saves time and labor, is simple to operate, and is low in cost.
Fig. 5 is a schematic diagram of a computer device according to an embodiment of the present invention. As shown in fig. 5, the computer device 20 of this embodiment includes a processor 21, a memory 22, and a computer program 23 that is stored in the memory 22 and can run on the processor 21. When executed by the processor 21, the computer program 23 implements the collision time calculation method of the embodiments, and likewise implements the functions of the modules/units of the collision time calculation apparatus; the details are not repeated here to avoid repetition.
The computer device 20 includes, but is not limited to, the processor 21 and the memory 22. Those skilled in the art will appreciate that fig. 5 is only an example of the computer device 20 and does not limit it; the device may include more or fewer components than shown, some components may be combined, or different components may be used; for example, the computer device may also include input/output devices, network access devices, buses, and the like.
The processor 21 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 22 may be an internal storage unit of the computer device 20, such as a hard disk or an internal memory of the computer device 20. The memory 22 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 20. Further, the memory 22 may include both an internal storage unit and an external storage device of the computer device 20. The memory 22 is used for storing the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit, when implemented in the form of a software functional unit, may be stored in a computer-readable storage medium. The software functional unit is stored in a readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A collision time calculation method, characterized in that the method comprises:
calculating the target image through a straight line detection algorithm and a perspective transformation algorithm to generate a bird's-eye view;
calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle;
identifying target objects around the vehicle by a target detection algorithm;
calculating the vertical coordinate pixel distance between the safety line and the target object through a collision time model to obtain the collision time between the target object and the vehicle;
the method for calculating the target image through the straight line detection algorithm and the perspective transformation algorithm to generate the aerial view comprises the following steps: performing lane line detection through the straight line detection algorithm to obtain a perspective transformation matrix, and performing perspective transformation on the perspective transformation matrix and the target image by using a perspective transformation algorithm to convert the target image into the aerial view;
the method for calculating the aerial view through the horizontal integration algorithm to obtain the safety line of the vehicle comprises the following steps: projecting the gray level integral of the vehicle on the aerial view by the horizontal integral algorithm to generate a projection curve, and determining the vertical coordinate of the position of the vehicle head in the projection curve as the safety line of the vehicle;
the target detection algorithm comprises an artificial intelligence algorithm or an image processing algorithm;
the target image at least comprises a first frame image and a second frame image;
the collision time model includes:
Y0 + v0·t = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed;
or
The collision time model includes:
Y0 + v0·t + (a0·t^2)/2 = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the vertical coordinate pixel movement; and a0 is the initial acceleration of the vertical coordinate pixel movement;
or
The collision time model includes:
Y0 + v0·t + (a0·t^2)/2 + (j0·t^3)/6 = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the vertical coordinate pixel movement; a0 is the initial acceleration of the vertical coordinate pixel movement; and j0 is the jerk of the vertical coordinate pixel movement.
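The three collision time models in claim 1 are a constant-velocity, a constant-acceleration, and a constant-jerk polynomial in t. A hedged sketch of solving them numerically (the single `ttc` helper covering all three cases, and the choice of the smallest positive real root, are illustrative choices not stated in the patent):

```python
import numpy as np

def ttc(Y0, Y1, v0, a0=0.0, j0=0.0):
    """Solve Y0 + v0*t + a0*t^2/2 + j0*t^3/6 = Y1 for the collision time t.
    Setting a0 and/or j0 to 0 falls back to the simpler claim-1 models.
    Returns the smallest positive real root, or None if no collision is
    predicted (root selection is an assumption of this sketch)."""
    # numpy.roots expects coefficients from highest degree to lowest;
    # it trims leading zeros, so the same call handles all three models.
    coeffs = [j0 / 6.0, a0 / 2.0, v0, Y0 - Y1]
    roots = np.roots(coeffs)
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return min(real) if real else None

print(ttc(Y0=0.0, Y1=50.0, v0=10.0))          # constant speed -> 5.0
print(ttc(Y0=0.0, Y1=60.0, v0=10.0, a0=4.0))  # with acceleration, about 3.52
```

Here Y0, Y1 are pixel distances measured between the safety line and the target object in the two frames, as defined in the claim.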
2. The method of claim 1, wherein before the calculating the target image by the line detection algorithm and the perspective transformation algorithm to generate the bird's eye view, the method further comprises:
and acquiring the target image through a vehicle-mounted camera.
3. The method of claim 2, wherein after calculating the ordinate pixel distance between the safety line and the target object by the collision time model to derive the collision time between the target object and the vehicle, further comprising:
judging whether the collision time is greater than a time threshold value;
if the collision time is judged to be larger than the time threshold, continuing to execute the step of acquiring the target image through the vehicle-mounted camera;
and if the collision time is judged to be less than or equal to the time threshold, generating alarm information.
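The decision in claim 3 is a simple threshold branch; a minimal sketch, assuming a hypothetical 2-second threshold (the claim leaves the actual value open) and treating "no predicted collision" the same as a safe reading:

```python
TIME_THRESHOLD_S = 2.0  # assumed value; the claim only names "a time threshold"

def step(collision_time):
    """One pass of the claim-3 decision: keep acquiring frames from the
    vehicle-mounted camera while the collision time stays above the
    threshold, and generate alarm information otherwise."""
    if collision_time is None or collision_time > TIME_THRESHOLD_S:
        return "acquire_next_frame"
    return "alarm"

print(step(5.0))  # -> acquire_next_frame
print(step(1.2))  # -> alarm
```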
4. The collision time calculation method according to claim 2,
the acquiring the target image through the vehicle-mounted camera specifically includes:
and acquiring the first frame image through the vehicle-mounted camera, and acquiring the second frame image after preset time.
5. A collision time calculation apparatus, characterized in that the apparatus comprises:
the aerial view generating module is used for calculating the target image through a linear detection algorithm and a perspective transformation algorithm to generate an aerial view;
the safety line determining module is used for calculating the aerial view through a horizontal integration algorithm to obtain a safety line of the vehicle;
a target object identification module for identifying target objects around the vehicle by a target detection algorithm;
the collision time calculation module is used for calculating the vertical coordinate pixel distance between the safety line and the target object through a collision time model to obtain the collision time between the target object and the vehicle;
the bird's-eye view generation module is specifically configured to: performing lane line detection through the straight line detection algorithm to obtain a perspective transformation matrix, and performing perspective transformation on the perspective transformation matrix and the target image by using a perspective transformation algorithm to convert the target image into the aerial view;
the safety line determination module is specifically configured to: projecting the gray level integral of the vehicle on the aerial view by the horizontal integral algorithm to generate a projection curve, and determining the vertical coordinate of the position of the vehicle head in the projection curve as the safety line of the vehicle;
the target detection algorithm comprises an artificial intelligence algorithm or an image processing algorithm;
the target image at least comprises a first frame image and a second frame image;
the collision time model includes:
Y0 + v0·t = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; and v0 is the pixel movement speed;
or
The collision time model includes:
Y0 + v0·t + (a0·t^2)/2 = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the vertical coordinate pixel movement; and a0 is the initial acceleration of the vertical coordinate pixel movement;
or
The collision time model includes:
Y0 + v0·t + (a0·t^2)/2 + (j0·t^3)/6 = Y1
wherein Y0 is the vertical coordinate pixel distance between the safety line and the target object in the first frame image; Y1 is the vertical coordinate pixel distance between the safety line and the target object in the second frame image; t is the collision time; v0 is the initial velocity of the vertical coordinate pixel movement; a0 is the initial acceleration of the vertical coordinate pixel movement; and j0 is the jerk of the vertical coordinate pixel movement.
6. The apparatus of claim 5, further comprising:
and the acquisition module is used for acquiring the target image through the vehicle-mounted camera.
7. The apparatus of claim 6, further comprising:
the judging module is used for judging whether the collision time is greater than a time threshold value;
the judging module is further used for continuing to execute the step of acquiring the target image through the vehicle-mounted camera if the collision time is judged to be larger than the time threshold;
and the alarm module is used for generating alarm information if the collision time is judged to be less than or equal to the time threshold.
8. The apparatus of claim 6,
the acquisition module is specifically configured to acquire the first frame image through the vehicle-mounted camera, and acquire the second frame image after a preset time.
9. A readable storage medium, characterized in that the readable storage medium comprises a stored program, wherein when the program runs, the apparatus where the readable storage medium is located is controlled to execute the collision time calculation method according to any one of claims 1 to 4.
10. A computer device comprising a memory for storing information including program instructions and a processor for controlling the execution of the program instructions, characterized in that the program instructions are loaded and executed by the processor to implement the steps of the collision time calculation method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011428028.7A CN112477868B (en) | 2020-12-07 | 2020-12-07 | Collision time calculation method and device, readable storage medium and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011428028.7A CN112477868B (en) | 2020-12-07 | 2020-12-07 | Collision time calculation method and device, readable storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112477868A CN112477868A (en) | 2021-03-12 |
CN112477868B true CN112477868B (en) | 2022-03-29 |
Family
ID=74940702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011428028.7A Active CN112477868B (en) | 2020-12-07 | 2020-12-07 | Collision time calculation method and device, readable storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112477868B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113954867B (en) * | 2021-09-29 | 2023-10-20 | 广州文远知行科技有限公司 | Method, device, equipment and storage medium for rapidly calculating time from object to collision |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108725440B (en) * | 2018-04-20 | 2020-11-27 | 深圳市商汤科技有限公司 | Forward collision control method and apparatus, electronic device, program, and medium |
CN109159667A (en) * | 2018-07-28 | 2019-01-08 | 上海商汤智能科技有限公司 | Intelligent driving control method and device, vehicle, electronic equipment, medium, product |
US11260852B2 (en) * | 2019-03-26 | 2022-03-01 | GM Global Technology Operations LLC | Collision behavior recognition and avoidance |
CN110203210A (en) * | 2019-06-19 | 2019-09-06 | 厦门金龙联合汽车工业有限公司 | A kind of lane departure warning method, terminal device and storage medium |
CN110992710B (en) * | 2019-12-13 | 2021-03-16 | 潍柴动力股份有限公司 | Curve speed measurement early warning method and device, control equipment and readable storage medium |
- 2020-12-07: CN application CN202011428028.7A filed; granted as patent CN112477868B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112477868A (en) | 2021-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3293487A1 (en) | Data structure of environment map, environment map preparing system and method, and environment map updating system and method | |
US11144770B2 (en) | Method and device for positioning vehicle, device, and computer readable storage medium | |
CN109532662B (en) | Method and device for calculating distance between vehicles and collision time | |
US11087224B2 (en) | Out-of-vehicle communication device, out-of-vehicle communication method, information processing device, and computer readable medium | |
JP2017138660A (en) | Object detection method, object detection device and program | |
CN114495064A (en) | Monocular depth estimation-based vehicle surrounding obstacle early warning method | |
CN114419143A (en) | Depth measuring method, depth measuring apparatus, and storage medium | |
CN112927309A (en) | Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium | |
US11935304B2 (en) | Calibration system, calibration method, and calibration program | |
CN116486351A (en) | Driving early warning method, device, equipment and storage medium | |
CN111881245B (en) | Method, device, equipment and storage medium for generating visibility dynamic map | |
CN112477868B (en) | Collision time calculation method and device, readable storage medium and computer equipment | |
CN116592872A (en) | Method and device for updating occupied grid map and related equipment | |
CN114662587B (en) | Three-dimensional target perception method, device and system based on laser radar | |
CN114919584A (en) | Motor vehicle fixed point target distance measuring method and device and computer readable storage medium | |
JP2020118575A (en) | Inter-vehicle distance measurement device, error model generation device, learning model generation device, and method and program thereof | |
CN118311955A (en) | Unmanned aerial vehicle control method, terminal, unmanned aerial vehicle and storage medium | |
CN113240750A (en) | Three-dimensional space information measuring and calculating method and device | |
CN112837384A (en) | Vehicle marking method and device and electronic equipment | |
CN115542301B (en) | Method, device and equipment for calibrating external parameters of laser radar and storage medium | |
CN112902911A (en) | Monocular camera-based distance measurement method, device, equipment and storage medium | |
CN114638947A (en) | Data labeling method and device, electronic equipment and storage medium | |
CN115930978A (en) | Map creating method and device | |
JP2020034451A (en) | Data structure, storage medium, and storage device | |
CN111596288B (en) | Method and device for measuring speed, vehicle-mounted terminal and vehicle-mounted speed measuring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||