
CN115436900A - Target detection method, device, equipment and medium based on radar map - Google Patents

Target detection method, device, equipment and medium based on radar map

Info

Publication number
CN115436900A
CN115436900A
Authority
CN
China
Prior art keywords
target
information
current
frame information
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211065877.XA
Other languages
Chinese (zh)
Inventor
陶征
张军
许孝勇
顾超
章庆
朱大安
仇世豪
王长冬
张辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Hurys Intelligent Technology Co Ltd
Original Assignee
Nanjing Hurys Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Hurys Intelligent Technology Co Ltd filed Critical Nanjing Hurys Intelligent Technology Co Ltd
Priority to CN202211065877.XA
Publication of CN115436900A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to group G01S13/00
    • G01S7/41 Details of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411 Identification of targets based on measurements of radar reflectivity
    • G01S7/412 Identification of targets based on a comparison between measured values and known or stored values

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a target detection method, device, equipment and medium based on a radar map. The method comprises the following steps: acquiring a current radar image of a radar detection area; obtaining a current contour frame information base according to the current radar image, wherein the current contour frame information base comprises current contour frame information of at least one target to be detected; and detecting the target to be detected according to the matching information between its current contour frame information and a historical contour frame information base, the historical contour frame information base being determined according to historical radar images. With this technical scheme, the target to be detected is accurately detected based on how well its current contour frame information in the current radar image matches the historical contour frame information from historical radar images.

Description

Target detection method, device, equipment and medium based on radar map
Technical Field
The invention relates to the technical field of image detection, in particular to a target detection method, a target detection device, target detection equipment and a target detection medium based on a radar map.
Background
Digital traffic is an important field of the digital economy, and the traffic industry is gradually becoming intelligent, digitalized and informatized. The problems caused by objects thrown or dropped onto roads are drawing increasing attention. For example, in traffic scenarios such as urban roads, expressways, railways and water transport, such thrown objects (projectiles) can easily cause a series of traffic accidents, greatly reduce traffic capacity and create serious safety problems. How to identify and handle projectiles accurately and in a timely manner has become an important topic in the field of intelligent traffic safety.
In a related scheme for detecting projectiles on a road, a scanning radar is used to generate a panoramic radar map. However, many image factors on the road can interfere with projectile detection in the radar map, making it difficult to detect quickly and accurately whether a projectile is present, and possibly causing misjudgment. It is therefore vital to detect projectiles and identify traffic events quickly and accurately, so as to improve traffic efficiency and guarantee safety.
Disclosure of Invention
The invention provides a target detection method, device, equipment and medium based on a radar map, aiming to solve the problem of how to quickly determine from a radar image whether a target is a projectile.
According to an aspect of the present invention, there is provided a radar map-based object detection method, the method including:
acquiring a current radar image of a radar detection area;
obtaining a current outline frame information base according to the current radar image; the current outline frame information base comprises current outline frame information of at least one target to be detected;
detecting the target to be detected according to the matching information of the current contour frame information and the historical contour frame information base of the target to be detected; and determining the historical profile frame information base according to the historical radar image.
According to another aspect of the present invention, there is provided a radar map-based object detecting apparatus, the apparatus including:
the image determining module is used for acquiring a current radar image of a radar detection area;
the information base determining module is used for obtaining a current outline frame information base according to the current radar image; the current outline frame information base comprises current outline frame information of at least one object to be detected;
the detection module is used for detecting the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base; and determining the historical profile frame information base according to the historical radar image.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform a radar map based object detection method according to any one of the embodiments of the invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the radar map-based target detection method according to any one of the embodiments of the present invention when executed.
According to the technical scheme of the embodiment of the invention, the current outline frame information base is accurately obtained through the obtained current radar image of the radar detection area, wherein the current outline frame information base comprises the current outline frame information of at least one target to be detected, the target to be detected is detected according to the current outline frame information of the target to be detected and the matching information of the historical outline frame information base, and the historical outline frame information base is determined according to the historical radar image. According to the technical scheme, the target to be detected is accurately detected through the matching condition of the current outline frame information of the target to be detected in the current radar image and the historical outline frame information in the historical radar image.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a target detection method based on a radar chart according to an embodiment of the present invention;
FIG. 2 is a flowchart of a target detection method based on a radar chart according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a method for selecting a polygon area according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a target detection apparatus based on a radar chart according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the radar map-based target detection method according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a radar-map-based target detection method according to an embodiment of the present invention. The present embodiment is applicable to detecting targets based on a radar map in a scene (e.g., an expressway) that is fixed relative to the radar detection area. The method may be performed by a radar-map-based target detection apparatus, which may be implemented in hardware and/or software and may be configured in an electronic device that performs the radar-map-based target detection method. As shown in fig. 1, the method includes:
and S110, acquiring a current radar image of the radar detection area.
The current radar image may be a radar image formed when, after a target enters the radar detection area, the radar transmitter transmits radio waves to the area and the receiver receives the scattered echoes. The current radar image can be divided into a plurality of small cells, each of which may be called a pixel. By analyzing and processing the information of each pixel, the information features of the detection position point in the radar detection area corresponding to that pixel can be obtained, and the information of the target to be detected can then be determined. For example, after the position, color, brightness and other information of each pixel in the current radar image are analyzed and processed, the situation of the target to be detected in the radar detection area can be accurately obtained.
S120, obtaining a current outline frame information base according to the current radar image; and the current outline frame information base comprises current outline frame information of at least one object to be detected.
The current contour frame information base may be an information base for storing the current contour frame information of targets to be detected, obtained after the current radar image is analyzed and processed. The current contour frame information describes the border of the contour of the target to be detected in the current radar image; for example, it may be the border information of the minimum axis-aligned rectangle enclosing the contour (the minimum bounding rectangle).
Specifically, contour extraction is performed on the target to be detected in the current radar image, so that its current contour frame information is accurately obtained and stored in the current contour frame information base; this helps make subsequent projectile detection of the target more accurate.
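The construction of the current contour frame information base can be sketched as follows. The patent does not specify a segmentation method, so this is a minimal, assumed pipeline: threshold the radar image, group echo pixels into 4-connected components, and record each component's minimum bounding rectangle as (r_tl, c_tl, r_br, c_br). The function name and threshold parameter are illustrative:

```python
def extract_contour_boxes(image, threshold):
    """Build the current contour frame information base from a 2-D radar
    image: threshold the pixels, group them into 4-connected components,
    and return each component's minimum bounding rectangle
    (r_tl, c_tl, r_br, c_br). Minimal sketch; the patent does not
    specify the segmentation method."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for i in range(rows):
        for j in range(cols):
            if image[i][j] >= threshold and not seen[i][j]:
                # Flood-fill one connected component of echo pixels.
                stack = [(i, j)]
                seen[i][j] = True
                r_tl, c_tl, r_br, c_br = i, j, i, j
                while stack:
                    r, c = stack.pop()
                    r_tl, r_br = min(r_tl, r), max(r_br, r)
                    c_tl, c_br = min(c_tl, c), max(c_br, c)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                boxes.append((r_tl, c_tl, r_br, c_br))
    return boxes
```

A production system would typically add noise filtering and a minimum-area check before accepting a component as a candidate target.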
S130, detecting the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base; and determining the historical profile frame information base according to the historical radar image.
The current contour frame information comprises contour frame position information, which may be position coordinate information indicating where the target is located. The historical contour frame information base comprises at least one piece of historical contour frame information, and each piece of historical contour frame information at least comprises contour frame position information and contour frame attribute information. The historical contour frame information base may be an information base of historical contour frame information of targets to be detected, obtained by analyzing and processing historical radar images, where a historical radar image may be the previous frame of the current radar image. The matching information comprises the coincidence degree of the contour frame position information.
In a feasible embodiment, the detecting the target to be detected according to the matching information of the current contour frame information and the historical contour frame information base of the target to be detected includes the following steps A1-A4:
and A1, traversing the historical outline frame information in the historical outline frame information base, and sequentially determining the contact ratio of the historical outline frame information and each piece of current outline frame information in the current outline frame information base according to the outline frame position information.
And A2, determining the current contour frame information of the target with the highest coincidence degree with the historical contour frame information in the current contour frame information base.
And A3, updating the historical outline frame information according to the coincidence degree of the current outline frame information and the historical outline frame information of the target.
And A4, detecting the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information.
Specifically, the historical contour frame information in the historical contour frame information base is traversed, and the coincidence degree between the historical contour frame information and each piece of current contour frame information in the current contour frame information base is determined in sequence according to the contour frame position information. The coincidence degree D can be determined according to the following formula:

D = S_inter / (S_hist + S_curr − S_inter)

S_inter = max(0, min(r_br, r′_br) − max(r_tl, r′_tl)) · max(0, min(c_br, c′_br) − max(c_tl, c′_tl))

where (r_tl, c_tl) and (r_br, c_br) respectively represent the upper-left and lower-right corner position information of the rectangular bounding box of the historical contour frame information, (r′_tl, c′_tl) and (r′_br, c′_br) respectively represent the upper-left and lower-right corner position information of the rectangular bounding box of the current contour frame information, and S_hist and S_curr are the areas of the two bounding boxes.
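This coincidence degree is the standard intersection-over-union of two axis-aligned rectangles, which can be sketched in Python as follows (a minimal sketch; the function name and the (r_tl, c_tl, r_br, c_br) tuple representation are illustrative):

```python
def coincidence_degree(box_a, box_b):
    """Overlap ratio (IoU) of two axis-aligned boxes, each given as
    (r_tl, c_tl, r_br, c_br)."""
    r_tl = max(box_a[0], box_b[0])
    c_tl = max(box_a[1], box_b[1])
    r_br = min(box_a[2], box_b[2])
    c_br = min(box_a[3], box_b[3])
    # The intersection is empty when the boxes do not overlap.
    inter = max(0, r_br - r_tl) * max(0, c_br - c_tl)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The value ranges from 0 (disjoint boxes) to 1 (identical boxes), which is what makes a single preset threshold T meaningful for matching.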
According to the coincidence degree of the determined historical contour frame information and the current contour frame information in the current contour frame information base, the target current contour frame information with the highest coincidence degree with the historical contour frame information in the current contour frame information base is found out, and the historical contour frame information is updated according to the coincidence degree, so that the historical contour frame information base is more accurate, and the object to be detected can be accurately subjected to the spill object detection according to the historical contour frame information base.
According to the above technical scheme, the coincidence degree between the historical contour frame information and each piece of current contour frame information in the current contour frame information base is accurately calculated by formula; the current contour frame information with the highest coincidence degree is taken as the target current contour frame information corresponding to the historical contour frame information; and the historical contour frame information is updated according to the coincidence degree, so that it represents the target to be detected more accurately. The target to be detected corresponding to the target current contour frame information is then detected according to the updated historical contour frame information, so that whether it is a projectile can be determined quickly and accurately.
In a possible embodiment, updating the historical contour bounding box information according to the coincidence ratio of the target current contour bounding box information and the historical contour bounding box information may include the following steps B1-B2:
B1, if the coincidence degree is greater than or equal to a preset coincidence degree threshold, updating the contour frame position information in the historical contour frame information according to the contour frame position information in the target current contour frame information, incrementing the count of consecutive stationary frames, and setting the count of consecutive no-target-association frames to zero;
and B2, if the coincidence degree is smaller than the preset coincidence degree threshold, incrementing the count of consecutive no-target-association frames and setting the count of consecutive stationary frames to zero.
The historical contour frame information also comprises contour frame attribute information, which includes at least the number of consecutive no-target-association frames and the number of consecutive stationary frames. The consecutive no-target-association frame number φ may be the number of consecutive radar image frames in which the target corresponding to the target current contour frame information fails to appear. The consecutive stationary frame number τ represents the number of frames during which the position of the target to be detected has stopped changing, and can be used to judge how long the target has been stationary. The preset coincidence degree threshold T is the minimum value at which historical contour frame information is considered to match a piece of current contour frame information in the current contour frame information base, and needs to be set according to actual conditions.
Specifically, the coincidence degree between each piece of historical contour frame information and each piece of current contour frame information in the current contour frame information base is obtained. If the coincidence degree is greater than or equal to the preset coincidence degree threshold, the historical contour frame information and the corresponding target current contour frame information represent the same target; the contour frame position information in the historical contour frame information is updated according to the contour frame position information in the target current contour frame information, i.e., (r_tl, c_tl) and (r_br, c_br) are updated to (r′_tl, c′_tl) and (r′_br, c′_br) respectively; the consecutive stationary frame count is incremented, i.e., τ = τ + 1; and the consecutive no-target-association frame count is set to zero, i.e., φ = 0. If the coincidence degree is smaller than the preset coincidence degree threshold, indicating that the target is not in the radar detection area at the current moment, the consecutive no-target-association frame count is incremented, i.e., φ = φ + 1, and the consecutive stationary frame count is set to zero, i.e., τ = 0.
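Steps B1–B2 can be sketched as follows (a minimal sketch; the Track dataclass and its field names are assumptions for illustration, not from the patent):

```python
from dataclasses import dataclass


@dataclass
class Track:
    """One historical contour frame entry; field names are illustrative."""
    box: tuple      # (r_tl, c_tl, r_br, c_br)
    tau: int = 0    # consecutive stationary frames
    phi: int = 0    # consecutive frames with no target association


def update_track(track, best_box, overlap, threshold):
    """Apply B1/B2 given the best-matching current box and its overlap."""
    if overlap >= threshold:
        # B1: same target -- adopt the new box, count a stationary frame.
        track.box = best_box
        track.tau += 1
        track.phi = 0
    else:
        # B2: no match this frame -- count a no-association frame.
        track.phi += 1
        track.tau = 0
```

Note that τ and φ are mutually exclusive counters: a frame either associates the track with a current box (resetting φ) or fails to (resetting τ).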
According to the technical scheme, the overlap ratio and the preset overlap ratio threshold value are used for judging, the historical contour frame information is accurately updated, the follow-up accurate detection of the target is facilitated, and the detection error of the projectile caused by misjudgment is avoided.
In a feasible embodiment, detecting the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information may include the following steps C1-C2:
and step C1, if the updated continuous static frame number is greater than a preset continuous static frame number threshold value and the moving distance is smaller than a preset distance threshold value, determining that the target to be detected corresponding to the current contour frame information of the target is a projectile, and deleting the current contour frame information of the target from a current contour frame information base.
And step C2, if the updated continuous static frame number is larger than a preset continuous static frame number threshold value and the moving distance is larger than or equal to a preset distance threshold value, determining that the target to be detected corresponding to the target current contour frame information is a parking vehicle, and deleting the target current contour frame information from a current contour frame information base.
The historical contour frame information also comprises contour frame start position information; the matching information further comprises the moving distance between the contour frame start position information and the updated contour frame position information. The moving distance L can be formulated as:

L = √((r − r_0)² + (c − c_0)²)

where (r_0, c_0) is the contour frame start position information and (r, c) is the center of the updated contour frame position information.
The preset consecutive stationary frame number threshold is the minimum number of consecutive stationary frames at which the target is judged to have stopped. The preset distance threshold may be the maximum distance a target can have moved while still being considered a projectile.
Specifically, after the coincidence degree is determined to be greater than or equal to the preset coincidence degree threshold, the updated historical contour frame information of the target to be detected corresponding to the target current contour frame information is extracted from the historical contour frame information base, and whether the updated consecutive stationary frame number is greater than the preset consecutive stationary frame number threshold is judged. If it is, the target to be detected has stopped, and whether the moving distance is smaller than the preset distance threshold is then judged: if the moving distance is smaller than the preset distance threshold, the target to be detected corresponding to the target current contour frame information is determined to be a projectile, and the target current contour frame information is deleted from the current contour frame information base; if the moving distance is greater than or equal to the preset distance threshold, the target is determined to be a parked vehicle, and the target current contour frame information is likewise deleted from the current contour frame information base. If the updated consecutive stationary frame number is not greater than the threshold, the target to be detected has not stopped, and the historical contour frame information continues to be updated.
According to the technical scheme, the target to be detected corresponding to the current contour frame information of the target is detected through the updated contour frame information, vehicles and sprinklers are accurately distinguished, and the target to be detected is accurately and quickly judged.
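Steps C1 and C2 amount to a two-threshold rule on the stationary-frame count and the moving distance. A minimal sketch (the function name, argument names and return labels are illustrative, not from the patent):

```python
import math


def classify_stopped_target(tau, start_pos, center_pos,
                            min_still_frames, max_projectile_dist):
    """Return 'projectile', 'parked_vehicle', or None if still moving.

    tau               -- consecutive stationary frame count
    start_pos         -- (r0, c0) contour frame start position
    center_pos        -- (r, c) center of the updated contour frame
    """
    if tau <= min_still_frames:
        return None  # target has not been stationary long enough
    moved = math.dist(start_pos, center_pos)  # Euclidean moving distance L
    # A stopped target that barely moved since first detection is debris;
    # one that travelled a long way before stopping is a parked vehicle.
    return "projectile" if moved < max_projectile_dist else "parked_vehicle"
```

The intuition behind the distance test: a projectile appears and stays roughly where it landed, whereas a vehicle drives some distance before parking.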
In a feasible embodiment, after the detection of the projectile on the target to be detected corresponding to the current contour frame information of the target is performed according to the updated historical contour frame information, the method further includes:
and if the updated continuous non-target associated frame number is greater than a preset continuous non-target associated frame number threshold value, deleting the historical profile frame information from a historical profile frame information base.
The preset continuous non-target associated frame number threshold is used for judging whether the target still exists in the radar detection area.
Specifically, after the coincidence degree is determined to be smaller than the preset coincidence degree threshold, the updated historical contour frame information of the corresponding target to be detected is extracted from the historical contour frame information base. If the updated consecutive no-target-association frame number is greater than the preset consecutive no-target-association frame number threshold, the target is no longer in the radar detection area and the updated historical contour frame information is invalid, so it is deleted from the historical contour frame information base.
According to the technical scheme, whether the updated historical profile frame information of the target to be detected is invalid information is determined by judging whether the updated continuous non-target associated frame number is greater than a preset continuous non-target associated frame number threshold value, and if the updated historical profile frame information is invalid information, the updated historical profile frame information is deleted from the historical profile frame information base in time so as to ensure the accuracy of the historical profile frame information base.
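The pruning rule above can be sketched as follows, assuming each historical entry is represented as a dict carrying its consecutive no-target-association count φ (an illustrative representation):

```python
def prune_tracks(history, max_phi):
    """Drop historical entries whose consecutive no-target-association
    count has exceeded the preset threshold; keep the rest in order."""
    return [track for track in history if track["phi"] <= max_phi]
```

Pruning keeps the historical contour frame information base small and prevents long-gone targets from matching against new detections.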
In a feasible embodiment, after the projectile detection is performed on the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information, the method further includes:
adding new historical outline frame information in the historical outline frame information base according to the remaining current outline frame information in the current outline frame information base, initializing the outline frame attribute information of the new historical outline frame information, and taking the outline frame position information in the current outline frame information base as the outline frame initial position information.
Specifically, the current contour frame information base is traversed. If current contour frame information still remains in it, a new target has appeared in the radar detection area, so new historical contour frame information needs to be added to the historical contour frame information base for the remaining current contour frame information, to keep the historical contour frame information base complete. The contour frame attribute information of the new historical contour frame information is initialized, i.e., τ = 0 and φ = 0, and the contour frame position information in the current contour frame information is taken as the contour frame start position information (r_0, c_0). During subsequent updates of the historical contour frame information, the contour frame start position information remains unchanged while the contour frame position information is continuously updated. The contour frame start position information can be expressed as:

(r_0, c_0) = ((r_tl + r_br) / 2, (c_tl + c_br) / 2)

where (r_tl, c_tl) and (r_br, c_br) respectively represent the upper-left and lower-right corner position information of the minimum bounding rectangle.
Exemplarily, the current contour frame information base C contains three pieces of current contour frame information C1, C2 and C3, and the historical contour frame information base S contains three pieces of historical contour frame information S1, S2 and S3. Through the coincidence degree calculation in the matching information, C1, C2 and C3 are determined to be the target current contour frame information corresponding to S1, S2 and S3 respectively. If the coincidence degree between C1 and S1 is greater than the preset coincidence degree threshold, the historical contour frame information S1 is updated and C1 is deleted from the current contour frame information base C. If the coincidence degrees between C2 and S2 and between C3 and S3 are smaller than the preset threshold, the contour frame attribute information of S2 and S3 is updated directly, and C2 and C3 are added to the historical contour frame information base S as new entries.
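Adding the remaining (unmatched) current contour frame information as new historical entries can be sketched as follows (the dict-based representation is an assumption; the start position is taken as the center of the minimum bounding rectangle):

```python
def add_new_tracks(history, remaining_boxes):
    """Each unmatched current box starts a new historical entry with
    initialized attribute information (tau = phi = 0) and the box
    center recorded as the contour frame start position (r0, c0)."""
    for (r_tl, c_tl, r_br, c_br) in remaining_boxes:
        history.append({
            "box": (r_tl, c_tl, r_br, c_br),
            "tau": 0,
            "phi": 0,
            # Start position: center of the minimum bounding rectangle.
            "start": ((r_tl + r_br) / 2, (c_tl + c_br) / 2),
        })
    return history
```

The `start` field is then held fixed across later updates, while `box` is refreshed every matched frame, so the moving distance L can always be computed against the first-seen position.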
According to this technical scheme, the remaining current contour frame information in the current contour frame information base is added to the historical contour frame information base, so that the historical contour frame information base is expanded and its accuracy is ensured, misjudgment of target detection caused by information omission is avoided, and the accuracy of target detection is improved.
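The absorb-remaining step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the dictionary keys `box`, `start`, `tau` and `phi` are assumed names, and boxes are represented as ((r_tl, c_tl), (r_br, c_br)) corner pairs.

```python
def absorb_remaining(current_base, history_base):
    """Each unmatched current contour box becomes a new historical record:
    attributes are initialised (tau = 0, phi = 0) and the current box is
    stored as the fixed start position, which is never updated afterwards."""
    for box in current_base:
        history_base.append({'box': box, 'start': box, 'tau': 0, 'phi': 0})
    current_base.clear()  # all remaining current boxes have been consumed
    return history_base
```

After this step the current contour frame information base is empty and every target, new or old, is represented in the historical base.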
In a feasible embodiment, after the target to be detected is detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base, the method further includes:
determining the position information of the target according to the position information of the contour frame in the current contour frame information of the target, wherein the position information of the target can be determined according to the following formula:
x = (c − Q/2)·δ, y = (P/2 − r)·δ
wherein (x, y) represents the position information of the target, (r, c) represents the center position information of the minimum circumscribed rectangle bounding box,
r = (r_tl + r_br)/2, c = (c_tl + c_br)/2,
(r_tl, c_tl) and (r_br, c_br) respectively represent the upper-left corner position information and the lower-right corner position information of the minimum circumscribed rectangle bounding box, δ represents the correspondence between a pixel point in the current radar image and the actual area, and P and Q respectively represent the number of rows and columns of the current radar image.
The contour frame position information is the minimum circumscribed rectangle bounding box information.
According to this technical scheme, the position information of the target is accurately calculated by the above formula from the contour frame position information in the target current contour frame information, so that a target on the road can be handled in time according to its position information, ensuring the driving safety of the road.
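The position calculation can be sketched as follows, under the assumption that the radar sits at the center of the image (consistent with the circular radar detectable area of fig. 3) and that δ is the actual size covered by one pixel. These assumptions, and the function names, are illustrative rather than the patent's own:

```python
def box_center(r_tl, c_tl, r_br, c_br):
    """Center (r, c) of the minimum circumscribed rectangle bounding box."""
    return (r_tl + r_br) / 2.0, (c_tl + c_br) / 2.0

def pixel_to_world(r, c, delta, P, Q):
    """Map a pixel-grid position to real-world coordinates: x grows with the
    column index away from the image center, y grows upward (row index falls).
    delta is the pixel-to-actual-area scale; P, Q are row/column counts."""
    x = (c - Q / 2.0) * delta
    y = (P / 2.0 - r) * delta
    return x, y
```

For example, the center of the box ((10, 20), (30, 40)) in a 100×100 image with δ = 0.5 maps to (−10.0, 15.0).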
According to the technical scheme of the embodiment of the invention, the current radar image of the radar detection area is obtained and analyzed, so that the current contour frame information base is accurately obtained, where the current contour frame information base includes current contour frame information of at least one target to be detected. The coincidence degree of the historical contour frame information with each piece of current contour frame information in the current contour frame information base is determined in turn according to the contour frame position information, and the contour frame attribute information of the historical contour frame information is updated according to the coincidence degree, so as to ensure the accuracy of the historical contour frame information base. Finally, the target to be detected is detected according to the updated historical contour frame information, so that the target is accurately determined and located, and the target to be detected is detected quickly and accurately.
Example two
Fig. 2 is a flowchart of a target detection method based on a radar chart according to a second embodiment of the present invention, and this embodiment describes details of obtaining a current outline border information base according to a current radar image in the above embodiment. As shown in fig. 2, the method includes:
s210, obtaining a current radar image of a radar detection area, and separating a background and a foreground in the current radar image to obtain a target radar image with the background removed.
Specifically, the radar detection area is arranged in an expressway scene, and due to the influence of the expressway structure, the expressway area actually required to be detected in the current radar image is a polygonal area. For example, referring to fig. 3 for a radar image obtained in an expressway scene, the circular area 1 in the figure is the radar detectable area, and the polygonal area 2 is the actually required detection area where the expressway is located. Specifically, the inflection point position information of the expressway is determined according to the map information of the expressway, the position information on the radar image corresponding to each inflection point of the expressway is determined according to the correspondence between pixel points in the radar image and the actual area, and the corresponding polygonal area is then obtained and used as the final current radar image.
In addition, because trees on both sides of the expressway sway in the wind, the background noise during radar detection can vary greatly, so the background of the current radar image can be removed by a self-learning background removal method to obtain an accurate, background-removed target radar image. During learning, the self-learning background removal method requires the detected expressway to be temporarily closed (about 10 minutes) for background learning. This background removal method removes the background well from radar images with large background-noise fluctuation and has good robustness.
In a possible embodiment, the separating the background and the foreground in the current radar image to obtain the target radar image with the background removed may include the following steps D1-D3:
and D1, aiming at the pixel points to be identified in the current radar image, determining target detection position points corresponding to the pixel points to be identified, which are mapped in a radar detection area, and a preset signal intensity probability distribution model corresponding to the radar when the radar scans at the target detection position points.
D2, detecting a matching result of a preset signal intensity probability distribution model corresponding to the value of the pixel point to be identified and the target detection position point; the preset signal intensity probability distribution model is used for describing the signal intensity probability distribution of radar reflected waves when target detection position points are scanned under the condition that the radar detection area does not include the foreground.
And D3, separating the background and the foreground in the current radar image according to the matching result of each pixel point to be identified to obtain a target radar image.
Each pixel point value in the current radar image describes the signal intensity of the radar reflected wave when the radar scans the corresponding detection position point, and the current radar image is a gray-scale image. The pixel points to be identified may be the pixel points that need to be detected in the current radar image. The target detection position points may be the detection positions in the radar detection area corresponding to the pixel points to be identified in the current radar image; each pixel point in the current radar image has a one-to-one correspondence with a target detection position point in the radar detection area. The preset signal intensity probability distribution model is a set of normal distribution models obtained by training, for each target detection position point, the normal distribution models that the background at that position point satisfies.
Specifically, the radar detection area is scanned by the microwave radar to obtain the current radar image of the radar detection area, and the target detection position point corresponding to each pixel point to be identified in the current radar image is determined. This ensures the accuracy of the correspondence between the pixel points of the current image and the target detection position points, so that after a pixel point to be identified is analyzed, the result can be accurately mapped to its target detection position point for subsequent processing. Meanwhile, the normal distribution models corresponding to the target detection position points are accurately learned to obtain an accurate preset signal intensity probability distribution model. The value of the pixel point to be identified in the current radar image is then substituted into each normal distribution model in the preset signal intensity probability distribution model to judge whether they match; if the value of the pixel point to be identified matches one normal distribution model in the preset signal intensity probability distribution model, it matches the preset signal intensity probability distribution model, and the matching result of the value of the pixel point to be identified with the preset signal intensity probability distribution model of the target detection position point can be accurately obtained. Finally, according to the matching result, each pixel point to be identified can be distinguished as a background pixel or a foreground pixel, the foreground and background in the current radar image can be separated, and the background-removed target radar image can be obtained.
According to this technical scheme, the preset signal intensity probability distribution model corresponding to the radar scanning the target detection position point is accurately acquired through self-learning. The value of the pixel point to be identified in the current radar image is substituted into each normal distribution model in the preset signal intensity probability distribution model to judge whether they match, thereby obtaining the matching result, accurately determining whether each pixel point to be identified belongs to a background pixel or a foreground pixel, accurately removing the background in the current radar image while retaining only the foreground, and quickly and accurately obtaining the background-removed target radar image.
In a feasible embodiment, detecting a matching result of a preset signal intensity probability distribution model corresponding to the value of the pixel point to be identified and the target detection position point may include the following steps E1 to E3:
e1, detecting whether at least one normal distribution model in a preset signal intensity probability distribution model corresponding to the value of the pixel point to be identified and the target detection position point meets a preset matching condition; the preset matching condition comprises that the value of the pixel point to be identified and the mean value of the normal distribution model meet a preset Laviand criterion.
And E2, if at least one normal distribution model meeting the preset matching condition exists, determining that the pixel point to be identified belongs to the background pixel in the current radar image.
And E3, if the normal distribution model meeting the preset matching condition does not exist, determining that the pixel point to be identified belongs to the foreground pixel in the current radar image.
The preset matching condition may be used to judge whether the value of the pixel point to be identified satisfies at least one normal distribution model in the preset signal intensity probability distribution model of the target detection position point. The preset Laviand criterion (i.e., the 3σ criterion) may be expressed by the following formula:
|x_ij − μ_ij| ≤ 3·σ_ij
in the formula, x_ij is the value of the pixel point to be identified, μ_ij is the mean in the preset signal intensity probability distribution model corresponding to the target detection position point, and σ_ij² is the variance in the preset signal intensity probability distribution model corresponding to the target detection position point.
Specifically, the current radar image is obtained by scanning the radar detection area with the radar, and the value of a pixel point to be identified in the current radar image is input into the preset signal intensity probability distribution model corresponding to the target detection position point to judge whether at least one normal distribution model in it satisfies the preset matching condition. If at least one normal distribution model satisfying the preset matching condition exists, the pixel point to be identified is determined to belong to a background pixel in the current radar image; if no normal distribution model satisfying the preset matching condition exists, the pixel point to be identified is determined to belong to a foreground pixel in the current radar image.
According to this technical scheme, by detecting whether the value of the pixel point to be identified satisfies at least one normal distribution model in the preset signal intensity probability distribution model of the corresponding target detection position point, whether the pixel point to be identified belongs to a background pixel or a foreground pixel in the current radar image can be accurately determined, so that the pixel points in the radar image can be accurately judged and the background and foreground in the current radar image can be accurately separated.
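The per-pixel matching test of steps E1–E3 can be sketched as follows. This is a minimal illustration under stated assumptions: the learned models for one position point are represented as parallel lists of means and variances, and the function name is assumed:

```python
import math

def is_background(pixel_value, means, variances, k=3.0):
    """True if the pixel value matches at least one learned normal model
    for its target detection position point, under the 3-sigma criterion
    |x - mean| <= k * std; otherwise the pixel is foreground."""
    return any(abs(pixel_value - m) <= k * math.sqrt(v)
               for m, v in zip(means, variances))
```

For example, with a single learned model of mean 100 and variance 4 (standard deviation 2), a pixel value of 105 lies within 3σ and is classified as background, while 110 is foreground.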
And S220, determining the position information of the target area to be detected in the foreground of the target radar image.
The target area to be detected can be a minimum circumscribed rectangular area of a target to be detected in the target radar image.
Specifically, the current contour frame information can be accurately determined only by accurately acquiring the position information of the target area to be detected included in the foreground in the target radar image.
In a possible embodiment, determining the position information of the target region to be detected included in the foreground in the target radar image may include the following steps F1 to F3:
step F1, performing morphological processing on the target radar image to obtain a processed radar image; a target area to be detected in the foreground of the target radar image is divided into different sub-areas after the foreground and the background are separated.
And F2, performing Gaussian smoothing on the processed radar image, and performing edge detection on the processed radar image after Gaussian smoothing to obtain an edge detection image of the target radar image.
And F3, extracting the outer boundary inflection point of the edge detection graph to obtain the outer boundary inflection point position information of the target area to be detected in the foreground of the target radar image, wherein the outer boundary inflection point position information is used as the position information of the target area to be detected.
Specifically, in the process of forming the target radar image, the target area to be detected may be divided into different sub-areas. To eliminate internal voids and/or gaps between neighbouring sub-areas of the target area to be detected, a morphological dilation operation is first performed on the target radar image to obtain a dilated radar image. Because regions grow after dilation, a morphological erosion operation is then performed on the dilated radar image to obtain an eroded radar image, which is taken as the processed radar image; this restores the regions to their pre-dilation area, so that the processed radar image represents the target area to be detected more accurately. Because radar detection produces some noise, Gaussian smoothing is applied to the processed radar image to further eliminate the small noise points caused by radar detection and strengthen the accuracy of the image. Edge detection is then performed on the Gaussian-smoothed processed radar image to obtain the edge detection map of the target radar image. Finally, outer boundary inflection point extraction is performed on the edge detection map to obtain the outer boundary inflection point position information of the target area to be detected in the foreground of the target radar image, which is used as the position information of the target area to be detected. Any method in the prior art may be adopted for the outer boundary inflection point extraction, which is not limited in the embodiment of the present invention. The finally determined position information of the target region to be detected may be expressed as:
D_i = {(r_i^m, c_i^m), m = 1, 2, …, M_i}
wherein D_i represents the set of outer boundary inflection point coordinates of the i-th target region, and (r_i^m, c_i^m) represents the row and column pixel coordinates of the m-th inflection point of the outer boundary of the i-th target region.
According to this technical scheme, the processed radar image obtained by morphological processing of the target radar image represents the target area to be detected more accurately. In addition, Gaussian smoothing of the processed radar image further eliminates the small noise points caused by radar detection and strengthens the accuracy of the image. Edge detection is then performed on the Gaussian-smoothed processed radar image to obtain the edge detection map of the target radar image, and finally outer boundary inflection point extraction is performed on the edge detection map to obtain the outer boundary inflection point position information of the target area to be detected in the foreground of the target radar image, which is used as the position information of the target area to be detected and is conducive to subsequently obtaining accurate current contour frame information.
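The morphological step above (dilation followed by erosion, i.e. a closing) can be sketched in pure NumPy with a 3×3 structuring element; in practice an image library such as OpenCV would typically cover the whole pipeline (`cv2.morphologyEx` for closing, `cv2.GaussianBlur` for smoothing, `cv2.Canny` for edge detection). The helper names below are illustrative:

```python
import numpy as np

def _neighbourhood(img, pad_value):
    """Stack of the nine 3x3-shifted copies of a 2-D binary image."""
    p = np.pad(img, 1, constant_values=pad_value)
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def dilate(img):
    # Each pixel takes the maximum of its 3x3 neighbourhood (regions grow).
    return _neighbourhood(img, 0).max(axis=0)

def erode(img):
    # Each pixel takes the minimum of its 3x3 neighbourhood (regions shrink).
    return _neighbourhood(img, 1).min(axis=0)

def close_regions(img):
    """Morphological closing (dilation then erosion): fills internal voids
    and bridges gaps between neighbouring sub-regions, then restores the
    regions to roughly their pre-dilation extent."""
    return erode(dilate(img))
```

A one-pixel hole inside a solid binary region is filled by the closing, while an empty image stays empty.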
And S230, determining the minimum circumscribed rectangle bounding box information of the target area to be detected as the current outline bounding box information according to the position information of the target area to be detected.
Specifically, position information of a target area to be detected in the target radar image, namely position information of a minimum circumscribed rectangle of the target area to be detected, is obtained, and the minimum circumscribed rectangle bounding box information of the target area to be detected is obtained by calculating the position information of the minimum circumscribed rectangle of the target area to be detected and is used as current contour frame information. The current outline bounding box information may be expressed as:
C_i = {(r_tl^i, c_tl^i), (r_br^i, c_br^i)}
wherein C_i denotes the i-th minimum circumscribed rectangle bounding box, and (r_tl^i, c_tl^i) and (r_br^i, c_br^i) respectively represent the upper-left corner position information and the lower-right corner position information of the minimum circumscribed rectangle bounding box of the i-th target to be detected.
S240, constructing a current outline frame information base according to the current outline frame information of all target areas to be detected in the target radar image.
Specifically, the current contour frame information of all target areas to be detected in the target radar image is obtained, and the current contour frame information of all target areas to be detected is stored in a database to construct an accurate current contour frame information base, so that the historical contour frame information base can be updated conveniently through the current contour frame information, and the accuracy of target detection is ensured.
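The construction of the current contour frame information base from the extracted outer boundary inflection points can be sketched as follows (illustrative names; axis-aligned minimum bounding rectangles are assumed, consistent with the corner-pair notation above):

```python
def min_bounding_rect(inflection_points):
    """Axis-aligned minimum bounding rectangle of a set of (row, col)
    outer-boundary inflection points, as ((r_tl, c_tl), (r_br, c_br))."""
    rows = [r for r, _ in inflection_points]
    cols = [c for _, c in inflection_points]
    return (min(rows), min(cols)), (max(rows), max(cols))

def build_current_base(regions):
    """Current contour frame information base: one minimum circumscribed
    rectangle bounding box per detected target region."""
    return [min_bounding_rect(pts) for pts in regions]
```

For example, the inflection points {(5, 8), (2, 12), (7, 3)} yield the bounding box ((2, 3), (7, 12)).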
And S250, detecting the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base.
According to this technical scheme, the current radar image of the radar detection area is obtained, and the background and foreground of the current radar image are separated to obtain the background-removed target radar image. To eliminate holes inside regions, gaps between neighbouring regions and the noise caused by the radar, and to obtain the target area to be detected in an accurate foreground, morphological processing and Gaussian smoothing are performed on the target radar image, and edge detection is performed on the Gaussian-smoothed radar image to obtain the edge detection map of the target radar image, ensuring that the target area to be detected in the target radar image can be accurately determined. Finally, outer boundary inflection point extraction is performed on the edge detection map to obtain the outer boundary inflection point position information of the target area to be detected in the foreground of the target radar image, which is used as the position information of the target area to be detected, from which the current contour frame information can be accurately obtained. In addition, the acquired current contour frame information of all target areas to be detected is stored in an information base to accurately obtain the current contour frame information base, and the target to be detected is then detected according to the matching information of the current contour frame information of the target to be detected and the historical contour frame information base, so that the target to be detected can be detected quickly and accurately.
EXAMPLE III
Fig. 4 is a schematic structural diagram of a target detection apparatus based on a radar chart according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes:
and an image determining module 310, configured to obtain a current radar image of the radar detection area.
An information base determining module 320, configured to obtain a current outline frame information base according to the current radar image; and the current outline frame information base comprises current outline frame information of at least one target to be detected.
The detection module 330 is configured to detect the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base; and determining the historical profile frame information base according to the historical radar image.
The current outline frame information comprises outline frame position information; the historical outline frame information base comprises at least one piece of historical outline frame information, and the historical outline frame information at least comprises outline frame position information; the matching information comprises the contact ratio of the outline border position information.
Optionally, the detection module is specifically configured to:
traversing the historical outline frame information in the historical outline frame information base, and sequentially determining the contact ratio of the historical outline frame information and each piece of current outline frame information in the current outline frame information base according to the outline frame position information;
determining the current contour frame information of the target with the highest coincidence degree with the historical contour frame information in the current contour frame information base;
updating the historical contour frame information according to the coincidence degree of the target current contour frame information and the historical contour frame information;
and detecting the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information.
Optionally, the detection module includes an information updating unit, and is specifically configured to:
if the contact ratio is greater than or equal to a preset contact ratio threshold value, updating the contour frame position information in the historical contour frame information according to the contour frame position information in the target current contour frame information, updating and counting the continuous static frame number, and setting the continuous non-target associated frame number to zero;
and if the contact ratio is smaller than a preset contact ratio threshold value, updating and counting the continuous non-target associated frame number, and setting the continuous static frame number to be zero.
The historical outline frame information also comprises outline frame attribute information which at least comprises a continuous non-target associated frame number and a continuous static frame number.
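The two update rules of the information updating unit can be sketched as follows. This is a hedged illustration: here τ is taken as the consecutive static frame count and φ as the consecutive non-target-associated frame count (the patent initialises both to zero without naming which symbol is which), and the threshold value is an assumed preset:

```python
def update_history(hist, cur_box, coincidence, threshold=0.5):
    """Update one historical record against its best-matching current box.
    hist holds 'box' (position), 'tau' (consecutive static frames) and
    'phi' (consecutive non-target-associated frames). Returns True when
    the coincidence passes the preset threshold and the box is matched."""
    if coincidence >= threshold:
        hist['box'] = cur_box  # refresh position from the current box
        hist['tau'] += 1       # count another consecutive static frame
        hist['phi'] = 0        # reset the non-associated counter
        return True
    hist['phi'] += 1           # count another non-associated frame
    hist['tau'] = 0            # a mismatch breaks the static run
    return False
```

A matched frame advances τ and zeroes φ; a mismatched frame does the opposite, so long static runs and long unassociated runs can be detected by thresholding the two counters.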
Optionally, the detection module includes a first judgment unit, and is specifically configured to:
if the updated continuous static frame number is larger than a preset continuous static frame number threshold value and the moving distance is smaller than a preset distance threshold value, determining that the target to be detected corresponding to the current contour frame information of the target is a tossed object, and deleting the current contour frame information of the target from a current contour frame information base;
and if the updated continuous static frame number is greater than a preset continuous static frame number threshold value and the moving distance is greater than or equal to a preset distance threshold value, determining that the target to be detected corresponding to the target current contour frame information is a parking vehicle, and deleting the target current contour frame information from a current contour frame information base.
The historical outline frame information also comprises outline frame initial position information; the matching information further includes a moving distance between the outline border start position information and the updated outline border position information.
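The two judgment branches of the first judgment unit can be sketched as follows; the threshold values (frames and metres) are assumed presets, since the patent leaves them unspecified:

```python
def classify_static_target(tau, move_dist, tau_threshold=50, dist_threshold=2.0):
    """Classify a historical target once its consecutive static frame count
    tau exceeds the preset threshold: a small total movement from the start
    position means a tossed object, a larger one means a parked vehicle."""
    if tau <= tau_threshold:
        return None  # not yet static for long enough to classify
    if move_dist < dist_threshold:
        return 'tossed_object'   # barely moved since first seen
    return 'parked_vehicle'      # travelled some distance, then stopped
```

The movement here is the distance between the contour frame start position information and the updated contour frame position information, as defined above.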
Optionally, the detection module includes a second determination unit, and is specifically configured to:
and if the updated continuous non-target associated frame number is greater than a preset continuous non-target associated frame number threshold value, deleting the historical profile frame information from a historical profile frame information base.
Optionally, the detection module includes an information adding unit, and is specifically configured to:
adding new historical outline frame information in the historical outline frame information base according to the remaining current outline frame information in the current outline frame information base, initializing the outline frame attribute information of the new historical outline frame information, and taking the outline frame position information in the current outline frame information base as the outline frame initial position information.
Optionally, the detection module includes a coincidence degree determination unit, which is specifically configured to:
determining the contact ratio according to the following formula:
coincidence = S_o / (S_s + S_c − S_o)
wherein the overlap area is
S_o = max(0, min(r_br^s, r_br^c) − max(r_tl^s, r_tl^c)) × max(0, min(c_br^s, c_br^c) − max(c_tl^s, c_tl^c)),
and S_s = (r_br^s − r_tl^s)(c_br^s − c_tl^s) and S_c = (r_br^c − r_tl^c)(c_br^c − c_tl^c) are the areas of the two rectangular bounding boxes; (r_tl^s, c_tl^s) and (r_br^s, c_br^s) respectively represent the upper-left corner position information and the lower-right corner position information of the rectangular bounding box of the historical contour frame information, and (r_tl^c, c_tl^c) and (r_br^c, c_br^c) respectively represent the upper-left corner position information and the lower-right corner position information of the rectangular bounding box of the current contour frame information.
The contour frame position information is the minimum circumscribed rectangle bounding box information.
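Reading the coincidence degree as an intersection-over-union of the two rectangles (one plausible interpretation, since the original formula image is not reproduced here), a sketch is:

```python
def coincidence(box_a, box_b):
    """Coincidence degree of two axis-aligned boxes given as
    ((r_tl, c_tl), (r_br, c_br)), computed as intersection-over-union."""
    (ra1, ca1), (ra2, ca2) = box_a
    (rb1, cb1), (rb2, cb2) = box_b
    ih = max(0, min(ra2, rb2) - max(ra1, rb1))  # intersection height
    iw = max(0, min(ca2, cb2) - max(ca1, cb1))  # intersection width
    inter = ih * iw
    union = ((ra2 - ra1) * (ca2 - ca1)
             + (rb2 - rb1) * (cb2 - cb1) - inter)
    return inter / union if union else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and a half-overlapping pair of equal boxes gives 1/3, so a single preset threshold cleanly separates matched from unmatched pairs.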
Optionally, the detection module further includes a first position information determining unit, specifically configured to:
determining the position information of the target according to the contour frame position information in the current contour frame information of the target;
determining location information of the target according to the following formula:
x = (c − Q/2)·δ, y = (P/2 − r)·δ
wherein (x, y) represents the position information of the target, (r, c) represents the center position information of the minimum circumscribed rectangle bounding box,
r = (r_tl + r_br)/2, c = (c_tl + c_br)/2,
(r_tl, c_tl) and (r_br, c_br) respectively represent the upper-left corner position information and the lower-right corner position information of the minimum circumscribed rectangle bounding box, δ represents the correspondence between pixel points in the current radar image and the actual area, and P and Q respectively represent the number of rows and columns of the current radar image.
Optionally, the information base determining module is specifically configured to:
separating the background and the foreground in the current radar image to obtain a target radar image with the background removed;
determining position information of a target area to be detected included in the foreground of the target radar image;
determining the minimum circumscribed rectangle bounding box information of the target area to be detected as the current outline bounding box information according to the position information of the target area to be detected;
and constructing a current outline frame information base according to the current outline frame information of all target areas to be detected in the target radar image.
Optionally, the information base determining module includes an image processing unit, and is specifically configured to:
aiming at a pixel point to be identified in a current radar image, determining a target detection position point corresponding to the pixel point to be identified mapped in a radar detection area and a corresponding preset signal intensity probability distribution model when a radar scans at the target detection position point;
detecting a matching result of a preset signal intensity probability distribution model corresponding to the value of the pixel point to be identified and the target detection position point; the preset signal intensity probability distribution model is used for describing the signal intensity probability distribution of radar reflected waves when target detection position points are scanned under the condition that a radar detection area does not comprise a foreground;
and separating the background and the foreground in the current radar image according to the matching result of each pixel point to be identified to obtain a target radar image.
The value of each pixel point in the current radar image is used for describing the signal intensity of a radar reflected wave when the radar scans at the detection position point, and the current radar image belongs to a gray level image.
Optionally, the image acquiring unit includes a result determining unit, and is specifically configured to:
detecting whether the value of the pixel point to be identified and at least one normal distribution model in the preset signal intensity probability distribution model corresponding to the target detection position point satisfy a preset matching condition; the preset matching condition includes that the value of the pixel point to be identified and the mean value of the normal distribution model satisfy a preset Pauta criterion (the 3σ criterion);
if at least one normal distribution model meeting preset matching conditions exists, determining that the pixel point to be identified belongs to a background pixel in the current radar image;
and if the normal distribution model meeting the preset matching condition does not exist, determining that the pixel point to be identified belongs to the foreground pixel in the current radar image.
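As an illustrative sketch of this matching step (hypothetical names throughout; a simple k·σ test stands in for the preset Pauta/3σ criterion, and each detection position point is assumed to carry one or more learned normal distribution models):

```python
import numpy as np

def is_background_pixel(value, models, k=3.0):
    """Match one pixel value against the preset models of its position.

    `models` is a list of (mean, std) pairs describing the signal strength
    distributions learned for this detection position point when no
    foreground is present.  The pixel is judged background if it lies
    within k standard deviations of at least one model's mean (3-sigma
    criterion); otherwise it is judged foreground.
    """
    return any(abs(value - mean) <= k * std for mean, std in models)

def separate_background(current_radar_image, model_map, k=3.0):
    """Zero out background pixels, keeping only foreground values."""
    img = np.asarray(current_radar_image)
    out = np.zeros_like(img)
    for (r, c), models in model_map.items():
        if not is_background_pixel(img[r, c], models, k):
            out[r, c] = img[r, c]  # foreground pixel survives separation
    return out
```

The per-pixel model map is only a plausible stand-in for the "preset signal intensity probability distribution model" associated with each target detection position point; the embodiment does not fix how the models are stored or learned.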
Optionally, the information base determining module includes a second location information determining unit, and is specifically configured to:
performing morphological processing on the target radar image to obtain a processed radar image; a target area to be detected in the foreground of the target radar image is divided into different sub-areas after the foreground and the background are separated;
performing Gaussian smoothing on the processed radar image, and performing edge detection on the processed radar image after Gaussian smoothing to obtain an edge detection image of the target radar image;
and extracting an outer boundary inflection point of the edge detection graph to obtain outer boundary inflection point position information of a target area to be detected in the foreground of the target radar image, wherein the outer boundary inflection point position information is used as the position information of the target area to be detected.
The target detection device based on the radar map provided by the embodiment of the present invention can execute the target detection method based on the radar map provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method.
In the technical solution of the present invention, the acquisition, storage, use, and processing of data comply with the relevant provisions of national laws and regulations, and do not violate public order and good customs.
Example four
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 5 shows a schematic structural diagram of an electronic device that can be used to implement the radar map-based target detection method according to an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as a radar map-based object detection method.
In some embodiments, the radar map-based object detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the radar map based object detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the radar map-based object detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A target detection method based on a radar map is characterized by comprising the following steps:
acquiring a current radar image of a radar detection area;
obtaining a current outline frame information base according to the current radar image; the current outline frame information base comprises current outline frame information of at least one object to be detected;
detecting the target to be detected according to the matching information of the current contour frame information and the historical contour frame information base of the target to be detected; and determining the historical profile frame information base according to the historical radar image.
2. The method of claim 1, wherein the current outline frame information includes outline frame position information; the historical outline frame information base comprises at least one piece of historical outline frame information, and the historical outline frame information at least comprises outline frame position information; and the matching information comprises the degree of coincidence of the outline frame position information;
correspondingly, the detecting the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base includes:
traversing the historical outline frame information in the historical outline frame information base, and sequentially determining the degree of coincidence between the historical outline frame information and each piece of current outline frame information in the current outline frame information base according to the outline frame position information;
determining the target current contour frame information in the current contour frame information base that has the highest degree of coincidence with the historical contour frame information;
updating the historical contour frame information according to the degree of coincidence between the target current contour frame information and the historical contour frame information;
and detecting the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information.
3. The method of claim 2, wherein the historical contour frame information further includes contour frame attribute information, and the contour frame attribute information at least includes a continuous non-target associated frame number and a continuous static frame number;
correspondingly, updating the historical contour frame information according to the degree of coincidence between the target current contour frame information and the historical contour frame information comprises:
if the degree of coincidence is greater than or equal to a preset coincidence threshold, updating the contour frame position information in the historical contour frame information according to the contour frame position information in the target current contour frame information, updating the count of the continuous static frame number, and setting the continuous non-target associated frame number to zero;
and if the degree of coincidence is smaller than the preset coincidence threshold, updating the count of the continuous non-target associated frame number, and setting the continuous static frame number to zero.
4. The method according to claim 3, wherein the historical outline border information further comprises outline border start position information; the matching information also comprises the moving distance between the outline border starting position information and the updated outline border position information;
correspondingly, the step of detecting the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information includes:
if the updated continuous static frame number is larger than a preset continuous static frame number threshold value and the moving distance is smaller than a preset distance threshold value, determining that the target to be detected corresponding to the current contour frame information of the target is a projectile and deleting the current contour frame information of the target from a current contour frame information base;
and if the updated continuous static frame number is greater than the preset continuous static frame number threshold and the moving distance is greater than or equal to the preset distance threshold, determining that the target to be detected corresponding to the target current contour frame information is a parked vehicle, and deleting the target current contour frame information from the current contour frame information base.
5. The method of claim 3, wherein after performing the projectile detection on the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information, the method further comprises:
and if the updated continuous non-target associated frame number is greater than a preset continuous non-target associated frame number threshold value, deleting the historical profile frame information from a historical profile frame information base.
6. The method of claim 4, wherein after performing projectile detection on the target to be detected corresponding to the current contour frame information of the target according to the updated historical contour frame information, the method further comprises:
adding new historical outline border information to the historical outline border information base according to the remaining current outline border information in the current outline border information base, initializing the outline border attribute information of the new historical outline border information, and taking the corresponding outline border position information in the current outline border information base as the outline border starting position information.
7. The method of claim 2, wherein sequentially determining the degree of coincidence between the historical contour frame information and each piece of current contour frame information in the current contour frame information base according to the contour frame position information comprises:
determining the degree of coincidence according to a formula that is rendered as an image (FDA0003827519850000031) in the original filing and computes the overlap ratio of the two rectangular bounding boxes from their corner coordinates;
wherein the symbols rendered as images FDA0003827519850000032 and FDA0003827519850000033 respectively represent the upper left corner position information and the lower right corner position information of the rectangular bounding box of the historical outline bounding box information, and the symbols rendered as images FDA0003827519850000034 and FDA0003827519850000035 respectively represent the upper left corner position information and the lower right corner position information of the rectangular bounding box of the current outline bounding box information.
8. The method of claim 4, wherein the outline border position information is minimum bounding rectangle information;
correspondingly, after the target to be detected is detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base, the method further comprises the following steps:
determining the position information of the target according to the contour frame position information in the current contour frame information of the target;
determining the position information of the target according to a formula that is rendered as an image (FDA0003827519850000036) in the original filing;
wherein (x, y) represents the position information of the target, (r, c) represents the center position information of the minimum bounding rectangle bounding box (a further expression is rendered as image FDA0003827519850000037), (r_tl, c_tl) and (r_br, c_br) respectively represent the upper left corner position information and the lower right corner position information of the minimum circumscribed rectangle bounding box information, δ represents the correspondence between pixel points in the current radar image and the actual area, and P and Q respectively represent the number of rows and columns of the current radar image.
9. The method of claim 1, wherein obtaining a current outline bounding box information base from the current radar image comprises:
separating the background and the foreground in the current radar image to obtain a target radar image with the background removed;
determining position information of a target area to be detected in a foreground in the target radar image;
determining the minimum circumscribed rectangle bounding box information of the target area to be detected as the current outline bounding box information according to the position information of the target area to be detected;
and constructing a current outline frame information base according to the current outline frame information of all target areas to be detected in the target radar image.
10. The method of claim 9, wherein the value of each pixel point in the current radar image is used to describe the signal intensity of the radar reflected wave when the radar scans at the corresponding detection position point, and the current radar image is a grayscale image;
correspondingly, separating the background and the foreground in the current radar image to obtain a target radar image with the background removed includes:
aiming at a pixel point to be identified in a current radar image, determining a target detection position point corresponding to the pixel point to be identified mapped in a radar detection area and a corresponding preset signal intensity probability distribution model when a radar scans at the target detection position point;
detecting a matching result of a preset signal intensity probability distribution model corresponding to the value of the pixel point to be identified and the target detection position point; the preset signal intensity probability distribution model is used for describing the signal intensity probability distribution of radar reflected waves when target detection position points are scanned under the condition that a radar detection area does not comprise a foreground;
and separating the background and the foreground in the current radar image according to the matching result of each pixel point to be identified to obtain a target radar image.
11. The method of claim 10, wherein detecting a matching result of a pre-set signal strength probability distribution model corresponding to the pixel value to be identified and the target detection position point comprises:
detecting whether the value of the pixel point to be identified and at least one normal distribution model in the preset signal intensity probability distribution model corresponding to the target detection position point satisfy a preset matching condition; wherein the preset matching condition comprises that the value of the pixel point to be identified and the mean value of the normal distribution model satisfy a preset Pauta criterion (the 3σ criterion);
if at least one normal distribution model meeting the preset matching condition exists, determining that the pixel point to be identified belongs to a background pixel in the current radar image;
and if the normal distribution model meeting the preset matching condition does not exist, determining that the pixel point to be identified belongs to the foreground pixel in the current radar image.
12. The method according to claim 9, wherein determining the position information of the target area to be detected included in the foreground in the target radar image comprises:
performing morphological processing on the target radar image to obtain a processed radar image; a target area to be detected in the foreground of the target radar image is divided into different sub-areas after the foreground and the background are separated;
performing Gaussian smoothing on the processed radar image, and performing edge detection on the processed radar image after Gaussian smoothing to obtain an edge detection image of the target radar image;
and extracting an outer boundary inflection point of the edge detection graph to obtain outer boundary inflection point position information of a target area to be detected in the foreground of the target radar image, wherein the outer boundary inflection point position information is used as the position information of the target area to be detected.
13. An object detection apparatus based on a radar map, comprising:
the image determining module is used for acquiring a current radar image of a radar detection area;
the information base determining module is used for obtaining a current outline frame information base according to the current radar image; the current outline frame information base comprises current outline frame information of at least one object to be detected;
the detection module is used for detecting the target to be detected according to the current contour frame information of the target to be detected and the matching information of the historical contour frame information base; and determining the historical profile frame information base according to the historical radar image.
14. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the radar map based object detection method of any one of claims 1-12.
15. A computer-readable storage medium storing computer instructions for causing a processor to perform the radar map based object detection method of any one of claims 1-12 when executed.
CN202211065877.XA 2022-08-31 2022-08-31 Target detection method, device, equipment and medium based on radar map Pending CN115436900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211065877.XA CN115436900A (en) 2022-08-31 2022-08-31 Target detection method, device, equipment and medium based on radar map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211065877.XA CN115436900A (en) 2022-08-31 2022-08-31 Target detection method, device, equipment and medium based on radar map

Publications (1)

Publication Number Publication Date
CN115436900A true CN115436900A (en) 2022-12-06

Family

ID=84246539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211065877.XA Pending CN115436900A (en) 2022-08-31 2022-08-31 Target detection method, device, equipment and medium based on radar map

Country Status (1)

Country Link
CN (1) CN115436900A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117111019A (en) * 2023-10-25 2023-11-24 深圳市先创数字技术有限公司 Target tracking and monitoring method and system based on radar detection
CN117111019B (en) * 2023-10-25 2024-01-09 深圳市先创数字技术有限公司 Target tracking and monitoring method and system based on radar detection

Similar Documents

Publication Publication Date Title
CN111881832A (en) Lane target detection method, device, equipment and computer readable storage medium
CN106778661A (en) A kind of express lane line detecting method based on morphological transformation and adaptive threshold
CN115436900A (en) Target detection method, device, equipment and medium based on radar map
CN115330841A (en) Method, apparatus, device and medium for detecting projectile based on radar map
CN115578431B (en) Image depth processing method and device, electronic equipment and medium
CN115376106A (en) Vehicle type identification method, device, equipment and medium based on radar map
CN114724113B (en) Road sign recognition method, automatic driving method, device and equipment
CN117036457A (en) Roof area measuring method, device, equipment and storage medium
CN115761698A (en) Target detection method, device, equipment and storage medium
CN115359026A (en) Special vehicle traveling method and device based on microwave radar, electronic equipment and medium
CN115526837A (en) Abnormal driving detection method and device, electronic equipment and medium
CN115995075A (en) Vehicle self-adaptive navigation method and device, electronic equipment and storage medium
CN115063765A (en) Road side boundary determining method, device, equipment and storage medium
CN115359030A (en) Ground foreign matter detection method, device, equipment and medium based on radar map
CN115424441B (en) Road curve optimization method, device, equipment and medium based on microwave radar
CN114155508B (en) Road change detection method, device, equipment and storage medium
CN115440057B (en) Method, device, equipment and medium for detecting curve vehicle based on radar map
CN115359087A (en) Radar image background removing method, device, equipment and medium based on target detection
CN115410408B (en) Parking space state change detection method, device, equipment and medium
CN113806361B (en) Method, device and storage medium for associating electronic monitoring equipment with road
CN116883654A (en) Training method of semantic segmentation model, semantic segmentation method, device and equipment
CN116758306A (en) Method, device, equipment and medium for enhancing road surface static object image
CN117853971A (en) Method, device, equipment and storage medium for detecting sprinkled object
CN117078997A (en) Image processing or training method, device, equipment and medium of image processing model
CN115410370A (en) Abnormal parking detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination