
CN112184914A - Method and device for determining three-dimensional position of target object and road side equipment - Google Patents

Method and device for determining three-dimensional position of target object and road side equipment

Info

Publication number
CN112184914A
CN112184914A (application CN202011161718.0A)
Authority
CN
China
Prior art keywords: three-dimensional, target object, coordinate system, three-dimensional position, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011161718.0A
Other languages
Chinese (zh)
Other versions
CN112184914B (en)
Inventor
苑立彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011161718.0A
Publication of CN112184914A
Application granted
Publication of CN112184914B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for determining the three-dimensional position of a target object, and roadside equipment, relating to the technical fields of image processing, automatic driving, and intelligent transportation. The specific implementation scheme is as follows: when determining the three-dimensional position of the target object in the world coordinate system, a two-dimensional image including the target object is detected to obtain the coordinate data of a three-dimensional frame of the target object in the two-dimensional image; the three-dimensional position of the target object in the world coordinate system is then determined by means of the coordinate data of that three-dimensional frame. No high-precision map is needed, which avoids the low accuracy of the calculated three-dimensional position that results when a high-precision map is not updated in time, thereby improving the accuracy of the three-dimensional position of the target object in the world coordinate system.

Description

Method and device for determining three-dimensional position of target object and road side equipment
Technical Field
The application relates to the technical field of computers, in particular to a method and a device for determining the three-dimensional position of a target object, and roadside equipment, which can be used in the fields of image processing, automatic driving, and intelligent transportation.
Background
In the technical field of intelligent transportation, in order to ensure the normal running of an autonomous vehicle, after a roadside device collects a two-dimensional image of an obstacle on a road, the roadside device needs to recover the 3D position of the obstacle, that is, determine the three-dimensional position of the obstacle in the world coordinate system, and transmit that three-dimensional position to the autonomous vehicle, so that the autonomous vehicle can accurately drive around the obstacle based on the obtained three-dimensional position, improving its driving safety.
In the prior art, when a roadside device recovers the 3D position of an obstacle, it first acquires a two-dimensional image of the obstacle, detects the 2D pixel coordinates of the obstacle through a deep learning detection algorithm, converts those 2D pixel coordinates into normalized 3D coordinates of the obstacle in the camera coordinate system through the internal parameters of the roadside device, and then calculates the 3D coordinates of the obstacle in the world coordinate system, namely the three-dimensional position of the obstacle in the world coordinate system, by combining the external parameters of the camera with the ground equation of a high-precision map.
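To make this prior-art pipeline concrete, the following is a minimal numpy sketch (not the patent's own code): it assumes a pinhole model with intrinsic matrix K, world-to-camera extrinsics R and t (so Xc = R·Xw + t), and a ground-plane equation n·Xw + d = 0 taken from the high-precision map; all names are illustrative.

```python
import numpy as np

def pixel_to_world_on_ground(u, v, K, R, t, n, d):
    """Prior-art style 3D recovery: back-project pixel (u, v) through the
    intrinsics and intersect the viewing ray with the map's ground plane
    n . Xw + d = 0. Assumes Xc = R @ Xw + t (world-to-camera extrinsics)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray
    cam_center = -R.T @ t                               # camera origin in world frame
    dir_world = R.T @ ray_cam                           # ray direction in world frame
    # Solve n . (cam_center + s * dir_world) + d = 0 for the ray scale s.
    s = -(n @ cam_center + d) / (n @ dir_world)
    return cam_center + s * dir_world                   # obstacle point on the ground
```

If the ground equation in the map is stale, the plane (n, d) no longer matches the real road surface, and the intersection point, hence the recovered 3D position, drifts; this is exactly the accuracy problem described next.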
However, in the above manner, the three-dimensional position of the obstacle in the world coordinate system is determined by means of a high-precision map, and since the high-precision map needs to be updated regularly to ensure its accuracy, if the map is not updated in time, the accuracy of the calculated three-dimensional position of the obstacle in the world coordinate system is low.
Disclosure of Invention
The application provides a method and a device for determining the three-dimensional position of a target object and roadside equipment, and improves the accuracy of the three-dimensional position of the target object under a world coordinate system.
According to an aspect of the present application, there is provided a method for determining a three-dimensional position of a target object, which may include:
collecting a two-dimensional image; wherein the two-dimensional image comprises a target object.
And detecting the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image.
And determining the three-dimensional position of the target object in a world coordinate system based on the coordinate data of the three-dimensional frame.
According to another aspect of the present application, there is provided an apparatus for determining a three-dimensional position of a target object, which may include:
the acquisition module is used for acquiring a two-dimensional image; wherein the two-dimensional image comprises a target object.
And the detection module is used for detecting the two-dimensional image to obtain the coordinate data of the three-dimensional frame of the target object in the two-dimensional image.
And the processing module is used for determining the three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame.
According to another aspect of the present application, there is provided an electronic device, which may include:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of determining a three-dimensional position of a target object according to the first aspect.
According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for determining a three-dimensional position of a target object according to the first aspect.
According to another aspect of the present application, there is provided a roadside apparatus including:
one or more processors; and
a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the method for determining a three-dimensional position of a target object according to the first aspect.
According to the technical scheme of the application, when the three-dimensional position of the target object in the world coordinate system is determined, the coordinate data of a three-dimensional frame of the target object in the two-dimensional image is obtained by detecting the two-dimensional image comprising the target object; and the three-dimensional position of the target object in the world coordinate system is determined by means of the coordinate data of the three-dimensional frame of the target object in the two-dimensional image and the preset camera projection model, a high-precision map is not needed, the problem that the accuracy of the three-dimensional position of the target object obtained by calculation is low due to the fact that the high-precision map is not updated timely is solved, and therefore the accuracy of the three-dimensional position of the target object in the world coordinate system is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of a system for intelligent transportation vehicle-road coordination according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional frame of a car in a two-dimensional image according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for determining a three-dimensional position of a target object according to a first embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for determining a three-dimensional position of a target object according to a second embodiment of the present application;
FIG. 5 is a schematic view of the top surface of a three-dimensional frame provided according to the second embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for determining a three-dimensional position of a target object according to a fourth embodiment of the present application;
FIG. 7 is a block diagram of an electronic device of a method for determining a three-dimensional position of a target object according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the text of the present application, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The method for determining the three-dimensional position of a target object provided by the embodiments of the present application can be applied to a system for detecting the three-dimensional position of a target object in a world coordinate system, for example, a system for intelligent transportation vehicle-road coordination, or a system for detecting targets in a field. Taking application in an intelligent transportation vehicle-road cooperative system as an example, please refer to FIG. 1, which is a schematic diagram of an intelligent transportation vehicle-road cooperative system provided in an embodiment of the present application. The system architecture includes a roadside device (not shown) disposed on a road, a server device (not shown) connected to the roadside device, and at least one autonomous vehicle connected to the server device. The roadside device includes a roadside sensing device and a roadside computing device: the roadside sensing device (e.g., a roadside camera, which collects images) is connected to the roadside computing device (e.g., a roadside computing unit, RSCU), the roadside computing device is connected to the server device, and the server device may communicate with the autonomous driving vehicle or the assisted driving vehicle in various ways based on the result calculated by the roadside computing device. In another system architecture, the roadside sensing device itself includes a computing function and is directly connected to the server device, and the server device communicates with the autonomous driving or driving-assistance vehicle in various ways based on that calculation result. The above connections may be wired or wireless. The server device in this application is, for example, a cloud control platform, a vehicle-road cooperative management platform, a central subsystem, an edge computing platform, or a cloud computing platform. In the following description, the roadside device sends the calculated three-dimensional position of the obstacle to the autonomous vehicle; this implicitly includes the case where the roadside device sends the calculated three-dimensional position of the obstacle to the autonomous vehicle through the server device.
In the prior art, the roadside device can acquire a two-dimensional image of an obstacle on a road, calculate the three-dimensional position of the obstacle in a world coordinate system based on the acquired two-dimensional image, and transmit the calculated three-dimensional position of the obstacle in the world coordinate system to each autonomous driving vehicle, so that the autonomous driving vehicle can accurately drive around the obstacle based on the acquired three-dimensional position, and the driving safety of the autonomous driving vehicle is improved.
With reference to the scenario shown in FIG. 1, the roadside device may include four modules: a roadside sensing module, a storage module, a roadside calculation module, and a transmission module. The roadside sensing module is mainly used for acquiring a two-dimensional image including an obstacle. The roadside calculation module is mainly used for detecting the two-dimensional image acquired by the roadside sensing module through a deep learning detection algorithm to obtain the coordinates of the center point of the obstacle in the two-dimensional image, and inputting those coordinates into the camera projection model to obtain the three-dimensional position of the center point of the obstacle in the world coordinate system; because the depth information of the obstacle cannot be determined, the three-dimensional position of the whole obstacle in the world coordinate system cannot be accurately calculated from the three-dimensional position of its center point alone, so the three-dimensional position of the obstacle in the world coordinate system has to be calculated in combination with the ground equation of the high-precision map in the storage module. The sending module is mainly used for sending the calculated three-dimensional position of the obstacle in the world coordinate system to each autonomous vehicle needing assistance, so that the autonomous vehicle can accurately drive around the obstacle based on the acquired three-dimensional position, improving its driving safety. However, this approach needs a high-precision map, and since the high-precision map must be updated regularly to ensure its accuracy, if the map is not updated in time, the accuracy of the calculated three-dimensional position of the obstacle in the world coordinate system is low.
In order to accurately calculate the three-dimensional position of the obstacle in the world coordinate system, one possible attempt is to update the map at regular intervals to ensure its accuracy, thereby avoiding the low accuracy caused by an outdated high-precision map. Another attempt is to avoid the high-precision map altogether and instead calculate the three-dimensional position of the obstacle in the world coordinate system from the three-dimensional position of the center point of the obstacle and the depth information of the obstacle. With this attempt, however, the depth information of the obstacle must be determined first. To determine it, one may try to select more points from the obstacle in the two-dimensional image, not only its center point, and calculate the three-dimensional position of each of those points in the world coordinate system. This not only increases the amount of calculation; if the points are chosen poorly, the depth information of the obstacle still cannot be determined from their three-dimensional positions, and the three-dimensional position of the obstacle in the world coordinate system cannot be calculated even though the three-dimensional positions of the individual points have been computed.
It can be seen that how to obtain the depth information of the obstacle is a key factor for calculating the three-dimensional position of the obstacle in the world coordinate system. In order to obtain the depth information of the obstacle, in the embodiment of the present application, an attempt may be made to use a three-dimensional frame of the target object in the two-dimensional image, and since the coordinate data of the three-dimensional frame may indicate the depth information of the obstacle to some extent, the three-dimensional position of the obstacle in the world coordinate system may be calculated according to the coordinate data of the three-dimensional frame of the target object in the two-dimensional image.
Based on the above concept, the embodiment of the present application provides a method for determining a three-dimensional position of a target object, and when determining the three-dimensional position of the target object in a world coordinate system, a two-dimensional image including the target object may be acquired first; detecting the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image; and determining the three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame.
The three-dimensional frame can be understood as the three-dimensional bounding frame of the target object in the two-dimensional image; although the frame itself is three-dimensional, its coordinate data are still two-dimensional. For example, the coordinate data of the three-dimensional frame may include the coordinates of each vertex of the frame in the two-dimensional image. When the target object is a car, please refer to FIG. 2, which is a schematic diagram of the three-dimensional frame of the car in the two-dimensional image according to an embodiment of the present application: the dashed cuboid shown in FIG. 2 is the three-dimensional frame of the car in the two-dimensional image, and the coordinate data of the frame comprise the coordinates of the cuboid's vertices in the two-dimensional image.
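To make "coordinate data of a three-dimensional frame" concrete, a hypothetical Python structure is sketched below (it is not from the patent): every corner of the detected cuboid is stored as a 2D pixel coordinate, and no depth value appears anywhere.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ThreeDFrameIn2D:
    """3D bounding frame of a target object as detected in a 2D image.
    The frame is a cuboid, but each vertex is still a 2D pixel coordinate."""
    object_type: str                     # e.g. "car"; used later for size priors
    vertices: List[Tuple[float, float]]  # (u, v) pixel coordinates of the corners

# Purely illustrative values for the car of FIG. 2.
car_frame = ThreeDFrameIn2D(
    object_type="car",
    vertices=[(412.0, 233.5), (530.2, 240.1), (518.7, 198.9), (404.3, 194.0),
              (415.8, 310.2), (534.0, 318.7), (522.1, 276.4), (407.9, 271.0)],
)
```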
For example, the target object may be a vehicle, a pedestrian, or another object, for example, a trunk dropped on a road; it may be set according to actual needs, and the embodiments of the present application do not further limit the type of the target object.
It can be seen that, in the embodiment of the present application, when determining the three-dimensional position of the target object in the world coordinate system, the coordinate data of the three-dimensional frame of the target object in the two-dimensional image is obtained by detecting the two-dimensional image including the target object; and the three-dimensional position of the target object in the world coordinate system is determined by means of the coordinate data of the three-dimensional frame of the target object in the two-dimensional image, a high-precision map is not needed, the problem that the accuracy of the three-dimensional position of the target object obtained through calculation is low due to the fact that the high-precision map is not updated timely is solved, and therefore the accuracy of the three-dimensional position of the target object in the world coordinate system is improved.
It should be noted that, taking application in an intelligent transportation vehicle-road cooperative system as an example, the technical solution provided by the embodiments of the present application may be executed by a roadside device, for example, a roadside sensing device with a computing function, a roadside computing device connected to the roadside sensing device, a server device connected to the roadside computing device, or a server device directly connected to the roadside sensing device; the specific executor may be set according to actual needs, and the embodiments of the present application do not specifically limit it.
It can be understood that, when the technical solution provided in the embodiments of the present application is applied to an intelligent transportation vehicle-road cooperative system, the target object may be an obstacle on a road, for example, a person, a vehicle, or an article dropped on the road. The roadside device acquires a two-dimensional image including the obstacle, calculates the three-dimensional position of the target object in the world coordinate system by using the coordinate data of the three-dimensional frame of the obstacle in the two-dimensional image, and may then send the calculated three-dimensional position of the obstacle in the world coordinate system to an autonomous vehicle, so that the autonomous vehicle can accurately drive around the obstacle based on the acquired three-dimensional position, improving driving safety and, to a certain extent, reducing the cost of vehicle-road coordination. It should be noted that the following description takes this case as an example, that is, the technical solution applied to an intelligent transportation vehicle-road cooperative system, where the roadside device acquires a two-dimensional image including an obstacle and calculates the three-dimensional position of the target object in the world coordinate system by using the coordinate data of the three-dimensional frame of the obstacle in the two-dimensional image; however, the embodiments of the present application are not limited thereto.
Hereinafter, the method for determining the three-dimensional position of the target object provided by the present application will be described in detail by specific examples. It is to be understood that the following detailed description may be combined with other embodiments, and that the same or similar concepts or processes may not be repeated in some embodiments.
Example one
Fig. 3 is a flowchart illustrating a method for determining a three-dimensional position of a target object according to a first embodiment of the present application, where the method for determining a three-dimensional position of a target object may be implemented by software and/or a hardware device, for example, the hardware device may be a device for determining a three-dimensional position of a target object, and the device for determining a three-dimensional position of a target object may be a roadside apparatus. For example, referring to fig. 3, the method for determining the three-dimensional position of the target object may include:
s301, collecting a two-dimensional image.
Wherein the two-dimensional image comprises a target object. For example, the target object may be an obstacle on a road, such as a vehicle, a pedestrian, or a trunk dropped on the road; it may be set according to actual needs, and the embodiments of the present application do not further limit the specific type of the target object.
For example, when the roadside device acquires the two-dimensional image including the target object, the roadside device may acquire the two-dimensional image by using a camera of the roadside device, and after acquiring the two-dimensional image including the target object, detect the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image, that is, execute the following S302:
s302, detecting the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image.
The three-dimensional frame can be understood as a three-dimensional frame of the target object in the two-dimensional image, and although the three-dimensional frame is a three-dimensional frame, the coordinate data of the three-dimensional frame is still two-dimensional coordinate data. For example, the coordinate data of the three-dimensional frame may include coordinates of each vertex in the three-dimensional frame in the two-dimensional image.
Taking a car as the target object, and with reference to FIG. 2, the roadside device may perform image detection on the two-dimensional image including the car by using an image detection technology, detect the three-dimensional frame of the car in the two-dimensional image, that is, the dashed cuboid shown in FIG. 2, and obtain the coordinate data of the frame. For example, the coordinate data of the three-dimensional frame may include the coordinate data of at least three vertices in one plane of the frame, or the coordinates of at least three vertices in each plane, and may be set according to actual needs.
Since the coordinate data of the three-dimensional frame may indicate the depth information of the obstacle to some extent, the three-dimensional position of the obstacle in the world coordinate system may be calculated from the three-dimensional frame of the target object in the two-dimensional image, that is, the following S303 is performed:
and S303, determining the three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame.
For example, when determining the three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame, the determination may be made jointly with a camera projection model. The camera projection model is determined from the intrinsic matrix corresponding to the internal parameters of the shooting device that collects the two-dimensional image, and from the rotation matrix and translation vector, corresponding to the external parameters of the shooting device, between the shooting-device coordinate system and the world coordinate system. The camera projection model is mainly used for converting two-dimensional coordinates in the image into two-dimensional coordinates in the shooting-device coordinate system through the intrinsic matrix, and converting the two-dimensional coordinates in the shooting-device coordinate system into three-dimensional coordinates in the world coordinate system through the rotation matrix and translation vector, so that the three-dimensional position of the target object in the world coordinate system can be calculated. It can be understood that, in the embodiments of the present application, the shooting device may be the roadside device.
It should be noted that, because each shooting device has a fixed set of internal and external parameters, each shooting device has exactly one corresponding camera projection model, and the three-dimensional positions of different target objects in the world coordinate system can all be calculated through that camera projection model.
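The two-step conversion just described can be sketched in a few lines of numpy; this is a minimal illustration assuming the extrinsics convention Xc = R·Xw + t, not the patent's own code. Note that a depth Zc must be supplied, which is exactly why the three-dimensional frame (or, in the prior art, the ground equation) is needed.

```python
import numpy as np

def pixel_to_world(u, v, Zc, K, R, t):
    """Lift pixel (u, v) into the shooting-device coordinate system using the
    intrinsic matrix K and a known depth Zc, then map into the world frame
    with extrinsics R, t (assuming Xc = R @ Xw + t)."""
    Xc = Zc * np.linalg.inv(K) @ np.array([u, v, 1.0])  # device coordinates
    return R.T @ (Xc - t)                               # world coordinates
```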
Therefore, when the three-dimensional position of the target object in the world coordinate system is determined, the coordinate data of the three-dimensional frame of the target object in the two-dimensional image is obtained by detecting the two-dimensional image comprising the target object; and the three-dimensional position of the target object in the world coordinate system is determined by means of the coordinate data of the three-dimensional frame of the target object in the two-dimensional image, a high-precision map is not needed, the problem that the accuracy of the three-dimensional position of the target object obtained through calculation is low due to the fact that the high-precision map is not updated timely is solved, and therefore the accuracy of the three-dimensional position of the target object in the world coordinate system is improved.
In order to facilitate understanding of the method for determining the three-dimensional position of the target object provided in the embodiment of the present application, the method for determining the three-dimensional position of the target object provided in the embodiment of the present application will be described in detail through the following specific embodiment two.
Example two
Fig. 4 is a flowchart illustrating a method for determining a three-dimensional position of a target object according to a second embodiment of the present application, where the method for determining a three-dimensional position of a target object may also be implemented by software and/or hardware devices. For example, referring to fig. 4, the method for determining the three-dimensional position of the target object may include:
s401, collecting a two-dimensional image.
Wherein the two-dimensional image comprises a target object.
S402, detecting the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image.
It should be noted that, while the two-dimensional image is detected to obtain the coordinate data of the three-dimensional frame of the target object in the two-dimensional image, the type of the target object may also be detected, so that the three-dimensional size of the target object can be determined from its type; this size is referred to later in S405.
In the intelligent transportation vehicle-road cooperative system, the roadside device may acquire a two-dimensional image including a target object, perform image detection on the two-dimensional image by using an image detection technology, determine that the target object is a car on the road, detect the three-dimensional frame of the car in the two-dimensional image, that is, the dashed cuboid shown in FIG. 2, and obtain the coordinate data of the frame.
And S403, determining coordinate data of at least three vertexes in any plane in the three-dimensional frame according to the coordinate data of the three-dimensional frame.
After the coordinate data of the three-dimensional frame are obtained, a plane may be selected arbitrarily from the three-dimensional frame, for example, the top surface of the three-dimensional frame shown in FIG. 2. As shown in FIG. 5, which is a schematic diagram of the top surface of the three-dimensional frame provided according to the second embodiment of the present application, three vertices may be selected from the top surface; taking vertex 1, vertex 2, and vertex 3 as an example, the coordinates of the three vertices in the two-dimensional image are found from the coordinate data of the three-dimensional frame: the two-dimensional coordinates of vertex 1 are (u1, v1), the two-dimensional coordinates of vertex 2 are (u2, v2), and the two-dimensional coordinates of vertex 3 are (u3, v3).
After obtaining the two-dimensional coordinates (u1, v1) of the vertex 1, the two-dimensional coordinates (u2, v2) of the vertex 2, and the two-dimensional coordinates (u3, v3) of the vertex 3 in the top surface of the three-dimensional frame, respectively, the coordinate data of the three vertices may be input to the camera projection model, and the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates may be obtained, that is, the following S404 is performed:
s404, inputting the coordinate data of at least three vertexes into the camera projection model to obtain a camera projection relation between the coordinate data of each vertex and the three-dimensional coordinates.
For example, an internal reference matrix corresponding to an internal reference of the roadside device may be represented by K, a rotation matrix between a coordinate system of the roadside device corresponding to an external reference and a world coordinate system may be represented by R, and a translation vector may be represented by t, and a camera projection model obtained based on the internal reference matrix K, the rotation matrix R between the coordinate system of the roadside device corresponding to the external reference and the world coordinate system, and the translation vector t may be represented by the following formula 1:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \quad (1)$$
where Zc represents the depth of the point in the roadside-device coordinate system, (u, v) represents the coordinates of a point in the two-dimensional image, and (XW, YW, ZW) represents the three-dimensional coordinates of that point in the world coordinate system.
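A minimal numpy sketch of formula 1 follows, assuming K, R, and t are known calibration values of the roadside device; it projects a world point to pixel coordinates and also returns the depth Zc. It is an illustration, not the patent's implementation.

```python
import numpy as np

def project_world_point(Xw, K, R, t):
    """Formula 1: Zc * [u, v, 1]^T = K @ [R | t] @ [Xw, Yw, Zw, 1]^T."""
    p = K @ (R @ np.asarray(Xw, dtype=float) + t)  # equals (Zc*u, Zc*v, Zc)
    Zc = p[2]                                      # depth in device coordinates
    return p[0] / Zc, p[1] / Zc, Zc                # pixel (u, v) and depth
```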
After the two-dimensional coordinates (u1, v1) of the vertex 1, the two-dimensional coordinates (u2, v2) of the vertex 2, and the two-dimensional coordinates (u3, v3) of the vertex 3 on the top surface of the three-dimensional frame are obtained in S403, the two-dimensional coordinates are input to the camera projection model for each vertex, and the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates is obtained. For example, inputting the two-dimensional coordinates (u1, v1) of vertex 1 to the camera projection model, a camera projection relationship 1 between the coordinate data of vertex 1 and the three-dimensional coordinates (XW1, YW1, ZW1) is obtained, and the camera projection relationship 1 may be expressed as:
$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_{W1} \\ Y_{W1} \\ Z_{W1} \\ 1 \end{bmatrix}$$
inputting the two-dimensional coordinates (u2, v2) of the vertex 2 to the camera projection model, obtaining a camera projection relation 2 between the coordinate data of the vertex 2 and the three-dimensional coordinates (XW2, YW2, ZW2), where the camera projection relation 2 can be expressed as:
$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_{W2} \\ Y_{W2} \\ Z_{W2} \\ 1 \end{bmatrix}$$
inputting the two-dimensional coordinates (u3, v3) of the vertex 3 to the camera projection model, a camera projection relation 3 between the coordinate data of the vertex 3 and the three-dimensional coordinates (XW3, YW3, ZW3) is obtained, and the camera projection relation 3 can be expressed as:
$$Z_{c3} \begin{bmatrix} u_3 \\ v_3 \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_{W3} \\ Y_{W3} \\ Z_{W3} \\ 1 \end{bmatrix}$$
thereby, the camera projection relationship between the coordinate data of each of the vertex 1, the vertex 2, and the vertex 3 and the three-dimensional coordinates is obtained.
The camera projection relation of each vertex comprises 3 equations in 4 unknown parameters (Zc, XW, YW, ZW), so the 3 camera projection relations together comprise 9 equations in 12 unknown parameters. By combining at least 3 additional equations with the 9 equations included in the 3 camera projection relations, the 12 parameters can be solved, thereby obtaining the three-dimensional coordinates (XW1, YW1, ZW1) of vertex 1, the three-dimensional coordinates (XW2, YW2, ZW2) of vertex 2, and the three-dimensional coordinates (XW3, YW3, ZW3) of vertex 3.
To construct these at least 3 additional equations, the three-dimensional size of the car can be determined according to its type, and the distance relationship between the three-dimensional coordinates of any two of the 3 vertices can then be determined from that size, yielding the 3 equations; see the description of S405 below:
s405, determining the distance relationship between the three-dimensional coordinates of any two vertexes of the at least three vertexes according to the type of the target object.
For example, before determining the distance relationship between the three-dimensional coordinates of any two vertices of the at least three vertices according to the type of the target object, the type of the target object needs to be determined, and the type of the target object may be detected in the detection process of S402, in combination with the description in S402; in S405, when it is necessary to determine the distance relationship between the three-dimensional coordinates of any two vertices, the type to which the target object belongs may be detected again, and may be specifically set according to actual needs.
When the type of the target object is determined to be a car, and the car is 3.5 meters long and 1.8 meters wide, as shown in FIG. 5, the distance between vertex 1 and vertex 2 is 1.8 meters, and the equation corresponding to distance relation 1 between vertex 1 and vertex 2 is:

$$(X_{W1}-X_{W2})^2+(Y_{W1}-Y_{W2})^2+(Z_{W1}-Z_{W2})^2=1.8^2$$

The distance between vertex 2 and vertex 3 is 3.5 meters, and the equation corresponding to distance relation 2 between vertex 2 and vertex 3 is:

$$(X_{W2}-X_{W3})^2+(Y_{W2}-Y_{W3})^2+(Z_{W2}-Z_{W3})^2=3.5^2$$

The distance between vertex 1 and vertex 3 is $\sqrt{3.5^2+1.8^2}$ meters, and the equation corresponding to distance relation 3 between vertex 1 and vertex 3 is:

$$(X_{W1}-X_{W3})^2+(Y_{W1}-Y_{W3})^2+(Z_{W1}-Z_{W3})^2=3.5^2+1.8^2$$

Thus, the distance relation between the three-dimensional coordinates of any two of the three vertices is obtained.
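The three distance relations above can be written as residuals that vanish at the true vertex positions; the sketch below is illustrative and assumes the 3.5 m by 1.8 m car size prior used in this example.

```python
import numpy as np

CAR_LENGTH, CAR_WIDTH = 3.5, 1.8            # size prior from the object type
CAR_DIAG = np.hypot(CAR_LENGTH, CAR_WIDTH)  # sqrt(3.5^2 + 1.8^2)

def distance_residuals(P1, P2, P3):
    """P1, P2, P3: candidate world coordinates of vertices 1-3 (3-vectors).
    Each residual is zero when the corresponding distance relation holds."""
    return np.array([
        np.linalg.norm(P1 - P2) - CAR_WIDTH,   # distance relation 1
        np.linalg.norm(P2 - P3) - CAR_LENGTH,  # distance relation 2
        np.linalg.norm(P1 - P3) - CAR_DIAG,    # distance relation 3
    ])
```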
It should be noted that, in the embodiment of the present application, there is no order between S404 and S405, and S404 may be executed first, and then S405 may be executed; or S405 may be executed first, and then S404 may be executed; s404 and S405 may also be executed at the same time, and may be specifically set according to actual needs, and here, the embodiment of the present application is only described by taking the example of executing S404 first and then executing S405, but the embodiment of the present application is not limited to this.
After obtaining the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates, and the distance relationship between the three-dimensional coordinates of any two vertices, the following S406 may be performed:
s406, obtaining the three-dimensional position of the target object in the world coordinate system according to the camera projection relation between the coordinate data of each vertex and the three-dimensional coordinates and the distance relation between the three-dimensional coordinates of any two vertexes.
For example, when the three-dimensional position of the target object in the world coordinate system is obtained according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices, the three-dimensional coordinates of each vertex may be calculated according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices; and then the three-dimensional position of the target object in the world coordinate system is determined according to the three-dimensional coordinates of each vertex without a high-precision map, so that the problem of low accuracy of the three-dimensional position of the target object obtained by calculation due to the fact that the high-precision map is not updated in time is solved, and the accuracy of the three-dimensional position of the target object in the world coordinate system is improved.
Continuing with the example of the target object being a car, the three-dimensional coordinates (XW1, YW1, ZW1) of vertex 1, (XW2, YW2, ZW2) of vertex 2, and (XW3, YW3, ZW3) of vertex 3 can be obtained by jointly solving the 12 equations in total: the 9 equations included in camera projection relation 1, camera projection relation 2, and camera projection relation 3 corresponding to vertices 1, 2, and 3, together with the equation corresponding to distance relation 1 between vertex 1 and vertex 2, the equation corresponding to distance relation 2 between vertex 2 and vertex 3, and the equation corresponding to distance relation 3 between vertex 1 and vertex 3.
After calculating the three-dimensional coordinates of the vertex 1 (XW1, YW1, ZW1), the vertex 2 (XW2, YW2, ZW2), and the vertex 3 (XW3, YW3, ZW3), respectively, the three-dimensional position of the car in the world coordinate system can be determined from the three-dimensional coordinates of the vertex 1 (XW1, YW1, ZW1), the vertex 2 (XW2, YW2, ZW2), and the vertex 3 (XW3, YW3, ZW 3).
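Putting S404-S406 together, one possible (non-authoritative) way to solve the 12 equations in the 12 unknowns, Zc plus the world coordinates for each of the three vertices, is a nonlinear least-squares solve; K, R, t, the pixel coordinates, and the 3.5 m by 1.8 m size prior are assumed inputs, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_vertex_positions(pixels, K, R, t, length=3.5, width=1.8):
    """pixels: [(u1, v1), (u2, v2), (u3, v3)] for three top-face vertices.
    Unknowns x = [Zc1, P1x, P1y, P1z, Zc2, ..., Zc3, P3x, P3y, P3z] (12)."""
    dists = [width, length, np.hypot(length, width)]  # |P1P2|, |P2P3|, |P1P3|

    def residuals(x):
        Zc = x[0::4]                                  # Zc1, Zc2, Zc3
        P = [x[1:4], x[5:8], x[9:12]]                 # world coords of vertices
        res = []
        for (u, v), zc, p in zip(pixels, Zc, P):
            # Camera projection relation: K [R | t] P = Zc [u, v, 1]^T (3 eqs).
            res.extend(K @ (R @ p + t) - zc * np.array([u, v, 1.0]))
        for (i, j), dd in zip([(0, 1), (1, 2), (0, 2)], dists):
            res.append(np.linalg.norm(P[i] - P[j]) - dd)  # distance relations
        return np.array(res)

    # Crude initial guess; a production system would seed this from, e.g.,
    # a typical working depth of the roadside camera.
    sol = least_squares(residuals, x0=np.ones(12))
    return sol.x[1:4], sol.x[5:8], sol.x[9:12]        # vertices 1, 2, 3
```

The three returned vertex coordinates fix the pose of the top face, from which the three-dimensional position of the whole car in the world coordinate system can be read off together with the size prior.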
It can be seen that, in the embodiment of the present application, when the three-dimensional position of the target object in the world coordinate system is calculated, the three-dimensional position of the target object in the world coordinate system is determined by using the coordinate data of the three-dimensional frame of the target object in the two-dimensional image, and a high-precision map is not needed, so that the problem of low accuracy of the calculated three-dimensional position of the target object due to untimely update of the high-precision map is solved, and the accuracy of the three-dimensional position of the target object in the world coordinate system is improved.
In addition, after the three-dimensional position of the car in the world coordinate system is calculated through S401-S406, the roadside device can send the calculated three-dimensional position of the car in the world coordinate system to the automatic driving vehicle, so that the automatic driving vehicle can accurately drive around the car based on the acquired three-dimensional position, the driving safety of the automatic driving vehicle is improved, and in addition, the cost in vehicle-road cooperation is reduced to a certain extent.
EXAMPLE III
FIG. 6 is a schematic structural diagram of a device 60 for determining a three-dimensional position of a target object according to a fourth embodiment of the present application. Referring to FIG. 6, the device 60 for determining a three-dimensional position of a target object may include:
the acquisition module 601 is used for acquiring a two-dimensional image; wherein the two-dimensional image comprises a target object.
The detecting module 602 is configured to detect the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image.
The processing module 603 is configured to determine a three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame.
Optionally, the processing module 603 includes a first processing sub-module and a second processing sub-module.
And the first processing submodule is used for determining the coordinate data of at least three vertexes in any plane in the three-dimensional frame according to the coordinate data of the three-dimensional frame.
And the second processing submodule is used for inputting the coordinate data of at least three vertexes into the camera projection model to obtain the three-dimensional position of the target object in the world coordinate system.
Optionally, the second processing sub-module is configured to input the coordinate data of the at least three vertices to the camera projection model, so as to obtain a camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinate; determining the distance relationship between the three-dimensional coordinates of any two vertexes of the at least three vertexes according to the type of the target object; and then obtaining the three-dimensional position of the target object under the world coordinate system according to the camera projection relation between the coordinate data of each vertex and the three-dimensional coordinates and the distance relation between the three-dimensional coordinates of any two vertexes.
Optionally, the second processing sub-module is configured to calculate the three-dimensional coordinates of each vertex according to a camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and a distance relationship between the three-dimensional coordinates of any two vertices; and determining the three-dimensional position of the target object in the world coordinate system according to the three-dimensional coordinates of each vertex.
Optionally, the processing module 603 further includes a third processing sub-module.
The third processing submodule is used for determining an internal reference matrix corresponding to internal reference of shooting equipment for acquiring a two-dimensional image, and a rotation matrix and a translation vector between a shooting equipment coordinate system corresponding to external reference of the shooting equipment and a world coordinate system; and determining a camera projection model according to the internal reference matrix, the rotation matrix and the translation vector.
Optionally, the target object is an obstacle on the road, and the apparatus further includes a sending module 604.
A sending module 604, configured to send the three-dimensional position of the target object in the world coordinate system to the autonomous vehicle; the three-dimensional position of the target object in the world coordinate system is used to instruct the autonomous vehicle to travel according to the three-dimensional position.
The device 60 for determining the three-dimensional position of the target object provided in the embodiment of the present application may implement the technical solution of the method for determining the three-dimensional position of the target object in any of the above embodiments, and its implementation principle and beneficial effect are similar to those of the method for determining the three-dimensional position of the target object, and reference may be made to the implementation principle and beneficial effect of the method for determining the three-dimensional position of the target object, which are not described herein again.
The embodiment of the present application further provides a roadside device, which may be a roadside device included in the intelligent transportation vehicle-road cooperative system architecture shown in fig. 1, and includes one or more processors; and a storage device, configured to store one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors implement the technical solution of the method for determining the three-dimensional position of the target object in any of the embodiments, and an implementation principle and beneficial effects of the method are similar to an implementation principle and beneficial effects of the method for determining the three-dimensional position of the target object, and reference may be made to the implementation principle and beneficial effects of the method for determining the three-dimensional position of the target object, which is not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
FIG. 7 is a block diagram of an electronic device of a method for determining a three-dimensional position of a target object according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for determining a three-dimensional position of a target object provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method for determining a three-dimensional position of a target object provided by the present application.
The memory 702, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining the three-dimensional position of a target object in the embodiments of the present application (e.g., the acquisition module 601, the detection module 602, the processing module 603, and the sending module 604 shown in FIG. 6). The processor 701 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the method for determining the three-dimensional position of a target object in the above-described method embodiments.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device of the determination method of the three-dimensional position of the target object, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, and such remote memory may be coupled to the electronic device of the method for determining a three-dimensional location of a target object via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for determining the three-dimensional position of the target object may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the determination method of the three-dimensional position of the target object, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the present application, when the three-dimensional position of the target object in the world coordinate system is determined, a two-dimensional image including the target object is detected to obtain the coordinate data of a three-dimensional frame of the target object in the two-dimensional image, and the three-dimensional position of the target object in the world coordinate system is then determined from the coordinate data of the three-dimensional frame and a preset camera projection model. No high-precision map is required, which avoids the reduced accuracy that results when such a map is not updated in time and thereby improves the accuracy of the three-dimensional position of the target object in the world coordinate system.
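Purely as an illustrative aid for the reader, the following minimal Python sketch shows one way such a pipeline could be realized. It is a sketch under assumed values, not the implementation of this application: the calibration matrices, pixel coordinates, object dimensions, and all function names below are hypothetical.

# Illustrative sketch only -- the calibration values and object
# dimensions are assumptions, not values taken from this application.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])   # intrinsic parameter matrix
R = np.eye(3)                             # rotation: world -> camera
t = np.array([0.0, 0.0, 5.0])             # translation: world -> camera

def back_project(uv):
    # Ray such that (depth * ray) is the vertex in camera coordinates.
    return np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])

def camera_to_world(x_cam):
    # Invert x_cam = R @ x_world + t.
    return R.T @ (x_cam - t)

def locate_object(pixels, length, width):
    # pixels: (u, v) coordinate data of three bottom-face vertices of the
    # three-dimensional frame, ordered so that vertices 0-1 span `length`
    # and vertices 1-2 span `width` of the object type.
    rays = [back_project(p) for p in pixels]

    def residuals(depths):
        pts = [camera_to_world(d * r) for d, r in zip(depths, rays)]
        # distance relationships implied by the object type
        return [np.linalg.norm(pts[0] - pts[1]) - length,
                np.linalg.norm(pts[1] - pts[2]) - width,
                np.linalg.norm(pts[0] - pts[2]) - np.hypot(length, width)]

    sol = least_squares(residuals, x0=np.full(3, 10.0))
    pts = [camera_to_world(d * r) for d, r in zip(sol.x, rays)]
    return np.mean(pts, axis=0)  # object position in the world frame

# e.g. a sedan-sized obstacle, roughly 4.8 m long and 1.8 m wide
position = locate_object([(900, 700), (1100, 690), (1120, 760)], 4.8, 1.8)

The three distance relationships (two sides of the bottom face and their diagonal) determine the three unknown vertex depths, so the system is well posed; in practice all detected vertices would be included and the system solved in a least-squares sense to damp detection noise.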
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, and the present application is not limited in this respect, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method of determining a three-dimensional position of a target object, comprising:
collecting a two-dimensional image; wherein the two-dimensional image comprises a target object;
detecting the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image;
and determining the three-dimensional position of the target object in a world coordinate system based on the coordinate data of the three-dimensional frame.
2. The method according to claim 1, wherein the determining the three-dimensional position of the target object in the world coordinate system based on the coordinate data of the three-dimensional frame comprises:
determining coordinate data of at least three vertices in any plane of the three-dimensional frame according to the coordinate data of the three-dimensional frame;
and inputting the coordinate data of the at least three vertices into a camera projection model to obtain the three-dimensional position of the target object in the world coordinate system.
3. The method according to claim 2, wherein the inputting the coordinate data of the at least three vertices into the camera projection model to obtain the three-dimensional position of the target object in the world coordinate system comprises:
inputting the coordinate data of the at least three vertices into the camera projection model to obtain a camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates of the vertex;
determining a distance relationship between the three-dimensional coordinates of any two vertices of the at least three vertices according to the type of the target object;
and obtaining the three-dimensional position of the target object in the world coordinate system according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices.
4. The method according to claim 3, wherein the obtaining the three-dimensional position of the target object in the world coordinate system according to the camera projection relationship between the coordinate data and the three-dimensional coordinates of each vertex and the distance relationship between the three-dimensional coordinates of any two vertices comprises:
calculating the three-dimensional coordinates of each vertex according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices;
and determining the three-dimensional position of the target object in the world coordinate system according to the three-dimensional coordinates of the vertices.
5. The method according to claim 2, further comprising:
determining an intrinsic parameter matrix corresponding to intrinsic parameters of a shooting device that acquires the two-dimensional image, and a rotation matrix and a translation vector that correspond to extrinsic parameters of the shooting device and describe the transformation between a coordinate system of the shooting device and the world coordinate system;
and determining the camera projection model according to the intrinsic parameter matrix, the rotation matrix, and the translation vector.
6. The method according to any one of claims 1-4, wherein the target object is an obstacle on a road, and the method further comprises:
sending the three-dimensional position of the target object in the world coordinate system to an autonomous vehicle; wherein the three-dimensional position of the target object in the world coordinate system is used to instruct the autonomous vehicle to drive according to the three-dimensional position.
7. An apparatus for determining a three-dimensional position of a target object, comprising:
an acquisition module, configured to acquire a two-dimensional image; wherein the two-dimensional image comprises a target object;
a detection module, configured to detect the two-dimensional image to obtain coordinate data of a three-dimensional frame of the target object in the two-dimensional image;
and a processing module, configured to determine the three-dimensional position of the target object in a world coordinate system based on the coordinate data of the three-dimensional frame.
8. The apparatus according to claim 7, wherein the processing module comprises a first processing submodule and a second processing submodule;
the first processing submodule is configured to determine coordinate data of at least three vertices in any plane of the three-dimensional frame according to the coordinate data of the three-dimensional frame;
and the second processing submodule is configured to input the coordinate data of the at least three vertices into a camera projection model to obtain the three-dimensional position of the target object in the world coordinate system.
9. The apparatus according to claim 8, wherein
the second processing submodule is configured to: input the coordinate data of the at least three vertices into the camera projection model to obtain a camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates of the vertex; determine a distance relationship between the three-dimensional coordinates of any two vertices of the at least three vertices according to the type of the target object; and obtain the three-dimensional position of the target object in the world coordinate system according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices.
10. The apparatus according to claim 9, wherein
the second processing submodule is configured to: calculate the three-dimensional coordinates of each vertex according to the camera projection relationship between the coordinate data of each vertex and the three-dimensional coordinates and the distance relationship between the three-dimensional coordinates of any two vertices; and determine the three-dimensional position of the target object in the world coordinate system according to the three-dimensional coordinates of the vertices.
11. The apparatus according to claim 8, wherein the processing module further comprises a third processing submodule;
the third processing submodule is configured to: determine an intrinsic parameter matrix corresponding to intrinsic parameters of a shooting device that acquires the two-dimensional image, and a rotation matrix and a translation vector that correspond to extrinsic parameters of the shooting device and describe the transformation between a coordinate system of the shooting device and the world coordinate system; and determine the camera projection model according to the intrinsic parameter matrix, the rotation matrix, and the translation vector.
12. The apparatus according to any one of claims 7-11, wherein the target object is an obstacle on a road, and the apparatus further comprises a sending module;
the sending module is configured to send the three-dimensional position of the target object in the world coordinate system to an autonomous vehicle; wherein the three-dimensional position of the target object in the world coordinate system is used to instruct the autonomous vehicle to drive according to the three-dimensional position.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining a three-dimensional position of a target object according to any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for determining a three-dimensional position of a target object according to any one of claims 1 to 6.
15. A roadside apparatus, comprising:
one or more processors; and
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1-6.
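As an illustrative companion to claims 2 and 5 (a sketch under assumed helper names, not code from this application), the camera projection model they refer to can be assembled in its standard pinhole form as follows:

# Illustrative pinhole model: P = K @ [R | t] maps world points to pixels.
import numpy as np

def build_projection_model(K, R, t):
    # Assemble the 3x4 camera projection matrix from an intrinsic
    # parameter matrix K and the extrinsic rotation R / translation t.
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, x_world):
    # Project a 3D world point to (u, v) pixel coordinates.
    u, v, w = P @ np.append(x_world, 1.0)
    return u / w, v / w

Each vertex's coordinate data (u, v) and its unknown three-dimensional coordinates are linked by exactly this relationship; this is the camera projection relationship that claim 3 combines with the distance relationships between vertices.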
CN202011161718.0A 2020-10-27 2020-10-27 Method and device for determining three-dimensional position of target object and road side equipment Active CN112184914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011161718.0A CN112184914B (en) 2020-10-27 2020-10-27 Method and device for determining three-dimensional position of target object and road side equipment

Publications (2)

Publication Number Publication Date
CN112184914A (en) 2021-01-05
CN112184914B CN112184914B (en) 2024-07-16

Family

ID=73922828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011161718.0A Active CN112184914B (en) 2020-10-27 2020-10-27 Method and device for determining three-dimensional position of target object and road side equipment

Country Status (1)

Country Link
CN (1) CN112184914B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304191B1 (en) * 2016-10-11 2019-05-28 Zoox, Inc. Three dimensional bounding box estimation from two dimensional images
CN109214980A (en) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 A kind of 3 d pose estimation method, device, equipment and computer storage medium
JP2020041862A (en) * 2018-09-07 2020-03-19 倉敷紡績株式会社 Band-like object three-dimensional measurement method and band-like object three-dimensional measurement device
CN109360249A (en) * 2018-12-06 2019-02-19 北京工业大学 Camera Adjustable Calibration System
CN110148169A (en) * 2019-03-19 2019-08-20 长安大学 A kind of vehicle target 3 D information obtaining method based on PTZ holder camera
US20200327690A1 (en) * 2019-04-09 2020-10-15 Sensetime Group Limited Three-dimensional object detection method and device, method and device for controlling smart driving, medium and apparatus
CN110390258A (en) * 2019-06-05 2019-10-29 东南大学 Annotating Method of 3D Information of Image Object
CN110738183A (en) * 2019-10-21 2020-01-31 北京百度网讯科技有限公司 Obstacle detection method and device
CN110826499A (en) * 2019-11-08 2020-02-21 上海眼控科技股份有限公司 Object space parameter detection method and device, electronic equipment and storage medium
CN111079619A (en) * 2019-12-10 2020-04-28 北京百度网讯科技有限公司 Method and apparatus for detecting target object in image
CN111578839A (en) * 2020-05-25 2020-08-25 北京百度网讯科技有限公司 Obstacle coordinate processing method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Junning; SU Qunxing; LIU Pengyuan; WANG Zhengjun; GU Hongqiang: "Adaptive monocular 3D object detection algorithm based on spatial constraints", Journal of Zhejiang University (Engineering Science), vol. 54, no. 6, 30 June 2020 (2020-06-30), pages 1138-1146 *
CHENG Qing; WEI Lisheng; GAN Quan: "Research on a monocular-vision-based target localization algorithm", Journal of Anhui Polytechnic University, vol. 32, no. 2, 15 April 2017 (2017-04-15), pages 37-42 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837363A (en) * 2021-02-03 2021-05-25 上海交通大学 Stereotaxic frame positioning method and system, medium and terminal
CN112837363B (en) * 2021-02-03 2022-09-30 上海交通大学 Stereotaxic frame positioning method and system, medium and terminal
CN113033426A (en) * 2021-03-30 2021-06-25 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
CN113033426B (en) * 2021-03-30 2024-03-01 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
CN113516013A (en) * 2021-04-09 2021-10-19 阿波罗智联(北京)科技有限公司 Target detection method and device, electronic equipment, road side equipment and cloud control platform
CN113516013B (en) * 2021-04-09 2024-05-14 阿波罗智联(北京)科技有限公司 Target detection method, target detection device, electronic equipment, road side equipment and cloud control platform
CN113240750A (en) * 2021-05-13 2021-08-10 中移智行网络科技有限公司 Three-dimensional space information measuring and calculating method and device
CN113470112A (en) * 2021-06-30 2021-10-01 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal
CN114266829A (en) * 2021-12-24 2022-04-01 珠海格力电器股份有限公司 Object processing method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN112184914B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
CN112184914A (en) Method and device for determining three-dimensional position of target object and road side equipment
EP3968266B1 (en) Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
EP3989117A1 (en) Vehicle information detection method and apparatus, method and apparatus for training detection model, electronic device, storage medium and program
CN110738183B (en) Road side camera obstacle detection method and device
CN112101209B (en) Method and apparatus for determining world coordinate point cloud for roadside computing device
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
CN111797745B (en) Training and predicting method, device, equipment and medium for object detection model
CN111721281B (en) Position identification method and device and electronic equipment
CN111666876B (en) Method and device for detecting obstacle, electronic equipment and road side equipment
CN112344855B (en) Obstacle detection method and device, storage medium and drive test equipment
CN112288825A (en) Camera calibration method and device, electronic equipment, storage medium and road side equipment
CN113706704B (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN111652113A (en) Obstacle detection method, apparatus, device, and storage medium
CN111949816A (en) Positioning processing method and device, electronic equipment and storage medium
CN114140759A (en) High-precision map lane line position determining method and device and automatic driving vehicle
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
CN114266876B (en) Positioning method, visual map generation method and device
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN111402308A (en) Method, apparatus, device and medium for determining speed of obstacle
CN111968071B (en) Method, device, equipment and storage medium for generating spatial position of vehicle
CN115294234B (en) Image generation method and device, electronic equipment and storage medium
CN115790621A (en) High-precision map updating method and device and electronic equipment

Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
TA01  Transfer of patent application right
      Effective date of registration: 20211022
      Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing
      Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd.
      Address before: 2/F, Baidu Building, 10 Shangdi 10th Street, Haidian District, Beijing 100085
      Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co., Ltd.
GR01  Patent grant