CN109961471B - Method and device for marking position of object in image and electronic equipment - Google Patents
- Publication number
- CN109961471B (application CN201711340685.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- pose information
- camera model
- marked
- object model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention provides a method and a device for marking the position of an object in an image, and electronic equipment. The method comprises the following steps: acquiring current pose information of a pre-constructed camera model; acquiring current pose information and physical parameters of a pre-constructed object model to be marked; obtaining target pose information of the object model to be marked in the camera model coordinate system through coordinate transformation, according to the current pose information of the camera model and of the object model to be marked; determining the pixel position of the object model to be marked in the image currently acquired by the camera model, according to the internal parameter matrix of the camera model, the physical parameters and the target pose information; and marking the pixel position in the image. The pixel position of an object to be marked can thus be annotated, in a virtual environment, in the image collected by the camera; manual marking work is avoided, the pose of the object to be marked can be changed rapidly to obtain a large number of marked images, and marking efficiency is improved.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for labeling a position of an object in an image, and an electronic device.
Background
With the continuous improvement of computer computing capability, deep learning models are applied ever more widely. Deep learning models for image processing are among the most important of these; for example, they play a central role in fields such as robotic grasping, license plate recognition, and target detection in surveillance video.
When such deep learning models are trained, a large number of image samples needs to be acquired; that is, a target object is photographed at various angles and positions to obtain a large number of images. The target object is the object that actually needs to be detected, for example an object to be grasped by a mechanical arm, or the license plate of a vehicle. In these images, the position of the target object must be marked, and each marked image serves as an image sample for training the deep learning model.
The position of the target object is generally labeled manually; that is, in each acquired image, the position of the target object is determined by eye and then labeled to obtain an image sample. Clearly, this approach consumes a great deal of labor and time, and its labeling efficiency is low.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for marking the position of an object in an image, and electronic equipment, thereby avoiding manual marking work and improving marking efficiency. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for labeling a position of an object in an image, where the method includes:
acquiring current pose information of a pre-constructed camera model;
acquiring current pose information and physical parameters of a pre-constructed object model to be marked, wherein the physical parameters are parameters for marking the size of the object model to be marked;
obtaining target pose information of the object model to be marked in a camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters and the target pose information;
the pixel location is marked in the image.
Optionally, before the step of acquiring current pose information of the pre-constructed camera model, the method further includes:
acquiring current pose information of a pre-constructed mechanical arm model, wherein the camera model is fixedly connected with the tail end of the mechanical arm model;
the step of acquiring the current pose information of the pre-constructed camera model comprises the following steps:
and determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
Optionally, the physical parameters include: the position of a preset marking point of the object model to be marked, and a volume parameter of the object model to be marked;
the step of determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model comprises the following steps:
determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
Optionally, the preset labeling point is a lower left vertex of the object model to be labeled;
the step of determining the pixel position of the object model to be marked in the currently acquired image of the camera model according to the first target position and the volume parameter comprises the following steps:
determining a second target position of the upper right vertex of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter;
and determining the area in the rectangular frame with the first target position and the second target position as diagonal points as the pixel position of the object model to be marked in the current acquired image of the camera model.
Optionally, after the step of marking the pixel position in the image, the method further includes:
and recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model.
Optionally, the method further includes:
and changing the pose of the camera model and/or the object model to be marked, and returning to the step of acquiring the current pose information of the pre-constructed camera model.
In a second aspect, an embodiment of the present invention provides an apparatus for annotating a position of an object in an image, the apparatus including:
the camera model data acquisition module is used for acquiring the current pose information of a pre-constructed camera model;
the system comprises a to-be-labeled object model data acquisition module, a labeling module and a labeling module, wherein the to-be-labeled object model data acquisition module is used for acquiring current pose information and physical parameters of a pre-constructed to-be-labeled object model, and the physical parameters are parameters for identifying the size of the to-be-labeled object model;
the target pose information determining module is used for obtaining target pose information of the object model to be marked in a coordinate system of the camera model through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
the pixel position determining module is used for determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters and the target pose information;
and the pixel position marking module is used for marking the pixel position in the image.
Optionally, the apparatus further comprises:
the system comprises a mechanical arm model data acquisition module, a camera model generation module and a control module, wherein the mechanical arm model data acquisition module is used for acquiring the current pose information of a pre-constructed mechanical arm model before the current pose information of the pre-constructed camera model is acquired, and the camera model is fixedly connected with the tail end of the mechanical arm model;
the camera model data acquisition module comprises:
and the current pose information acquisition unit is used for determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
Optionally, the physical parameters include: the position of a preset marking point of the object model to be marked, and a volume parameter of the object model to be marked;
the pixel location determination module comprises:
the first target position determining unit is used for determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and the pixel position determining unit is used for determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
Optionally, the preset labeling point is a lower left vertex of the object model to be labeled;
the pixel position determination unit includes:
the second target position determining subunit is used for determining a second target position of the upper right vertex of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter;
and the pixel position determining subunit is configured to determine, as the pixel position of the object model to be labeled in the currently acquired image of the camera model, a region in a rectangular frame with the first target position and the second target position as diagonal points.
Optionally, the apparatus further comprises:
and the information recording module is used for recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model after the pixel position is marked in the image.
Optionally, the apparatus further comprises:
and the pose changing module is used for changing the pose of the camera model and/or the object model to be marked and triggering the camera model data acquisition module.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a communication bus, where the processor and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the method for marking the position of the object in the image when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method for labeling a position of an object in an image is implemented.
According to the scheme provided by the embodiment of the invention, the current pose information of a pre-constructed camera model and the current pose information and physical parameters of a pre-constructed object model to be marked are first obtained. Target pose information of the object model to be marked in the camera model coordinate system is then obtained through coordinate transformation, according to the current pose information of the camera model and of the object model to be marked. Next, the pixel position of the object model to be marked in the image currently acquired by the camera model is determined according to the internal parameter matrix of the camera model, the physical parameters and the target pose information, and finally the pixel position is marked in the image. The method can label, in a virtual environment, the pixel position of an object to be labeled in the image collected by the camera, so that manual labeling work is avoided; moreover, the pose of the object to be labeled can be changed quickly to obtain a large number of labeled images, greatly improving image labeling efficiency.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a method for labeling a position of an object in an image according to an embodiment of the present invention;
FIG. 2 is a schematic view of a camera model mounted to the end of a robotic arm model;
FIG. 3 is a detailed flowchart of step S104 in the embodiment shown in FIG. 1;
FIG. 4 is a detailed flowchart of step S302 in the embodiment shown in FIG. 3;
FIG. 5 is a schematic view of an annotated image captured by the camera model;
FIG. 6 is a schematic structural diagram of an apparatus for labeling a position of an object in an image according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to avoid manual annotation work and improve image annotation efficiency when annotating the position of an object in an image, embodiments of the present invention provide an annotation method and apparatus for the position of an object in an image, an electronic device, and a computer-readable storage medium.
First, a method for labeling a position of an object in an image according to an embodiment of the present invention is described below.
The method for labeling the position of an object in an image provided by the embodiment of the invention can be applied to any electronic equipment that needs to label the position of an object in an image, for example a computer, a tablet computer or a mobile phone; this is not specifically limited herein, and such equipment is hereinafter referred to simply as the electronic device.
In order to more conveniently label the position of an object in an image, the electronic device may obtain a camera model and an object model to be labeled through Gazebo, CAD and other application programs. The camera model and the object model to be marked can be constructed in advance according to actual needs, for example, if the marked image is used for training a deep learning model for grabbing an object by the mechanical arm, the object to be marked is a target object to be grabbed by the mechanical arm generally, and the object model to be marked can be constructed according to parameters such as the actual shape, the size and the like of the target object to be grabbed by the mechanical arm. Similarly, in this case, the camera model may be constructed based on the actual internal parameters and external parameters of the camera mounted at the end of the robot arm.
As shown in fig. 1, a method for labeling a position of an object in an image, the method comprising:
s101, acquiring current pose information of a pre-constructed camera model;
s102, acquiring current pose information and physical parameters of a pre-constructed object model to be marked;
and the physical parameters are parameters for identifying the size of the object model to be labeled.
S103, obtaining target pose information of the object model to be marked in a camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
s104, determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters and the target pose information;
and S105, marking the pixel position in the image.
Therefore, in the scheme provided by the embodiment of the invention, the electronic device first acquires the current pose information of the pre-constructed camera model and the current pose information and physical parameters of the pre-constructed object model to be labeled. It then obtains, through coordinate transformation, the target pose information of the object model to be labeled in the camera model coordinate system according to the current pose information of the camera model and of the object model to be labeled; determines the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information; and finally labels the pixel position in the image. The method can label, in a virtual environment, the pixel position of an object to be labeled in the image collected by the camera, so that manual labeling work is avoided; moreover, the pose of the object to be labeled can be changed quickly to obtain a large number of labeled images, greatly improving image labeling efficiency.
It should be noted that the execution order of step S101 and step S102 is not limited: step S101 may be executed first, step S102 may be executed first, or the two may be executed simultaneously, all of which is reasonable. As long as the electronic device can obtain the current pose information of the pre-constructed camera model and the current pose information and physical parameters of the pre-constructed object model to be labeled, the execution order of steps S101 and S102 has no influence on the subsequent steps.
The pose information may include three-dimensional position information and three-dimensional pose information in a world coordinate system. It can be understood that, in the virtual environment, after the camera model is determined, the pose of the camera model can be adjusted at will, for example, the camera model rotates, translates, and the like, and further, the pose information of the camera model can be acquired, and the three-dimensional pose information of the camera model can include information such as the optical axis direction of the camera model.
And the physical parameters of the object model to be labeled can be parameters capable of identifying the size of the object model to be labeled. For example, the object model to be labeled is a cylindrical cup model, and the physical parameters of the object model to be labeled may be the diameter of the circle at the bottom of the cup model, the coordinates of the center of the circle, the height of the cup model, and the like. For another example, the object model to be labeled is a rectangular box model, and the physical parameters of the object model to be labeled may be the length, width, height, and coordinates of a certain vertex or center point of the box model.
Furthermore, in step S103, the electronic device may obtain the target pose information of the object model to be labeled in the camera model coordinate system through coordinate transformation, according to the current pose information of the camera model and the current pose information of the object model to be labeled. It can be understood that after the camera model is determined, its coordinate system is known; once the current pose information of the camera model and of the object model to be labeled is determined, the target pose information of the object model to be labeled in the camera model coordinate system can be determined through coordinate transformation, namely the projection from the world coordinate system to the camera model coordinate system via the external parameter matrix of the camera. In this way, the three-dimensional position and three-dimensional posture information of the object model to be marked in the camera model coordinate system can be determined.
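The coordinate transformation of step S103 can be sketched with homogeneous transforms. The following is a minimal illustrative sketch, not the patent's implementation; the function names and the numeric poses are assumptions chosen for the example:

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a 3-vector position and
    a 3x3 rotation matrix, both expressed in world coordinates."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def object_pose_in_camera(T_world_cam, T_world_obj):
    """Express the object pose in the camera model coordinate system:
    T_cam_obj = inv(T_world_cam) @ T_world_obj (the extrinsic transform)."""
    return np.linalg.inv(T_world_cam) @ T_world_obj

# Camera at (0, 0, 2) with identity orientation, object at (1, 0, 0):
# in the camera frame the object then sits at (1, 0, -2).
T_wc = pose_to_matrix([0.0, 0.0, 2.0], np.eye(3))
T_wo = pose_to_matrix([1.0, 0.0, 0.0], np.eye(3))
T_co = object_pose_in_camera(T_wc, T_wo)
print(T_co[:3, 3])  # → [ 1.  0. -2.]
```

The rotation block of `T_co` likewise gives the object's three-dimensional posture in the camera frame.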
Therefore, in step S104, the electronic device may determine the position of the object model to be labeled in the currently captured image of the camera model according to the internal parameter matrix of the camera model and the target pose information.
It will be appreciated that once the camera model is determined, its internal parameter matrix is known. The internal parameters of the camera model can generally be expressed as:

K = [ f_x  0    c_x ]
    [ 0    f_y  c_y ]
    [ 0    0    1   ]

wherein f_x and f_y are the focal lengths of the camera model in the x direction and y direction of the camera model coordinate system, and c_x and c_y are the coordinates of the principal point of the camera model in the x direction and y direction of the imaging plane coordinate system; the intersection point of the optical axis of the camera model with the imaging plane is called the principal point. After the camera model is determined, its internal parameter matrix is determined.
The target pose information represents the three-dimensional pose of the object model to be marked in the camera model coordinate system; the image coordinate system represents the position of the object to be marked in the two-dimensional image acquired by the camera model; and the internal parameter matrix of the camera model defines the linear mapping between these two coordinate systems.
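This linear mapping is the standard pinhole projection; a hedged sketch follows, where the intrinsic values (focal length 500 px, principal point (320, 240)) are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def project_point(K, p_cam):
    """Project a 3D point given in camera coordinates to pixel
    coordinates via the pinhole model: [u, v, 1]^T ~ K @ p."""
    p = K @ np.asarray(p_cam, dtype=float)
    return p[:2] / p[2]  # divide by depth to leave homogeneous coords

# Illustrative intrinsics.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = project_point(K, [0.1, -0.05, 1.0])
print(u, v)  # → 370.0 215.0
```

A point on the optical axis, e.g. `(0, 0, 2)`, projects to the principal point `(320, 240)`, as expected from the definition of c_x and c_y.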
The pixel position of the object model to be labeled in the image currently acquired by the camera model is generally the region of pixels covered by the object model to be labeled in that image. The electronic device can therefore generally determine this region, that is, the pixel position of the object model to be labeled in the currently acquired image, according to the physical parameters of the object model to be labeled and its position in the currently acquired image of the camera model.
Furthermore, the electronic device can mark the pixel position in the current collected image of the camera model, and obtain the marked image, which can be used as a sample for training the deep learning model. The specific labeling manner of the pixel position may be determined according to the processing requirement of the subsequent deep learning model, and the embodiment of the present invention is not specifically limited herein, for example, highlighting, numerical value marking, and the like may be adopted.
As an implementation manner of the embodiment of the present invention, in a case where the camera model is fixedly connected to the end of the pre-constructed mechanical arm model, before the step of acquiring the current pose information of the pre-constructed camera model, the method may further include:
and acquiring the current pose information of the pre-constructed mechanical arm model.
As shown in fig. 2, in this case the pose of the camera model 21 changes with the pose of the mechanical arm model 22: rotation and movement of the joints of the mechanical arm model 22 both drive changes in the pose information of the camera model 21. The electronic device can therefore acquire the current pose information of the pre-constructed mechanical arm model 22; after the mechanical arm model 22 is determined, its pose information can be determined, this pose information also being relative to the world coordinate system. Either of the object model 23 and the object model 24 may be used as the object model to be labeled, or both may be used as object models to be labeled at the same time.
Correspondingly, the step of obtaining the current pose information of the pre-constructed camera model may include:
and determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
After the electronic equipment acquires the current pose information of the mechanical arm model, the electronic equipment can determine the current pose information of the camera model because the installation position and the angle of the camera model at the tail end of the mechanical arm model are known.
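Since the mounting position and angle are fixed, this amounts to composing the end-effector pose with a constant mount transform. A minimal sketch follows; the 5 cm offset along the end's z axis is an invented example, not a value from the patent:

```python
import numpy as np

def camera_pose_from_arm(T_world_end, T_end_cam):
    """The camera model is rigidly fixed to the arm's end, so its world
    pose is the end-effector pose composed with the constant mount
    transform (a hand-eye-calibration-style offset)."""
    return T_world_end @ T_end_cam

# End effector at (0, 0, 1); camera offset 5 cm along the end's z axis.
T_we = np.eye(4); T_we[:3, 3] = [0.0, 0.0, 1.0]
T_ec = np.eye(4); T_ec[2, 3] = 0.05
T_wc = camera_pose_from_arm(T_we, T_ec)
print(T_wc[:3, 3])  # camera ends up at height 1.05 in the world frame
```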
Therefore, in this embodiment, for the situation that the camera model is installed at the end of the mechanical arm model, the current pose information of the camera model can be accurately obtained according to the current pose information of the mechanical arm model, so that a large number of image samples of the deep learning model for training the mechanical arm to grab the object can be quickly obtained, and the efficiency of training the deep learning model for grabbing the object by the mechanical arm can be greatly improved.
As an implementation manner of the embodiment of the present invention, the physical parameters may include: the position of a preset marking point of the object model to be marked, and a volume parameter of the object model to be marked. The preset annotation point may be determined according to the actual shape of the object model to be annotated and the purpose of the annotated image; for example, it may be a point on the outer contour line of the object model to be annotated. If the object model to be labeled has a regular shape, such as a cube or a cylinder, the preset labeling point may be a vertex of the object model to be labeled. Of course, for convenience of processing, even if the object model to be labeled has an irregular shape, it is also reasonable for the electronic device to approximate it by a relatively close regular shape and then use a vertex of that regular shape as the preset labeling point of the object model to be labeled.
The volume parameter of the object model to be marked is the parameter capable of determining the volume of the object model to be marked. When the object model to be labeled is in a regular shape, such as a cube, a cylinder, etc., the volume parameters may be parameters that can mathematically determine the volume of the object model to be labeled, that is, the length, the width, the height, etc., and are not limited herein. When the object model to be labeled is in an irregular shape, the electronic device may approximate the object model to be labeled to a relatively close regular shape, and further take parameters, such as length, width, and height, of the regular shape, which can mathematically represent the volume of the object model to be labeled, as volume parameters of the object model to be labeled.
For the case that the physical parameters of the object model to be labeled include the position of the preset labeling point of the object model to be labeled and the volume parameter of the object model to be labeled, as shown in fig. 3, the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters and the target pose information may include:
s301, determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
since the position of the preset labeling point is generally the position in the world coordinate system, in order to accurately determine the first target position, the electronic device may first project the position of the preset labeling point into the camera model coordinate system according to the target pose information of the object model to be labeled. In another embodiment, it is reasonable that the electronic device first projects the position of the preset annotation point into the camera model coordinate system according to the external parameter matrix of the camera model.
As can be seen from the foregoing, the internal parameter matrix of the camera model represents the linear mapping between the camera model coordinate system and the image coordinate system. Therefore, according to the internal parameter matrix of the camera model, the electronic device may determine, through coordinate transformation, the position in the image coordinate system corresponding to the preset labeling point's position in the camera model coordinate system, that is, the first target position in the currently acquired image of the camera model.
S302, determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
After the electronic device determines the first target position, it can determine, according to the volume parameter of the object model to be labeled, the pixel range covered by the object model in the image currently acquired by the camera model, that is, the pixel position of the object model to be labeled in that image. It should be noted that, to determine this pixel range accurately, the volume parameter used here is generally the volume parameter of the object as it appears in the currently acquired image, determined according to the imaging principle of the camera model. Any method in the field of camera imaging processing may be used for this determination, and it is not specifically limited or described herein.
For example, suppose the object model to be labeled is a cube, the preset labeling point is its center point, the electronic device determines the first target position to be (25, 42), and the volume parameter of the object in the currently acquired image of the camera model is a square with a side length of 6. The pixel position of the object model to be labeled in the currently acquired image is then the square region whose diagonal points are (22, 39) and (28, 45).
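The arithmetic of this example can be sketched as follows; the helper function is hypothetical, and pixel coordinates are kept as floats for simplicity.

```python
def box_from_center(center, side):
    """Diagonal corner points of the square pixel region covered by the
    object, given its projected center point and side length in pixels."""
    cx, cy = center
    half = side / 2.0
    return (cx - half, cy - half), (cx + half, cy + half)

# Reproduces the example above: first target position (25, 42), side length 6
lo, hi = box_from_center((25, 42), 6)  # → (22.0, 39.0), (28.0, 45.0)
```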
Therefore, in this embodiment, the electronic device can determine the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the preset labeling point and the volume parameter, which allows the corresponding pixel position to be determined quickly and accurately and further improves the accuracy and efficiency of image labeling.
In order to determine the pixel position of the object model to be labeled in the currently acquired image of the camera model more quickly, as an implementation manner of the embodiment of the present invention, the preset labeling point may be a lower left vertex of the object model to be labeled.
Correspondingly, as shown in fig. 4, the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter may include:
S401, determining a second target position of the upper right vertex of the object model to be marked in the currently acquired image of the camera model according to the first target position and the volume parameter;
It should be noted that, in this embodiment, where the preset labeling point is the lower left vertex of the object model to be labeled: when the object model has a regular shape, the lower left vertex refers to the vertex at the lower left corner as seen from the camera model; when the object model has an irregular shape, the electronic device may first approximate it by a reasonably close regular shape, and the lower left vertex then refers to the vertex at the lower left corner of that approximating shape as seen from the camera model.
Then, the electronic device can determine the second target position of the upper right vertex of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter. Illustratively, as shown in fig. 5, the object model 51 to be annotated is a rectangular parallelepiped, the first target position 52 is (15, 20), and the volume parameters of the object in the currently acquired image of the camera model are a length of 12 and a width of 8, so the electronic device determines the second target position 53 to be (27, 28).
S402, determining the area in the rectangular frame with the first target position and the second target position as opposite corners as the pixel position of the object model to be marked in the current acquired image of the camera model.
The area within the rectangular frame having the first target position and the second target position as diagonal points covers the pixel range occupied by the object model to be labeled in the image currently acquired by the camera model, and a rectangular frame is the labeling form expected by the training procedures of common deep learning models. Therefore, after the second target position is determined, the electronic device can determine the area within this rectangular frame as the pixel position of the object model to be labeled in the currently acquired image. As shown in fig. 5, the area within the rectangular frame 54 is the pixel position of the object model 51 to be labeled in the current captured image of the camera model.
Therefore, in this embodiment, the electronic device may determine the second target position according to the first target position, and then determine the region in the rectangular frame with the first target position and the second target position as the diagonal points as the pixel position of the object model to be labeled in the image currently acquired by the camera model, so that the pixel position of the object model to be labeled may be determined more quickly and conveniently, and the efficiency of image labeling may be further improved.
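Steps S401 and S402 can be sketched as follows; the function name is illustrative, and the example reproduces the figures of fig. 5.

```python
def bbox_from_lower_left(first_target, length, width):
    """S401-S402 sketch: from the lower left vertex (first target position)
    and the volume parameters of the object in the image, compute the upper
    right vertex (second target position); the two points are the diagonal
    corners of the labeling rectangle."""
    x0, y0 = first_target
    second_target = (x0 + length, y0 + width)
    return first_target, second_target

# Reproduces the example of fig. 5: first target (15, 20), length 12, width 8
p1, p2 = bbox_from_lower_left((15, 20), 12, 8)  # p2 → (27, 28)
```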
As an implementation manner of the embodiment of the present invention, after the step of marking the pixel position in the image, the method may further include:
and recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model.
In order to meet the requirements of various deep learning models, after marking the pixel position in the image, the electronic device can also save the image currently acquired by the camera model and record information such as the pixel position of the object model to be labeled, the current pose information of the camera model, and the current pose information and type of the object model to be labeled, so that the recorded information can be retrieved according to actual requirements when a deep learning model is trained later. The pixel position of the object model to be labeled may be recorded, for example, as the vertices, diagonal points, or side lengths of the labeling frame, and no specific limitation is made here.
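Such a record can be sketched as follows. The field names and the JSON-lines file format are assumptions made for illustration, since the embodiment leaves the recording format open.

```python
import json

def record_annotation(path, image_id, bbox, camera_pose, object_pose, object_type):
    """Append one labeled-image record as a JSON line."""
    record = {
        "image_id": image_id,        # identifies the currently acquired image
        "bbox": bbox,                # e.g. diagonal points of the labeling frame
        "camera_pose": camera_pose,  # current pose information of the camera model
        "object_pose": object_pose,  # current pose information of the object model
        "type": object_type,         # type of the object model to be labeled
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One line is appended per labeled image, so the file can later be read back selectively when assembling a training set.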
As an implementation manner of the embodiment of the present invention, the method may further include:
and changing the pose of the camera model and/or the object model to be marked, and returning to the step of acquiring the current pose information of the pre-constructed camera model.
In order to acquire a large number of labeled images quickly, after the image currently acquired by the camera model has been labeled, the electronic device can change the pose of the camera model and/or the object model to be labeled, return to the step of acquiring the current pose information of the pre-constructed camera model, and thus execute the steps of this labeling method in a loop.
It can be understood that after the pose of the camera model and/or the object model to be labeled is changed, the position and/or the pose of the object model to be labeled in the image acquired by the camera model is also changed, and further, the electronic device can acquire a large number of different labeled images and can be used as an image sample for training various deep learning models.
Therefore, in the embodiment, the electronic equipment can quickly change the poses of the camera model and/or the model of the object to be labeled, and because the changed poses of the camera model and/or the model of the object to be labeled are known, a large number of labeled images can be quickly obtained, and the efficiency of image labeling is greatly improved.
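The loop described above can be sketched as follows. The two callbacks stand in for the pose-changing step and for the labeling method from the step of acquiring the current pose information onward; both names are illustrative.

```python
def generate_labeled_images(num_samples, change_pose, acquire_and_label):
    """Repeatedly change the pose of the camera model and/or the object
    model to be labeled, then re-run the labeling method, collecting one
    labeled image per iteration."""
    dataset = []
    for _ in range(num_samples):
        change_pose()                        # perturb camera and/or object pose
        dataset.append(acquire_and_label())  # run the full labeling pipeline
    return dataset
```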
Corresponding to the method for marking the position of the object in the image, the embodiment of the invention also provides a device for marking the position of the object in the image.
The following describes an apparatus for labeling a position of an object in an image according to an embodiment of the present invention.
As shown in fig. 6, an apparatus for annotating a position of an object in an image, the apparatus comprising:
the camera model data acquisition module 610 is used for acquiring current pose information of a pre-constructed camera model;
the model data acquiring module 620 of the object to be labeled is used for acquiring the current pose information and the physical parameters of the pre-constructed model of the object to be labeled;
and the physical parameters are parameters for identifying the size of the object model to be labeled.
A target pose information determining module 630, configured to obtain target pose information of the object model to be labeled in a coordinate system of the camera model through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be labeled;
a pixel position determining module 640, configured to determine a pixel position of the object model to be labeled in a currently acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters, and the target pose information;
a pixel location labeling module 650 for labeling the pixel location in the image.
Therefore, in the scheme provided by the embodiment of the invention, the electronic device firstly acquires the current pose information of the pre-constructed camera model and the current pose information and the physical parameters of the pre-constructed object model to be labeled, then obtains the target pose information of the object model to be labeled in the camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be labeled, determines the pixel position of the object model to be labeled in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information, and finally labels the pixel position in the image. The method can label the pixel position of an object to be labeled in the image collected by the camera in a virtual environment, so that manual labeling work is omitted, the pose of the object to be labeled can be changed quickly, a large number of labeled images are obtained, and the image labeling efficiency is greatly improved.
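The coordinate transformation performed by the target pose information determining module 630 can be sketched as follows. This is a minimal sketch that assumes both current poses are expressed as 4×4 homogeneous transforms in the world frame; the pose representation is an assumption, as the embodiment does not fix one.

```python
import numpy as np

def pose_in_camera_frame(T_world_cam, T_world_obj):
    """Target pose information of the object model to be labeled in the
    camera model coordinate system, obtained by coordinate transformation
    from the two world-frame poses."""
    return np.linalg.inv(T_world_cam) @ T_world_obj

# Example: camera model at x=1, object model at x=2 in the world frame
T_world_cam = np.eye(4)
T_world_cam[0, 3] = 1.0
T_world_obj = np.eye(4)
T_world_obj[0, 3] = 2.0
T_cam_obj = pose_in_camera_frame(T_world_cam, T_world_obj)  # translation → (1, 0, 0)
```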
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
a mechanical arm model data acquisition module (not shown in fig. 6) configured to acquire current pose information of a pre-constructed mechanical arm model before acquiring current pose information of a pre-constructed camera model, where the camera model is fixedly connected to a terminal of the mechanical arm model;
the camera model data acquisition module 610 may include:
and a current pose information acquiring unit (not shown in fig. 6) configured to determine current pose information of the camera model according to the current pose information of the mechanical arm model.
As an implementation manner of the embodiment of the present invention, the physical parameters may include: the position of a preset marking point of the object model to be marked and the volume parameter of the object model to be marked are determined;
the pixel position determining module 640 may include:
a first target position determining unit (not shown in fig. 6) configured to determine a first target position of the preset annotation point in a currently acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and a pixel position determining unit (not shown in fig. 6) configured to determine a pixel position of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter.
As an implementation manner of the embodiment of the present invention, the preset annotation point may be a lower left vertex of the object model to be annotated;
the pixel position determination unit may include:
a second target position determining subunit (not shown in fig. 6) configured to determine, according to the first target position and the volume parameter, a second target position of an upper right vertex of the object model to be labeled in a currently acquired image of the camera model;
a pixel position determining subunit (not shown in fig. 6), configured to determine, as a pixel position of the object model to be labeled in the currently acquired image of the camera model, a region within a rectangular frame with the first target position and the second target position as opposite corners.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
and an information recording module (not shown in fig. 6) configured to record the pixel position, the current pose information of the camera model, and the current pose information of the object model to be labeled, corresponding to the currently-captured image of the camera model, after the pixel position is marked in the image.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
a pose changing module (not shown in fig. 6) configured to change the pose of the camera model and/or the object model to be labeled and trigger the camera model data acquiring module 610.
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a memory 702, and a communication bus 703, where the processor 701 and the memory 702 complete mutual communication through the communication bus 703,
a memory 702 for storing a computer program;
the processor 701 is configured to implement the following steps when executing the program stored in the memory 702:
acquiring current pose information of a pre-constructed camera model;
acquiring current pose information and physical parameters of a pre-constructed object model to be marked, wherein the physical parameters are parameters for marking the size of the object model to be marked;
obtaining target pose information of the object model to be marked in a camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model;
the pixel location is marked in the image.
Therefore, in the scheme provided by the embodiment of the invention, the electronic device firstly acquires the current pose information of the pre-constructed camera model and the current pose information and the physical parameters of the pre-constructed object model to be labeled, then obtains the target pose information of the object model to be labeled in the camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be labeled, determines the pixel position of the object model to be labeled in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information, and finally labels the pixel position in the image. The method can label the pixel position of an object to be labeled in the image collected by the camera in a virtual environment, so that manual labeling work is omitted, the pose of the object to be labeled can be changed quickly, a large number of labeled images are obtained, and the image labeling efficiency is greatly improved.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Before the step of acquiring the current pose information of the pre-constructed camera model, the method may further include:
acquiring current pose information of a pre-constructed mechanical arm model, wherein the camera model is fixedly connected with the tail end of the mechanical arm model;
the step of obtaining the current pose information of the pre-constructed camera model includes:
and determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
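Because the camera model is fixedly connected to the terminal of the mechanical arm model, this determination can be sketched as follows, assuming both the arm-terminal pose and the fixed terminal-to-camera offset are given as 4×4 homogeneous transforms; the offset value is illustrative.

```python
import numpy as np

def camera_pose_from_arm(T_world_end, T_end_cam):
    """Current pose of the camera model from the current pose of the
    mechanical arm terminal and the fixed terminal-to-camera transform."""
    return T_world_end @ T_end_cam

# Illustrative fixed offset: camera mounted 0.1 units along the terminal's z axis
T_end_cam = np.eye(4)
T_end_cam[2, 3] = 0.1
T_world_cam = camera_pose_from_arm(np.eye(4), T_end_cam)
```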
Wherein, the physical parameters may include: the position of a preset marking point of the object model to be marked and the volume parameter of the object model to be marked are determined;
the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model may include:
determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
The preset marking point can be a lower left vertex of the object model to be marked;
the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter may include:
determining a second target position of the upper right vertex of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter;
and determining the area in the rectangular frame with the first target position and the second target position as diagonal points as the pixel position of the object model to be marked in the current acquired image of the camera model.
After the step of marking the pixel position in the image, the method may further include:
and recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model.
Wherein, the method can also comprise:
and changing the pose of the camera model and/or the object model to be marked, and returning to the step of acquiring the current pose information of the pre-constructed camera model.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when executed by a processor, the computer program implements the following steps:
acquiring current pose information of a pre-constructed camera model;
acquiring current pose information and physical parameters of a pre-constructed object model to be marked, wherein the physical parameters are parameters for marking the size of the object model to be marked;
obtaining target pose information of the object model to be marked in a camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model;
the pixel location is marked in the image.
It can be seen that, in the scheme provided in the embodiment of the present invention, when the computer program is executed by the processor, the current pose information of the pre-constructed camera model, the current pose information and the physical parameters of the pre-constructed object model to be labeled are obtained, then the target pose information of the object model to be labeled in the camera model coordinate system is obtained through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be labeled, then the pixel position of the object model to be labeled in the currently acquired image of the camera model is determined according to the internal parameter matrix, the physical parameters and the target pose information, and finally the pixel position is labeled in the image. The method can label the pixel position of an object to be labeled in the image collected by the camera in a virtual environment, so that manual labeling work is omitted, the pose of the object to be labeled can be changed quickly, a large number of labeled images are obtained, and the image labeling efficiency is greatly improved.
Before the step of acquiring the current pose information of the pre-constructed camera model, the method may further include:
acquiring current pose information of a pre-constructed mechanical arm model, wherein the camera model is fixedly connected with the tail end of the mechanical arm model;
the step of obtaining the current pose information of the pre-constructed camera model includes:
and determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
Wherein, the physical parameters may include: the position of a preset marking point of the object model to be marked and the volume parameter of the object model to be marked are determined;
the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model may include:
determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
The preset marking point can be a lower left vertex of the object model to be marked;
the step of determining the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter may include:
determining a second target position of the upper right vertex of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter;
and determining the area in the rectangular frame with the first target position and the second target position as diagonal points as the pixel position of the object model to be marked in the current acquired image of the camera model.
After the step of marking the pixel position in the image, the method may further include:
and recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model.
Wherein, the method can also comprise:
and changing the pose of the camera model and/or the object model to be marked, and returning to the step of acquiring the current pose information of the pre-constructed camera model.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (14)
1. A method for labeling the position of an object in an image, which is characterized by comprising the following steps:
acquiring current pose information of a pre-constructed camera model;
acquiring current pose information and physical parameters of a pre-constructed object model to be marked, wherein the physical parameters are parameters for marking the size of the object model to be marked;
obtaining target pose information of the object model to be marked in a camera model coordinate system through coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be marked;
determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model;
the pixel location is marked in the image.
2. The method of claim 1, wherein prior to the step of acquiring current pose information of the pre-constructed camera model, the method further comprises:
acquiring current pose information of a pre-constructed mechanical arm model, wherein the camera model is fixedly connected with the tail end of the mechanical arm model;
the step of acquiring the current pose information of the pre-constructed camera model comprises the following steps:
and determining the current pose information of the camera model according to the current pose information of the mechanical arm model.
3. The method of claim 1, wherein the physical parameters comprise: the position of a preset marking point of the object model to be marked and the volume parameter of the object model to be marked are determined;
the step of determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the internal parameter matrix, the physical parameters and the target pose information of the camera model comprises the following steps:
determining a first target position of the preset marking point in a current acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
and determining the pixel position of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter.
4. The method of claim 3, wherein the preset labeling point is a lower left vertex of the object model to be labeled;
the step of determining the pixel position of the object model to be marked in the currently acquired image of the camera model according to the first target position and the volume parameter comprises the following steps:
determining a second target position of the upper right vertex of the object model to be marked in the current acquired image of the camera model according to the first target position and the volume parameter;
and determining the area in the rectangular frame with the first target position and the second target position as diagonal points as the pixel position of the object model to be marked in the current acquired image of the camera model.
5. The method of claim 1, wherein after the step of labeling the pixel location in the image, the method further comprises:
and recording the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked corresponding to the current acquired image of the camera model.
6. The method of any one of claims 1-5, further comprising:
and changing the pose of the camera model and/or the object model to be marked, and returning to the step of acquiring the current pose information of the pre-constructed camera model.
7. An apparatus for labeling the position of an object in an image, the apparatus comprising:
a camera model data acquisition module, configured to acquire current pose information of a pre-constructed camera model;
an object model data acquisition module, configured to acquire current pose information and physical parameters of a pre-constructed object model to be labeled, the physical parameters identifying the size of the object model to be labeled;
a target pose information determining module, configured to obtain, through coordinate transformation, target pose information of the object model to be labeled in the coordinate system of the camera model according to the current pose information of the camera model and the current pose information of the object model to be labeled;
a pixel position determining module, configured to determine the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the internal parameter matrix of the camera model, the physical parameters, and the target pose information;
a pixel position labeling module, configured to label the pixel position in the image.
8. The apparatus of claim 7, further comprising:
a mechanical arm model data acquisition module, configured to acquire current pose information of a pre-constructed mechanical arm model before the current pose information of the pre-constructed camera model is acquired, wherein the camera model is fixedly connected to the end of the mechanical arm model;
wherein the camera model data acquisition module comprises:
a current pose information acquisition unit, configured to determine the current pose information of the camera model according to the current pose information of the mechanical arm model.
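Since claim 8 fixes the camera model to the end of the mechanical arm model, the camera pose follows from the arm-end pose by composing it with a constant mounting offset. The sketch below illustrates this with assumed transforms; the poses and the 10 cm offset are invented for the example.

```python
# Illustrative sketch of claim 8 (all transforms are assumed values): the camera
# model's pose is the arm-end pose composed with a fixed hand-eye offset.
import numpy as np

# Assumed 4x4 homogeneous poses in the world frame.
T_world_armend = np.eye(4)
T_world_armend[:3, 3] = [0.5, 0.0, 1.0]   # arm end 0.5 m out, 1 m up

T_armend_cam = np.eye(4)
T_armend_cam[:3, 3] = [0.0, 0.0, 0.1]     # camera mounted 10 cm beyond the arm end

# Camera pose follows from the arm-end pose by matrix composition.
T_world_cam = T_world_armend @ T_armend_cam
print(T_world_cam[:3, 3])                  # [0.5 0.  1.1]
```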
9. The apparatus of claim 7, wherein the physical parameters comprise: the position of a preset labeling point of the object model to be labeled and a volume parameter of the object model to be labeled;
the pixel position determining module comprises:
a first target position determining unit, configured to determine a first target position of the preset labeling point in the currently acquired image of the camera model according to the internal parameter matrix of the camera model and the target pose information;
a pixel position determining unit, configured to determine the pixel position of the object model to be labeled in the currently acquired image of the camera model according to the first target position and the volume parameter.
10. The apparatus of claim 9, wherein the preset labeling point is the lower-left vertex of the object model to be labeled;
the pixel position determining unit comprises:
a second target position determining subunit, configured to determine, according to the first target position and the volume parameter, a second target position of the upper-right vertex of the object model to be labeled in the currently acquired image of the camera model;
a pixel position determining subunit, configured to determine, as the pixel position of the object model to be labeled in the currently acquired image of the camera model, the region within the rectangular frame having the first target position and the second target position as diagonal vertices.
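Claims 9-10 can be illustrated with a short sketch: derive the upper-right vertex from the lower-left vertex plus the volume parameter, project both diagonal points, and take the rectangle they span as the pixel region. The intrinsics, vertex positions, and sizes below are assumptions, and "lower-left/upper-right" is treated loosely here (in real image coordinates the v axis points down).

```python
# Hedged sketch of claims 9-10 (assumed values): bounding rectangle from two
# projected diagonal vertices of the object model, in camera coordinates.
import numpy as np

def project(K, p):
    """Project a 3D camera-frame point to pixel coordinates via intrinsics K."""
    q = K @ p
    return q[:2] / q[2]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])           # assumed internal parameter matrix

lower_left = np.array([-0.1, -0.1, 4.0])  # preset labeling point, camera frame
size = np.array([0.2, 0.2, 0.0])          # assumed extents from the volume parameter
upper_right = lower_left + size           # diagonally opposite vertex

u1, v1 = project(K, lower_left)           # first target position
u2, v2 = project(K, upper_right)          # second target position

# Axis-aligned rectangle with the two target positions as diagonal points.
bbox = (min(u1, u2), min(v1, v2), max(u1, u2), max(v1, v2))
print(bbox)
```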
11. The apparatus of claim 7, further comprising:
an information recording module, configured to record, after the pixel position is labeled in the image, the pixel position, the current pose information of the camera model, and the current pose information of the object model to be labeled in correspondence with the currently acquired image of the camera model.
12. The apparatus of any one of claims 7-11, further comprising:
a pose changing module, configured to change the pose of the camera model and/or the object model to be labeled and to trigger the camera model data acquisition module.
13. An electronic device, comprising a processor, a memory, and a communication bus, wherein the processor and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-6 when executing the program stored in the memory.
14. A computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711340685.4A CN109961471B (en) | 2017-12-14 | 2017-12-14 | Method and device for marking position of object in image and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711340685.4A CN109961471B (en) | 2017-12-14 | 2017-12-14 | Method and device for marking position of object in image and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109961471A CN109961471A (en) | 2019-07-02 |
CN109961471B true CN109961471B (en) | 2021-05-28 |
Family
ID=67018116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711340685.4A Active CN109961471B (en) | 2017-12-14 | 2017-12-14 | Method and device for marking position of object in image and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961471B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7383255B2 (en) * | 2019-08-22 | 2023-11-20 | ナブテスコ株式会社 | Information processing systems, information processing methods, construction machinery |
CN113129365B (en) * | 2019-12-30 | 2022-06-24 | 魔门塔(苏州)科技有限公司 | Image calibration method and device |
CN113378606A (en) * | 2020-03-10 | 2021-09-10 | 杭州海康威视数字技术股份有限公司 | Method, device and system for determining labeling information |
CN111695628B (en) * | 2020-06-11 | 2023-05-05 | 北京百度网讯科技有限公司 | Key point labeling method and device, electronic equipment and storage medium |
CN113763573B (en) * | 2021-09-17 | 2023-07-11 | 北京京航计算通讯研究所 | Digital labeling method and device for three-dimensional object |
CN113763572B (en) * | 2021-09-17 | 2023-06-27 | 北京京航计算通讯研究所 | 3D entity labeling method based on AI intelligent recognition and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1847789A (en) * | 2005-04-06 | 2006-10-18 | 佳能株式会社 | Method and apparatus for measuring position and orientation |
CN101319895A (en) * | 2008-07-17 | 2008-12-10 | 上海交通大学 | Hand-hold traffic accident fast on-site coordinate machine |
CN103827631A (en) * | 2011-09-27 | 2014-05-28 | 莱卡地球系统公开股份有限公司 | Measuring system and method for marking a known target point in a coordinate system |
CN104715479A (en) * | 2015-03-06 | 2015-06-17 | 上海交通大学 | Scene reproduction detection method based on augmented virtuality |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101520904B (en) * | 2009-03-24 | 2011-12-28 | 上海水晶石信息技术有限公司 | Reality augmenting method with real environment estimation and reality augmenting system |
US9691163B2 (en) * | 2013-01-07 | 2017-06-27 | Wexenergy Innovations Llc | System and method of measuring distances related to an object utilizing ancillary objects |
CN103606303B (en) * | 2013-05-09 | 2016-02-03 | 陕西思智通教育科技有限公司 | A kind of rendering method for the Web-based instruction and equipment |
CN104217441B (en) * | 2013-08-28 | 2017-05-10 | 北京嘉恒中自图像技术有限公司 | Mechanical arm positioning fetching method based on machine vision |
JP6121063B1 (en) * | 2014-11-04 | 2017-04-26 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd | Camera calibration method, device and system |
CN115100066A (en) * | 2016-01-29 | 2022-09-23 | 上海联影医疗科技股份有限公司 | Image reconstruction method and device |
CN110288660B (en) * | 2016-11-02 | 2021-05-25 | 北京信息科技大学 | Robot hand-eye calibration method based on convex relaxation global optimization algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN109961471A (en) | 2019-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961471B (en) | Method and device for marking position of object in image and electronic equipment | |
CN109584295B (en) | Method, device and system for automatically labeling target object in image | |
CN110298878B (en) | Method and device for determining three-dimensional pose of target object and electronic equipment | |
TWI686746B (en) | Method, device, server, client and system for identifying damaged parts of vehicle | |
CN111127422B (en) | Image labeling method, device, system and host | |
CN107328420B (en) | Positioning method and device | |
WO2019114339A1 (en) | Method and device for correcting motion of robotic arm | |
WO2021004416A1 (en) | Method and apparatus for establishing beacon map on basis of visual beacons | |
WO2019042426A1 (en) | Augmented reality scene processing method and apparatus, and computer storage medium | |
CN109479082B (en) | Image processing method and apparatus | |
CN112836558B (en) | Mechanical arm tail end adjusting method, device, system, equipment and medium | |
KR20180105875A (en) | Camera calibration method using single image and apparatus therefor | |
CN113172636B (en) | Automatic hand-eye calibration method and device and storage medium | |
CN110298879B (en) | Method and device for determining pose of object to be grabbed and electronic equipment | |
CN109955244A (en) | Grabbing control method and device based on visual servo and robot | |
CN112381873A (en) | Data labeling method and device | |
CN109213202A (en) | Cargo arrangement method, device, equipment and storage medium based on optical servo | |
JP2016103137A (en) | User interface system, image processor and control program | |
WO2022088613A1 (en) | Robot positioning method and apparatus, device and storage medium | |
JP2010184300A (en) | Attitude changing device and attitude changing method | |
WO2021138856A1 (en) | Camera control method, device, and computer readable storage medium | |
CN114972492A (en) | Position and pose determination method and device based on aerial view and computer storage medium | |
WO2019100216A1 (en) | 3d modeling method, electronic device, storage medium and program product | |
KR102438093B1 (en) | Method and apparatus for associating objects, systems, electronic devices, storage media and computer programs | |
CN114882107A (en) | Data processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |