CN108444452B - Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device
- Publication number: CN108444452B (application CN201810143272.5A)
- Authority: CN (China)
- Prior art keywords: target, shooting device, three-dimensional space, vector, neural network
- Prior art date: 2018-02-11
- Legal status: Expired - Fee Related
Classifications
- G—PHYSICS
  - G01—MEASURING; TESTING
    - G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
      - G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
        - G01C11/04—Interpretation of pictures
      - G01C1/00—Measuring angles
        - G01C1/02—Theodolites
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/08—Learning methods
Abstract
The invention relates to the field of three-dimensional space detection, and provides a method and a device for detecting the longitude and latitude of a target and the three-dimensional space attitude of a shooting device. The method comprises the following steps: constructing a group of vectors q related to the target three-dimensional space attitude; receiving a target image I shot by a shooting device; using machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$ to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W; substituting the optimized parameter W and a newly received target image I shot by the shooting device into the neural network model equation to obtain a vector q; and calculating the three-dimensional space attitude R of the shooting device relative to the target from the vector q. The data source of the invention can be real-time video from an ordinary monocular shooting device, so the cost is low and detection is convenient; the method can predict the moving direction of a target from a static picture and establish a mapping relation of the target from video to map, providing support for related application expansion.
Description
Technical Field
The invention belongs to the field of three-dimensional space detection, and particularly relates to a method and a device for detecting the longitude and latitude of a target and the three-dimensional space attitude of a shooting device.
Background
Remote measurement of the direction and attitude of a target has long been an active research topic at home and abroad, with important practical value in fields such as battlefield decision-making, autonomous navigation and change monitoring. Traditional detection methods rely on expensive and cumbersome equipment such as TOF depth cameras, Kinect sensors and laser scanners, making detection costly and inconvenient.
Disclosure of Invention
The invention aims to provide a method and a device for detecting the longitude and latitude of a target and the three-dimensional space attitude of a shooting device, together with a computer-readable storage medium and electronic equipment, so as to solve the problems of high cost and inconvenient detection inherent in expensive and cumbersome equipment such as TOF depth cameras, Kinect sensors and laser scanners.
In a first aspect, the present invention provides a method for detecting the three-dimensional space attitude of a shooting device, the method comprising:
S101, constructing a group of vectors q related to the target three-dimensional space attitude;
S102, receiving a target image I shot by a shooting device;
S103, using machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$, optimizing a neural network model parameter W according to a neural network model equation to obtain an optimized neural network model parameter W;
S104, substituting the optimized neural network model parameter W and a newly received target image I shot by the shooting device into the neural network model equation to obtain a vector q;
and S105, calculating through the vector q to obtain the three-dimensional space attitude R of the shooting device relative to the target.
In a second aspect, the present invention provides a device for detecting the three-dimensional space attitude of a shooting device, the device comprising:
a construction module, which is used for constructing a group of vectors q related to the target three-dimensional space attitude;
the receiving module is used for receiving a target image I shot by the shooting device;
an optimization module, configured to use machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$ to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W;
the vector calculation module is used for substituting the optimized neural network model parameters W and a newly received target image I shot by the shooting device into a neural network model equation to obtain a vector q;
and the resolving module is used for resolving through the vector q to obtain the three-dimensional space attitude R of the shooting device relative to the target.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for detecting the three-dimensional space attitude of a shooting device as described above.
In a fourth aspect, the present invention provides a method for detecting longitude and latitude of a target, wherein the method includes:
S201, obtaining a three-dimensional space attitude R of the shooting device relative to a target and a three-dimensional space coordinate T of the shooting device relative to the target according to the above method for detecting the three-dimensional space attitude of a shooting device;
S202, inversely calculating the three-dimensional space coordinate of the target relative to the shooting device from the three-dimensional space attitude R and the three-dimensional space coordinate T of the shooting device relative to the target;
S203, indirectly solving the coordinate To of the target in the earth geocentric coordinate system from the three-dimensional space coordinate of the target relative to the shooting device and the coordinate Tx and attitude Rx of the shooting device relative to the earth geocentric coordinate system, and obtaining the longitude and latitude of the target directly from To.
In a fifth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for detecting the longitude and latitude of a target as described above.
In the invention, machine learning is used on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$ to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W; the optimized parameter W and a newly received target image I shot by the shooting device are substituted into the neural network model equation to obtain a vector q; and the three-dimensional space attitude R of the shooting device relative to the target is calculated from the vector q. The data source of the invention can therefore be real-time video from an ordinary monocular shooting device, making detection cheap and convenient, in contrast to traditional target detection based on expensive and cumbersome equipment such as TOF depth cameras, Kinect sensors and laser scanners. The method can also predict the moving direction of a target from a static picture and establish a mapping relation of the target from video to map, providing support for related application expansion.
Drawings
Fig. 1 is a flowchart of a method for detecting a three-dimensional spatial pose of a camera according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of a detection apparatus for detecting a three-dimensional spatial attitude of a camera according to a second embodiment of the present invention.
Fig. 3 is a flowchart of a method for detecting latitude and longitude of a target according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one:
Referring to fig. 1, a method for detecting the three-dimensional space attitude of a shooting device according to embodiment one of the present invention includes the following steps. It should be noted that the method is not limited to the flow sequence shown in fig. 1, provided the results are substantially the same.
And S101, constructing a group of vectors q related to the target three-dimensional space attitude.
In the first embodiment of the present invention, the vector q related to the target three-dimensional space attitude may be: a quaternion $\{q_0, q_1, q_2, q_3\}$, an attitude matrix, or three attitude angles {a, b, c}. When the plane determined by two dimensions of the three-dimensional space is perpendicular to the line-of-sight direction of the shooting device, the vector q degenerates to a binary number.
And S102, receiving a target image I shot by the shooting device.
S103, using machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$, the neural network model parameter W is optimized according to the neural network model equation to obtain the optimized neural network model parameter W.
In the first embodiment of the present invention, the neural network model equation is
f(W, I_1) = q_1
...
f(W, I_N) = q_N.
Machine Learning (ML) is an interdisciplinary field, involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other subjects.
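As a concrete illustration of the optimization in S103, the following is a minimal training sketch assuming a PyTorch implementation; the `PoseNet` backbone, its layer sizes and all hyper-parameters are placeholders of our own, not specified by the patent.

```python
# Minimal training sketch for S103 (illustrative): fit the parameters W of a
# network f so that f(W, I_i) ≈ q_i over the N sample pairs. "PoseNet", the
# layer sizes and the hyper-parameters are our placeholders, not the patent's.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self, out_dim: int = 4):  # 4 for a quaternion, 2 for a binary number
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        raw = self.backbone(img)                     # unconstrained output Q
        return raw / raw.norm(dim=1, keepdim=True)   # unitization constraint layer

def optimize_W(model: PoseNet, images: torch.Tensor, targets: torch.Tensor,
               epochs: int = 100, lr: float = 1e-3) -> PoseNet:
    """images: (N, 3, H, W); targets: (N, out_dim) unit vectors q_1..q_N."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # squared-error E, as in the patent's error function
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()     # autograd realizes the back-propagation formulas
        opt.step()
    return model
```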
When the forward propagation of the neural network model is established, its output layer outputs 4 values representing the target three-dimensional space attitude. The range of values output by the neural network model is (-∞, +∞), whereas a quaternion representing the target three-dimensional space attitude must satisfy the unit constraint $q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$. Therefore, when the vector q is a quaternion, the output of the neural network model is processed as follows:
The vector Q output by the last output layer of the neural network model is processed by a unitization constraint layer to output a quaternion vector $q = \{q_0, q_1, q_2, q_3\}$; the calculation process is as follows:
Forward propagation formula: $q_i = Q_i / \sqrt{\sum_{j=0}^{3} Q_j^2}$, where $i = 0, \ldots, 3$; this ensures that the quaternion $\{q_0, q_1, q_2, q_3\}$ satisfies the unit-vector constraint $q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$.
Back propagation formula: $\partial E / \partial Q_i = \frac{1}{\lVert Q \rVert} \sum_{j=0}^{3} (\delta_{ij} - q_i q_j)\, \partial E / \partial q_j$, where $\lVert Q \rVert = \sqrt{\sum_j Q_j^2}$, and E is the error function $E = \frac{1}{2} \sum_i (q_i - \hat q_i)^2$, $\hat q_i$ being the expected value of the i-th component of the quaternion.
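The unitization constraint layer can be sketched directly in NumPy. The forward pass and the Jacobian in the backward pass follow the two formulas above; the helper names are ours, and the final lines double as a usage example with the squared-error gradient $\partial E / \partial q = q - \hat q$.

```python
# NumPy sketch of the unitization constraint layer (helper names are ours).
# Forward: q = Q / ||Q||.  Backward: dE/dQ = J^T dE/dq with
# J_ij = (δ_ij - q_i q_j) / ||Q||, the Jacobian used in the formula above.
import numpy as np

def unitize_forward(Q: np.ndarray):
    norm = np.linalg.norm(Q)
    return Q / norm, norm

def unitize_backward(q: np.ndarray, norm: float, dE_dq: np.ndarray) -> np.ndarray:
    jac = (np.eye(len(q)) - np.outer(q, q)) / norm  # symmetric, so J^T = J
    return jac @ dE_dq

# Usage with the squared-error function E = ½ Σ (q_i - q̂_i)², whose gradient
# with respect to q is simply q - q̂:
Q = np.array([0.3, -1.2, 0.8, 0.5])
q_hat = np.array([0.0, -0.8, 0.5, 0.33])      # expected quaternion components
q, norm = unitize_forward(Q)
dE_dQ = unitize_backward(q, norm, q - q_hat)  # gradient passed to earlier layers
```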
A quaternion predicts a three-dimensional space attitude; degenerated to a binary number, it predicts a direction in a two-dimensional plane, that is, the attitude of a planar target on that plane. For example, in aerial photography this degenerate form can be used to predict the direction of a ground target.
Therefore, when the vector q is a binary number, the output processing procedure of the neural network model is as follows:
The vector Q output by the last output layer of the neural network model is processed by a unitization constraint layer to output a binary vector $q = \{q_0, q_1\}$; the calculation process is as follows:
Forward propagation formula: $q_i = Q_i / \sqrt{Q_0^2 + Q_1^2}$, where $i = 0, 1$; this ensures that $\{q_0, q_1\}$ satisfies the unit-vector constraint $q_0^2 + q_1^2 = 1$.
Back propagation formula: $\partial E / \partial Q_i = \frac{1}{\lVert Q \rVert} \sum_{j=0}^{1} (\delta_{ij} - q_i q_j)\, \partial E / \partial q_j$, where E is the error function $E = \frac{1}{2} \sum_i (q_i - \hat q_i)^2$, $\hat q_i$ being the expected unit direction vector of the target on the plane.
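The binary case reuses the same layer with n = 2 (assuming the `unitize_forward`/`unitize_backward` helpers from the sketch above; the expected direction is a placeholder):

```python
# Binary-number case: the same unitization layer with n = 2, reusing the
# unitize_forward/unitize_backward helpers from the sketch above.
Q2 = np.array([2.0, -1.0])
q2, norm2 = unitize_forward(Q2)                   # unit direction on the plane
d_hat = np.array([0.9, -0.436])                   # placeholder expected direction
dE_dQ2 = unitize_backward(q2, norm2, q2 - d_hat)  # gradient for E = ½‖q - q̂‖²
```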
And S104, substituting the optimized neural network model parameters W and the newly received target image I shot by the shooting device into a neural network model equation to obtain a vector q.
In the first embodiment of the present invention, the neural network model equation is f(W, I) = q.
And S105, calculating through the vector q to obtain the three-dimensional space attitude R of the shooting device relative to the target.
In the first embodiment of the present invention, the vector q may be a quaternion, the coordinates of n feature points on the image (with n ≥ 3), a rotation vector, or a rotation matrix.
When the vector q is a quaternion, the three-dimensional space attitude R of the shooting device relative to the target can be calculated by the standard unit-quaternion to rotation-matrix conversion:

$$R = \begin{pmatrix} 1 - 2(q_2^2 + q_3^2) & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & 1 - 2(q_1^2 + q_2^2) \end{pmatrix}$$
when the vector q is the coordinates P of n feature points on the image1,…,PNDuring the shooting process, the three-dimensional space posture R and the position T of the shooting device relative to the target can be solved through the corresponding relation of the computer vision object image, and the three-dimensional space posture R of the shooting device relative to the target and the three-dimensional space coordinate T of the shooting device relative to the target can be obtained through a cv:: solvePp function in an OpenCV library function.
When the vector q is a rotation vector, it can be converted into the three-dimensional space attitude R of the shooting device relative to the target by the cv::Rodrigues function in the OpenCV library.
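For concreteness, the following is a minimal Python sketch of S105 covering these forms of q. Only cv::solvePnP and cv::Rodrigues are taken from the patent text; the quaternion helper implements the standard conversion shown above, and the intrinsics and point coordinates are placeholder values of our own.

```python
# Sketch of S105 for the three forms of q. Only cv::solvePnP and cv::Rodrigues
# come from the patent; the intrinsics and point coordinates are placeholders.
import cv2
import numpy as np

def quat_to_R(q0: float, q1: float, q2: float, q3: float) -> np.ndarray:
    # Standard unit-quaternion to rotation-matrix conversion (the matrix above).
    return np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ])

K = np.array([[800.0, 0.0, 320.0],   # placeholder pixel focal lengths fx, fy
              [0.0, 800.0, 240.0],   # and principal point cx, cy
              [0.0,   0.0,   1.0]])

# Case 2: q is the image coordinates of n known feature points.
obj_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
img_pts = np.array([[320, 240], [400, 236], [318, 160], [330, 250]], dtype=np.float64)
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)  # R (as rvec) and T

# Case 3: q is a rotation vector; cv::Rodrigues converts it to R.
R, _ = cv2.Rodrigues(rvec)
```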
In the first embodiment of the present invention, after S105, the method may further include:
According to the formula $T \approx z K^{-1} (u, v, 1)^T$, with $K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$, the three-dimensional space coordinate T of the shooting device relative to the target is approximated, wherein $c_x, c_y$ are the principal-point coordinates of the shooting device, $f_x, f_y$ are its pixel focal lengths, $z \approx f_x D / \Delta u \approx f_y D / \Delta v$, D is the diameter of the target, Δu and Δv are respectively the width and height of the target as identified on the image, u and v are the center point of the target on the image, and z is the length of the target projected along the line-of-sight direction in the shooting device coordinate system.
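A short sketch of this approximation follows. The back-projection $T \approx z K^{-1} (u, v, 1)^T$ is the formula above; the exact form of the depth estimate z from the diameter D is our reading of the garbled original, and all numerical values are placeholders.

```python
# Sketch of the size-based approximation of T. The back-projection follows the
# formula above; the depth-from-size estimate is an assumption on our part.
import numpy as np

def approx_T(u: float, v: float, du: float, dv: float, D: float, K: np.ndarray) -> np.ndarray:
    fx, fy = K[0, 0], K[1, 1]
    z = 0.5 * (fx * D / du + fy * D / dv)          # assumed depth-from-size estimate
    return z * np.linalg.inv(K) @ np.array([u, v, 1.0])

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
T = approx_T(u=350, v=210, du=40, dv=38, D=1.2, K=K)  # (x, y, z) in the camera frame
```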
In the first embodiment of the present invention, after S105, the method may further include:
Constructing a group of vectors Z related to the position and attitude of the target in three-dimensional space; in the first embodiment of the present invention, the vector Z may be: a quaternion $\{q_0, q_1, q_2, q_3\}$ together with a projection parameter z of the target toward the shooting device along the line-of-sight direction.
Receiving a target image I shot by a shooting device and a circumscribed rectangular frame coordinate r of a target in the image, wherein the circumscribed rectangular frame coordinate r of the target in the image can be obtained by the prior art;
Using machine learning on N groups of sample data $(I_1, r_1, z_1), \ldots, (I_N, r_N, z_N)$, the neural network model parameter W is optimized according to the following neural network model equation to obtain the optimized neural network model parameter W:
f(W, I_1, r_1) = z_1
...
f(W, I_N, r_N) = z_N;
Substituting the optimized neural network model parameter W, a newly received target image I shot by the shooting device and the corresponding rectangular frame coordinates r into the neural network model equation f(W, I, r) = Z to obtain a vector Z;
and calculating to obtain the three-dimensional space coordinate T of the shooting device relative to the target through the vector Z and the three-dimensional space attitude R of the shooting device relative to the target.
In the first embodiment of the present invention, the vector Z may be a quantity related to the target position; in particular, the vector Z may be the z-component λ of the vector $KRT$, λ being computed as the z value of $KRT$ during machine learning. At prediction time, R is predicted first, and the λ predicted by the neural network is then substituted into the formula $T = R^{-1} K^{-1} \lambda (u, v, 1)^T$, where $c_x, c_y$ are the principal-point coordinates of the shooting device and $f_x, f_y$ its pixel focal lengths (the entries of K), R is the three-dimensional space attitude of the shooting device relative to the target, and u, v are the coordinates of the target origin on the image; u, v can be obtained from the image point of the target origin or approximated by the center point of the target's rectangular frame.
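A minimal sketch of this back-substitution, assuming the relation $KRT = \lambda (u, v, 1)^T$, which is our reading of the garbled formula:

```python
# Sketch of the λ back-substitution, assuming K·R·T = λ·(u, v, 1)ᵀ (our reading
# of the formula above), which gives T = R⁻¹·K⁻¹·λ·(u, v, 1)ᵀ.
import numpy as np

def T_from_lambda(lam: float, u: float, v: float, R: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Recover the camera-to-target translation T from predicted λ, pose R, intrinsics K."""
    return np.linalg.inv(R) @ np.linalg.inv(K) @ (lam * np.array([u, v, 1.0]))
```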
Example two:
referring to fig. 2, a device for detecting a three-dimensional attitude of a camera according to a second embodiment of the present invention includes:
a construction module 11, configured to construct a set of vectors q associated with the target three-dimensional spatial pose;
the receiving module 12 is used for receiving a target image I shot by the shooting device;
an optimization module 13, configured to use machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$ to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W;
the vector calculation module 14 is configured to substitute the optimized neural network model parameter W and a newly received target image I captured by the capturing device into a neural network model equation to obtain a vector q;
and the resolving module 15, configured to calculate the three-dimensional space attitude R of the shooting device relative to the target from the vector q.
The detection device for the three-dimensional space attitude of a shooting device provided by embodiment two and the detection method provided by embodiment one belong to the same inventive concept; their specific implementation process is described throughout the specification and is not repeated here.
Example three:
a third embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for detecting a three-dimensional spatial attitude of a shooting device according to the first embodiment of the present invention is implemented.
Example four:
Referring to fig. 3, a method for detecting the longitude and latitude of a target according to embodiment four of the present invention includes the following steps. It should be noted that the method is not limited to the flow sequence shown in fig. 3, provided the results are substantially the same.
S201, obtaining a three-dimensional space attitude R of the shooting device relative to the target and a three-dimensional space coordinate T of the shooting device relative to the target according to the detection method of the three-dimensional space attitude of the shooting device provided by the embodiment of the invention.
And S202, inversely calculating the three-dimensional space coordinate of the object relative to the shooting device according to the three-dimensional space attitude R of the shooting device relative to the object and the three-dimensional space coordinate T of the shooting device relative to the object.
S203, indirectly solving the coordinate To of the target in the earth geocentric coordinate system from the three-dimensional space coordinate of the target relative to the shooting device and the coordinate Tx and attitude Rx of the shooting device relative to the earth geocentric coordinate system, and obtaining the longitude and latitude of the target directly from To.
Wherein Tx can be obtained by the GPS of the shooting device, and Rx can be obtained by a gyroscope, magnetometer and accelerometer bound to the shooting device.
Wherein, denoting by T' the three-dimensional space coordinate of the target relative to the shooting device obtained in S202, the coordinate of the target is $T_o = T_x + R_x^{-1} T'$, with $R_x = R_g R_V$, where Rg is the attitude data relative to the northeast (ENU) coordinate system measured by the gyroscope of the shooting device, and $R_V$ is the rotation between the local ENU frame and the geocentric frame, determined by φ, the longitude of the shooting device, and θ, its latitude.
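The following Python sketch strings S202 and S203 together under stated assumptions: the pose inversion $T' = -R^{\mathsf T} T$, the ECEF/ENU rotation for $R_V$, and the spherical longitude/latitude read-off are standard constructions, not formulas quoted from the patent, and the frame conventions (Rg: ENU to camera, Rx: ECEF to camera) are our assumptions.

```python
# Sketch of S202/S203 under stated assumptions: pose inversion T' = -Rᵀ·T, a
# standard ECEF-to-ENU rotation for the longitude/latitude-dependent part of Rx,
# and a spherical longitude/latitude read-off from To. Angles are in radians.
import numpy as np

def ecef_to_enu_rotation(lon: float, lat: float) -> np.ndarray:
    sl, cl = np.sin(lon), np.cos(lon)
    sp, cp = np.sin(lat), np.cos(lat)
    # Rows are the East, North and Up unit vectors expressed in ECEF.
    return np.array([[-sl,       cl,      0.0],
                     [-sp * cl, -sp * sl, cp ],
                     [ cp * cl,  cp * sl, sp ]])

def target_lon_lat(R, T, Rg, Tx, lon_cam, lat_cam):
    T_target = -R.T @ T                        # S202: target relative to the camera
    RV = ecef_to_enu_rotation(lon_cam, lat_cam)
    Rx = Rg @ RV                               # Rx = Rg·RV, as in the text above
    To = Tx + Rx.T @ T_target                  # S203: target in the geocentric frame
    lon = np.arctan2(To[1], To[0])
    lat = np.arctan2(To[2], np.hypot(To[0], To[1]))  # spherical approximation
    return np.degrees(lon), np.degrees(lat)
```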
In the fourth embodiment of the present invention, S202 may further include the following steps:
and inversely calculating the three-dimensional space posture of the target relative to the shooting device according to the three-dimensional space posture R of the shooting device relative to the target.
S203 may further include the steps of:
According to the three-dimensional space attitude $R_T$ of the target relative to the shooting device and the attitude Rx of the shooting device relative to the earth geocentric coordinate system, the attitude Ro of the target relative to the earth geocentric coordinate system is indirectly calculated; the specific formula is: $R_o = R_T R_x$.
Example five:
a fifth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for detecting a target longitude and latitude provided in the fourth embodiment of the present invention is implemented.
In the invention, machine learning is used on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$ to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W; the optimized parameter W and a newly received target image I shot by the shooting device are substituted into the neural network model equation to obtain a vector q; and the three-dimensional space attitude R of the shooting device relative to the target is calculated from the vector q. The data source of the invention can therefore be real-time video from an ordinary monocular shooting device, making detection cheap and convenient, in contrast to traditional target detection based on expensive and cumbersome equipment such as TOF depth cameras, Kinect sensors and laser scanners. The method can also predict the moving direction of a target from a static picture and establish a mapping relation of the target from video to map, providing support for related application expansion.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (11)
1. A method for detecting a three-dimensional space posture of a shooting device is characterized by comprising the following steps:
S101, constructing a group of vectors q related to the target three-dimensional space attitude, wherein the vector q is a quaternion, the coordinates of n feature points on an image, a rotation vector or a rotation matrix, n being greater than or equal to 3; when the plane determined by two dimensions of the three-dimensional space is perpendicular to the line-of-sight direction of the shooting device, the vector q is a binary number;
S102, receiving a target image I shot by a shooting device;
S103, utilizing machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$, wherein $I_1$ in the sample data denotes the target image shot by the shooting device when the vector related to the target three-dimensional space attitude is $q_1$, and $I_N$ denotes the target image shot when that vector is $q_N$, optimizing a neural network model parameter W according to a neural network model equation to obtain an optimized neural network model parameter W; the neural network model equation is
f(W, I_1) = q_1
...
f(W, I_N) = q_N;
S104, substituting the optimized neural network model parameters W and a newly received target image I shot by the shooting device into a neural network model equation to obtain a vector q;
and S105, calculating through the vector q to obtain the three-dimensional space attitude R of the shooting device relative to the target.
2. The method according to claim 1, wherein when the vector q is a quaternion, said S105 specifically is: the three-dimensional space attitude R of the shooting device relative to the target is calculated by the following equation:

$$R = \begin{pmatrix} 1 - 2(q_2^2 + q_3^2) & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & 1 - 2(q_1^2 + q_2^2) \end{pmatrix};$$
when the vector q is the coordinates $P_1, \ldots, P_n$ of n feature points on the image, said S105 specifically is: the three-dimensional space attitude R and position T of the shooting device relative to the target are solved through the computer-vision object-image correspondence;
when the vector q is a rotation vector, said S105 specifically is: the rotation vector is converted into the three-dimensional space attitude R of the shooting device relative to the target by the cv::Rodrigues function in the OpenCV library.
3. The method of claim 2, wherein when the vector q is a quaternion, the output process of the neural network model is:
the vector Q output by the last output layer of the neural network model is processed by a unitization constraint layer to output a quaternion vector $q = \{q_0, q_1, q_2, q_3\}$; the calculation process is as follows:
forward propagation formula $q_i = Q_i / \sqrt{\sum_{j=0}^{3} Q_j^2}$, where $i = 0, \ldots, 3$;
back propagation formula $\partial E / \partial Q_i = \frac{1}{\lVert Q \rVert} \sum_{j=0}^{3} (\delta_{ij} - q_i q_j)\, \partial E / \partial q_j$, wherein E is the error function $E = \frac{1}{2} \sum_i (q_i - \hat q_i)^2$ and $\hat q_i$ is the expected value of the i-th component of the quaternion;
when the vector q is a binary number, the output processing process of the neural network model is as follows:
the vector Q output by the last output layer of the neural network model is processed by a unitization constraint layer to output a binary vector $q = \{q_0, q_1\}$; the calculation process is as follows:
forward propagation formula $q_i = Q_i / \sqrt{Q_0^2 + Q_1^2}$, where $i = 0, 1$;
back propagation formula $\partial E / \partial Q_i = \frac{1}{\lVert Q \rVert} \sum_{j=0}^{1} (\delta_{ij} - q_i q_j)\, \partial E / \partial q_j$, wherein E is the error function $E = \frac{1}{2} \sum_i (q_i - \hat q_i)^2$ and $\hat q_i$ is the expected unit direction vector of the target on the plane.
4. The method according to any of claims 1 to 3, wherein after S105, the method further comprises:
according to the formula $T \approx z K^{-1} (u, v, 1)^T$, with $K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$, the three-dimensional space coordinate T of the shooting device relative to the target is approximated, wherein $c_x, c_y$ are the principal-point coordinates of the shooting device, $f_x, f_y$ are its pixel focal lengths, $z \approx f_x D / \Delta u \approx f_y D / \Delta v$, D is the diameter of the target, Δu and Δv are respectively the width and height of the target as identified on the image, u and v are the center point of the target on the image, and z is the length of the target projected along the line-of-sight direction in the shooting device coordinate system.
5. The method according to any of claims 1 to 3, wherein after S105, the method further comprises:
constructing a group of vectors Z related to the position and the posture of the target three-dimensional space;
receiving a target image I shot by a shooting device and a coordinate r of a circumscribed rectangular frame of a target in the image;
using machine learning on N groups of sample data $(I_1, r_1, z_1), \ldots, (I_N, r_N, z_N)$, optimizing the neural network model parameter W according to the following neural network model equation to obtain the optimized neural network model parameter W,
f(W, I_1, r_1) = z_1
...
f(W, I_N, r_N) = z_N;
substituting the optimized neural network model parameter W, a newly received target image I shot by the shooting device and the corresponding rectangular frame coordinates r into the neural network model equation f(W, I, r) = Z to obtain a vector Z;
and calculating to obtain the three-dimensional space coordinate T of the shooting device relative to the target through the vector Z and the three-dimensional space attitude R of the shooting device relative to the target.
6. A device for detecting a three-dimensional spatial attitude of a photographing device, the device comprising:
a construction module, configured to construct a group of vectors q related to the target three-dimensional space attitude, wherein the vector q is a quaternion, the coordinates of n feature points on an image, a rotation vector or a rotation matrix, n being greater than or equal to 3; when the plane determined by two dimensions of the three-dimensional space is perpendicular to the line-of-sight direction of the shooting device, the vector q is a binary number;
the receiving module is used for receiving a target image I shot by the shooting device;
an optimization module, configured to use machine learning on the sample set formed by N groups of sample data $(I_1, q_1), \ldots, (I_N, q_N)$, wherein $I_1$ in the sample data denotes the target image shot by the shooting device when the vector related to the target three-dimensional space attitude is $q_1$, and $I_N$ denotes the target image shot when that vector is $q_N$, to optimize a neural network model parameter W according to a neural network model equation, obtaining an optimized neural network model parameter W; the neural network model equation is
f(W, I_1) = q_1
...
f(W, I_N) = q_N;
The vector calculation module is used for substituting the optimized neural network model parameters W and a newly received target image I shot by the shooting device into a neural network model equation to obtain a vector q;
and the resolving module is used for resolving through the vector q to obtain the three-dimensional space attitude R of the shooting device relative to the target.
7. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of detecting a three-dimensional spatial attitude of a photographing apparatus according to any one of claims 1 to 5.
8. A method for detecting the longitude and the latitude of a target is characterized by comprising the following steps:
S201, obtaining a three-dimensional space attitude R of the shooting device relative to a target and a three-dimensional space coordinate T of the shooting device relative to the target according to the method for detecting the three-dimensional space attitude of the shooting device of claim 4 or 5;
S202, inversely calculating the three-dimensional space coordinate of the target relative to the shooting device from the three-dimensional space attitude R of the shooting device relative to the target and the three-dimensional space coordinate T of the shooting device relative to the target;
S203, indirectly solving the coordinate To of the target in the earth geocentric coordinate system from the three-dimensional space coordinate of the target relative to the shooting device and the coordinate Tx and attitude Rx of the shooting device relative to the earth geocentric coordinate system, and obtaining the longitude and latitude of the target directly from To.
9. The method of claim 8, wherein said S202 further comprises the steps of:
inversely calculating the three-dimensional space attitude of the target relative to the shooting device according to the three-dimensional space attitude R of the shooting device relative to the target;
the step S203 further includes the steps of:
according to the three-dimensional space attitude $R_T$ of the target relative to the shooting device and the attitude Rx of the shooting device relative to the earth geocentric coordinate system, indirectly calculating the attitude Ro of the target relative to the earth geocentric coordinate system.
10. The method of claim 8, wherein the coordinate To of the target in the earth geocentric coordinate system is solved as follows:
$T_o = T_x + R_x^{-1} T'$, wherein Tx is obtained by the GPS of the shooting device, Rx is obtained by a gyroscope, magnetometer or accelerometer bound to the shooting device, T' is the three-dimensional space coordinate of the target relative to the shooting device, $R_x = R_g R_V$, Rg is the attitude data relative to the northeast (ENU) coordinate system measured by the gyroscope of the shooting device, and $R_V$ is the rotation between the local ENU frame and the geocentric frame, determined by φ, the longitude of the shooting device, and θ, its latitude.
11. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the method for detecting the longitude and latitude of a target according to any one of claims 8 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810143272.5A CN108444452B (en) | 2018-02-11 | 2018-02-11 | Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108444452A CN108444452A (en) | 2018-08-24 |
CN108444452B true CN108444452B (en) | 2020-11-17 |
Family
ID=63192493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810143272.5A Expired - Fee Related CN108444452B (en) | 2018-02-11 | 2018-02-11 | Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108444452B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109377525B (en) * | 2018-09-13 | 2021-08-20 | 武汉雄楚高晶科技有限公司 | Three-dimensional coordinate estimation method of shooting target and shooting equipment |
CN110068326B (en) * | 2019-04-29 | 2021-11-30 | 京东方科技集团股份有限公司 | Attitude calculation method and apparatus, electronic device, and storage medium |
CN112949466B (en) * | 2021-02-26 | 2022-11-22 | 重庆若上科技有限公司 | Video AI smoke pollution source identification and positioning method |
CN113324528B (en) * | 2021-05-18 | 2023-04-07 | 武汉大学 | Close-range photogrammetry target positioning method and system with known camera station position |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102778224A (en) * | 2012-08-08 | 2012-11-14 | 北京大学 | Method for aerophotogrammetric bundle adjustment based on parameterization of polar coordinates |
CN104748751A (en) * | 2013-12-29 | 2015-07-01 | 刘进 | Calculating method of attitude matrix and positioning navigation method based on attitude matrix |
CN106679648A (en) * | 2016-12-08 | 2017-05-17 | 东南大学 | Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm |
CN106780569A (en) * | 2016-11-18 | 2017-05-31 | 深圳市唯特视科技有限公司 | A kind of human body attitude estimates behavior analysis method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3440428B1 (en) * | 2016-04-08 | 2022-06-01 | Orbital Insight, Inc. | Remote determination of quantity stored in containers in geographical region |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
2021-09-10 | TR01 | Transfer of patent right | Patentee after: Foshan Shixin Intelligent Technology Co.,Ltd., room 218-219, building 1, No. 28, East 1st block, Jiansha Road, Danzao Town, Nanhai District, Foshan City, Guangdong Province, 528200. Patentee before: WUHAN CHUXIONG GAOJING TECHNOLOGY Co.,Ltd., Building 2, Wulipu Wuke dormitory, Hanyang District, Wuhan City, Hubei Province, 430000.
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20201117