
CN105335977B - The localization method of camera system and target object - Google Patents

The localization method of camera system and target object

Info

Publication number
CN105335977B
CN105335977B (application CN201510711384.2A)
Authority
CN
China
Prior art keywords
image
target
layer
camera
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510711384.2A
Other languages
Chinese (zh)
Other versions
CN105335977A (en)
Inventor
黑光月
袁肇飞
曾庆彬
邹文艺
晋兆龙
陈卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN201510711384.2A priority Critical patent/CN105335977B/en
Publication of CN105335977A publication Critical patent/CN105335977A/en
Application granted granted Critical
Publication of CN105335977B publication Critical patent/CN105335977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention provides a target object positioning method for a camera system, the camera system comprising: a first camera for acquiring a first image; and a second camera for acquiring a second image. The positioning method comprises: a. acquiring the first image and the second image containing a target object; b. acquiring a first target image from the first image, the first target image containing the target object; c. calculating an initial homography matrix according to the calibration information of the first camera and the second camera and the state information of the second camera; d. mapping the second image into the first image according to the initial homography matrix to obtain a second target image; e. performing optical flow matching on the first target image and the second target image and calculating optical flow information; f. calculating a corrected homography matrix according to the optical flow information; g. mapping the first target image into the second image according to the corrected homography matrix to obtain a corrected second target image, so as to position the target object in the second image.

Description

Camera system and target object positioning method
Technical Field
The invention relates to the technical field of computer application, in particular to a camera system and a target object positioning method.
Background
A PTZ (Pan-Tilt-Zoom) camera, referred to as a dome camera for short, integrates a pan-tilt platform with a camera system. The camera system can zoom the field of view, and the pan-tilt lets the camera rotate horizontally and vertically. A PTZ camera can therefore track and magnify a target in the monitored scene, and plays an important role in surveillance systems.
In a gun-ball linkage system, a wide-angle gun-type (box) camera performs background modeling of the monitored area and detects moving targets, and then controls a dome camera to track them; this control covers the dome camera's P (pan), T (tilt) and Zoom settings as well as its rotation speed. The wide-angle gun-type camera thus detects targets over a large scene, while the dome camera's P, T, Zoom and rotation speed are used to track and magnify individual targets, achieving both wide-field monitoring and preservation of small-target detail.
The image of the dome camera is wide-screen, so the target object often does not match the shape of the dome camera image; some targets, for example, are thin, tall pedestrians. If the whole dome camera image is captured and output to the attribute analysis module, a large amount of invalid information around the target is stored and the attribute analysis result suffers. Saving only the middle part of the dome camera image is not reasonable either: in a practical gun-ball linkage system the dome camera moves under the gun camera's control while the target it tracks keeps moving as well, so the target may appear at different locations in the dome camera image.
Disclosure of Invention
In order to overcome the defects in the prior art, the present invention provides an image capturing system and a target object positioning method, which can quickly and effectively position a target object and acquire a target image that can be used for image processing.
The invention provides a target object positioning method for a camera system, the camera system comprising: a first camera for acquiring a first image, the first image being a wide-angle image of a scene view; and a second camera for acquiring a second image, the second image being a partial enlargement of the scene view. The positioning method comprises the following steps: a. acquiring the first image and the second image containing a target object; b. acquiring a first target image according to the first image, wherein the first target image comprises the target object; c. calculating an initial homography matrix according to the calibration information of the first camera and the second camera and the state information of the second camera; d. mapping the second image to the first image according to the initial homography matrix to obtain a second target image; e. performing optical flow matching on the first target image and the second target image, and calculating optical flow information; f. calculating a corrected homography matrix according to the optical flow information; g. mapping the first target image to the second image according to the corrected homography matrix to obtain a corrected second target image, so as to position the target object in the second image.
Preferably, the step b includes: in the first image, cutting out a rectangular target image centered on the center of the target object as the first target image.
Preferably, the cut-out rectangular target image is a square target image of 96 × 96 pixels.
Preferably, the initial homography matrix is a homography matrix for converting the second image to the first image.
Preferably, the step c includes: c1. obtaining the calibration information, the calibration information including a second homography matrix that converts pixel coordinates of the first image of the first camera to physical coordinates of the second camera; c2. calculating the physical coordinates of the second camera corresponding to pixel coordinates of the second image of the second camera according to the pixel coordinates of the second image and the state information of the second camera; c3. calculating the pixel coordinates of the first image of the first camera corresponding to the pixel coordinates of the second image of the second camera from the inverse of the second homography matrix and the physical coordinates of the second camera; and c4. calculating the initial homography matrix from the pixel coordinates of the first image of the first camera and the pixel coordinates of the second image of the second camera.
Preferably, the step c2 includes: selecting, in the second image, the pixel coordinates of at least four non-collinear pixel points as the pixel coordinates of the second image of the second camera.
Preferably, the step e comprises: e1. calculating Gaussian pyramids of the first target image and the second target image; e2. calculating the gradient information of the Gaussian pyramid of the second target image layer by layer; e3. performing optical flow matching on the first target image and the second target image layer by layer according to the gradient information, and calculating the optical flow information.
Preferably, said step e1 includes: performing a convolution operation on the first target image and the second target image with a Gaussian kernel; and establishing, from the first target image and the second target image, Gaussian pyramids of height 3, denoted a first target image set A and a second target image set B respectively, wherein the first target image set A comprises, in order of decreasing size, a first-layer first target sub-image A1, a second-layer first target sub-image A2 and a third-layer first target sub-image A3; and the second target image set B comprises a first-layer second target sub-image B1, a second-layer second target sub-image B2 and a third-layer second target sub-image B3.
Preferably, the Gaussian kernel is [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T.
Preferably, the first-layer first target sub-image A1 and the first-layer second target sub-image B1 are images of 96 × 96 pixels; the second-layer first target sub-image A2 and the second-layer second target sub-image B2 are images of 48 × 48 pixels; and the third-layer first target sub-image A3 and the third-layer second target sub-image B3 are images of 24 × 24 pixels.
Preferably, said step e2 includes: calculating the gradient information of the second target image set B layer by layer according to the following formula:

∇B_i = (grad_x, grad_y)^T,

where ∇B_i represents the gradient information of the i-th layer second target sub-image Bi, grad_x represents its gradient information in the X direction, grad_y represents its gradient information in the Y direction, and i takes the values 3, 2, 1 in order.
Preferably, said step e3 includes: calculating the optical flow information according to the following formula:

[d_x, d_y]^T = Z⁻¹ · [err_x, err_y]^T, with Z = | Σ_N g_xx   Σ_N g_xy ; Σ_N g_xy   Σ_N g_yy |,

where d_x represents the offset of the i-th layer first target sub-image Ai relative to the i-th layer second target sub-image Bi in the X direction, d_y represents the offset of Ai relative to Bi in the Y direction, and [d_x, d_y]^T represents the optical flow information of Ai and Bi.

Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are calculated respectively according to the following formulas:

Σ_N g_xx = Σ_N grad_x · grad_x;
Σ_N g_yy = Σ_N grad_y · grad_y;
Σ_N g_xy = Σ_N grad_x · grad_y;
err_x = Σ_N Diff · grad_x;
err_y = Σ_N Diff · grad_y;

where N represents a neighborhood of a feature point P, the feature point P is selected from each layer of the first target image set A, and Diff represents the gray-level difference of the pixel points within the neighborhood N.
Preferably, the neighborhood N is a square region centered on the feature point P, with an odd number of pixel points as its side length.
Preferably, the optical flow information of the i-th layer first target sub-image Ai and the i-th layer second target sub-image Bi is calculated from the optical flow information of the (i+1)-th layer first target sub-image Ai+1 and the (i+1)-th layer second target sub-image Bi+1.
Preferably, the step f includes: selecting N pixel points in the first target image as first pixel points; selecting the N pixel points respectively corresponding to the first pixel points in the second target image as second pixel points; correcting the pixel coordinates of the N first pixel points with the optical flow information to obtain N corrected first pixel points; and calculating the corrected homography matrix from the pixel coordinates of the corrected first pixel points and of the second pixel points.
Preferably, the first target image has a first target frame, the first target frame being the circumscribed rectangle of the first target image, and the step g includes: mapping the first target frame into the second image according to the corrected homography matrix, taking the circumscribed rectangle of the mapped target frame as a second target frame, and taking the image in the second target frame as the corrected second target image.
Preferably, the corrected second target image is used for image recognition and image analysis of the target object.
According to still another aspect of the present invention, there is also provided an image pickup system including: a first camera to acquire a first image, the first image being a wide-angle image of a scene view; a second camera for acquiring a second image, the second image being a partial enlargement of the scene view; and a positioning device for controlling the second camera according to the first image to position the target object in the second image by using the positioning method.
Preferably, the first camera is a gun-type camera, and the second camera is a dome camera.
Compared with the prior art, the invention obtains a wide-angle image and a locally magnified image through two cameras, calculates the offset between the images from the mapping and optical flow matching of the wide-angle image and the locally magnified image, and then maps the target image containing the target object in the wide-angle image into the locally magnified image, so as to position the target object in the locally magnified image. The invention uses only the target image containing the target object in the locally magnified image for image processing of the target object. The target image provided by this target object positioning method accurately contains the target object and does not contain a large amount of invalid information that would increase the time and load of image processing.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 shows a schematic view of a camera system according to an embodiment of the invention.
Fig. 2 shows a flow chart of a method of positioning a target object according to an embodiment of the invention.
Fig. 3 shows a first image according to an embodiment of the invention.
Fig. 4 shows a second image according to an embodiment of the invention.
Fig. 5 shows a first target image according to an embodiment of the invention.
Fig. 6 shows a second target image according to an embodiment of the invention.
Fig. 7 shows a corrected second target image according to an embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted.
The following describes an imaging system and a target object positioning method according to the present invention with reference to fig. 1 to 7.
The camera system 100 is preferably a dual camera linkage system that includes a first camera 110, a second camera 120, and a positioning device 130. The first camera 110 is used to acquire a first image 200 of a wide-angle image of a field of view of a scene. Preferably, the first camera 110 is a gun type camera for taking a wide-angle image. The second camera 120 is used to acquire a second image 300. The second image 300 is a partial enlargement of the scene view of the first image 200. The second camera 120 is preferably a dome camera. The positioning device 130 controls the second camera 120 according to the first image 200 to position the target object 900 in the second image 300 by the positioning method provided by the present invention. In one embodiment, the positioning device 130 may be integrated with the first camera 110. In another embodiment, the positioning device 130 may be integrated with the second camera 120. In yet another embodiment, the positioning device 130 is a stand-alone device and communicates with the first camera 110 and the second camera 120 via wired or wireless connections. In other embodiments, the positioning device 130 may also be distributively integrated with the first camera 110 and the second camera 120, respectively, to perform different steps at the first camera 110 and the second camera 120.
The method for locating a target object provided by the invention refers to a flow chart shown in fig. 2. The positioning method comprises the following steps:
s210: the first image 200 and the second image 300 comprising the target object are acquired. This step is obtained by shooting the same scene containing the target object 900 by the first camera 110 and the second camera 120. The second image 300 is a partial magnified view of the first image 200.
S220: a first target image 210 is acquired from the first image 200, the first target image 210 comprising a target object 900.
Specifically, the first target image 210 is a rectangular target image cut out of the first image 200 with the center of the target as the center. The rectangular target image is preferably a square target image. For example, the first target image 210 may be a square target image of 96 × 96 pixels.
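For illustration, a minimal sketch of this cropping step in Python (assuming NumPy; the names `first_image` and `center` are hypothetical, not part of the claimed method) could look like:

```python
import numpy as np

def crop_target(first_image: np.ndarray, center, size: int = 96) -> np.ndarray:
    """Cut a size x size square around `center`, clamped to the image borders."""
    h, w = first_image.shape[:2]
    half = size // 2
    x0 = min(max(center[0] - half, 0), w - size)
    y0 = min(max(center[1] - half, 0), h - size)
    return first_image[y0:y0 + size, x0:x0 + size]
```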
In a particular embodiment, the first target image 210 has a first target frame 220. The first target image 210 is located within the first target frame 220. In this particular embodiment, the positioning device 130 also obtains the position information and the size information of the first target frame 220. The position information may be pixel coordinates of a center point of the first target frame or pixel coordinates of each vertex of the first target frame. The size information may be provided in units of pixel points.
S230: the initial homography matrix is calculated based on the scaling information of the first camera 110 and the second camera 120 and the state information of the second camera 120.
The initial homography matrix is the homography matrix that transforms the second image 300 to the first image 200. Specifically, the initial homography matrix is calculated as follows.
First, the calibration information of the first camera 110 and the second camera 120 is acquired. The calibration information includes a second homography matrix that transforms pixel coordinates of the first image 200 of the first camera 110 to physical coordinates of the second camera 120. The second homography matrix can be obtained in an existing manner, for example according to the calibration method in "An automatic calibration method for a video surveillance system" of patent CN103198487A. The second homography matrix is preferably a 3 × 3 matrix that can transform the pixel coordinates of any point of the first image 200 of the first camera 110 to the physical coordinates of the second camera 120.
Then, the physical coordinates (horizontal and vertical deflection) of the second camera 120 corresponding to pixel coordinates of the second image 300 of the second camera 120 are calculated from those pixel coordinates and the state information of the second camera 120. Specifically, the pixel coordinates of N points are selected on the second image 300, where the N points include at least four non-collinear pixel points.
Then, the pixel coordinates of the first image 200 of the first camera 110 corresponding to the pixel coordinates of the second image 300 of the second camera 120 are calculated from the inverse of the second homography matrix and the physical coordinates of the second camera 120. Finally, an initial homography matrix is calculated based on the pixel coordinates of the first image 200 of the first camera 110 and the pixel coordinates of the second image 300 of the second camera 120.
Specifically, in one embodiment, in order to improve calculation accuracy and efficiency, 5 pixel points are selected in the second image 300: the central pixel of the second image 300 and the pixels near its four vertices. These 5 pixels are denoted p_i (i = 1, ..., 5) and their pixel coordinates are denoted (X_di, Y_di). Then, based on the internal parameters of the second camera 120, the current physical coordinates (P_c, T_c) of the second camera 120 (P_c being the horizontal deflection coordinate and T_c the vertical deflection coordinate) and the current focal length value of the second camera 120, the physical coordinates of the second camera 120 corresponding to these 5 pixels in the current second image 300 are calculated. Next, the pixel coordinates (X_bi, Y_bi) of the corresponding 5 pixels in the first image 200 of the first camera 110 are calculated from the inverse of the second homography matrix, which maps pixel coordinates of the first image 200 to the physical coordinate system of the second camera 120. Finally, from the pixel coordinates (X_di, Y_di) of the 5 points on the second image 300 of the second camera 120 and the pixel coordinates (X_bi, Y_bi) of the corresponding 5 points on the first image 200 of the first camera 110, the initial homography matrix that converts the second image 300 to the first image 200 is calculated.
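For illustration, a minimal sketch of the final sub-step (assuming OpenCV/NumPy; `pts_second` and `pts_first` are hypothetical 5 × 2 arrays standing in for the (X_di, Y_di) and corresponding (X_bi, Y_bi) obtained through the calibration chain described above) could look like:

```python
import cv2
import numpy as np

# Hypothetical correspondences: center of the second image plus four points
# near its vertices, and their counterparts computed in the first image.
pts_second = np.array([[960, 540], [50, 50], [1870, 50],
                       [50, 1030], [1870, 1030]], dtype=np.float32)
pts_first = np.array([[412, 300], [380, 282], [445, 281],
                      [379, 320], [446, 321]], dtype=np.float32)

# Homography converting second-image pixel coordinates to first-image ones;
# method 0 is a plain least-squares fit, adequate for 5 hand-picked points.
H_init, _ = cv2.findHomography(pts_second, pts_first, 0)
```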
S240: the second image 300 is mapped into the first image 200 according to the initial homography matrix, and a second target image 310 is acquired.
Specifically, the second image 300 is mapped onto the scale of the first image 200 according to the initial homography matrix, as shown in fig. 6. Where part of the mapped area has no original data from the second image 300, it is filled with a gray value of 0. Image interpolation may be used when mapping the second image 300; to make the interpolated image more faithful, bicubic interpolation is preferred in the invention.
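A sketch of this mapping step (assuming OpenCV, and reusing the hypothetical `H_init`, `first_image` and `second_image` names from the previous sketches) could be:

```python
import cv2

h, w = first_image.shape[:2]
# Warp the second image into the first image's coordinate frame; bicubic
# interpolation, with areas lacking source data filled with gray value 0.
second_mapped = cv2.warpPerspective(
    second_image, H_init, (w, h),
    flags=cv2.INTER_CUBIC,
    borderMode=cv2.BORDER_CONSTANT,
    borderValue=0,
)
```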
S250: and performing optical flow matching on the first target image and the second target image, and calculating optical flow information.
Specifically, to handle optical flow matching over large displacements, the first target image 210 and the second target image 310 are processed using Gaussian pyramids. A Gaussian pyramid is an image set in which every image derives from the same original image through successive Gaussian down-sampling. Preferably, in the present embodiment, the first target image 210 and the second target image 310 are convolved with the Gaussian kernel [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T, where T denotes matrix transposition. From the first target image 210 and the second target image 310, Gaussian pyramids of height 3 are built and denoted the first target image set A and the second target image set B respectively. The first target image set A comprises, in order of decreasing size, a first-layer first target sub-image A1, a second-layer first target sub-image A2 and a third-layer first target sub-image A3. The second target image set B comprises a first-layer second target sub-image B1, a second-layer second target sub-image B2 and a third-layer second target sub-image B3. In a specific embodiment, the first-layer sub-images A1 and B1 are 96 × 96 pixel images, the second-layer sub-images A2 and B2 are 48 × 48 pixel images, and the third-layer sub-images A3 and B3 are 24 × 24 pixel images.
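A minimal sketch of this pyramid construction (assuming OpenCV/NumPy; `first_target` and `second_target` are hypothetical 96 × 96 grayscale crops) could look like:

```python
import cv2
import numpy as np

def gaussian_pyramid(img: np.ndarray, levels: int = 3):
    """Build a height-3 pyramid with the separable 5-tap kernel above."""
    k = np.array([1/16, 1/4, 3/8, 1/4, 1/16], dtype=np.float32)
    kernel = np.outer(k, k)  # 5 x 5 Gaussian kernel from the outer product
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        blurred = cv2.filter2D(pyr[-1], -1, kernel)
        pyr.append(blurred[::2, ::2])  # downsample by 2: 96 -> 48 -> 24
    return pyr  # pyr[0]: layer 1 (96 x 96), ..., pyr[2]: layer 3 (24 x 24)

pyr_A = gaussian_pyramid(first_target)   # first target image set A
pyr_B = gaussian_pyramid(second_target)  # second target image set B
```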
After the Gaussian pyramids of the first target image 210 and the second target image 310 are established, the gradient information of the Gaussian pyramid of the second target image is calculated layer by layer. Specifically, the gradient information of the second target image set B is calculated layer by layer according to the following formula:

∇B_i = (grad_x, grad_y)^T,

where ∇B_i represents the gradient information of the i-th layer second target sub-image Bi, grad_x represents its gradient information in the X direction, grad_y represents its gradient information in the Y direction, i takes the values 3, 2, 1 in order, and T denotes matrix transposition. In some embodiments, grad_x and grad_y may be computed with the central difference, the Scharr operator, or similar methods; the invention preferably uses the Scharr operator.
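A sketch of the per-layer gradient computation with the Scharr operator (assuming OpenCV and the hypothetical `pyr_B` pyramid from the previous sketch) could be:

```python
import cv2

grads = []
for B_i in pyr_B:
    # Normalized Scharr derivatives; grad_x and grad_y follow the notation
    # of the gradient formula for the i-th layer sub-image B_i above.
    grad_x = cv2.Scharr(B_i, cv2.CV_32F, 1, 0) / 32.0
    grad_y = cv2.Scharr(B_i, cv2.CV_32F, 0, 1) / 32.0
    grads.append((grad_x, grad_y))
```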
Then, optical flow matching is performed on the first target sub-images and the second target sub-images layer by layer according to the gradient information, and the optical flow information is calculated.
The principle of optical flow matching is expressed by the following equation:

Z d = err,

where Z represents the gradient matrix, d the offset and err the difference. The gradient matrix Z, the offset d and the difference err are respectively:

Z = | Σ_N g_xx   Σ_N g_xy |
    | Σ_N g_xy   Σ_N g_yy |,   d = [d_x, d_y]^T,   err = [err_x, err_y]^T,

where d_x represents the offset of the i-th layer first target sub-image Ai relative to the i-th layer second target sub-image Bi in the X direction, d_y represents the offset of Ai relative to Bi in the Y direction, and [d_x, d_y]^T is the optical flow information of Ai and Bi.

Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are calculated respectively according to the following formulas:

Σ_N g_xx = Σ_N grad_x · grad_x;
Σ_N g_yy = Σ_N grad_y · grad_y;
Σ_N g_xy = Σ_N grad_x · grad_y;
err_x = Σ_N Diff · grad_x;
err_y = Σ_N Diff · grad_y,

where N represents a neighborhood of a feature point P, the feature point P is selected from each layer of the first target image set A, and Diff represents the gray-level difference of the pixel points within the neighborhood N. The neighborhood N is a square area centered on the feature point P with an odd number of pixel points as side length; preferably, N is a 15 × 15 pixel area.

Substituting the gradient matrix Z, the offset d and the difference err into Z d = err gives the optical flow information d = Z⁻¹ err. Expanding the inverse of the 2 × 2 gradient matrix yields the optical flow information in the X direction and in the Y direction:

d_x = (Σ_N g_yy · err_x − Σ_N g_xy · err_y) / det(Z),
d_y = (Σ_N g_xx · err_y − Σ_N g_xy · err_x) / det(Z),

where det(Z) = Σ_N g_xx · Σ_N g_yy − (Σ_N g_xy)² represents the value of the determinant of the gradient matrix Z.
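A sketch of this single-layer solve (assuming NumPy; `grad_x`, `grad_y` and `diff` are hypothetical arrays over the 15 × 15 neighborhood N of the feature point P, with `diff` holding the gray-level difference Diff) could look like:

```python
import numpy as np

def solve_flow(grad_x, grad_y, diff):
    """Solve Z d = err over the neighborhood N for d = [dx, dy]."""
    gxx = np.sum(grad_x * grad_x)
    gyy = np.sum(grad_y * grad_y)
    gxy = np.sum(grad_x * grad_y)
    err_x = np.sum(diff * grad_x)
    err_y = np.sum(diff * grad_y)
    det = gxx * gyy - gxy * gxy  # det(Z); near zero on untextured patches
    if abs(det) < 1e-9:
        return np.zeros(2, dtype=np.float32)
    dx = (gyy * err_x - gxy * err_y) / det
    dy = (gxx * err_y - gxy * err_x) / det
    return np.array([dx, dy], dtype=np.float32)
```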
specifically, in this step, a precise solution of the feature point P may be calculated by using a Newton-Raphson iterative method, so as to obtain the center point P of the first target image 210cThe optical flow information of (1) is recorded as [ d ]x,dy]. The above formula describes a specific optical flow information calculation method for each layer of image, and in the pyramid layer, in calculating optical flow information of a certain layer, the result of optical flow information of an upper layer is required as an initial optical flow estimation value of a lower layer, where the initial optical flow estimation value of the uppermost layer image is 0. The first calculation is the optical flow information of the top image of the Gaussian pyramid, and the output result of the layer is used as the input of the next layer. The recursive operation between two adjacent layers is now described using symbols. Suppose that two adjacent layers are L and L +1, respectively, and that the optical flow information of the L +1 th layer has been calculated as dL+1Then, an initial estimated value g of optical flow at the L-th layer is calculated from the L + 1-th layer imageLThe expression of (a) is:
gL=2(gL+1+dL+1)
wherein, the algorithm is assumed to have no credible initial estimation value of optical flow at the highest layer, namely:
according toIn the formula, when the optical flow vector is calculated in the L-th layer, the pixel coordinate of the feature point of the layer target image is translated by g without searching and matching at the feature point position coordinate of the layer target imageLAnd searching for matching, and calculating the minimum position of the residual error, so that the optical flow vector searched by each layer is small displacement.
The same method can calculate the displacement vector d of the L-1 layerL+1The process continues until the bottom layer L of the image is 1, i.e. the original image is reached, at which time both the image and the displacement vector are at the original resolution. The lowest layer of the optical flow displacement vector is:
d=g1+d1
it can also be represented by the optical flow vector of each layer:
by the operation, the displacement of the feature point P is ensured to be small in the process of calculating the optical flow information of each layer of the Gaussian King pyramid.
S260: A corrected homography matrix is calculated from the optical flow information. The corrected homography matrix is used to map the first target image 210 into the second image 300.
Specifically, the corrected homography matrix is calculated in this step as follows. First, N pixel points are selected in the first target image 210 as first pixel points, and the N corresponding pixel points are selected in the second target image 310 as second pixel points. The pixel coordinates of the N first pixel points are then corrected using the optical flow information to obtain N corrected first pixel points. The corrected homography matrix is calculated from the pixel coordinates of the corrected first pixel points and the second pixel points.
In one embodiment, N is preferably 5. This step first selects 5 pixel points in the first target image 210 as first pixel points p_bi, and selects the 5 corresponding pixel points in the second target image 310 as second pixel points p_di. The optical flow information [d_x, d_y] calculated in step S250 is subtracted from the pixel coordinates of the 5 first pixel points to obtain the pixel coordinates of 5 corrected first pixel points p_bi′. The corrected homography matrix is then calculated from the correspondences between the corrected first pixel points p_bi′ and the second pixel points p_di.
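A sketch of this correction step (assuming OpenCV/NumPy; `p_b` and `p_d` are hypothetical 5 × 2 point arrays, and `dx`, `dy` the optical flow from step S250) could be:

```python
import cv2
import numpy as np

flow = np.array([dx, dy], dtype=np.float32)
p_b_corr = p_b - flow  # corrected first pixel points p_bi'
# Corrected homography mapping the first image into the second image.
H_corr, _ = cv2.findHomography(p_b_corr, p_d, 0)
```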
S270: the first target image 210 is mapped into the second image 300 according to the modified homography matrix, and a modified second target image is acquired to locate the target object 900 in the second image 300.
Specifically, the first target image 210 has a first target frame 220. The first target frame 220 is a circumscribed rectangle of the first target image 210. The method also comprises the following steps: according to the modified homography matrix, the first target frame 220 is mapped to the second image 300, and the obtained circumscribed rectangle of the mapped target frame is used as the second target frame 320. The image in the second target frame 320 is taken as the modified second target image. In some embodiments, the mapping target frame is not a rectangle, and therefore, it is preferable to use a circumscribed rectangle of the mapping target frame as the second target frame 320. The modification of the second target image within the second target box 320 may be used for subsequent image recognition and image analysis of the target object 900.
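A sketch of this final mapping (assuming OpenCV/NumPy; `x0, y0, x1, y1` are the hypothetical corners of the first target frame 220) could look like:

```python
import cv2
import numpy as np

corners = np.array([[[x0, y0]], [[x1, y0]],
                    [[x1, y1]], [[x0, y1]]], dtype=np.float32)
# Map the frame corners through the corrected homography; the result is a
# general quadrilateral, so its circumscribed (bounding) rectangle is used.
mapped = cv2.perspectiveTransform(corners, H_corr)
bx, by, bw, bh = cv2.boundingRect(mapped)
corrected_second_target = second_image[by:by + bh, bx:bx + bw]
```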
The present invention accurately locates the position and size of the target object 900 in the second image 300 by performing the Gaussian pyramid and optical flow matching operations on the first and second images, and reduces invalid information in the finally obtained corrected second target image.
Compared with the prior art, the invention obtains a wide-angle image and a locally magnified image through two cameras, calculates the offset between the images from the mapping and optical flow matching of the wide-angle image and the locally magnified image, and then maps the target image containing the target object in the wide-angle image into the locally magnified image, so as to position the target object in the locally magnified image. The invention uses only the target image containing the target object in the locally magnified image for image processing of the target object. The target image provided by this target object positioning method accurately contains the target object and does not contain a large amount of invalid information that would increase the time and load of image processing.
Exemplary embodiments of the present invention are specifically illustrated and described above. It is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.

Claims (19)

1. A method of locating a target object for use in a camera system, the camera system comprising:
a first camera to acquire a first image, the first image being a wide-angle image of a scene view; and
a second camera for acquiring a second image, the second image being a partial enlargement of the scene view;
the positioning method comprises the following steps:
a. acquiring the first image and the second image containing a target object;
b. acquiring a first target image according to the first image, wherein the first target image comprises the target object;
c. calculating an initial homography matrix according to the calibration information of the first camera and the second camera and the state information of the second camera;
d. mapping the second image to the first image according to the initial homography matrix to obtain a second target image;
e. performing optical flow matching on the first target image and the second target image, and calculating optical flow information;
f. calculating a corrected homography matrix according to the optical flow information;
g. mapping the first target image to the second image according to the corrected homography matrix to obtain a corrected second target image, so as to position the target object in the second image.
2. The positioning method according to claim 1, wherein the step b comprises:
in the first image, cutting out a rectangular target image centered on the center of the target object as the first target image.
3. The method of claim 2, wherein the truncated rectangular target image is a 96 x 96 pixel square target image.
4. The method of claim 1, wherein the initial homography matrix is a homography matrix that transforms the second image to the first image.
5. The positioning method according to claim 4, wherein the step c comprises:
c1. obtaining the calibration information, the calibration information including a second homography matrix in which pixel coordinates of a first image of the first camera are converted to physical coordinates of the second camera;
c2. calculating physical coordinates of the second camera corresponding to the pixel coordinates of the second image of the second camera according to the pixel coordinates of the second image of the second camera and the state information of the second camera;
c3. calculating pixel coordinates of a first image of the first camera corresponding to pixel coordinates of a second image of the second camera from an inverse matrix of the second homography matrix and physical coordinates of the second camera; and
c4. calculating the initial homography matrix according to the pixel coordinates of the first image of the first camera and the pixel coordinates of the second image of the second camera.
6. The positioning method according to claim 5, wherein said step c2 includes:
selecting, according to the second image, the pixel coordinates of at least four non-collinear pixel points as the pixel coordinates of the second image of the second camera.
7. The positioning method according to claim 1, wherein the step e comprises:
e1. calculating a Gaussian pyramid of the first target image and the second target image;
e2. calculating gradient information of the Gaussian pyramid of the second target image layer by layer;
e3. performing optical flow matching on the first target image and the second target image layer by layer according to the gradient information, and calculating the optical flow information.
8. The positioning method according to claim 7, wherein said step e1 includes:
performing convolution operation on the first target image and the second target image by using a Gaussian core;
establishing a Gaussian pyramid with the height of 3 according to the first target image and the second target image, and respectively recording the Gaussian pyramid as a first target image set A and a second target image set B, wherein,
the first target image set A comprises a first layer of first target sub-images A with gradually reduced sizes1The second layer of the first target sub-image A2The third layer of the first target subimage A3
The second target image set B comprises first-layer second target sub-images B with gradually-reduced sizes1A second layer of a second target sub-image B2A third layer of a second target subimage B3
9. The method of claim 8, wherein the Gaussian kernel is [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T.
10. The positioning method of claim 8,
the first layer first target sub-image A1And the first layer second target subimage B1An image with 96 × 96 pixels;
the second layer first target sub-image A2And the second layer second target subimage B248 images of pixels 48 by 48;
the third layer of the first target sub-image A3And the third layer of the second target subimage B3Is an image of 24 x 24 pixels.
11. The positioning method according to claim 8, wherein said step e2 includes:
calculating the gradient information of the second target image set B layer by layer according to the following formula:

∇B_i = (grad_x, grad_y)^T,

wherein ∇B_i represents the gradient information of the i-th layer second target sub-image Bi, grad_x represents the gradient information of Bi in the X direction, grad_y represents the gradient information of Bi in the Y direction, and i is 3, 2, 1 in order.
12. The positioning method according to claim 11, wherein said step e3 includes:
calculating the optical flow information according to the following formula:

[d_x, d_y]^T = Z⁻¹ · [err_x, err_y]^T, with Z = | Σ_N g_xx   Σ_N g_xy ; Σ_N g_xy   Σ_N g_yy |,

wherein d_x represents the offset of the i-th layer first target sub-image Ai and the i-th layer second target sub-image Bi in the X direction, d_y represents the offset of the i-th layer first target sub-image Ai and the i-th layer second target sub-image Bi in the Y direction, and [d_x, d_y]^T represents the optical flow information of the i-th layer first target sub-image Ai and the i-th layer second target sub-image Bi;

Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are respectively calculated according to the following formulas:

Σ_N g_xx = Σ_N grad_x · grad_x;
Σ_N g_yy = Σ_N grad_y · grad_y;
Σ_N g_xy = Σ_N grad_x · grad_y;
err_x = Σ_N Diff · grad_x;
err_y = Σ_N Diff · grad_y;

wherein N represents a neighborhood of a feature point P, the feature point P is selected from each layer of the first target image set A, and Diff represents the gray-level difference of pixel points within the neighborhood N.
13. The method according to claim 12, wherein the neighborhood N is a square region having a feature point P as a center and an odd number of pixel points as side lengths.
14. The positioning method according to claim 12, wherein the optical flow information of the i-th layer first target sub-image Ai and the i-th layer second target sub-image Bi is calculated according to the optical flow information of the (i+1)-th layer first target sub-image Ai+1 and the (i+1)-th layer second target sub-image Bi+1.
15. The positioning method according to claim 1, wherein the step f comprises:
selecting N pixel points from the first target image as first pixel points;
selecting N pixel points corresponding to the first pixel points respectively from the second target image as second pixel points;
correcting the pixel coordinates of the N first pixel points by using the optical flow information to obtain N corrected first pixel points;
and calculating the corrected homography matrix by using the pixel coordinates of the corrected first pixel points and the second pixel points.
16. The positioning method according to claim 1, wherein the first target image has a first target frame, the first target frame being a circumscribed rectangle of the first target image, the step g comprising:
mapping the first target frame into the second image according to the corrected homography matrix, taking the circumscribed rectangle of the mapped target frame as a second target frame, and taking the image in the second target frame as the corrected second target image.
17. The localization method according to any of claims 1 to 16, wherein the corrected second target image is used for image recognition and image analysis of the target object.
18. An image pickup system, comprising:
a first camera to acquire a first image, the first image being a wide-angle image of a scene view;
a second camera for acquiring a second image, the second image being a partial enlargement of the scene view; and
a positioning device for controlling the second camera to position the target object in the second image according to the first image by using the positioning method according to any one of claims 1 to 17.
19. The camera system of claim 18, wherein the first camera is a gun camera and the second camera is a dome camera.
CN201510711384.2A 2015-10-28 2015-10-28 The localization method of camera system and target object Active CN105335977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510711384.2A CN105335977B (en) 2015-10-28 2015-10-28 The localization method of camera system and target object

Publications (2)

Publication Number Publication Date
CN105335977A CN105335977A (en) 2016-02-17
CN105335977B (en) 2018-05-25

Family

ID=55286482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510711384.2A Active CN105335977B (en) 2015-10-28 2015-10-28 The localization method of camera system and target object

Country Status (1)

Country Link
CN (1) CN105335977B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107650122B (en) * 2017-07-31 2019-11-05 宁夏巨能机器人股份有限公司 A kind of robot hand positioning system and its localization method based on 3D visual identity
CN109242769B (en) * 2018-12-13 2019-03-19 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN109506658B (en) * 2018-12-26 2021-06-08 广州市申迪计算机系统有限公司 Robot autonomous positioning method and system
CN111698455B (en) * 2019-03-13 2022-03-11 华为技术有限公司 Method, device and medium for controlling linkage of ball machine and gun machine
CN111800604A (en) * 2020-06-12 2020-10-20 深圳英飞拓科技股份有限公司 Method and device for detecting human shape and human face data based on gun and ball linkage
CN111800605A (en) * 2020-06-15 2020-10-20 深圳英飞拓科技股份有限公司 Gun-ball linkage based vehicle shape and license plate transmission method, system and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8488001B2 (en) * 2008-12-10 2013-07-16 Honeywell International Inc. Semi-automatic relative calibration method for master slave camera control

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0714081A1 (en) * 1994-11-22 1996-05-29 Sensormatic Electronics Corporation Video surveillance system
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
CN102148965A (en) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 Video monitoring system for multi-target tracking close-up shooting
CN102932598A (en) * 2012-11-06 2013-02-13 苏州科达科技股份有限公司 Method for intelligently tracking image on screen by camera
CN103024350A (en) * 2012-11-13 2013-04-03 清华大学 Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN103105858A (en) * 2012-12-29 2013-05-15 上海安维尔信息科技有限公司 Method capable of amplifying and tracking goal in master-slave mode between fixed camera and pan tilt zoom camera
CN103198487A (en) * 2013-04-15 2013-07-10 厦门博聪信息技术有限公司 Automatic calibration method for video monitoring system
CN104574425A (en) * 2015-02-03 2015-04-29 中国人民解放军国防科学技术大学 Calibration and linkage method for primary camera system and secondary camera system on basis of rotary model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chung-Hao Chen et al., "Heterogeneous Fusion of Omnidirectional and PTZ Cameras for Multiple Object Tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 8, pp. 1052-1063, 9 July 2008. *
Shi Hao et al., "A Calibration Method for Fisheye PTZ Master-Slave Surveillance Systems" (一种用于鱼眼PTZ主从监控系统的标定方法), Journal of System Simulation (系统仿真学报), vol. 25, no. 10, pp. 2412-2417, 8 October 2013. *

Also Published As

Publication number Publication date
CN105335977A (en) 2016-02-17

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant