CN111161143A - Optical positioning technology-assisted operation visual field panoramic stitching method - Google Patents
- Publication number
- CN111161143A CN111161143A CN201911295009.9A CN201911295009A CN111161143A CN 111161143 A CN111161143 A CN 111161143A CN 201911295009 A CN201911295009 A CN 201911295009A CN 111161143 A CN111161143 A CN 111161143A
- Authority
- CN
- China
- Prior art keywords
- matching point
- target image
- optical positioning
- image
- optical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Abstract
The invention discloses an optical positioning technology-assisted operation visual field panoramic stitching method. In the method, optical locating points based on the binocular vision principle are marked in the high-attention area of the surgical field, SIFT feature matching point pairs are combined with them, and the AANAP algorithm, together with a global similarity matrix optimized with depth information, realizes stitching of panoramic images of the surgical field. Specifically, a local homography matrix assisted by the optical locating points is established, making up for the low matching precision of SIFT feature points and improving the stitching quality in the high-attention area of the field of view. On this basis, a global similarity matrix optimized with depth information is established to adjust the non-overlapping area of the target image, reducing large-angle distortion there and compensating camera motion. The method effectively addresses the insufficient precision of global stitching from SIFT feature matching alone, reduces stitching ghosting in high-attention areas of the operative field, and improves the naturalness of the stitched image.
Description
Technical Field
The invention relates to the technical fields of medical surgical navigation and image stitching, in particular to optical positioning technology and panoramic stitching of the surgical field, and specifically to an optical positioning technology-assisted operation visual field panoramic stitching method.
Background
Currently, Virtual Reality (VR) technology is widely used in many fields, particularly in medicine, for example in telemedicine and surgical teaching. When surgical teaching is combined with VR, medical students can watch panoramic surgery videos presented in a VR environment, catch the details of an expert's procedure, and build clinical experience. One way to provide realistic, immersive VR content is to produce high-quality panoramic video, and such VR applications are built on images, that is, on panoramic image stitching. During surgery, the surgeon's body may obstruct the surgical field of view; in addition, the complexity of the operating room and the arrangement of multiple medical devices limit the camera's shooting angles, so that part of the surgical field cannot be presented visually. Panoramic stitching of a multi-view surgical field is therefore important.
Image stitching originated in photography and drawing and is widely applied in digital video, aviation, medical image processing and other fields. Early stitching techniques based on the SIFT algorithm, which align images with only a single global transformation matrix, are prone to misalignment and ghosting, especially when stitching complex close-range scenes such as the surgical field. To obtain better stitching quality and make up for the insufficient alignment capability of a global matrix, stitching techniques based on grid division have been proposed, with the Adaptive As-Natural-As-Possible (AANAP) algorithm as the representative. However, on low-texture images these algorithms often fail to achieve satisfactory alignment. Because the surgical field is complex, with large deformation, strong illumination, and reflections from positioning instruments and gloves, the SIFT feature points extracted in the high-attention area of the field are neither uniformly nor densely distributed, and the stitching precision requirement is not met.
Disclosure of Invention
The purpose of the invention is realized by the following technical scheme.
The method uses optical positioning technology to mark optical locating points, strictly matched one to one, on the reference and target images in the high-attention area of the surgical field, and realizes image mapping by combining them with SIFT feature points; a global similarity matrix optimized with depth information is applied to the non-overlapping area of the target image to compensate camera motion.
Specifically, according to an aspect of the present invention, an optical positioning assisted surgery field panoramic stitching method is provided, including:
acquiring optical positioning matching point pairs of a reference image and a target image, marking optical positioning points based on a binocular vision principle in a high-attention area of an operation visual field, and matching to obtain the optical positioning matching point pairs of the reference image and the target image;
extracting SIFT feature points of the reference image and the target image and performing feature matching to obtain SIFT feature matching point pairs of the reference image and the target image;
acquiring a local homography matrix, and calculating the local homography matrix based on grid division according to the optical positioning matching point pairs and SIFT feature matching point pairs of the reference image and the target image;
and image stitching, namely performing grid transformation on the reference image and the target image based on the local homography matrices to stitch the overlapping areas, and applying a global similarity matrix classified by depth information to the non-overlapping area of the target image to realize panoramic stitching of the operation visual field.
Further, the optical locating point is calculated by utilizing a stereoscopic vision principle based on the coordinates of the tip point of the near infrared optical tracking and locating instrument in a world coordinate system.
Further, the specific method for acquiring the matching point pair of the reference image and the target image is as follows:
acquiring an operation visual field reference image and a target image with an overlapping area through a binocular camera, acquiring points in an operation visual field high-attention area by using a positioning instrument, and tracking and positioning a three-dimensional space coordinate of a tip point of the instrument;
obtaining pixel coordinates of the tip point on a reference image and a target image according to a binocular vision principle, and obtaining an optical positioning matching point pair through stereo matching;
and extracting feature points on the reference image and the target image by using an SIFT algorithm, and performing feature matching to obtain an SIFT feature matching point pair.
Further, the optical positioning matching point pairs are matched on the reference image and the target image one by one.
Further, the acquisition of the local homography matrix is realized based on a moving direct linear transformation method.
Further, the specific steps of obtaining the local homography matrix are as follows: and performing grid division on the image, and calculating a local homography transformation matrix of each grid based on a moving direct linear transformation method according to the optical positioning matching point pairs and the SIFT feature matching point pairs.
Further, the moving direct linear transformation method introduces, on the basis of the direct linear transformation method, distance weight factors between the optical locating points, the SIFT feature points and the grid center point.
Further, the depth information classification divides the optical positioning matching point pairs and the SIFT feature matching point pairs into front, middle and rear classes according to their spatial distance from the optical center of the camera.
Further, three similarity matrices are generated from the front, middle and rear classes of matching point pairs, and the similarity matrix with the minimum rotation angle among the three is selected as the global similarity matrix.
Further, the image stitching specifically comprises the following steps:
obtaining depth information of the optical positioning matching point pair and the SIFT feature matching point pair through a stereoscopic vision principle;
dividing the optical positioning matching point pairs and the SIFT feature matching point pairs into front, middle and rear classes according to their spatial distance from the optical center of the camera, calculating a similarity matrix for each class of point pairs, and selecting the similarity matrix with the minimum rotation angle as the global similarity matrix;
and transforming the reference image and the target image by combining the local homography matrix to realize the panoramic stitching of the operation visual field.
The invention has the advantages that: optical positioning matching point pairs, strictly matched one to one on the reference and target images, are marked in the high-attention area of the surgical field and combined with the SIFT feature matching point pairs, reducing stitching ghosting and improving subjective stitching quality; the global similarity matrix optimized with depth information is applied to the non-overlapping area of the target image, further compensating camera motion, reducing large-angle deformation and improving the naturalness of the stitching result.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 shows a flow chart of the operation field panoramic stitching method implemented by the invention.
Fig. 2 shows the simulated surgical scene reference and target images acquired by a stereo camera.
Fig. 3 shows the distribution of optical locating points on the reference and target images of the operative field.
Fig. 4 shows the distribution of SIFT feature points on the reference and target images of the operative field.
Fig. 5 shows the depth classification information of the SIFT feature points and optical locating points.
Fig. 6 shows the classification of SIFT feature points and optical locating points obtained from the depth classification information.
Fig. 7 shows the experimental results of the present invention.
Fig. 8 shows a comparison between the present invention and other methods.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention combines optical positioning technology to improve the subjective quality of panoramic stitching of the surgical field. Optical positioning offers high positioning precision and imposes no electromagnetic radiation on the patient, and has become one of the main directions for spatial positioning in surgical navigation systems. An optical positioning system is not affected by illumination or by the texture of the images to be matched, and can quickly and accurately acquire strictly matched three-dimensional point pairs on the objects to be registered. In view of these advantages in surgery, the invention investigates panoramic stitching of the surgical field based on active optical positioning technology, and the subjective quality of image stitching is significantly improved.
According to the invention, optical locating points based on the binocular vision principle are marked in the high-attention area of the surgical field, and the AANAP algorithm, combining the SIFT feature matching point pairs with a global similarity matrix optimized with depth information, realizes stitching of the panoramic image of the surgical field. Specifically, a local homography matrix assisted by the optical locating points is established, making up for the low matching precision of SIFT feature matching point pairs and improving the stitching quality of the high-attention area. On this basis, a global similarity matrix optimized with depth information is established to adjust the non-overlapping area of the target image, reducing large-angle distortion there and compensating camera motion. The method effectively addresses the insufficient precision of global stitching from SIFT feature matching alone, reduces stitching ghosting in high-attention areas of the operative field, and improves the naturalness of image stitching.
As shown in fig. 1, the steps and flow of the surgical panoramic stitching algorithm according to the present invention will be described in detail.
The technical scheme of the invention is divided into three parts: acquiring the optical positioning matching point pairs, acquiring the SIFT feature matching point pairs, and realizing panoramic stitching of the surgical field with the optimized AANAP algorithm.
The algorithm principle and the process of the invention are described in detail below.
Example 1
Step S1: and acquiring the optical positioning matching point pair.
A reference image and a target image of a simulated surgical scene are captured with a stereo camera, as shown in Fig. 2. Two operators performed a simulated operation with surgical instruments in hand. The positioning instrument is used to acquire optical locating points in the high-attention area of the field of view, giving the three-dimensional space coordinates of the instrument tip point. First, according to the pinhole imaging and parallax principles, the relationship between the tip point's coordinates in three-dimensional space and its image pixel coordinates is established, as shown in formula (1):

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = A \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1} $$

where $(u, v)$ are the coordinates of the imaging point of the positioning instrument tip in the pixel coordinate system, $(X_w, Y_w, Z_w)$ are the tip coordinates in the world coordinate system, $Z_c$ is the z value in the camera coordinate system, $f$ is the focal length of the camera, $dx$ and $dy$ are the physical sizes of each imaging unit of the camera sensor, $(u_0, v_0)$ is the principal point of the camera, and $R$ and $T$ are the rotation and translation matrices from the world coordinate system to the camera coordinate system. The matrix $A$ depends only on the camera's internal structure, namely the focal length $f$ and the principal point $(u_0, v_0)$, and is called the intrinsic parameter matrix; $R$ and $T$ describe the pose of the camera coordinate system in space and are called the extrinsic parameters.
The intrinsic and extrinsic parameters of the left and right cameras of the binocular camera are obtained through camera calibration, and the pixel coordinates of the tip point on the reference and target images are obtained as optical locating points from the world coordinates of the instrument tip and formula (1). The distribution of optical locating points on the reference and target images of the operative field is shown in Fig. 3.
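As a concrete illustration of formula (1), the following NumPy sketch projects one tracked tip point into both views of the stereo pair, producing an optical positioning matching point pair. The intrinsic matrix and camera poses here are hypothetical placeholders; in practice they come from the stereo calibration described above.

```python
import numpy as np

def project_tip_point(p_world, A, R, T):
    """Project a 3D tip point in world coordinates to pixel coordinates, per formula (1)."""
    p_cam = R @ p_world + T              # world -> camera coordinates
    u, v, _ = (A @ p_cam) / p_cam[2]     # perspective division by Z_c
    return np.array([u, v])

# Hypothetical calibration of the left (reference) and right (target) cameras.
A = np.array([[800.0,   0.0, 320.0],     # f/dx and principal point u0
              [  0.0, 800.0, 240.0],     # f/dy and principal point v0
              [  0.0,   0.0,   1.0]])
R_ref, T_ref = np.eye(3), np.zeros(3)
R_tgt, T_tgt = np.eye(3), np.array([-120.0, 0.0, 0.0])   # assumed stereo baseline (mm)

tip_world = np.array([35.0, -10.0, 900.0])  # tracked instrument tip, world coords (mm)
pair = (project_tip_point(tip_world, A, R_ref, T_ref),
        project_tip_point(tip_world, A, R_tgt, T_tgt))
print(pair)  # one strictly matched optical positioning point pair
```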
Step S2: and acquiring SIFT feature matching point pairs.
The SIFT algorithm first builds a DoG (Difference-of-Gaussian) scale space and searches for local extrema in the DoG space as keypoints, removing low-contrast keypoints and unstable edge responses. The gradient magnitudes and orientations of the pixels in each keypoint's neighborhood are then accumulated into an orientation histogram, and a reference orientation is assigned to each keypoint. At this point three pieces of information are available for each keypoint: location, scale, and orientation. Each keypoint is then described by a group of vectors; the descriptor covers not only the keypoint itself but also the surrounding pixels that contribute to it. Specifically, the SIFT algorithm rotates the coordinate axes to the keypoint orientation, selects a 16 × 16 pixel region, divides this neighborhood into 4 × 4 sub-regions, computes gradient information in 8 directions for each sub-region, and generates a feature vector of dimension 4 × 4 × 8 = 128. The SIFT feature points of the reference and target images are matched by Euclidean distance, and to improve matching precision the RANSAC algorithm is used to remove mismatched SIFT feature point pairs.
The distributions of the SIFT feature points and optical locating points obtained in steps S1 and S2 on the reference and target images of the operative field are shown in Fig. 4, where the symbol '○' marks the optical locating points and the symbol '□' marks the SIFT feature points.
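A minimal OpenCV sketch of the matching pipeline of step S2; it assumes an OpenCV build with SIFT available (cv2.SIFT_create), and the ratio-test threshold and RANSAC reprojection tolerance are illustrative choices rather than values fixed by the disclosure.

```python
import cv2
import numpy as np

def sift_matching_pairs(ref_img, tgt_img, ratio=0.75, ransac_thresh=3.0):
    """SIFT keypoints, Euclidean (L2) matching, and RANSAC outlier removal."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_img, None)
    kp2, des2 = sift.detectAndCompute(tgt_img, None)

    # Euclidean-distance matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC homography fit (needs >= 4 pairs); keep only inlier correspondences.
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    inliers = mask.ravel().astype(bool)
    return src[inliers], dst[inliers]
```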
Step S3: and acquiring a local homography matrix.
The AANAP algorithm builds its local homography model from SIFT feature matching point pairs alone. Because the SIFT feature points in the high-attention area are too sparse, the reference and target images cannot be well aligned there. Optical locating points, strictly matched one to one, are therefore marked in the high-attention area to make up for the limitation of the SIFT feature points. After the SIFT feature points and optical locating points are obtained, the local homography model is established based on the moving direct linear transformation (moving DLT) algorithm.
Let I and I' be the reference image and the target image, respectively, and let $p = [x\ y]^T$ and $p' = [x'\ y']^T$ be a matching point pair drawn from the SIFT feature points and the optical locating points. The image is divided into grid cells, and the mapping between the reference and target images can be estimated linearly by the Direct Linear Transformation (DLT) algorithm from the constraint

$$ \hat{p}' \times H\hat{p} = \mathbf{0}_{3 \times 1} \tag{2} $$

where $\hat{p}$ and $\hat{p}'$ are the homogeneous coordinates of $p$ and $p'$, and the entries $h_1, \ldots, h_9$ of the 3 × 3 homography $H$ are collected row-major into the vector $\mathbf{h} = [h_1\ h_2\ \cdots\ h_9]^T$. Expanding the cross product of formula (2) yields three linear equations in $\mathbf{h}$, only two of which are independent; the above formula is thereby converted into

$$ \mathbf{h} = \mathop{\arg\min}_{\|\mathbf{h}\| = 1} \|A\mathbf{h}\|^2 \tag{3} $$

where $\mathbf{h}$ is a 9 × 1 vector, $\mathbf{a}_i$ is the 2 × 9 matrix formed by the first two rows of the expanded formula (2) evaluated at the i-th matching point pair, and A is the 2N × 9 matrix formed by stacking all $\mathbf{a}_i$. The image is divided into a dense grid, and based on the moving direct linear transformation (moving DLT) algorithm, the homography of the grid cell with centre point $p_*$ is obtained by solving

$$ \mathbf{h}_* = \mathop{\arg\min}_{\|\mathbf{h}\| = 1} \sum_{i=1}^{N} \left\| w_*^i\, \mathbf{a}_i \mathbf{h} \right\|^2, \qquad w_*^i = \max\!\left( \exp\!\left(-\frac{\|p_* - p_i\|^2}{\sigma^2}\right),\ \gamma \right) \tag{4} $$

where the distance weight $w_*^i$ grows as the i-th matching point (optical locating point or SIFT feature point) lies closer to the grid centre $p_*$, with scale parameter σ and offset γ.
According to the AANAP algorithm, the linearized grid homography $H_l$ is extrapolated into the non-overlapping region to reduce projection distortion. A linearly weighted Taylor expansion $\mathbf{h}_L$ of the grid homography is obtained from equation (5):

$$ \mathbf{h}_L(q) = \sum_i \alpha_i \left[ \mathbf{h}(p_i) + J_H(p_i)\,(q - p_i) \right] \tag{5} $$

where q is the centre point of the grid, $J_H(p_i)$ is the Jacobian matrix of the homography at the anchor point $p_i$, and $\alpha_i$ are the linear weights. The linearized homography $H_l$ is then obtained from equation (6):

$$ H_l = \mu H + (1 - \mu)\,\mathbf{h}_L \tag{6} $$

where μ is the maximum value of the projection of the vector formed by the target image pixel coordinate point and the reference image centre point onto the vector formed by the reference and target image centre points.
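A compact NumPy sketch of the moving DLT solve of formula (4) for one grid cell. The two-row constraint construction follows the standard expansion of the cross product in formula (2); σ and γ are illustrative parameter choices, not values fixed by the disclosure.

```python
import numpy as np

def dlt_rows(p, q):
    """Two independent DLT constraint rows a_i for a correspondence p -> q (formulas (2)-(3))."""
    x, y = p
    u, v = q
    return np.array([
        [0, 0, 0, -x, -y, -1,  v * x,  v * y,  v],
        [x, y, 1,  0,  0,  0, -u * x, -u * y, -u],
    ], dtype=float)

def moving_dlt(src, dst, cell_center, sigma=50.0, gamma=0.05):
    """Weighted local homography for one grid cell, per formula (4).

    src, dst: (N, 2) arrays of matched points (optical locating + SIFT);
    cell_center: the grid centre p_*.
    """
    A = np.vstack([dlt_rows(p, q) for p, q in zip(src, dst)])  # (2N, 9)
    d2 = np.sum((src - cell_center) ** 2, axis=1)
    w = np.maximum(np.exp(-d2 / sigma**2), gamma)              # distance weights w_*^i
    W = np.repeat(w, 2)                                        # one weight per constraint row
    # h_* = argmin ||diag(W) A h||^2 s.t. ||h|| = 1  ->  smallest right singular vector.
    _, _, Vt = np.linalg.svd(W[:, None] * A)
    return Vt[-1].reshape(3, 3)
```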
Step S4: and calculating a global similarity matrix based on depth information optimization.
The depths of the SIFT feature matching point pairs and the optical positioning matching point pairs are computed from the optical positioning principle, and the K-means algorithm is used to classify the point pairs uniformly by depth into front, middle and rear classes. A similarity matrix is computed for each class, and the one with the minimum rotation angle is selected as the global similarity matrix S for stitching the non-overlapping region of the target image, compensating camera motion and reducing large-angle deformation. The depth classification is shown in Fig. 5, where the symbols '△', '+' and '▽' mark the front, middle and rear depth classes, respectively. The resulting classification of the SIFT feature points and optical locating points on the reference and target images is shown in Fig. 6, where '○' marks the optical locating points, '□' marks the SIFT feature points, and '△', '+' and '▽' again mark the front, middle and rear classes.
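A sketch of the depth classification and selection step, assuming scikit-learn's KMeans for the front/middle/rear split and OpenCV's estimateAffinePartial2D as the 4-DOF similarity fit; both are stand-ins for whatever clustering and similarity estimation an implementation actually uses.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def depth_optimized_similarity(src, dst, depths):
    """Select the per-depth-class similarity with the smallest rotation angle.

    src, dst: (N, 2) matched points; depths: (N,) distances to the camera optical centre.
    """
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(depths.reshape(-1, 1))
    best_S, best_angle = None, np.inf
    for c in range(3):
        idx = labels == c
        if idx.sum() < 2:               # a similarity fit needs at least two pairs
            continue
        # 4-DOF similarity (scale, rotation, translation) for this depth class.
        M, _ = cv2.estimateAffinePartial2D(src[idx].astype(np.float32),
                                           dst[idx].astype(np.float32))
        if M is None:
            continue
        angle = abs(np.arctan2(M[1, 0], M[0, 0]))   # rotation angle of the fit
        if angle < best_angle:
            best_angle = angle
            best_S = np.vstack([M, [0.0, 0.0, 1.0]])  # lift 2x3 to 3x3
    return best_S
```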
$$ H_t = \eta H_l + (1 - \eta)\, S \tag{7} $$
The final target image grid transformation matrix $H_t$ is obtained by optimizing $H_l$ according to formula (7), where η is the projection of the vector formed by the centre point of the target image's non-overlapping region and the reference image centre point onto the vector formed by the reference and target image centre points. Transforming the target image with the global similarity matrix would, however, misalign the previously aligned overlapping regions of the reference and target images. To make up for this defect and make the stitching effect more natural, a reference image grid transformation matrix $H_r$ is calculated according to formula (8). Finally, the transformed reference and target images are blended by weighted fusion; the final stitching result is shown in Fig. 7.
$$ H_r = H_l^{-1} H_t \tag{8} $$
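A minimal NumPy sketch of how formulas (7) and (8) combine per grid cell; treating `eta` as precomputed from the centre-point projection described above is an assumption of this sketch.

```python
import numpy as np

def final_mesh_transforms(H_l, S, eta):
    """Combine the local homography with the global similarity for one grid cell.

    H_l: (3, 3) linearized local homography of the target grid cell;
    S:   (3, 3) depth-optimized global similarity matrix;
    eta: projection weight for this cell (near 1 in the overlap, decaying outside).
    """
    H_t = eta * H_l + (1.0 - eta) * S     # formula (7): target cell transform
    H_r = np.linalg.solve(H_l, H_t)       # formula (8): H_r = inv(H_l) @ H_t
    return H_t, H_r
```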
To verify the technical effect of the invention, a method comparison experiment was also carried out; the experimental results are given below.
Results of the experiment
A surgical scene was simulated in the laboratory, and images were acquired with a Bumblebee stereo camera (Bumblebee2) from FLIR Systems. Comparative experiments were performed on the collected images against the following methods: the AANAP algorithm, the APAP algorithm, a stitching algorithm that maps images with a single global homography matrix, the SPHP algorithm, and the REW algorithm. Fig. 8 compares the methods. As shown in Fig. 8, stitching with the global matrix is visually the worst. The proposed method aligns the overlapping area well; although it does not completely avoid ghosting, the ghosting in high-attention regions is negligible. The second, third and fifth rows are the stitching results of APAP, AANAP and SPHP, respectively, and show severe ghosting. The seventh row is the result of the REW algorithm, which is based on a Bayesian refinement model and removes local outliers adaptively; its parallax in the high-attention area is small, but the hand contour in the high-attention area is rougher than with the proposed method. In non-high-attention areas, such as the left hand at the lower right corner, the parallax of the proposed method is more obvious than that of the REW algorithm. In addition, except for the proposed method, the stitching seams of the other methods are very visible, and their overall subjective quality is visually inferior to that of the proposed method.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. An optical positioning-assisted surgery field panoramic stitching method is characterized by comprising the following steps:
acquiring optical positioning matching point pairs of a reference image and a target image, marking optical positioning points based on a binocular vision principle in a high-attention area of an operation visual field, and matching to obtain the optical positioning matching point pairs of the reference image and the target image;
extracting SIFT feature points of the reference image and the target image and performing feature matching to obtain SIFT feature matching point pairs of the reference image and the target image;
acquiring a local homography matrix, and calculating the local homography matrix based on grid division according to the optical positioning matching point pairs and SIFT feature matching point pairs of the reference image and the target image;
and image stitching, namely performing grid transformation on the reference image and the target image based on the local homography matrices to stitch the overlapping areas, and applying a global similarity matrix classified by depth information to the non-overlapping area of the target image to realize panoramic stitching of the operation visual field.
2. The method of claim 1,
the optical locating point is obtained by calculating based on the coordinate of the tip point of the near-infrared optical tracking locating instrument in a world coordinate system by utilizing a stereoscopic vision principle.
3. The method of claim 1,
the specific method for acquiring the matching point pair of the reference image and the target image comprises the following steps:
acquiring an operation visual field reference image and a target image with an overlapping area through a binocular camera, acquiring points in an operation visual field high-attention area by using a positioning instrument, and tracking and positioning a three-dimensional space coordinate of a tip point of the instrument;
obtaining pixel coordinates of the tip point on a reference image and a target image according to a binocular vision principle, and obtaining an optical positioning matching point pair through stereo matching;
and extracting feature points on the reference image and the target image by using an SIFT algorithm, and performing feature matching to obtain an SIFT feature matching point pair.
4. The method of claim 3,
the optical positioning matching point pairs are matched on the reference image and the target image one by one.
5. The method of claim 1,
the acquisition of the local homography matrix is realized based on a moving direct linear transformation method.
6. The method of claim 5,
the specific steps for obtaining the local homography matrix are as follows: and performing grid division on the image, and calculating a local homography transformation matrix of each grid based on a moving direct linear transformation method according to the optical positioning matching point pairs and the SIFT feature matching point pairs.
7. The method of claim 6,
the moving direct linear transformation method is characterized in that distance weight factors of an optical locating point, SIFT feature points and a grid central point are introduced on the basis of the direct linear transformation method.
8. The method of claim 1,
the depth information classification is to classify the optical positioning matching point pairs and the SIFT feature matching point pairs into front, middle and rear three types according to the space distance between the optical positioning matching point pairs and the optical center of the camera.
9. The method of claim 8,
the global similarity matrix is formed by generating three similarity matrixes from front, middle and rear three types of matching point pairs, and then selecting the similarity matrix with the minimum rotation angle from the three similarity matrixes as the global similarity matrix.
10. The method of claim 1,
the image splicing method comprises the following specific steps:
obtaining depth information of the optical positioning matching point pair and the SIFT feature matching point pair through a stereoscopic vision principle;
dividing the optical positioning matching point pairs and the SIFT feature matching point pairs into front, middle and rear classes according to their spatial distance from the optical center of the camera, calculating a similarity matrix for each class of point pairs, and selecting the similarity matrix with the minimum rotation angle as the global similarity matrix;
and transforming the reference image and the target image by combining the local homography matrix to realize the panoramic stitching of the operation visual field.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911295009.9A CN111161143A (en) | 2019-12-16 | 2019-12-16 | Optical positioning technology-assisted operation visual field panoramic stitching method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911295009.9A CN111161143A (en) | 2019-12-16 | 2019-12-16 | Optical positioning technology-assisted operation visual field panoramic stitching method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111161143A true CN111161143A (en) | 2020-05-15 |
Family
ID=70557215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911295009.9A Pending CN111161143A (en) | 2019-12-16 | 2019-12-16 | Optical positioning technology-assisted operation visual field panoramic stitching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161143A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663762A (en) * | 2012-04-25 | 2012-09-12 | 天津大学 | Segmentation method of symmetrical organs in medical image |
CN105931185A (en) * | 2016-04-20 | 2016-09-07 | 中国矿业大学 | Automatic splicing method of multiple view angle image |
WO2018104700A1 (en) * | 2016-12-05 | 2018-06-14 | Gaist Solutions Limited | Method and system for creating images |
CN106910208A (en) * | 2017-03-07 | 2017-06-30 | 中国海洋大学 | A kind of scene image joining method that there is moving target |
CN107067370A (en) * | 2017-04-12 | 2017-08-18 | 长沙全度影像科技有限公司 | A kind of image split-joint method based on distortion of the mesh |
CN108921781A (en) * | 2018-05-07 | 2018-11-30 | 清华大学深圳研究生院 | A kind of light field joining method based on depth |
CN109658370A (en) * | 2018-11-29 | 2019-04-19 | 天津大学 | Image split-joint method based on mixing transformation |
CN110544202A (en) * | 2019-05-13 | 2019-12-06 | 燕山大学 | parallax image splicing method and system based on template matching and feature clustering |
Non-Patent Citations (3)
Title
---|
HE Chuan; ZHOU Jun: "Mesh-based image stitching with straight-line structure preservation", Journal of Image and Graphics (中国图象图形学报), no. 07 *
ZHANG Jingjing: "Parallax image stitching algorithm based on feature blocks", Computer Engineering (计算机工程), vol. 44, no. 5, pages 1-7 *
WU Bo et al.: "Optical positioning technology-assisted panoramic stitching algorithm for surgical scenes", Beijing Biomedical Engineering (北京生物医学工程), vol. 3, no. 41 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968035A (en) * | 2020-08-05 | 2020-11-20 | 成都圭目机器人有限公司 | Image relative rotation angle calculation method based on loss function |
CN111968035B (en) * | 2020-08-05 | 2023-06-20 | 成都圭目机器人有限公司 | Image relative rotation angle calculation method based on loss function |
CN114299120A (en) * | 2021-12-31 | 2022-04-08 | 北京银河方圆科技有限公司 | Compensation method, registration method and readable storage medium based on multiple camera modules |
CN114299120B (en) * | 2021-12-31 | 2023-08-04 | 北京银河方圆科技有限公司 | Compensation method, registration method, and readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | CB03 | Change of inventor or designer information | Inventor after: Wu Bo; Zhang Nan; Yang Qiaoling; Ye Can. Inventor before: Wu Bo
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200515