
CN107767440B - Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint - Google Patents


Info

Publication number
CN107767440B
CN107767440B (application CN201710793380.2A)
Authority
CN
China
Prior art keywords
image
points
homonymy
point
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710793380.2A
Other languages
Chinese (zh)
Other versions
CN107767440A (en)
Inventor
胡春梅
夏国芳
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Civil Engineering and Architecture
Original Assignee
Beijing University of Civil Engineering and Architecture
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Civil Engineering and Architecture filed Critical Beijing University of Civil Engineering and Architecture
Priority to CN201710793380.2A priority Critical patent/CN107767440B/en
Publication of CN107767440A publication Critical patent/CN107767440A/en
Application granted granted Critical
Publication of CN107767440B publication Critical patent/CN107767440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fine three-dimensional reconstruction method for cultural relic sequence images based on triangulation-network interpolation and constraint, which comprises the following steps. Step one: acquire a sequence of images of the cultural relic. Step two: match homonymous (corresponding) points between the sequence images. Step three: obtain the position and attitude parameters of each image. Step four: in each stereo pair, extract feature points of the left image with a gridded Harris detector and locate them precisely with the Förstner operator, then search for and match the corresponding homonymous points on the right image to establish uniformly distributed homonymous points; construct a homonymous Delaunay triangulation with these points as seed points, and repeatedly interpolate and match triangle barycenters to obtain high-density homonymous points of the cultural relic images; additionally, extract the edge features of the left image to obtain edge points and match edge homonymous points within the stereo pair. Step five: according to the position and attitude parameters of the images, perform dense point reconstruction from the high-density homonymous points. Step six: taking a three-dimensional point cloud of the cultural relic as reference, achieve absolute orientation and obtain a fine three-dimensional model.

Description

Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint
Technical Field
The invention belongs to the technical field of photogrammetry and laser radar (LiDAR), and relates to a fine three-dimensional reconstruction method for cultural relic sequence images based on triangulation-network interpolation and constraint.
Background
Three-dimensional information about real-world objects expresses the environment and attributes of the measured object more intuitively and effectively, and as three-dimensional reconstruction technology keeps improving in speed and accuracy, it is widely applied in surveying and mapping, urban planning, medicine, the military, digital protection of cultural heritage, and other fields. The surveying and mapping industry applies three-dimensional reconstruction to generate digital terrain models, digital city models and the like, used for data display, data updating and information management. Urban three-dimensional reconstruction in planning and construction management is important for spatial analysis, display of the urban landscape, dynamic monitoring of buildings and reduction of decision errors. In the military, it supports mapping and navigation, construction of virtual battlefields, provision of accurate geographic information and establishment of realistic, measurable three-dimensional models, which matter for target positioning and terrain visualization. In medicine, CT three-dimensional reconstruction has driven the leap from transverse slices to multi-planar and even fully three-dimensional views, turning traditionally abstract presentations into realistic, intuitive ones and greatly improving diagnostic accuracy. In digital protection of cultural heritage, three-dimensional reconstruction is a core part of digitization: it records detailed pattern and colour information for archiving and deterioration surveys, and it guides virtual restoration of relics so that viewers can browse them from multiple angles and more comprehensively.
At present, the demands of cultural-relics departments for digitization keep growing, and efficient, fine three-dimensional reconstruction of cultural relics is a hotspot of current research.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
The invention also aims to provide a cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint.
Therefore, the technical scheme provided by the invention is as follows:
a cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint comprises the following steps:
firstly, acquiring a sequence image of a cultural relic by using a camera;
secondly, matching homonymous (corresponding) points between the sequence images to obtain homonymous points for orienting the sequence images;
thirdly, first solving for the orientation parameters of each image using the homonymous points, and then, taking the image orientation parameters as initial values, obtaining the position and attitude parameters of each image through bundle adjustment;
step four, dense matching of stereo pairs in the sequence images: in each stereo pair, extracting feature points of the left image with a gridded Harris detector and locating them precisely with the Förstner operator, then searching for and matching the corresponding homonymous points on the right image to establish uniformly distributed homonymous points, constructing a homonymous Delaunay triangulation with the uniformly distributed homonymous points as seed points, and repeatedly interpolating and matching the barycenters of the triangles while continually adding matched homonymous points, to obtain high-density homonymous points of the cultural relic images;
fifthly, according to the position and attitude parameters of each image obtained in step three, performing dense point reconstruction from the high-density homonymous points of the cultural relic images to construct a fine point cloud of the cultural relic images;
and sixthly, taking a three-dimensional point cloud of the cultural relic as reference, achieving absolute orientation of the fine image point cloud through a spatial similarity transformation between coordinate systems, so that the fine image point cloud gains real size and scale, thereby obtaining a fine three-dimensional model from the cultural relic sequence images.
Preferably, in the method for reconstructing a cultural relic sequence image in a fine three-dimensional manner based on triangulation network interpolation and constraint, the third step specifically includes the following steps:
3.1) obtaining the position and attitude parameters of the initial stereo pair by combining the homonymous points from step two with a direct relative-orientation solution refined by a rigorous solution, and at the same time computing the three-dimensional coordinates of the homonymous points by forward intersection;
3.2) establishing an index relationship between the two-dimensional homonymous points of the initial stereo pair and the three-dimensional points obtained from its forward intersection; using the homonymous points shared by the second and third images, determining the correspondence between the three-dimensional point set P of the initial stereo pair and its homonymous two-dimensional point set p on the third image; performing space resection for the right image of the second stereo pair from the extracted sets P and p, using a weight-selection iteration with an improved Danish method based on the collinearity equations, to obtain the spatial position and attitude of that image; then carrying out the 2D-3D index construction, 2D-3D homonymous-point determination and image orientation for each subsequent stereo pair in the same way, obtaining initial position and attitude values for every image;
and 3.3) taking the initial position and attitude values of each image as initial values for bundle adjustment, performing free-network bundle adjustment with the collinearity equations as the mathematical model, and solving for the orientation parameters and the three-dimensional points corresponding to the SIFT + least-squares matched points with the Levenberg-Marquardt (LM) algorithm, obtaining the position and attitude parameters of each image.
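The free-network bundle adjustment of step 3.3 can be illustrated with a toy example. The sketch below (Python with NumPy/SciPy, an illustrative stand-in rather than the patent's implementation) refines a single camera pose by Levenberg-Marquardt on collinearity (reprojection) residuals; the focal length, synthetic points and perturbation are all assumed values. A full free-network adjustment would stack the parameters of every image and every tie point into one system.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 1000.0  # focal length in pixels (assumed, from calibration)

def project(pose, pts3d):
    """Collinearity equations: rotate, translate, then perspective-divide."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = (R @ pts3d.T).T + pose[3:6]
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(pose, pts3d, obs2d):
    return (project(pose, pts3d) - obs2d).ravel()

# Synthetic data: a ground-truth pose and a perturbed initial value, standing
# in for the pose obtained from sequential orientation (step 3.2).
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, (20, 3)) + [0, 0, 5]
true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.2, 0.05])
obs2d = project(true_pose, pts3d)

init = true_pose + rng.normal(0, 0.02, 6)  # perturbed initial value
sol = least_squares(residuals, init, method='lm', args=(pts3d, obs2d))
print(sol.success)
```

On this noiseless toy problem the LM iteration drives the reprojection residuals to zero and recovers the true pose.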
Preferably, in the method for reconstructing a cultural relic sequence image based on triangulation interpolation and constraint, the fourth step specifically includes the following steps:
4.1) dividing each image into a grid with a fixed step length, computing Harris interest values for the pixels within each window, taking the extreme point of the interest values in each window as a feature point and locating it precisely with the Förstner operator, thereby establishing uniformly distributed feature points on the left image of the stereo pair;
4.2) first computing homography parameters from the homonymous points of the stereo pair used in image orientation, transforming the feature points of the left image to the right image by this homography according to the one-to-one correspondence, and taking the transformed points as coarse positions of the corresponding homonymous points; then using the epipolar geometry to narrow the matching search to a one-dimensional search along the homonymous epipolar line; within this search range, first determining the initial position of each homonymous point by correlation-coefficient matching and then refining it by least-squares matching, obtaining uniformly distributed homonymous points in the stereo pair;
4.3) constructing a Delaunay triangulation from the feature points on the left image, and constructing the corresponding triangulation on the right image from the homonymous-point correspondences, a triangle formed by homonymous points being called a homonymous triangle;
4.4) taking the triangulation on the left image as the matching unit, interpolating the barycenter of each triangle and searching for and matching its homonymous point within the corresponding homonymous triangle on the right image, with the homonymous triangle and the epipolar geometry of the stereo pair as constraints, while continually updating the triangulation; in line with the properties of the Delaunay triangulation, taking triangle side length as the barycenter-interpolation condition by setting a side-length threshold, and no longer interpolating a triangle once its side lengths fall below the threshold, thereby obtaining the high-density homonymous points of the cultural relic images.
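The interpolation loop of steps 4.3 and 4.4 can be sketched as follows. This is a hedged illustration in Python with SciPy: the seed coordinates and side-length threshold are made-up values, and every barycenter is accepted unconditionally, whereas the method verifies each candidate by epipolar-constrained matching in the homonymous triangle on the right image.

```python
import numpy as np
from scipy.spatial import Delaunay

def densify(points, min_side=4.0, max_rounds=10):
    """Repeatedly triangulate, insert barycenters of large triangles, rebuild."""
    pts = np.asarray(points, float)
    for _ in range(max_rounds):
        tri = Delaunay(pts)
        new = []
        for simplex in tri.simplices:
            corners = pts[simplex]
            sides = np.linalg.norm(corners - np.roll(corners, 1, axis=0), axis=1)
            if sides.max() >= min_side:            # only split large triangles
                new.append(corners.mean(axis=0))   # barycenter as new point
        if not new:
            break                                  # all triangles below threshold
        pts = np.vstack([pts, new])
    return pts

seeds = np.array([[0, 0], [32, 0], [0, 32], [32, 32]], float)
dense = densify(seeds)
print(len(dense))  # many more points than the 4 seeds
```

The side-length stopping rule mirrors the patent's condition: once a triangle's sides drop below the threshold, it contributes no further barycenters, so point density is bounded and roughly uniform.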
Preferably, the method for reconstructing a cultural relic sequence image in a fine three-dimensional manner based on triangulation interpolation and constraint further includes, after the fourth step and before the fifth step:
A, extracting image edge information with the Canny operator; using the Delaunay triangulation as the decision criterion, determining the search range for edge points outside the triangulation from the homography-transformed coarse points and the epipolar geometry, and for edge points inside the triangulation from the homonymous triangle containing the point and the epipolar geometry; within the determined search range, locating the edge homonymous points of the cultural relic images precisely by correlation-coefficient and least-squares matching;
and then, in step five, according to the position and attitude parameters of each image, performing dense point reconstruction from the high-density homonymous points and edge reconstruction from the edge homonymous points of the cultural relic images respectively, and fusing the corresponding edge point cloud with the dense point cloud to obtain the fine point cloud of the cultural relic images.
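The patent extracts edges with the Canny operator. As a hedged stand-in (pure NumPy, not Canny itself), the sketch below marks edge points by thresholding a central-difference gradient magnitude, which conveys the flavour of the edge-point set that is subsequently matched and fused with the dense point cloud; the image and threshold are invented for illustration.

```python
import numpy as np

def gradient_edges(img, thresh=1.0):
    """Return (row, col) positions where the gradient magnitude exceeds thresh."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical central differences
    mag = np.hypot(gx, gy)
    return np.argwhere(mag > thresh)

img = np.zeros((8, 8))
img[:, 4:] = 10.0                            # vertical step edge at column 4
edges = gradient_edges(img)
print(sorted(set(edges[:, 1].tolist())))     # columns straddling the step
```

A real Canny pipeline would additionally smooth the image, thin edges by non-maximum suppression, and link them with hysteresis thresholds.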
Preferably, in the method for reconstructing a cultural relic sequence image in a fine three-dimensional manner based on triangulation interpolation and constraint, the second step specifically includes the following steps:
2.1) after homonymous-point matching between the sequence images, applying bidirectional consistency, RANSAC (random sample consensus) and affine-transformation constraints in turn, filtering step by step and item by item to improve the accuracy of the homonymous points and obtain a high-quality homonymous point set;
and 2.2) further refining the positions of the high-quality homonymous point set by least-squares matching to obtain the homonymous points used for sequence image orientation.
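The bidirectional consistency constraint of step 2.1 can be sketched in a few lines (an assumed detail in pure NumPy with toy descriptors): a match survives only if the left descriptor's nearest neighbour on the right picks that same left descriptor back as its own nearest neighbour. The RANSAC and affine-transformation checks would then be applied to the survivors.

```python
import numpy as np

def mutual_matches(desc_left, desc_right):
    """Keep only matches that are mutual nearest neighbours in both directions."""
    d = np.linalg.norm(desc_left[:, None] - desc_right[None, :], axis=2)
    fwd = d.argmin(axis=1)            # left -> right nearest neighbour
    bwd = d.argmin(axis=0)            # right -> left nearest neighbour
    keep = bwd[fwd] == np.arange(len(desc_left))
    return [(int(i), int(fwd[i])) for i in np.flatnonzero(keep)]

left = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
right = np.array([[1.1, 0.9], [0.1, 0.1], [0.9, 1.1]])
print(mutual_matches(left, right))
```

Here the third left descriptor has no partner whose back-match agrees, so its tentative match is discarded, which is exactly the kind of one-sided mismatch this constraint removes.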
Preferably, the method for reconstructing a cultural relic sequence image in a fine three-dimensional mode based on triangulation interpolation and constraint further comprises the following steps after the first step and before the second step:
firstly, calibrating the camera to obtain its interior orientation elements and distortion parameters, the interior orientation elements being used for image orientation and the distortion parameters for correcting the distortion of the image data;
then, applying distortion correction to the sequence image data and filtering the corrected images, before proceeding to step two.
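The patent does not spell out the distortion model; the radial part of the common Brown-Conrady model is sketched below (pure NumPy, with illustrative coefficients k1 and k2, not values from the patent). Correction inverts the forward distortion by fixed-point iteration on normalized image coordinates.

```python
import numpy as np

def undistort_normalized(xy, k1, k2, iters=20):
    """Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration."""
    xy = np.asarray(xy, float)
    und = xy.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=-1, keepdims=True)
        und = xy / (1 + k1 * r2 + k2 * r2 ** 2)
    return und

k1, k2 = -0.25, 0.07                    # illustrative radial coefficients
true_pt = np.array([[0.3, -0.2]])       # undistorted normalized point
r2 = (true_pt ** 2).sum()
distorted = true_pt * (1 + k1 * r2 + k2 * r2 ** 2)   # forward model
recovered = undistort_normalized(distorted, k1, k2)
print(np.allclose(recovered, true_pt, atol=1e-8))
```

For moderate distortion the iteration is a contraction, so a few dozen iterations recover the undistorted coordinates to machine precision.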
Preferably, in the fine three-dimensional reconstruction method for cultural relic sequence images based on triangulation-network interpolation and constraint, in step B, when the camera is calibrated, the images used to compute the calibration parameters are shot at different angles and different distances relative to the calibration board.
Preferably, in the fine three-dimensional reconstruction method for cultural relic sequence images based on triangulation-network interpolation and constraint, in step six, the reference three-dimensional point cloud is obtained by scanning with an articulated-arm scanner.
The invention at least comprises the following beneficial effects:
First, the invention computes the orientation parameters between image pairs by a direct solution followed by a rigorous solution for the initial pair, computes the three-dimensional coordinates of the SIFT homonymous points by forward intersection, and builds a 2D-3D index between these coordinates and the tie points of each pair. Using this index it extracts, for the next image pair, the corresponding three-dimensional points and the image points homonymous with the third image, and computes the spatial position and attitude of the right image of the pair by space resection. The orientation parameters of all images are solved sequentially in this way, and, with the resulting values as initial values, the orientation of the sequence is then determined accurately by free-network bundle adjustment. Second, the method establishes high-density, uniformly distributed homonymous points within the stereo pairs of the sequence, overcoming the low density and uneven distribution of conventional dense matching: triangles serve as the matching units, barycenter interpolation matching is performed on them, the search range is constrained by the epipolar lines within the homonymous triangles, and matched homonymous points are added continually until dense matching is achieved. Third, applying the method to fine three-dimensional reconstruction of close-range sequence images of cultural relics yields an image point cloud with a high level of detail and rich detail information, possessing real size and scale, on which analysis and measurement can be carried out.
The invention addresses the fine three-dimensional reconstruction of close-range sequence images of cultural relics, taking terrestrial LiDAR point clouds and close-range images as data sources; it studies camera calibration, image matching, image orientation, dense matching, edge-information matching, image reconstruction, absolute orientation and related problems, and through in-depth work on each part realizes fine three-dimensional reconstruction of close-range cultural relic sequence images based on triangulation-network interpolation and constraint. The results meet the requirements of refined reconstruction and can also serve other applications such as urban-planning support, preservation of cultural relic information and three-dimensional display.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of a cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint according to the invention;
FIG. 2 is a schematic diagram of camera checkerboard calibration images acquired in an embodiment of the present invention;
FIGS. 3A and 3B are schematic diagrams of a distortion parameter calculated according to camera calibration before and after image distortion correction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the laser point cloud data used as reference for reconstruction in an embodiment of the present invention;
FIGS. 5A and 5B are schematic diagrams of selected stereopairs in a sequence of images according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating the result of image filtering preprocessing according to an embodiment of the present invention;
FIGS. 7A and 7B are schematic diagrams illustrating the results of matching stereo images with SIFT + least squares according to an embodiment of the present invention;
FIGS. 8A and 8B are partial schematic diagrams of a stereo image versus SIFT + least squares matching result in an embodiment of the present invention;
FIGS. 9A and 9B are schematic diagrams of the matching results of the stereo pair after the mismatching rejection according to the present invention;
FIGS. 10A and 10B are schematic diagrams of partial matching results of a stereo pair after the mismatching rejection according to the present invention;
FIG. 11 is a diagram illustrating the result of primary image orientation according to the present invention;
FIG. 12 is a schematic diagram illustrating the misalignment in the initial image orientation according to the present invention;
FIG. 13 is a schematic illustration of the results of free-network bundle adjustment in accordance with the present invention;
FIGS. 14A and 14B are schematic diagrams illustrating the uniformly distributed homonymous points established after Harris + Förstner feature-point matching of a stereo pair in an embodiment of the present invention;
FIG. 15 is a diagram illustrating a homonymous Delaunay triangulation constructed from the uniformly distributed homonymous points of a stereo pair in an embodiment of the present invention;
FIG. 16 is a diagram illustrating the results of dense matching of a stereo pair according to the present invention;
FIG. 17 is a diagram illustrating the result of extracting the left image edge from the stereo image according to the embodiment of the present invention;
FIG. 18 is a diagram illustrating the matching of stereo images to edge information according to an embodiment of the present invention;
FIG. 19 is a graph illustrating the result of a stereo-to-edge reconstruction in accordance with an embodiment of the present invention;
FIG. 20 is a diagram illustrating the result of a stereo pair fine reconstruction according to an embodiment of the present invention;
FIG. 21 is a diagram illustrating a result of a fine reconstruction of a sequential image according to an embodiment of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
As shown in fig. 1, the present invention provides a fine three-dimensional reconstruction method for cultural relic sequence images based on triangulation-network interpolation and constraint, comprising:
firstly, acquiring a sequence image of a cultural relic by using a camera;
secondly, matching homonymous points between the sequence images to obtain homonymous points for orienting the sequence images; for example, homonymous-point matching is performed between the sequence images, mismatches are eliminated through several stepwise constraints to obtain a high-quality homonymous point set, and accurate, high-precision homonymous points between the images are then obtained by least-squares matching.
Thirdly, first solving for the orientation parameters of each image using the homonymous points, and then, taking the image orientation parameters as initial values, obtaining the position and attitude parameters of each image through bundle adjustment. For example, the orientation parameters of the initial image pair are computed, forward intersection of the homonymous points gives their three-dimensional coordinates, and a 2D-3D index between these coordinates and the tie points is built. Using this index, the three-dimensional points shared with the next image pair and the image points homonymous with the third image are extracted, and the spatial position and attitude of the right image of the pair are computed by space resection. The orientation parameters of each image are solved sequentially in this way, the obtained image orientation parameters serve as initial values, and the orientation of the sequence images is then determined accurately by free-network bundle adjustment.
Step four, dense matching of stereo pairs in the sequence images: in each stereo pair, extracting feature points of the left image with a gridded Harris detector and locating them precisely with the Förstner operator, then searching for and matching the corresponding homonymous points on the right image to establish uniformly distributed homonymous points, constructing a homonymous Delaunay triangulation with the uniformly distributed homonymous points as seed points, and repeatedly interpolating and matching the barycenters of the triangles while continually adding matched homonymous points, to obtain high-density homonymous points of the cultural relic images;
fifthly, according to the position and attitude parameters of each image obtained in step three, performing dense point reconstruction from the high-density homonymous points of the cultural relic images to construct a fine point cloud of the cultural relic images;
and sixthly, taking a three-dimensional point cloud of the cultural relic as reference, achieving absolute orientation of the fine image point cloud through a spatial similarity transformation between coordinate systems, so that the fine image point cloud gains real size and scale, thereby obtaining a fine three-dimensional model from the cultural relic sequence images. For example, the articulated-arm scan point cloud is taken as reference, control points are selected, the image-space rectangular coordinate system of the image point cloud is converted to the local scanning coordinate system of the articulated arm, and absolute orientation of the image point cloud is achieved through the spatial similarity transformation between the two coordinate systems, so that the image point cloud gains real size and scale on top of its fine detail information.
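The seven-parameter spatial similarity transformation of step six (scale s, rotation R, translation t) has a closed-form solution from control points via SVD (Umeyama/Horn style; the patent does not name a particular solver, so this NumPy sketch with synthetic control points is an illustrative assumption):

```python
import numpy as np

def similarity_transform(src, dst):
    """Return s, R, t minimising ||s * R @ src_i + t - dst_i|| over control points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d              # centred coordinates
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic control points: a known scale, rotation and translation.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (6, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))        # ensure a proper rotation
dst = 2.5 * src @ R_true.T + [1.0, -2.0, 0.5]
s, R, t = similarity_transform(src, dst)
print(np.isclose(s, 2.5))
```

Applying the recovered (s, R, t) to the whole image point cloud is what gives it real size and scale in the reference (articulated-arm) coordinate system.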
In one embodiment of the present invention, preferably, the third step specifically includes the following steps:
3.1) obtaining the position and attitude parameters of the initial stereo pair by combining the homonymous points from step two with a direct relative-orientation solution refined by a rigorous solution, and at the same time computing the three-dimensional coordinates of the homonymous points by forward intersection;
3.2) establishing an index relationship between the two-dimensional homonymous points of the initial stereo pair and the three-dimensional points obtained from its forward intersection; using the homonymous points shared by the second and third images, determining the correspondence between the three-dimensional point set P of the initial stereo pair and its homonymous two-dimensional point set p on the third image; performing space resection for the right image of the second stereo pair from the extracted sets P and p, using a weight-selection iteration with an improved Danish method based on the collinearity equations, to obtain the spatial position and attitude of that image; then carrying out the 2D-3D index construction, 2D-3D homonymous-point determination and image orientation for each subsequent stereo pair in the same way, obtaining initial position and attitude values for every image;
and 3.3) taking the initial position and attitude values of each image as initial values for bundle adjustment, performing free-network bundle adjustment with the collinearity equations as the mathematical model, and solving for the orientation parameters and the three-dimensional points corresponding to the SIFT + least-squares matched points with the Levenberg-Marquardt (LM) algorithm, obtaining the position and attitude parameters of each image.
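The Danish-method weight-selection iteration mentioned in step 3.2 can be sketched in one dimension: observations are re-solved repeatedly while weights are damped exponentially for residuals beyond a threshold, so gross matching errors lose influence. One common form is shown below in NumPy; the damping constants and the toy data are illustrative, not taken from the patent.

```python
import numpy as np

def danish_weights(residuals, sigma, c=2.0):
    """Unit weight inside c*sigma; exponentially damped beyond (one common form)."""
    v = np.abs(residuals) / sigma
    return np.where(v <= c, 1.0, np.exp(1.0 - (v / c) ** 2))

# One blunder among inlier observations of a single quantity.
obs = np.array([10.0, 10.1, 9.9, 10.05, 25.0])   # last value is a gross error
w = np.ones_like(obs)
for _ in range(10):                               # weight-selection iteration
    est = (w * obs).sum() / w.sum()               # weighted least-squares solve
    w = danish_weights(obs - est, sigma=1.0)      # re-weight by residuals
print(round(est, 2))
```

After a few iterations the blunder's weight collapses toward zero and the estimate settles on the mean of the inliers, which is the behaviour the resection step relies on to survive mismatched points.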
In some embodiments of the present invention, preferably, the fourth step specifically includes the following steps:
4.1) dividing each image into a grid with a fixed step length, computing Harris interest values for the pixels within each window, taking the extreme point of the interest values in each window as a feature point and locating it precisely with the Förstner operator, thereby establishing uniformly distributed feature points on the left image of the stereo pair;
4.2) first computing homography parameters from the homonymous points of the stereo pair used in image orientation, transforming the feature points of the left image to the right image by this homography according to the one-to-one correspondence, and taking the transformed points as coarse positions of the corresponding homonymous points; then using the epipolar geometry to narrow the matching search to a one-dimensional search along the homonymous epipolar line; within this search range, first determining the initial position of each homonymous point by correlation-coefficient matching and then refining it by least-squares matching, obtaining uniformly distributed homonymous points in the stereo pair;
4.3) constructing a Delaunay triangulation from the feature points on the left image, and constructing the corresponding triangulation on the right image from the homonymous-point correspondences, a triangle formed by homonymous points being called a homonymous triangle;
4.4) taking the triangulation on the left image as the matching unit, interpolating the barycenter of each triangle and searching for and matching its homonymous point within the corresponding homonymous triangle on the right image, with the homonymous triangle and the epipolar geometry of the stereo pair as constraints, while continually updating the triangulation; in line with the properties of the Delaunay triangulation, taking triangle side length as the barycenter-interpolation condition by setting a side-length threshold, and no longer interpolating a triangle once its side lengths fall below the threshold, thereby obtaining the high-density homonymous points of the cultural relic images.
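The correlation-coefficient step of 4.2 amounts to a one-dimensional search along the homonymous epipolar line. The NumPy sketch below is an assumed illustration: a 1-D signal row stands in for the (resampled) epipolar line, and the position maximising normalised cross-correlation with the left-image template is taken as the initial homonymous-point position, which least-squares matching would then refine to sub-pixel accuracy.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation (correlation coefficient) of two windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_along_line(template, line, width):
    """Slide the template along the line; return the best-scoring position."""
    scores = [ncc(template, line[x:x + width])
              for x in range(len(line) - width + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
line = rng.normal(size=100)          # stand-in for a right-image epipolar line
template = line[37:52].copy()        # left-image window; true position x = 37
print(match_along_line(template, line, 15))
```

Because NCC is invariant to window mean and contrast, the peak stays stable under the brightness differences typical between the two images of a pair.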
In one embodiment of the present invention, preferably, after the step four, before the step five, the method further includes:
a, extracting image edge information with a Canny operator; taking the Delaunay triangulation network as the judgment condition, determining a search range for the edge points outside the triangulation network by using the homography-transformed coarse value points and the epipolar geometry, and for the edge points inside the triangulation network by using the homonymous triangle in which the edge point is located and the epipolar geometry; and precisely positioning the homonymy points of the cultural relic image edges within the determined search range by correlation coefficient and least squares matching;
and then, in the fifth step, according to the position and posture parameters of each image, respectively performing dense point reconstruction on the high-density homonymy points of the cultural relic image and edge reconstruction on the edge homonymy points, and fusing the corresponding edge point cloud onto the dense point cloud to obtain the fine point cloud of the cultural relic image.
In one embodiment of the present invention, preferably, the second step specifically includes the following steps:
2.1) after homonymy point matching between the sequence images, sequentially applying a bidirectional consistency constraint, a RANSAC random sample consensus constraint and an affine transformation constraint, stage by stage, so as to improve the accuracy of the homonymy points and obtain a high-quality homonymy point set;
and 2.2) further precisely positioning the points of the high-quality homonymy point set with the least squares matching method, to obtain the homonymy points used for sequence image orientation.
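The bidirectional consistency constraint of step 2.1 can be sketched in a few lines of NumPy; the descriptor arrays and toy values below are hypothetical, and a real pipeline would run this on SIFT descriptors before the RANSAC and affine stages:

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bidirectional consistency check: keep pair (i, j) only when j is the
    nearest neighbour of i in B and i is the nearest neighbour of j in A."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)            # best candidate in B for each A row
    b_to_a = d.argmin(axis=0)            # best candidate in A for each B row
    idx = np.arange(len(desc_a))
    keep = b_to_a[a_to_b] == idx         # mutual agreement
    return np.stack([idx[keep], a_to_b[keep]], axis=1)

# toy descriptors: rows 0 and 2 of A correspond to rows 0 and 1 of B
A = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
B = np.array([[0.1, 0.0], [9.1, 9.0], [20.0, 20.0]])
print(mutual_matches(A, B))
```

A pair survives only if each point is the other's nearest neighbour, which cheaply removes one-sided mismatches before the geometric constraints are applied.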
In one embodiment of the present invention, preferably, after the step one, before the step two, a step B is further included:
firstly, calibrating the camera to obtain its intrinsic (internal reference) elements and distortion parameters, wherein the intrinsic elements are used for orienting the images and the distortion parameters are used for correcting the distortion of the image data;
then, performing distortion correction on the sequence image data and filtering the corrected images; and then entering the step two.
In one embodiment of the present invention, preferably, in the step B, when the camera calibration is performed, the images used to calculate the camera calibration parameters are taken at different angles and different distances relative to the calibration board.
In one embodiment of the present invention, preferably, in the sixth step, the three-dimensional point cloud is obtained by scanning with an articulated arm scanner.
In order that those skilled in the art may better understand the present invention, the following embodiment is provided by way of illustration:
as shown in fig. 1, the present invention provides a method for reconstructing a cultural relic sequence image in a fine three-dimensional manner based on triangulation interpolation and constraint, comprising:
step one, camera calibration: acquiring calibration images at different positions and different distances with the camera used for image collection, and calculating the intrinsic elements and distortion parameters of the camera according to the camera calibration method of computer vision, wherein fig. 2 shows an acquired calibration image; specifically:
in camera calibration, the matrix formed by the position of the principal point in the image and the focal length of the camera is generally called the intrinsic (internal reference) matrix, and the relationship between image pixel coordinates and calibration board coordinates is:
q=MQ (1)
wherein
    [ fx  s   cx ]
M = [ 0   fy  cy ]  (2)
    [ 0   0   1  ]
wherein q is the pixel coordinate with the upper left corner of the image as the origin of coordinates; in the matrix M, fx and fy are the focal lengths of the camera in the x-axis and y-axis directions, and (cx, cy) is the principal point; s represents the non-orthogonality of the x-axis and y-axis directions of the imaging plane, and is generally 0;
the imaging of points at different positions of the checkerboard by the camera can be represented by:
m=λA[R t]M (3)
    [ fx  s   cx ]
A = [ 0   fy  cy ]  (4)
    [ 0   0   1  ]
wherein M = [X Y Z 1]^T is the spatial point coordinate in the world coordinate system in homogeneous form; m = [u v 1]^T is the two-dimensional point coordinate in the pixel coordinate system in homogeneous form; λ is the scale factor;
assuming that the calibration board lies on the plane Z = 0 of the world coordinate system during camera calibration, equation (3) can be further simplified as:
[u v 1]^T = λ A [r1 r2 t] [X Y 1]^T (5)
taking H ═ lambda A [ r1 r2 t]=[h1 h2 h3]The matrix H is also called homography matrix, then
Figure GDA0001538564450000094
In the above equation, the rotation vectors are mutually orthogonal in construction, and referring to the scaling factor to the outside, r is1And r2Are orthogonal to each other. Then there is
Figure GDA0001538564450000095
According to the orthogonality of the two vectors, there are two basic constraints:
h1^T A^(-T) A^(-1) h2 = 0 (8)
h1^T A^(-T) A^(-1) h1 = h2^T A^(-T) A^(-1) h2 (9)
to make the following description relatively easy, take B = A^(-T) A^(-1); written out in the intrinsic parameters (with s = 0, as assumed above), its entries are:
B11 = 1/fx²,  B12 = 0,  B13 = −cx/fx²,
B22 = 1/fy²,  B23 = −cy/fy²,
B33 = cx²/fx² + cy²/fy² + 1 (10)
in practice, the matrix B is symmetric and has the general form:
    [ B11  B12  B13 ]
B = [ B12  B22  B23 ]  (11)
    [ B13  B23  B33 ]
substituting the matrix B into the two basic constraint conditions gives the general constraint form
hi^T B hj = vij^T b (12)
Considering the symmetry of the matrix B, its elements are rearranged into a new vector b:
b = [B11 B12 B22 B13 B23 B33]^T
with
vij = [hi1·hj1, hi1·hj2 + hi2·hj1, hi2·hj2, hi3·hj1 + hi1·hj3, hi3·hj2 + hi2·hj3, hi3·hj3]^T
the two basic constraints can be written as:
[ v12^T ; (v11 − v22)^T ] b = 0 (13)
if there are K checkerboard images, stacking these equations gives:
Vb=0 (14)
where V is a 2K×6 matrix; b is obtained (up to scale) as the eigenvector of V^T V with the smallest eigenvalue, and the intrinsic parameters follow from the closed solution of the matrix B:
cy = (B12·B13 − B11·B23)/(B11·B22 − B12²)
λ = B33 − [B13² + cy·(B12·B13 − B11·B23)]/B11
fx = sqrt(λ/B11)
fy = sqrt(λ·B11/(B11·B22 − B12²)) (15)
s = −B12·fx²·fy/λ
cx = s·cy/fy − B13·fx²/λ
The rotation and translation are obtained by:
r1 = λ A^(-1) h1,  r2 = λ A^(-1) h2,  r3 = r1 × r2,  t = λ A^(-1) h3,  with λ = 1/||A^(-1) h1|| (16)
In the practical solution, take R = [r1 r2 r3]; performing singular value decomposition on the matrix R yields a diagonal matrix D and two orthogonal matrices U and V; setting the diagonal matrix to the identity, R = UDV^T becomes R = UV^T, the nearest true rotation matrix; thereby the intrinsic matrix M and the distortion matrix P are obtained.
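The SVD orthogonalization step can be sketched as follows: replacing the diagonal matrix D of the decomposition by the identity projects an estimated (noisy) rotation onto the nearest orthogonal matrix. The test rotation and noise level are illustrative only:

```python
import numpy as np

def nearest_rotation(R_est):
    """Project a noisy 3x3 matrix onto the closest rotation via SVD,
    i.e. replace the diagonal matrix D of R_est = U D V^T by the identity."""
    U, _, Vt = np.linalg.svd(R_est)
    if np.linalg.det(U @ Vt) < 0:   # guard against an improper reflection
        U[:, -1] *= -1
    return U @ Vt

# perturb a 90-degree rotation about z and recover an exact rotation
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
noisy = Rz + 0.01 * np.random.default_rng(0).standard_normal((3, 3))
R = nearest_rotation(noisy)
print(np.allclose(R @ R.T, np.eye(3)))   # → True
```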
Wherein
    [ fx  s   cx ]
M = [ 0   fy  cy ]
    [ 0   0   1  ]
P = [k1 k2 p1 p2 k3],
where [k1 k2 k3] are the radial distortion coefficients and [p1 p2] are the tangential distortion coefficients.
Combined with the pinhole model, the distortion parameters are used to correct each distorted point (xd, yd) to the correct point (xp, yp), as follows:
xp = xd·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·xd·yd + p2·(r² + 2·xd²)
yp = yd·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2·yd²) + 2·p2·xd·yd (17)
where r² = xd² + yd².
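A minimal sketch of evaluating the radial-plus-tangential distortion polynomial in plain Python; note that inverting the model (recovering undistorted from distorted coordinates) is in practice done iteratively, which is omitted here:

```python
def apply_distortion_model(x, y, k1, k2, k3, p1, p2):
    """Evaluate the radial + tangential distortion polynomial for
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_out = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_out = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_out, y_out

# with all coefficients zero the mapping is the identity
print(apply_distortion_model(0.3, -0.2, 0, 0, 0, 0, 0))
```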
in the calibration procedure, the distortion parameters P and the intrinsic elements M of the camera are calculated, the projection points are back-projected from the spatial three-dimensional points, and the calibration reprojection error is V = 0.1708 pixel.
Figure GDA0001538564450000113
P=[0.987627 -40.8725 0.00451 0.01172 816.1225] (19)
Secondly, distortion correction and filtering processing of the images;
the close-range image data and the corresponding laser point cloud data of the measured object are acquired as shown in fig. 4, and one of the three-dimensional objects is shown in fig. 5; distortion correction of the calibration image with the calibrated distortion parameters is shown in fig. 3, and all close-range images are distortion-corrected in the same way; image filtering preprocessing is then performed with a Wallis filter, as shown in fig. 6, wherein the Wallis filter has the form:
gc(x,y) = g(x,y)·(c·sf)/(c·sg + sf/c) + b·mf + (1−b)·mg (20)
the Wallis filter can also be expressed as
gc(x,y)=g(x,y)·r1+r0 (21)
r1=(c·sf)/(c·sg+sf/c) (22)
r0=b·mf+(1−b)·mg (23)
Wherein r1 is the multiplicative coefficient and r0 the additive coefficient; c is the image contrast expansion constant; b is the image brightness coefficient; mg is the gray mean of a neighborhood of a pixel in the image; mf is the target value of the image mean; sg is the gray standard deviation of a neighborhood of a pixel in the image; and sf is the target value of the image standard deviation.
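A direct, unoptimized sketch of the Wallis filter using the r1 and r0 of equations (22)-(23); the common mean-removed variant gc = (g − mg)·r1 + r0 is used here, and the window size and target values are illustrative:

```python
import numpy as np

def wallis_filter(img, win, mf, sf, c=0.8, b=0.9):
    """Wallis filter: map each pixel's local mean mg and standard deviation
    sg toward the targets mf, sf with the coefficients r1, r0 of
    equations (22)-(23), applied to the mean-removed gray value."""
    g = img.astype(np.float64)
    out = np.empty_like(g)
    h, w = g.shape
    r = win // 2
    for i in range(h):
        for j in range(w):
            patch = g[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            mg, sg = patch.mean(), patch.std()
            r1 = (c * sf) / (c * sg + sf / c)   # multiplicative coefficient
            r0 = b * mf + (1.0 - b) * mg        # additive coefficient
            out[i, j] = (g[i, j] - mg) * r1 + r0
    return out

flat = np.full((8, 8), 50.0)                    # zero-contrast test image
res = wallis_filter(flat, win=5, mf=127.0, sf=60.0)
print(np.allclose(res, 0.9 * 127.0 + 0.1 * 50.0))   # → True
```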
Step three, matching the homonymous points among the sequence images:
(1) the result of the SIFT initial matching is shown in fig. 7. As can be seen from the partially enlarged view of fig. 8, the matching points contain a large number of mismatches; therefore the accuracy of the homonymy points is improved by applying, stage by stage, the bidirectional consistency constraint, the RANSAC random sample consensus constraint and the affine transformation constraint;
(2) after a high-quality matching set is obtained through the multiple constraints, the homonymy points are further precisely positioned by least squares matching, and the points so obtained are taken as the homonymy points for image orientation. The result is shown in fig. 9; fig. 10 is a partially enlarged view of the matching, from which it can be seen that the matched homonymy points are accurately located and of high precision.
Step four, orienting the sequence images, and determining the position and attitude parameters of each image:
s1, obtaining the homonymy points between a stereo pair by SIFT + least squares matching; combining the matched homonymy points, directly solving a strict solution through relative orientation to accurately obtain the relative position and attitude parameters of the initial pair, specifically:
the photographic baseline S1S2 and the homonymous rays S1M, S2M are coplanar, i.e. satisfy the coplanarity condition, whose mathematical model is
B·(R1×R2)=0 (24)
wherein the vector B represents the photographic baseline S1S2; R1 and R2 respectively represent the homonymous rays S1M and S2M.
The coplanarity condition equation is expressed in coordinate form; its matrix (determinant) form is:
| Bx  By  Bz |
| X1  Y1  Z1 | = 0 (25)
| X2  Y2  Z2 |
wherein [X1 Y1 Z1]^T and [X2 Y2 Z2]^T are the coordinates of the image points in the image-space auxiliary coordinate systems, which can be expressed as:
[X1 Y1 Z1]^T = R_left·[x1 y1 −f]^T,  [X2 Y2 Z2]^T = R_right·[x2 y2 −f]^T (26)
wherein R_left is the matrix formed by the left-image rotation parameters a1 a2 a3 b1 b2 b3 c1 c2 c3, and R_right is the matrix formed by the right-image rotation parameters a'1 a'2 a'3 b'1 b'2 b'3 c'1 c'2 c'3.
Usually, if the image-space auxiliary coordinate system of the left image is made to coincide with its image-space coordinate system, then (26) can be expressed as:
[X1 Y1 Z1]^T = [x1 y1 −f]^T (27)
substituting equations (26) and (27) into the coplanarity condition equation (25) and expanding for the homonymy points gives:
L1·y1·x2 + L2·y1·y2 − L3·y1·f + L4·f·x2 + L5·f·y2 − L6·f² + L7·x1·x2 + L8·x1·y2 − L9·x1·f = 0 (28)
dividing both sides of equation (28) by L5 and rearranging, the result is as follows:
Figure GDA0001538564450000132
wherein
Figure GDA0001538564450000133
After solving the unknowns of equation (29), the value of Bx is given, and the required orientation parameters can then be obtained, expressed as follows:
Figure GDA0001538564450000134
in the formula (30), the value of L5 is
Figure GDA0001538564450000135
Owing to the square root, L5 can take positive or negative values; as shown in formula (30), the value of L5 affects the 9 rotation parameters, so two different sets of position and attitude angles are calculated. Considering that homonymous rays are formed by different camera stations acquiring image information of the same object point, the angular elements of the right image should satisfy the following condition:
Figure GDA0001538564450000136
in order to obtain orientation parameters of good accuracy and high precision, on the basis of the direct solution of relative orientation, the parameters of the direct solution are used as initial values and an iterative solution is carried out with the rigorous relative orientation formula, expressed as follows:
Figure GDA0001538564450000141
wherein Q is the vertical (up-down) parallax,
Figure GDA0001538564450000142
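The coplanarity condition of equation (24), on which the relative orientation above rests, reduces to a scalar triple product and is easy to check numerically; the baseline and rays below are toy values:

```python
import numpy as np

def coplanarity_residual(B, R1, R2):
    """Scalar triple product B . (R1 x R2) of equation (24); it vanishes
    when the baseline and the two same-name rays are coplanar."""
    return float(np.dot(B, np.cross(R1, R2)))

# toy values: an x-axis baseline and two rays lying in the x-z plane
B = np.array([1.0, 0.0, 0.0])
R1 = np.array([0.2, 0.0, 1.0])
R2 = np.array([-0.3, 0.0, 1.0])
print(coplanarity_residual(B, R1, R2))   # → 0.0
```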
s2, establishing an index relationship between the two-dimensional homonymy points of the initial stereo pair and the three-dimensional points obtained by its forward intersection, and determining, from the homonymy points shared by the second and third images, the three-dimensional point set P of the initial stereo pair and the corresponding homonymous two-dimensional point set p on the third image. Using the extracted three-dimensional point set P and the homonymous two-dimensional point set p, space resection based on the collinearity equation with an improved Danish weight-selection iteration method is performed on the right image of the second pair to obtain its spatial position and attitude; 2D-3D index establishment, 2D-3D homonymous control point determination and subsequent image orientation are performed on the images of the following pairs in the same way, so as to obtain the initial values of the position and attitude of each image. The result of the initial sparse reconstruction is shown in fig. 11, and the rotation, scaling and misalignment errors of the reconstructed result are shown in fig. 12.
And s3, taking the result of the initial orientation as the initial value of the bundle adjustment, performing free-net bundle adjustment with the collinearity equation as the mathematical model, and quickly solving the orientation parameters and the three-dimensional points corresponding to the SIFT + least squares matching points by means of the Levenberg-Marquardt (LM) algorithm. Specifically:
the collinearity equation is used as the mathematical model in the bundle adjustment, expressed as follows:
x = −f·[a1(X−X0)+b1(Y−Y0)+c1(Z−Z0)] / [a3(X−X0)+b3(Y−Y0)+c3(Z−Z0)]
y = −f·[a2(X−X0)+b2(Y−Y0)+c2(Z−Z0)] / [a3(X−X0)+b3(Y−Y0)+c3(Z−Z0)] (33)
with the image-space auxiliary coordinate system of the first image of the shooting sequence taken as the reference, wherein (x, y) are the image point coordinates in the image-plane rectangular coordinate system; (X0, Y0, Z0) are the spatial coordinates of the photographing center; f is the focal length of the camera; (X, Y, Z) are the spatial three-dimensional point coordinates; and ak, bk, ck (k = 1,2,3) are the direction cosines formed by the angular elements φ, ω, κ of the image.
For a plurality of images, according to the sequence of the sequential images, the initial values of the position and the posture of each image are sequentially expressed as follows:
Figure GDA0001538564450000151
taking p = [ΔX, ΔY, ΔZ] as the increments of the reconstructed point coordinates (Xj, Yj, Zj), together with the increments of the exterior orientation elements ei, the collinearity equation (33) is linearized by Taylor expansion and the error equation is established, in the form:
[vx vy]^T = A·Δei + B·Δp − [lx ly]^T
wherein vx, vy are the corrections; lx, ly are the constant terms computed from the approximate values of x and y; A = (aij) (i = 1,2; j = 1,...,6) is the coefficient matrix with respect to the exterior orientation elements, and B the coefficient matrix with respect to the point coordinate increments. The above formula can be further written as
Figure GDA0001538564450000154
Figure GDA0001538564450000155
Can be simplified as follows:
Figure GDA0001538564450000156
wherein
Figure GDA0001538564450000157
Figure GDA0001538564450000158
On the basis of the traditional light beam adjustment, the Levenberg-Marquardt (LM) algorithm is used for resolving the position, the attitude and the three-dimensional point coordinates of the image.
The main problem solved by the bundle adjustment is to calculate accurate orientation parameters and the three-dimensional points corresponding to the SIFT + least squares matching points; the accuracy of the adjustment directly determines the error between the observed and estimated image points. Assume that n spatial points are projected onto m images, and let xij denote the projection of the ith spatial point onto the jth image.
The error magnitude V can be represented by:
V = Σ(i=1..n) Σ(j=1..m) cij · d(xij, x̂ij)²
wherein cij is an indicator variable, generally cij = 1 when point i is visible on image j; d(x, y) denotes the distance between the observation point x and the back-projected estimate y; bundle adjustment is the process of minimizing the error V, during which the orientation parameters and the three-dimensional point coordinates are continuously corrected, so the error magnitude V is used as the criterion of adjustment accuracy.
The parameters aj and bi in the above formula are unified into the set P, and the observations xij are integrated into the observation vector X, represented as:
Figure GDA0001538564450000162
wherein the components of X are the coordinates of the spatial points i in the image-space rectangular coordinate systems of the images.
Take P0 as the initial parameters and Σx as the covariance matrix; without any prior condition the covariance matrix is taken as the identity matrix, and the observation vector X estimated from P0 can be represented as follows:
Figure GDA0001538564450000164
wherein
Figure GDA0001538564450000165
Based on the least squares principle, the bundle adjustment is converted into the minimization of the Mahalanobis distance shown in formula (41), and according to the LM algorithm the normal equation can be written as formula (42):
min (X − X̂)^T Σx^(-1) (X − X̂) (41)
J^T Σx^(-1) J δ = J^T Σx^(-1) ε (42)
Where J is the Jacobian matrix and δ the search step. Owing to the mutual independence of the projection matrices, equation (42) has a sparse structure of high computational efficiency, so the orientation parameters and the three-dimensional points corresponding to the SIFT matching points can be solved rapidly. The free-net bundle adjustment correction of the initial orientation results is shown in fig. 13.
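One damped step of the Levenberg-Marquardt normal equation (42) can be sketched as follows, with the covariance taken as the identity and a dense solve standing in for the sparse structure mentioned above; the one-parameter line-fit data are illustrative:

```python
import numpy as np

def lm_step(J, eps, lam):
    """One Levenberg-Marquardt update: solve the damped normal equations
    (J^T J + lam*I) delta = J^T eps for the step delta (identity covariance)."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ eps)

# fit y = a*x to noiseless data: one undamped step from a = 0 lands on a = 2
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
a0 = 0.0
J = x[:, None]          # Jacobian of the model a*x with respect to a
eps = y - a0 * x        # residual at the current estimate
a1 = a0 + lm_step(J, eps, 0.0)[0]
print(a1)   # → 2.0
```

A positive damping factor lam shortens the step toward gradient descent when the quadratic model is unreliable, which is the usual LM trust strategy.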
Step five, dense matching of stereo pairs in the sequence images:
i. dividing the image into grids with a certain step length, extracting and calculating the interest value of each pixel within the window according to the Harris feature, and taking the extreme point of the interest value in each window as a feature point, so as to establish uniformly distributed feature points on the left image of the stereo pair; specifically, the interest value is
I = det(N) − k·tr²(N)
wherein N is the structure tensor built from the image gradients gx, gy smoothed over the window, and k is an empirical constant.
After the uniformly distributed Harris feature points are established, the Förstner operator is applied to precisely position the extracted feature points. The Förstner positioning criterion is:
w = det(N)/tr(N),  q = 4·det(N)/tr²(N)
wherein w is the interest weight and q the roundness of the error ellipse.
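A bare-bones sketch of a Harris-style interest value per pixel (gradient structure tensor without the usual Gaussian smoothing, and with an assumed constant k = 0.04); the Förstner refinement is omitted:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris interest value det(N) - k*tr(N)^2, with the
    structure tensor N built from image gradients (no smoothing here)."""
    gy, gx = np.gradient(img.astype(np.float64))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace

# a white square on a black background produces responses near its corners
img = np.zeros((16, 16))
img[5:11, 5:11] = 1.0
resp = harris_response(img)
print(resp.shape)   # → (16, 16)
```

In the grid scheme of step i, the maximum of this response inside each grid cell would be kept as that cell's feature point.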
ii. calculating the homography matrix parameters from the high-precision SIFT + least squares matched homonymy points obtained during image orientation; transforming the Harris + Förstner feature points on the left image to the right image by the homography according to the one-to-one correspondence, and taking the transformed points as coarse value points of the corresponding homonymy points; further narrowing the matching search range by the one-dimensional search-and-match method in which homonymy points must lie on the homonymous epipolar lines; within the search range, first determining the initial position of each homonymy point by correlation coefficient matching, and then precisely positioning it by least squares matching, so that uniformly distributed homonymy points are obtained in the stereo pair. The result is shown in fig. 14. Specifically:
the homography matrix describes the mapping relationship between the homonymy points of an image pair; the homography matrix parameters are calculated from the high-precision SIFT + least squares matched homonymy points of image orientation, with the mathematical model expressed as:
s·[x2 y2 1]^T = H·[x1 y1 1]^T (45)
wherein (x1, y1) and (x2, y2) are the pixel coordinates of the point on the left and right images; H is the homography matrix; s is the scale factor between the coordinates.
After image orientation is complete, the epipolar geometric relationship between the image pair is determined according to the coplanarity condition equation, as shown below:
A·x2 + B·y2 + C = 0 (46)
wherein the epipolar line coefficients A, B, C are determined from the orientation elements and the left-image point (x1, y1), and have already been solved during orientation; given x2 on the right image, the corresponding y2 on the epipolar line can be calculated from equation (46).
The search range corresponding to each feature point of the left image is found on the right image, and the homonymy point is determined by correlation coefficient and least squares matching: firstly, the pixel with the maximum correlation coefficient is found within the search range above and below the epipolar line, and screened against a correlation coefficient threshold to decide whether it is the homonymy point. The correlation coefficient can be expressed as:
ρ = ΣΣ (g(i,j) − ḡ)·(g'(i+r,j+c) − ḡ') / sqrt( ΣΣ (g(i,j) − ḡ)² · ΣΣ (g'(i+r,j+c) − ḡ')² )
wherein m and n are the size of the search window, generally m = n; g(i,j) is the pixel gray value in the m×n target window; g'(i+r,j+c) is the pixel gray value in the search window; ḡ = (1/(m·n))·ΣΣ g(i,j) is the gray mean of the target window; and ḡ' = (1/(m·n))·ΣΣ g'(i+r,j+c) is the gray mean of the search window;
in order to improve the accuracy of the correlation coefficient matching, the correlation coefficients of the three primary color channels R, G, B are considered here, and the final similarity Pend is determined by their mean:
Pend = (ρR + ρG + ρB)/3
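The correlation-coefficient similarity averaged over the R, G, B channels can be sketched as follows; the patch size and the random test patch are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient of two equal-size gray patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def rgb_similarity(patch_a, patch_b):
    """Final similarity: mean of the NCC over the R, G, B channels."""
    return sum(ncc(patch_a[..., c], patch_b[..., c]) for c in range(3)) / 3.0

rng = np.random.default_rng(1)
p = rng.random((7, 7, 3))       # hypothetical 7x7 RGB patch
print(round(rgb_similarity(p, p), 6))   # identical patches score 1.0
```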
iii. the Harris + Förstner feature matching yields uniformly distributed homonymy points on the left and right images; a Delaunay triangulation network is constructed based on the homonymy points of the left image, and a corresponding triangulation network is constructed on the right image according to the correspondence between the homonymy points, wherein the triangles formed by homonymy points are called homonymous triangles; the homonymous triangles constructed for a stereo pair are shown in fig. 15;
iv. taking each triangle on the left image as a matching unit, interpolating the barycenter point of the triangle; with the corresponding homonymous triangle and the epipolar geometry between the image pair as constraint conditions, searching for and matching the homonymy point within the corresponding homonymous triangle on the right image, and continuously updating the triangulation network; according to the characteristics of the Delaunay triangulation network, the triangle side length is taken as the barycenter interpolation condition and a side length threshold is set, determined by the density of the finally desired three-dimensional points: when the side length of a triangle is less than the threshold, the triangle is not interpolated. The dense matching result is shown in fig. 16. Specifically:
firstly, the area of each triangle in the Delaunay triangulation network constructed from the left-image Harris + Förstner matches is calculated and judged against a set area threshold; the area S is calculated as
S = ½·|x1·(y2 − y3) + x2·(y3 − y1) + x3·(y1 − y2)|
wherein (x1, y1), (x2, y2), (x3, y3) are the pixel coordinates of the homonymy points.
If the area of a triangle matching unit is larger than the area threshold, the barycenter point of the triangle is interpolated, its coordinates being
xg = (x1 + x2 + x3)/3,  yg = (y1 + y2 + y3)/3
According to the epipolar geometric relationship of the stereo pair in the orientation result, the epipolar line on the right image corresponding to the interpolated barycenter point is calculated; the line equations of the three sides of the corresponding homonymous triangle are computed and their intersections with the epipolar line are solved; the segment of the epipolar line inside the homonymous triangle is determined from the two intersections; the initial match is obtained by correlation coefficient matching along this epipolar segment within the homonymous triangle, and the position of the homonymy point is then precisely located by least squares matching.
The barycenter in the left-image triangle and the homonymy point found by search and matching in the right-image homonymous triangle each split their triangle into three triangles; the triangulation network is updated and stored according to the corresponding point relationships, the split triangles remaining homonymous triangles; the above steps are then repeated.
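The barycenter interpolation loop can be sketched with the area criterion described above: a triangle larger than the area threshold is split at its barycenter into three children, which are processed in turn. The patent additionally matches each barycenter on the right image and keeps a full triangulation network; this sketch shows only the left-image subdivision:

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Triangle area from the three vertex pixel coordinates."""
    return 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))

def densify(tri, area_thresh):
    """Recursively interpolate barycenters: a triangle whose area exceeds
    the threshold is split at its barycenter into three children."""
    work, inserted = [np.asarray(tri, dtype=float)], []
    while work:
        t = work.pop()
        if triangle_area(*t) <= area_thresh:
            continue
        c = t.mean(axis=0)                      # barycenter of the triangle
        inserted.append(c)
        for i in range(3):
            work.append(np.array([t[i], t[(i + 1) % 3], c]))
    return inserted

pts = densify([(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)], area_thresh=4.0)
print(len(pts))   # → 4 (root barycenter plus one per first-level child)
```

Each split divides the area by three, so the recursion terminates once every triangle drops below the threshold, which is what makes the area (or side-length) criterion a clean stopping rule.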
Step six, matching the edge information of the stereo image pair:
the Canny operator is used to extract the image edge information, i.e. the edge features of the object in the image (an edge point may lie inside a triangle, coincide with a triangle vertex, or intersect a triangle side). As shown in fig. 17, the Delaunay triangulation network constructed from the densely matched seed points is used as the judgment condition: for edge points outside the triangulation network, the search range is determined from the homography-transformed coarse value points and the epipolar geometry; for edge points inside the triangulation network, the search range is determined from the homonymous triangle in which the edge point is located and the epipolar geometry; the high-precision homonymy points are then accurately positioned within the determined search range by correlation coefficient and least squares matching. The edge information matching result of the stereo pair is shown in fig. 18.
Step seven, fine reconstruction of sequence images:
high-density homonymy points are obtained by the dense matching method, and high-precision edge homonymy points are obtained by edge information matching; using the orientation parameters of the images, edge reconstruction and dense point reconstruction are performed on the edge homonymy points and the high-density homonymy points of the stereo pairs in the sequence images; the corresponding edge point clouds are fused onto the dense point clouds, finally yielding a finely reconstructed sequence-image point cloud with rich detail and high point density. The result is shown in fig. 21; figs. 19 and 20 show the edge reconstruction result of one stereo pair and the fine reconstruction result of blending the edge reconstruction with the dense point reconstruction.
Step eight, image point cloud absolute orientation
Taking the scanning point cloud of the articulated arm as the reference, control points are selected; the image-space rectangular coordinate system of the image point cloud is converted into the local scanning coordinate system of the articulated arm, and the absolute orientation of the image point cloud is realized through the spatial similarity transformation between the two coordinate systems, so that the image point cloud has actual size and scale on the basis of its fine detail information. Specifically:
for any point of the image point cloud, let its image-space rectangular coordinates be (XP, YP, ZP) and its coordinates in the local scanning coordinate system of the articulated arm be (Xtp, Ytp, Ztp); the spatial similarity transformation model is
[Xtp Ytp Ztp]^T = [ΔX ΔY ΔZ]^T + λ·R·[XP YP ZP]^T
wherein ΔX, ΔY, ΔZ are the translation parameters; R, with elements ai, bi, ci, is the rotation matrix; and λ is the scale factor from the image-space rectangular coordinate system to the local scanning coordinate system of the articulated arm.
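Applying the spatial similarity transformation above (translation, rotation and scale) to an N×3 point array is a one-liner in NumPy; the rotation, scale and translation below are toy values, and estimating the seven parameters from control points would be a separate absolute-orientation step:

```python
import numpy as np

def similarity_transform(points, scale, R, t):
    """Spatial similarity transformation X_arm = t + scale * R @ X_image,
    applied row-wise to an Nx3 point array."""
    return t + scale * (points @ R.T)

# toy parameters: 90-degree rotation about z, scale 2, unit shift in x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
p = np.array([[1.0, 0.0, 0.0]])
out = similarity_transform(p, 2.0, R, np.array([1.0, 0.0, 0.0]))
print(out)   # → [[1. 2. 0.]]
```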
The number of modules and the processing scale described herein are intended to simplify the description of the invention; applications, modifications and variations of the triangulation-interpolation-and-constraint-based fine three-dimensional reconstruction method for cultural relic sequence images of the present invention will be apparent to those skilled in the art.
While embodiments of the invention have been described above, it is not limited to the applications set forth in the description and the embodiments, which are fully applicable in various fields of endeavor to which the invention pertains, and further modifications may readily be made by those skilled in the art, it being understood that the invention is not limited to the details shown and described herein without departing from the general concept defined by the appended claims and their equivalents.

Claims (6)

1. A cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint is characterized by comprising the following steps:
firstly, acquiring a sequence image of a cultural relic by using a camera;
secondly, carrying out homonymy point matching among the sequence images to obtain homonymy points for orientation of the sequence images;
thirdly, firstly solving with the homonymy points to obtain the orientation parameters of each image, and then, taking the image orientation parameters as initial values, obtaining the position and posture parameters of each image through bundle adjustment;
step four, dense matching of stereo pairs in the sequence images: in a stereo pair, extracting feature points on the left image by grid-based Harris extraction and precisely positioning them with the Förstner operator; searching for and matching the corresponding homonymy points on the right image to establish uniformly distributed homonymy points; constructing homonymous Delaunay triangulation networks with the uniformly distributed homonymy points as seed points; and continuously interpolating and matching the barycenter points of the triangles so as to continuously increase the matched homonymy points and obtain the high-density homonymy points of the cultural relic image;
fifthly, according to the position and posture parameters of each image obtained in the third step, carrying out dense point reconstruction on the high-density homonymous points of the cultural relic image to construct a fine point cloud of the cultural relic image;
taking the three-dimensional point cloud of the cultural relic as a reference, and realizing absolute orientation of the fine point cloud of the cultural relic image through space similarity transformation between coordinate systems, so that the fine point cloud of the cultural relic image has actual size and scale, and a fine three-dimensional model of the cultural relic sequence image is obtained;
the fourth step specifically comprises the following steps:
4.1) carrying out grid division on each image according to a certain step length, extracting and calculating interest values of all pixel points in a window according to Harris characteristics, taking extreme points of the interest values in the window as characteristic points, applying Fostner operators to carry out accurate positioning, and establishing uniformly distributed characteristic points on a left image of a stereopair;
4.2) firstly, utilizing the homonymy points between the stereo pairs in image orientation to calculate homonymy matrix parameters, homonymy transforming the characteristic points on the left image in the stereo pair to the right image according to the one-to-one correspondence relationship, taking the homonymy transformation points as the rough value points of the homonymy points corresponding to the characteristic points, then utilizing the geometric relationship of epipolar lines, utilizing the one-dimensional searching and matching method of the homonymy points on the homonymy epipolar lines to further narrow the matched searching range, firstly utilizing the correlation coefficient matching to determine the initial positions of the homonymy points in the searching range, and then precisely positioning the homonymy points through least square matching to obtain the uniformly distributed homonymy points in the stereo pair;
4.3) constructing a Delaunay triangulation network on the basis of the feature points on the left image, and constructing a corresponding triangulation network on the right image according to the correspondence between the homonymy points, wherein a triangle formed by homonymy points is called a homonymous triangle;
4.4) taking the triangulation network on the left image as the matching unit, interpolating the gravity center point of each triangle, searching and matching its homonymy point within the corresponding homonymous triangle on the right image under the constraints of the homonymous triangle and the epipolar geometry between the stereopair, and continuously updating the triangulation network; according to the characteristics of the Delaunay triangulation network, taking the triangle side length as the gravity-center interpolation condition and setting a side length threshold value, a triangle no longer being interpolated when its side lengths are smaller than the threshold value, so as to obtain the high-density homonymy points of the cultural relic image;
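The Harris interest value of step 4.1 can be sketched on a toy image as follows. This is an illustrative example under stated assumptions, not the patent's implementation: gradients are central differences, the structure tensor is summed over a small window, and the Förstner sub-pixel refinement is omitted; all names are hypothetical.

```python
def harris_response(img, x, y, win=1, k=0.04):
    """Harris interest value at (x, y): gradient products summed over a
    window build the structure tensor M, then R = det(M) - k*trace(M)^2."""
    sxx = sxy = syy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0   # central differences
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# Synthetic 7x7 image with a step corner at (3, 3): ones in the
# bottom-right quadrant, zeros elsewhere.
N = 7
img = [[1.0 if (i >= 3 and j >= 3) else 0.0 for i in range(N)] for j in range(N)]

# The extreme point of the interest value inside the window is the feature.
best = max((harris_response(img, i, j), (i, j))
           for j in range(2, N - 2) for i in range(2, N - 2))
```

As expected, the maximum response lands on the corner of the step edge, which is what makes the windowed extremum a usable feature point.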
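Step 4.2's two-stage search, a homography-transformed coarse value point followed by a one-dimensional correlation search along the epipolar direction, can be sketched like this. The example assumes a pure-translation homography and a horizontal epipolar line; the signals and thresholds are toy values, not the patent's data.

```python
def apply_homography(H, pt):
    """Project a left-image point through homography H to obtain the coarse
    value point on the right image (homogeneous normalisation)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def ncc(a, b):
    """Normalised cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

# Pure-translation homography: shift by (+4, 0).
H = [[1.0, 0.0, 4.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coarse = apply_homography(H, (10.0, 5.0))        # coarse value point (14, 5)

# 1D correlation search along the (horizontal) epipolar row near the coarse
# value; the best NCC score gives the initial homonymy point position.
template = [1.0, 5.0, 9.0, 5.0, 1.0]
row = [0.0] * 30
row[12:17] = template                            # true match centred at x = 14
best_x = max(range(int(coarse[0]) - 3, int(coarse[0]) + 4),
             key=lambda x: ncc(template, row[x - 2:x + 3]))
```

The coarse point keeps the one-dimensional search window short; in the full method the NCC maximum would then be refined to sub-pixel accuracy by least squares matching.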
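The side-length-thresholded gravity-center interpolation of step 4.4 can be sketched as follows. This is a minimal illustration, assuming triangles are given as vertex lists; it shows only one densification pass and omits the epipolar-constrained matching of each interpolated point.

```python
import math

def side_lengths(tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    return (math.hypot(bx - ax, by - ay),
            math.hypot(cx - bx, cy - by),
            math.hypot(ax - cx, ay - cy))

def centroid(tri):
    return (sum(p[0] for p in tri) / 3.0, sum(p[1] for p in tri) / 3.0)

def densify(triangles, threshold):
    """Interpolate the gravity center of every triangle whose longest side
    still exceeds the threshold; triangles already smaller than the
    threshold are skipped, which terminates the densification."""
    new_points = []
    for tri in triangles:
        if max(side_lengths(tri)) > threshold:
            new_points.append(centroid(tri))
    return new_points

big = [(0.0, 0.0), (9.0, 0.0), (0.0, 9.0)]       # still coarse: interpolate
small = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]     # below threshold: stop
added = densify([big, small], threshold=2.0)
```

In the full method each new centroid is matched in the corresponding homonymous triangle on the right image, the triangulation is rebuilt, and the pass repeats until no triangle exceeds the threshold.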
after the step four, before the step five, the method further comprises:
a, extracting image edge information by using a Canny operator; taking the Delaunay triangulation network as the judgment condition, for edge points outside the triangulation network, determining a search range by using the homography-transformed coarse value points and the epipolar geometry, and for edge points inside the triangulation network, determining a search range by using the homonymous triangle in which the edge point is located and the epipolar geometry; and accurately positioning the homonymy points of the edges of the cultural relic image by correlation coefficient and least squares matching within the determined search range;
and then, in the fifth step, according to the position and posture parameters of each image, carrying out dense point reconstruction on the high-density homonymy points of the cultural relic image and edge reconstruction on the edge homonymy points of the cultural relic image respectively, and fusing the corresponding edge point cloud with the dense point cloud to obtain the fine point cloud of the cultural relic image.
2. The method for fine three-dimensional reconstruction of a cultural relic sequence image based on triangulation network interpolation and constraint as claimed in claim 1, wherein the third step comprises the following steps:
3.1) obtaining the position and attitude parameters of the initial stereopair from the homonymy points obtained in the second step by combining the direct solution and the rigorous solution of relative orientation, and simultaneously calculating the three-dimensional coordinates of the homonymy points by forward intersection;
3.2) establishing an index relationship between the two-dimensional homonymy points of the initial stereopair and the three-dimensional points obtained by its forward intersection; according to the homonymy points among the second and third images, determining the homonymous two-dimensional point set p on the third image corresponding to the three-dimensional point set P of the initial stereopair; performing space resection on the right image of the second stereopair with the extracted three-dimensional point set P and the homonymous two-dimensional point set p, using an improved Danish-method weight-selection iteration based on the collinearity equation, to obtain the spatial position and posture of that image; and then, by the same method, establishing the 2D-3D index, determining the 2D-3D homonymy points and orienting the images of each subsequent stereopair, so as to obtain initial position and posture values of each image;
and 3.3) taking the initial position and attitude values of each image as initial values for bundle adjustment, performing free-network bundle adjustment with the collinearity equation as the mathematical model, and solving the orientation parameters and the three-dimensional points corresponding to the SIFT + least squares matching points by the Levenberg-Marquardt (LM) algorithm, to obtain the position and posture parameters of each image.
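The forward intersection of step 3.1 can be sketched as the classic closest-point (midpoint) construction between two image rays. This is an illustrative example, not the patent's rigorous photogrammetric solution: cameras are reduced to a centre and a ray direction, and the toy rays intersect exactly.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_intersect(c1, d1, c2, d2):
    """Intersect two image rays: find the closest points on ray c1 + s*d1
    and ray c2 + t*d2, and return their midpoint as the 3D point."""
    w0 = tuple(p - q for p, q in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(p1, p2))

# Two cameras 10 units apart, rays converging at (5, 0, 5).
point = forward_intersect((0.0, 0.0, 0.0), (1.0, 0.0, 1.0),
                          (10.0, 0.0, 0.0), (-1.0, 0.0, 1.0))
```

With noisy orientations the two rays are skew, and the midpoint of their common perpendicular is the usual intersection estimate.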
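The Levenberg-Marquardt iteration used in step 3.3 can be sketched on a toy two-parameter problem. This is a minimal sketch of the damping scheme only, not a bundle adjustment: the model is y = a*exp(b*x) instead of the collinearity equations, and the data are noise-free hypothetical values.

```python
import math

def lm_fit(xs, ys, a0, b0, iters=100):
    """Levenberg-Marquardt for y = a * exp(b * x): solve the damped normal
    equations (J^T J + lam*I) delta = J^T r, decreasing lam after an
    accepted step and increasing it after a rejected one."""
    a, b, lam = a0, b0, 1e-3
    def cost(a, b):
        return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for x, y in zip(xs, ys):
            m = math.exp(b * x)
            r = y - a * m
            ja, jb = m, a * x * m            # d(model)/da, d(model)/db
            jtj[0][0] += ja * ja; jtj[0][1] += ja * jb
            jtj[1][0] += ja * jb; jtj[1][1] += jb * jb
            jtr[0] += ja * r;     jtr[1] += jb * r
        m00, m11 = jtj[0][0] + lam, jtj[1][1] + lam   # damped 2x2 solve
        det = m00 * m11 - jtj[0][1] * jtj[1][0]
        da = (jtr[0] * m11 - jtj[0][1] * jtr[1]) / det
        db = (m00 * jtr[1] - jtj[1][0] * jtr[0]) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam * 0.5     # accept, relax damping
        else:
            lam *= 10.0                               # reject, damp harder
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]            # true a=2.0, b=0.7
a_est, b_est = lm_fit(xs, ys, a0=1.5, b0=0.5)
```

In the real adjustment the parameter vector holds every image's pose and every 3D point, and the sparse normal matrix is exploited, but the accept/reject damping logic is the same.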
3. The method for fine three-dimensional reconstruction of a cultural relic sequence image based on triangulation network interpolation and constraint as claimed in claim 1, wherein the second step comprises the following steps:
2.1) after homonymy point matching between the sequence images, sequentially applying a bidirectional consistency constraint, a RANSAC random sample consensus constraint and an affine transformation constraint, step by step and item by item, so as to improve the accuracy of the homonymy points and obtain a high-quality homonymy point set;
and 2.2) further accurately positioning the high-quality homonymy point set by using a least square matching method to obtain homonymy points for sequence image orientation.
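The bidirectional consistency constraint of step 2.1 can be sketched as a mutual nearest-neighbour check. This is a toy illustration under simplifying assumptions: descriptors are single floats and the distance is a squared difference, standing in for SIFT descriptor distances; all names and values are hypothetical.

```python
def bidirectional_matches(desc_left, desc_right):
    """Keep a match only if the left-to-right and right-to-left nearest
    neighbours agree (bidirectional consistency)."""
    def nearest(q, pool):
        return min(range(len(pool)), key=lambda i: (pool[i] - q) ** 2)
    matches = []
    for li, d in enumerate(desc_left):
        ri = nearest(d, desc_right)
        if nearest(desc_right[ri], desc_left) == li:   # mutual check
            matches.append((li, ri))
    return matches

# left[1] has no true counterpart: its nearest right descriptor (0.3)
# points back to left[0], so the one-way match is rejected.
left = [0.0, 10.0]
right = [0.2, 0.3, 50.0]
kept = bidirectional_matches(left, right)
```

The surviving matches would then pass through the RANSAC and affine-transformation constraints before least squares refinement.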
4. The method for fine three-dimensional reconstruction of a cultural relic sequence image based on triangulation network interpolation and constraint as claimed in claim 1, wherein, after the step one and before the step two, the method further comprises a step B:
firstly, calibrating the camera to obtain its interior orientation elements and distortion parameters, the interior orientation elements being used for image orientation and the distortion parameters being used to correct the distortion of the image data;
then, carrying out distortion correction on the sequence image data and applying filtering to the corrected images; and then entering the step two.
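The distortion correction of step B can be sketched with the standard radial model. This is an illustrative example, not the patent's calibration procedure: only two radial coefficients are used (tangential terms omitted), the coefficients are hypothetical, and the inversion is a simple fixed-point iteration.

```python
def undistort_point(xd, yd, k1, k2, iters=10):
    """Invert the radial model x_d = x * (1 + k1*r^2 + k2*r^4) by fixed-point
    iteration, starting from the distorted normalised coordinates."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

# Distort a known point with hypothetical coefficients, then recover it.
k1, k2 = -0.1, 0.02
x0, y0 = 0.4, 0.3
r2 = x0 * x0 + y0 * y0
f = 1.0 + k1 * r2 + k2 * r2 * r2
xd, yd = x0 * f, y0 * f
xu, yu = undistort_point(xd, yd, k1, k2)
```

Correcting a whole image applies the same inversion per pixel (usually via a precomputed remapping grid) before the filtered images enter the matching pipeline.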
5. The method as claimed in claim 4, wherein in step B, the images for calculating the camera calibration parameters are taken at different angles and at different distances with respect to the calibration plate during the camera calibration.
6. The method for fine three-dimensional reconstruction of a cultural relic sequence image based on triangulation network interpolation and constraint as claimed in claim 1, wherein, in the sixth step, the three-dimensional point cloud is obtained by scanning with an articulated arm scanner.
CN201710793380.2A 2017-09-06 2017-09-06 Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint Active CN107767440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710793380.2A CN107767440B (en) 2017-09-06 2017-09-06 Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint


Publications (2)

Publication Number Publication Date
CN107767440A CN107767440A (en) 2018-03-06
CN107767440B true CN107767440B (en) 2021-01-26

Family

ID=61264984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710793380.2A Active CN107767440B (en) 2017-09-06 2017-09-06 Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint

Country Status (1)

Country Link
CN (1) CN107767440B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118587A (en) * 2018-07-14 2019-01-01 武汉华宇世纪科技发展有限公司 The production method and device of digital orthoimage
CN109238173B (en) * 2018-08-16 2020-03-13 煤炭科学研究总院 Three-dimensional live-action reconstruction system for coal storage yard and rapid coal quantity estimation method
CN109945853B (en) * 2019-03-26 2023-08-15 西安因诺航空科技有限公司 Geographic coordinate positioning system and method based on 3D point cloud aerial image
CN111311731B (en) * 2020-01-23 2023-04-07 深圳市易尚展示股份有限公司 Random gray level map generation method and device based on digital projection and computer equipment
CN113379822B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111595299B (en) * 2020-04-14 2021-11-19 武汉倍思凯尔信息技术有限公司 Clothing information acquisition system based on photogrammetry
CN112419380B (en) * 2020-11-25 2023-08-15 湖北工业大学 Cloud mask-based high-precision registration method for stationary orbit satellite sequence images
CN112837389B (en) * 2021-02-01 2024-09-06 北京爱奇艺科技有限公司 Object model construction method and device and electronic equipment
CN112929626B (en) * 2021-02-02 2023-02-14 辽宁工程技术大学 Three-dimensional information extraction method based on smartphone image
CN113589340B (en) * 2021-06-15 2022-12-23 北京道达天际科技股份有限公司 High-precision positioning method and device for satellite images assisted by reference network
CN114266830B (en) * 2021-12-28 2022-07-15 北京建筑大学 Underground large space high-precision positioning method
CN114353780B (en) * 2021-12-31 2024-04-02 高德软件有限公司 Gesture optimization method and device
CN114913246B (en) * 2022-07-15 2022-11-01 齐鲁空天信息研究院 Camera calibration method and device, electronic equipment and storage medium
CN115830246B (en) * 2023-01-09 2023-04-28 中国地质大学(武汉) Spherical panoramic image three-dimensional reconstruction method based on incremental SFM
CN117115336A (en) * 2023-07-13 2023-11-24 中国工程物理研究院计算机应用研究所 Point cloud reconstruction method based on remote sensing stereoscopic image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003088085A1 (en) * 2002-04-04 2003-10-23 Arizona Board Of Regents Three-dimensional digital library system
CN102074052A (en) * 2011-01-20 2011-05-25 山东理工大学 Sampling point topological neighbor-based method for reconstructing surface topology of scattered point cloud
CN104952107A (en) * 2015-05-18 2015-09-30 湖南桥康智能科技有限公司 Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data
FR3040798A1 (en) * 2015-09-08 2017-03-10 Safran PLENOPTIC CAMERA


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
The Förstner operator and its improvements; Zhang Li et al.; Journal of Beijing Polytechnic College; 2007-07-15; full text *
Research on image matching and dense point cloud generation; Dong Youqiang; China Masters' Theses Full-text Database (Electronic Journal), Basic Sciences; 2014-12-15; sections 2-4 *

Also Published As

Publication number Publication date
CN107767440A (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN107767440B (en) Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN112102458B (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
AU2011312140C1 (en) Rapid 3D modeling
Grussenmeyer et al. Solutions for exterior orientation in photogrammetry: a review
CN107578376B (en) Image splicing method based on feature point clustering four-way division and local transformation matrix
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN110345921B (en) Stereo visual field vision measurement and vertical axis aberration and axial aberration correction method and system
Remondino 3-D reconstruction of static human body shape from image sequence
JP7502440B2 (en) Method for measuring the topography of an environment - Patents.com
US20070052974A1 (en) Three-dimensional modeling from arbitrary three-dimensional curves
CN107014399A (en) A kind of spaceborne optical camera laser range finder combined system joint calibration method
CN109900205B (en) High-precision single-line laser and optical camera rapid calibration method
CN111612731B (en) Measuring method, device, system and medium based on binocular microscopic vision
KR20130121290A (en) Georeferencing method of indoor omni-directional images acquired by rotating line camera
CN110675436A (en) Laser radar and stereoscopic vision registration method based on 3D feature points
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN113947638A (en) Image orthorectification method for fisheye camera
CN115719320B (en) Tilt correction dense matching method based on remote sensing image
CN110827359A (en) Checkerboard trihedron-based camera and laser external reference checking and correcting method and device
Wu Photogrammetry: 3-D from imagery
CN112819900B (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography
CN108053467B (en) Stereopair selection method based on minimum spanning tree

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant