
CN101980304A - Three-dimensional digital volume image distortion measuring method - Google Patents


Info

Publication number
CN101980304A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010520673
Other languages
Chinese (zh)
Inventor
黄建永 (Huang Jianyong)
潘晓畅 (Pan Xiaochang)
李姗姗 (Li Shanshan)
彭小玲 (Peng Xiaoling)
熊春阳 (Xiong Chunyang)
方竞 (Fang Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CN 201010520673
Publication of CN101980304A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a three-dimensional digital volume image deformation measuring method comprising the following steps: (1) selecting discrete sampling points on the digital volume image before deformation; (2) determining the dimensions of the reference sub-volume and the search region; (3) performing a three-dimensional image correlation operation between the reference sub-volume and the search region based on the three-dimensional fast Fourier transform; (4) establishing a three-dimensional image gray-level summation table and a three-dimensional image energy summation table; (5) solving the three-dimensional zero-mean normalized cross-correlation coefficient matrix using the result of step (3) and the summation tables; (6) calculating the sub-voxel displacement of each sampling point with a gradient-based three-dimensional sub-voxel displacement location algorithm; (7) repeating steps (3) to (6) to obtain accurate displacements at all sampling points, and hence the three-dimensional displacement field of the deformed volume image relative to the volume image before deformation; and (8) calculating the three-dimensional strain field of the digital volume image according to the Lagrangian strain tensor of continuum mechanics. The method measures the three-dimensional deformation of digital volume images accurately and efficiently.

Description

Three-dimensional digital volume image deformation measurement method
Technical Field
The invention relates to a measuring method, in particular to a deformation measuring method for quantitatively analyzing a three-dimensional digital volume image.
Background
The basic principle of two-dimensional digital image correlation is to compare a digital image taken before deformation (the reference image) with one taken after deformation (the deformed image), performing a series of cross-correlation operations to obtain the displacement field and strain field of the deformed image relative to the reference image. The traditional two-dimensional digital image correlation method has become a classical experimental mechanics technique, widely applied in material mechanical-behavior testing, micro-scale deformation measurement, dynamic characterization of micro-electro-mechanical system (MEMS) structures, evaluation of the mechanical properties of biological tissue, and other fields. Building on the two-dimensional method, Bay and co-workers proposed the concept of Digital Volume Correlation (DVC) in 1999 (Bay B.K., Smith T.S., Fyhrie D.P., and Saad M., Digital volume correlation: Three-dimensional strain mapping using X-ray tomography, Experimental Mechanics, 1999, 39(3), 217-226) and first applied it to the three-dimensional deformation measurement of bone microstructure imaged by X-ray tomography.
The basic principle of the digital volume correlation method is similar to that of digital image correlation: its core idea is to perform three-dimensional cross-correlation operations between a given reference sub-volume (Reference Subvolume) in the volume image before deformation and all candidate target sub-volumes (Target Subvolumes) within the corresponding search region of the deformed volume image, and to determine the most probable position of the reference sub-volume in the deformed volume image from the extremum condition of the correlation function, thereby obtaining the three-dimensional displacement field and strain field of the digital volume image.
With the recent development of micro-computed tomography (μCT), magnetic resonance imaging (MRI), and other three-dimensional microstructure imaging technologies, the digital volume correlation method has begun to be used for quantitative analysis of the three-dimensional deformation of porous cellular materials (such as bone, wood, clay, and foam-structured materials). With the rapid development of laser scanning confocal microscopy (LSCM), researchers have also investigated the three-dimensional deformation of soft, light-transmitting materials carrying random markers (e.g., collagen matrices with randomly distributed protein fibers) using the digital volume correlation method (Roeder B.A., Kokini K., Robinson J.P., and Voytik-Harbin S.L., Local, three-dimensional strain measurements within largely deformed extracellular matrix constructs, Journal of Biomechanical Engineering, 2004, 126(6), 699-708). However, compared with the more mature digital image correlation method, the digital volume correlation method still needs development and refinement in solution accuracy and computational efficiency, and its fields of application remain to be expanded.
At present, applications of the digital volume correlation method mainly consider only voxel-level correlation matching between the reference sub-volume and target sub-volumes, at most accounting for rigid-body translation and rotation between sub-volumes during matching; a systematic sub-voxel (Subvoxel) positioning algorithm framework has not been established, and research on three-dimensional sub-voxel positioning algorithms for laser confocal imaging in particular is blank. On the other hand, as the basic principle shows, each selected reference sub-volume must be correlated one by one with all candidate target sub-volumes within its search region, so the correlation workload is far larger than the search computation of digital image correlation under comparable conditions. For example, when the sub-volume size is 32 × 32 × 32 voxels and the search region is 62 × 62 × 62 voxels, at least 30 × 30 × 30 = 27000 three-dimensional correlation operations are required between the reference sub-volume and the target sub-volumes, each involving several three-dimensional summation operations. Rough estimates indicate that the digital volume correlation computation in this case is roughly three orders of magnitude greater than the corresponding digital image correlation computation (sub-region size 32 × 32 pixels, search region size 62 × 62 pixels).
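As a quick sanity check on these figures, the operation counts can be reproduced with a few lines of arithmetic. This sketch assumes one multiply-accumulate per voxel (or pixel) per candidate match, a simplification not stated in the text:

```python
# Rough operation-count comparison between 3-D digital volume correlation (DVC)
# and 2-D digital image correlation (DIC) for the sizes quoted above.
# Sub-volume / subset edge L = 32 and search-region edge M = 62 come from the
# text; the number of candidate positions per axis is M - L + 1.

L, M = 32, 62
positions_3d = (M - L + 1) ** 3          # candidate target sub-volumes
ops_per_match_3d = L ** 3                # one multiply-accumulate per voxel (assumed)
total_3d = positions_3d * ops_per_match_3d

positions_2d = (M - L + 1) ** 2
ops_per_match_2d = L ** 2
total_2d = positions_2d * ops_per_match_2d

print(positions_3d)          # 29791 candidate positions (the text's "at least 27000")
print(total_3d / total_2d)   # 992.0 -- roughly three orders of magnitude costlier
```

The ratio works out to exactly (M − L + 1) × L = 31 × 32 = 992, consistent with the "three orders of magnitude" estimate in the text.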
This computational complexity has become a major obstacle restricting wider application of the conventional digital volume correlation method; in particular, its efficiency cannot meet the demands of large-sample statistical analysis.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a three-dimensional digital volume image deformation measurement method with high computational efficiency and sub-voxel measurement accuracy.
In order to achieve this purpose, the invention adopts the following technical scheme. A three-dimensional digital volume image deformation measurement method comprises the following steps:

(1) Take the three-dimensional digital volume image before deformation as the reference volume image F(x, y, z) and the three-dimensional digital volume image after deformation as the deformed volume image G(x, y, z); select a series of discrete sampling points on F(x, y, z), denoting the coordinates of any sampling point by (α, β, γ).

(2) Around each sampling point of step (1), select a cuboid of L_x × L_y × L_z voxels as the reference sub-volume f(x, y, z); in the deformed volume image G(x, y, z), select a cuboid of M_x × M_y × M_z voxels centered on the sampling-point coordinates (α, β, γ) as the search region g(x, y, z), with M_x = M_y = M_z > L_x = L_y = L_z.

(3) For each sampling point (α, β, γ), perform a three-dimensional image correlation operation between the corresponding reference sub-volume f(x, y, z) and search region g(x, y, z); the result P is

$$ P = FT_3^{-1}\left[ FT_3(f)\, FT_3^{*}(g) \right], $$

where * denotes the complex conjugate, FT_3[·] and FT_3^{-1}[·] denote the three-dimensional fast Fourier forward and inverse transforms, and f and g here stand for the gray-level three-dimensional matrices of the reference sub-volume f(x, y, z) and the search region g(x, y, z), respectively.

(4) From the gray values of the search region g(x, y, z) of each sampling point (α, β, γ), establish by fast recursion the gray-level three-dimensional summation table S_g and the energy three-dimensional summation table S_{g²} of the deformed digital volume image.

(5) By fast table lookup in the summation tables S_g and S_{g²} of step (4), and using the correlation result of step (3), calculate the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u, v, w)]_(α,β,γ) between the reference sub-volume f(x, y, z) of sampling point (α, β, γ) and its search region g(x, y, z).

(6) Apply a gradient-based three-dimensional sub-voxel displacement positioning algorithm, performing sub-voxel interpolation near the maximum peak of the zero-mean normalized cross-correlation coefficient matrix, to obtain the accurate position (U, V, W) in the deformed volume image of the sampling point (α, β, γ) of the reference volume image: U = u + Δu, V = v + Δv, W = w + Δw, where (u, v, w) is the integer-voxel displacement determined by the maximum element of [C(u, v, w)]_(α,β,γ) and (Δu, Δv, Δw) is the sub-voxel displacement calculated by the gradient-based three-dimensional sub-voxel displacement positioning algorithm.

(7) Repeat steps (3) to (6) to calculate the accurate positions in the deformed digital volume image of the sampling points of the entire reference volume image, obtaining the three-dimensional displacement field of the deformed volume image relative to the reference volume image.

(8) Calculate the three-dimensional strain field of the digital volume image according to the Lagrangian strain tensor of continuum mechanics:

$$ E_{ij} = \frac{1}{2}\left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} + \sum_m \frac{\partial U_m}{\partial x_i}\,\frac{\partial U_m}{\partial x_j} \right), $$

where 1 ≤ i, j, m ≤ 3 and U_1 = U, U_2 = V, U_3 = W.
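Step (8) can be sketched with NumPy as follows. The helper name is hypothetical, and unit voxel spacing with central-difference derivatives via `np.gradient` is an assumption of this sketch, not something the patent prescribes:

```python
import numpy as np

# Green-Lagrangian strain tensor from a sampled 3-D displacement field
# (U1, U2, U3) = (U, V, W).  np.gradient approximates the partial derivatives
# by finite differences; a grid spacing of 1 voxel is assumed.
def lagrangian_strain(U, V, W):
    """Return E[i][j], arrays of E_ij = 0.5*(dUi/dxj + dUj/dxi + sum_m dUm/dxi * dUm/dxj)."""
    disp = [np.asarray(U, float), np.asarray(V, float), np.asarray(W, float)]
    # grads[m][i] = dU_m / dx_i  (np.gradient returns one array per axis)
    grads = [np.gradient(d) for d in disp]
    E = [[0.5 * (grads[i][j] + grads[j][i]
                 + sum(grads[m][i] * grads[m][j] for m in range(3)))
          for j in range(3)] for i in range(3)]
    return E
```

For a uniform stretch U = 0.1·x₁ with V = W = 0, the formula gives E₁₁ = 0.1 + 0.1²/2 = 0.105 everywhere, which the sketch reproduces.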
In step (2), in the cuboid of L_x × L_y × L_z voxels, L_x = L_y = L_z and they take integer values between 30 and 60, in units of voxels; in the cuboid of M_x × M_y × M_z voxels, M_x = M_y = M_z and they take integer values between 35 and 100, in units of voxels, with M_x = M_y = M_z > L_x = L_y = L_z.
In step (4), the three-dimensional summation tables S_g and S_{g²} are respectively

$$ S_g(x,y,z) = g(x,y,z) + S_g(x,y{-}1,z) + S_g(x{-}1,y,z) + S_g(x{-}1,y{-}1,z{-}1) + S_g(x,y,z{-}1) - S_g(x,y{-}1,z{-}1) - S_g(x{-}1,y,z{-}1) - S_g(x{-}1,y{-}1,z), $$

$$ S_{g^2}(x,y,z) = [g(x,y,z)]^2 + S_{g^2}(x,y{-}1,z) + S_{g^2}(x{-}1,y,z) + S_{g^2}(x{-}1,y{-}1,z{-}1) + S_{g^2}(x,y,z{-}1) - S_{g^2}(x,y{-}1,z{-}1) - S_{g^2}(x{-}1,y,z{-}1) - S_{g^2}(x{-}1,y{-}1,z), $$

where each of the three-dimensional summation tables S_g and S_{g²} contains M_x × M_y × M_z elements; g(x, y, z) is the gray value of the voxel with coordinates (x, y, z) in the search region; x, y, z are integers, and S_g = S_{g²} = 0 whenever x, y, or z ≤ 0.
In step (5), the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u, v, w)]_(α,β,γ) is

$$ [C(u,v,w)]_{(\alpha,\beta,\gamma)} = \frac{P(u,v,w) - Q(u,v,w)}{\sqrt{F}\,\sqrt{G(u,v,w)}}, $$

where

$$ P(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\, g(x+u,\,y+v,\,z+w), $$

$$ Q(u,v,w) = \bar{f} \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,\,y+v,\,z+w) = \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right] \left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right], $$

$$ G(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} \left[g(x+u,\,y+v,\,z+w) - \bar{g}_{uvw}\right]^2 = \sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g^2(x,y,z) - \frac{1}{L_x L_y L_z}\left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right]^2, $$

$$ F = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} \left[f(x,y,z) - \bar{f}\right]^2 = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f^2(x,y,z) - \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right]^2, $$

with

$$ \bar{f} = \frac{1}{L_x L_y L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z), \qquad \bar{g}_{uvw} = \frac{1}{L_x L_y L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,\,y+v,\,z+w). $$

P(u, v, w) is solved rapidly by the three-dimensional fast Fourier transform method of step (3); the triple-summation terms in Q(u, v, w) and G(u, v, w) are solved by the fast table lookup of step (5); F is solved by direct arithmetic.
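These quantities can be checked against a direct, unaccelerated evaluation. The following NumPy sketch (function name and array sizes are illustrative, not from the patent) computes [C(u, v, w)] term by term exactly as defined above, without the FFT or summation-table accelerations:

```python
import numpy as np

# Direct (slow) evaluation of the zero-mean normalized cross-correlation matrix
# [C(u,v,w)] for one reference sub-volume f inside a search region g, following
# the definitions of P, Q, G and F one-to-one.  Reference implementation only.
def zncc_matrix(f, g):
    L = np.array(f.shape)                      # sub-volume size (Lx, Ly, Lz)
    M = np.array(g.shape)                      # search-region size (Mx, My, Mz)
    fbar = f.mean()
    F = np.sum((f - fbar) ** 2)                # sum of squared deviations of f
    C = np.zeros(M - L + 1)                    # one entry per candidate shift
    for u in range(C.shape[0]):
        for v in range(C.shape[1]):
            for w in range(C.shape[2]):
                sub = g[u:u + L[0], v:v + L[1], w:w + L[2]]   # target sub-volume
                P = np.sum(f * sub)            # raw cross-correlation term
                Q = fbar * sub.sum()           # mean-product correction term
                G = np.sum((sub - sub.mean()) ** 2)
                C[u, v, w] = (P - Q) / np.sqrt(F * G)
    return C
```

Because it enumerates every (u, v, w) with explicit sums, this runs in the O[(L_x × L_y × L_z) × (M_x − L_x + 1) × (M_y − L_y + 1) × (M_z − L_z + 1)] time the invention is designed to avoid; it is useful only for validating the fast path on small volumes, where the peak of C marks the integer-voxel displacement.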
Due to the adoption of the above technical scheme, the invention has the following advantages:
1. The invention performs fast cross-correlation between each reference sub-volume and its search region by the three-dimensional fast Fourier transform, builds the three-dimensional gray-level and energy summation tables by fast recursion, and then evaluates the triple-summation expressions by table lookup, realizing an efficient solution of the three-dimensional zero-mean normalized cross-correlation coefficient and thus a fast integer-voxel displacement result at each sampling point.
2. Computing the triple-summation expressions of the three-dimensional zero-mean normalized cross-correlation coefficient by fast table lookup, together with the three-dimensional fast Fourier transform for the correlation operation, effectively reduces the complexity of the sampling-point displacement calculation, improving the solution efficiency of digital volume image correlation especially for large-scale volume correlation or when computation time is at a premium.
3. The gradient-based three-dimensional sub-voxel positioning algorithm brings the accuracy of the displacement field to sub-voxel magnitude without complicated iterative calculation, so the computational efficiency is high.
The invention can be widely applied to deformation measurement and analysis of three-dimensional digital volume images.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic diagram of the digital volume image correlation principle of the present invention;
FIG. 3 is a schematic diagram of the three-dimensional digital volume images before and after a single cycle of vertical compression in an embodiment of the present invention;
FIGS. 4a-4c are schematic diagrams of the full-field three-dimensional displacement field calculated according to a first embodiment of the present invention;
FIGS. 5a-5b compare, for the first embodiment, the three-dimensional deformation of an agarose fluorescent digital volume image captured by laser scanning confocal microscopy with the measurement of the present invention (units: voxels); FIG. 5a is the deformed three-dimensional agarose fluorescent substrate digital volume image captured by the confocal microscope, and FIG. 5b is the displacement field calculated by the three-dimensional digital volume image deformation measuring method of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
As shown in fig. 1 and fig. 2, the three-dimensional digital volume image deformation measurement method of the present invention builds on the inherent characteristics of the three-dimensional deformation search calculation for digital volume images, introducing the three-dimensional fast Fourier transform method combined with three-dimensional summation tables. The steps are as follows:
1) Take the three-dimensional digital volume image before deformation as the reference volume image F(x, y, z) and the three-dimensional digital volume image after deformation as the deformed volume image G(x, y, z); select a series of discrete sampling points in F(x, y, z) according to a certain rule (for example, dividing it by a three-dimensional grid and taking the grid intersections as sampling points), and let any sampling point be represented by coordinates (α, β, γ);
2) Centered on each of the sampling points of step 1), select a cuboid containing L_x × L_y × L_z voxels as the reference sub-volume f(x, y, z), where usually L_x = L_y = L_z, taking integer values between 30 and 60, in units of voxels; similarly, in the deformed digital volume image (also called deformed volume image) G(x, y, z), centered on the coordinates (α, β, γ) of the sampling point previously selected in the reference volume image, select a cuboid containing M_x × M_y × M_z voxels as the search region g(x, y, z), where usually M_x = M_y = M_z, taking integer values between 35 and 100, in units of voxels, with M_x = M_y = M_z > L_x = L_y = L_z (as shown in fig. 2);
3) For each sampling point (α, β, γ), perform the three-dimensional image correlation operation between the corresponding reference sub-volume f(x, y, z) and search region g(x, y, z); the result P is

$$ P = FT_3^{-1}\left[ FT_3(f)\, FT_3^{*}(g) \right], \qquad (1) $$

where * denotes the complex conjugate, FT_3[·] and FT_3^{-1}[·] denote the three-dimensional fast Fourier forward and inverse transforms, and f and g stand for the gray-level three-dimensional matrices of the reference sub-volume f(x, y, z) and the search region g(x, y, z), respectively.

Using this three-dimensional fast Fourier transform method for the correlation of f(x, y, z) and g(x, y, z) reduces the computational complexity of the prior art from O[(L_x × L_y × L_z) × (M_x − L_x + 1) × (M_y − L_y + 1) × (M_z − L_z + 1)] to O[(M_x × M_y × M_z) × log₂(M_x × M_y × M_z)];
4) From the gray values of the search region g(x, y, z) of each sampling point (α, β, γ), establish by fast recursion the gray-level three-dimensional summation table S_g and the energy three-dimensional summation table S_{g²} of the deformed digital volume image, namely:

$$ S_g(x,y,z) = g(x,y,z) + S_g(x,y{-}1,z) + S_g(x{-}1,y,z) + S_g(x{-}1,y{-}1,z{-}1) + S_g(x,y,z{-}1) - S_g(x,y{-}1,z{-}1) - S_g(x{-}1,y,z{-}1) - S_g(x{-}1,y{-}1,z), \qquad (2) $$

$$ S_{g^2}(x,y,z) = [g(x,y,z)]^2 + S_{g^2}(x,y{-}1,z) + S_{g^2}(x{-}1,y,z) + S_{g^2}(x{-}1,y{-}1,z{-}1) + S_{g^2}(x,y,z{-}1) - S_{g^2}(x,y{-}1,z{-}1) - S_{g^2}(x{-}1,y,z{-}1) - S_{g^2}(x{-}1,y{-}1,z), \qquad (3) $$

where the three-dimensional summation tables S_g and S_{g²} each contain M_x × M_y × M_z elements; g(x, y, z) here denotes the gray value of the voxel with coordinates (x, y, z) in the search region; x, y, z are integers, and S_g = S_{g²} = 0 whenever x, y, or z ≤ 0;
5) By fast table lookup in the summation tables S_g and S_{g²} established in step 4), the following triple-summation expressions are obtained quickly, namely:

$$ \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,\,y+v,\,z+w) = \sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z) = S_g(L_x{+}u,\,L_y{+}v,\,L_z{+}w) + S_g(L_x{+}u,\,v,\,w) + S_g(u,\,L_y{+}v,\,w) + S_g(u,\,v,\,L_z{+}w) - S_g(L_x{+}u,\,v,\,L_z{+}w) - S_g(u,\,L_y{+}v,\,L_z{+}w) - S_g(L_x{+}u,\,L_y{+}v,\,w) - S_g(u,\,v,\,w), \qquad (4) $$

$$ \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g^2(x+u,\,y+v,\,z+w) = \sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g^2(x,y,z) = S_{g^2}(L_x{+}u,\,L_y{+}v,\,L_z{+}w) + S_{g^2}(L_x{+}u,\,v,\,w) + S_{g^2}(u,\,L_y{+}v,\,w) + S_{g^2}(u,\,v,\,L_z{+}w) - S_{g^2}(L_x{+}u,\,v,\,L_z{+}w) - S_{g^2}(u,\,L_y{+}v,\,L_z{+}w) - S_{g^2}(L_x{+}u,\,L_y{+}v,\,w) - S_{g^2}(u,\,v,\,w); \qquad (5) $$
accordingly, using the correlation operation result of step 3), the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u,v,w)]_{(α,β,γ)} between the reference sub-volume f(x,y,z) corresponding to the sampling point (α,β,γ) and the corresponding search area g(x,y,z) is calculated; the position of the maximum peak of this matrix gives the integer-voxel displacement of the current sampling point (α,β,γ);
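Equation (4) reduces each L_x × L_y × L_z window sum to eight table lookups. A hypothetical helper sketching this (the one-plane zero padding per axis, which realizes the S_g = 0 convention for indices ≤ 0, and the function names are implementation choices of this sketch):

```python
import numpy as np

def padded_table(g):
    # Cumulative table with one leading zero plane per axis, so that
    # s[i, j, k] equals S_g(i, j, k) and S_g = 0 for any zero index.
    s = np.zeros(tuple(n + 1 for n in g.shape))
    s[1:, 1:, 1:] = g.cumsum(0).cumsum(1).cumsum(2)
    return s

def box_sum(s, u, v, w, Lx, Ly, Lz):
    # Equation (4): the triple sum over the window whose 1-based voxel
    # range is x = 1+u .. Lx+u, y = 1+v .. Ly+v, z = 1+w .. Lz+w.
    return (s[Lx + u, Ly + v, Lz + w] + s[Lx + u, v, w]
            + s[u, Ly + v, w] + s[u, v, Lz + w]
            - s[Lx + u, v, Lz + w] - s[u, Ly + v, Lz + w]
            - s[Lx + u, Ly + v, w] - s[u, v, w])
```

Applying `box_sum` to the table built from g² gives equation (5) with the same eight lookups.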
6) A gradient-based three-dimensional sub-voxel displacement localization algorithm is applied near the position of the maximum peak of the zero-mean normalized cross-correlation coefficient matrix to perform sub-voxel interpolation, yielding the accurate position coordinates (U, V, W), in the corresponding deformed volume image, of the sampling point (α, β, γ) of the reference digital volume image; here U = u + Δu, V = v + Δv, W = w + Δw, where (u, v, w) is the integer-voxel displacement determined by the maximum element of the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u,v,w)]_{(α,β,γ)}, and (Δu, Δv, Δw) is the sub-voxel displacement calculated by the gradient-based three-dimensional sub-voxel displacement localization algorithm, which can be computed directly from the following analytical formula:
$$
\begin{bmatrix} \Delta u \\ \Delta v \\ \Delta w \end{bmatrix}
=
\begin{bmatrix}
\Sigma\, g_x^2 & \Sigma\, g_x g_y & \Sigma\, g_x g_z \\
\Sigma\, g_x g_y & \Sigma\, g_y^2 & \Sigma\, g_y g_z \\
\Sigma\, g_x g_z & \Sigma\, g_y g_z & \Sigma\, g_z^2
\end{bmatrix}^{-1}
\begin{bmatrix}
\Sigma\, (f-g)\, g_x \\
\Sigma\, (f-g)\, g_y \\
\Sigma\, (f-g)\, g_z
\end{bmatrix},
\qquad (6)
$$

where $\Sigma$ abbreviates the triple sum $\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}$;
in the formula, g_x, g_y and g_z can each be obtained from gray-level-weighted differences of neighboring voxels of the digital volume image, namely:

g_x = [g(x-2,y,z) - 8g(x-1,y,z) + 8g(x+1,y,z) - g(x+2,y,z)]/12
g_y = [g(x,y-2,z) - 8g(x,y-1,z) + 8g(x,y+1,z) - g(x,y+2,z)]/12    (7)
g_z = [g(x,y,z-2) - 8g(x,y,z-1) + 8g(x,y,z+1) - g(x,y,z+2)]/12;
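Equations (6) and (7) amount to a single 3×3 least-squares solve. The sketch below assumes the reference and deformed sub-volumes are already matched to integer-voxel precision and trims a two-voxel border so every five-point stencil of equation (7) is valid; the function name is hypothetical:

```python
import numpy as np

def subvoxel_shift(f, g):
    # Fourth-order central differences of equation (7), taken along each
    # axis of the deformed sub-volume g; each result is valid only on the
    # interior where the 5-point stencil fits.
    gx = (g[:-4] - 8 * g[1:-3] + 8 * g[3:-1] - g[4:]) / 12.0
    gy = (g[:, :-4] - 8 * g[:, 1:-3] + 8 * g[:, 3:-1] - g[:, 4:]) / 12.0
    gz = (g[:, :, :-4] - 8 * g[:, :, 1:-3] + 8 * g[:, :, 3:-1] - g[:, :, 4:]) / 12.0
    # Crop everything to the common interior so all arrays are aligned.
    gx, gy, gz = gx[:, 2:-2, 2:-2], gy[2:-2, :, 2:-2], gz[2:-2, 2:-2, :]
    d = f[2:-2, 2:-2, 2:-2] - g[2:-2, 2:-2, 2:-2]
    # Normal equations of equation (6).
    A = np.array([[(gx * gx).sum(), (gx * gy).sum(), (gx * gz).sum()],
                  [(gx * gy).sum(), (gy * gy).sum(), (gy * gz).sum()],
                  [(gx * gz).sum(), (gy * gz).sum(), (gz * gz).sum()]])
    b = np.array([(d * gx).sum(), (d * gy).sum(), (d * gz).sum()])
    return np.linalg.solve(A, b)  # (delta_u, delta_v, delta_w)
```

Because the first-order model f − g ≈ g_x·Δu + g_y·Δv + g_z·Δw holds only for small shifts, the solve is meaningful precisely in the sub-voxel neighborhood of the correlation peak.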
7) Steps 3) to 6) are repeated to calculate the accurate positions, in the deformed digital volume image, of all sampling points of the reference volume image, thereby obtaining the three-dimensional displacement field of the whole deformed volume image relative to the reference volume image;
8) calculating a three-dimensional strain field of the digital volume image according to a Lagrange strain tensor theory in continuous medium mechanics:
$$
\varepsilon_{ij} = \frac{1}{2}\left[\frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} + \sum_{m=1}^{3}\frac{\partial U_m}{\partial x_i}\,\frac{\partial U_m}{\partial x_j}\right],
\qquad (8)
$$

wherein 1 ≤ i, j, m ≤ 3, and U_1 = U, U_2 = V, U_3 = W.
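Equation (8) is applied pointwise to the displacement field. A sketch assuming unit voxel spacing and NumPy's second-order central differences for the displacement gradients (an actual implementation may use a different differentiation scheme or a local fitting window):

```python
import numpy as np

def lagrangian_strain(U, V, W):
    # Green-Lagrange strain tensor field of equation (8) from the three
    # displacement components sampled on a regular voxel grid.
    dU = np.stack(np.gradient(U))  # dU[j] = dU/dx_j
    dV = np.stack(np.gradient(V))
    dW = np.stack(np.gradient(W))
    grad = np.stack([dU, dV, dW])  # grad[i, j] = dU_i/dx_j
    eps = np.empty((3, 3) + U.shape)
    for i in range(3):
        for j in range(3):
            quad = sum(grad[m, i] * grad[m, j] for m in range(3))
            eps[i, j] = 0.5 * (grad[i, j] + grad[j, i] + quad)
    return eps
```

For a uniform uniaxial stretch U = λx the sketch returns ε_11 = λ + λ²/2, the finite-strain value, rather than the small-strain λ.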
In the above step 5), the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u,v,w)]_{(α,β,γ)} is obtained by the following steps:

First, the basic expression of the three-dimensional zero-mean normalized cross-correlation coefficient matrix between the reference sub-volume f(x,y,z) corresponding to the sampling point (α,β,γ) and the corresponding search area g(x,y,z) is:
$$
[C(u,v,w)]_{(\alpha,\beta,\gamma)} =
\frac{\displaystyle\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl[f(x,y,z)-\bar f\bigr]\bigl[g(x+u,\,y+v,\,z+w)-\bar g_{uvw}\bigr]}
{\sqrt{\displaystyle\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl[f(x,y,z)-\bar f\bigr]^2}\;\sqrt{\displaystyle\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl[g(x+u,\,y+v,\,z+w)-\bar g_{uvw}\bigr]^2}},
\qquad (9)
$$

wherein:
$$
\bar f = \frac{1}{L_x L_y L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z), \qquad
\bar g_{uvw} = \frac{1}{L_x L_y L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,\,y+v,\,z+w);
$$
Let
$$
P(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl\{f(x,y,z)\times g(x+u,\,y+v,\,z+w)\bigr\},
$$
$$
Q(u,v,w) = \bar f \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,\,y+v,\,z+w)
= \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right]\times\left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right],
$$
$$
G(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl[g(x+u,\,y+v,\,z+w)-\bar g_{uvw}\bigr]^2
= \sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g^2(x,y,z)
- \frac{1}{L_x L_y L_z}\left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right]^2,
$$
$$
F = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\bigl[f(x,y,z)-\bar f\bigr]^2
= \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f^2(x,y,z)
- \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right]^2.
$$
Then equation (9) can be simplified to:

$$
[C(u,v,w)]_{(\alpha,\beta,\gamma)} = \frac{P(u,v,w) - Q(u,v,w)}{\sqrt{F}\,\sqrt{G(u,v,w)}};
\qquad (10)
$$
Second, the expression P(u,v,w) above can be solved directly by the three-dimensional fast Fourier transform method of step 3); the triple-summation terms in the expressions Q(u,v,w) and G(u,v,w) can be solved by the fast table-lookup method of step 5); the expression F can be evaluated by direct arithmetic, and since F needs to be computed only once for any sampling point (α,β,γ), its computational cost is negligible.
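Putting the pieces together, the decomposition of equation (10) can be sketched end-to-end. Everything below is an illustrative sketch, not the patent's implementation: the function name is hypothetical, and the complex conjugate is placed on f so that the FFT product reproduces the summation definition of P(u,v,w) (conventions for which factor is conjugated vary):

```python
import numpy as np

def zncc_map(f, g):
    # C(u, v, w) of equation (10) for a reference sub-volume f inside a
    # larger search volume g, for all valid integer offsets (u, v, w).
    L, M = f.shape, g.shape
    ns = tuple(m - l + 1 for m, l in zip(M, L))  # number of valid offsets
    # P(u,v,w) = sum_x f(x) g(x+u): zero-padded FFT cross-correlation.
    P = np.fft.irfftn(np.conj(np.fft.rfftn(f, s=M)) * np.fft.rfftn(g, s=M), s=M)
    P = P[:ns[0], :ns[1], :ns[2]]

    def table(a):  # padded cumulative table, as in equations (2)-(3)
        s = np.zeros(tuple(n + 1 for n in a.shape))
        s[1:, 1:, 1:] = a.cumsum(0).cumsum(1).cumsum(2)
        return s

    def box(s):  # equations (4)-(5) for every offset at once, vectorized
        return (s[L[0]:, L[1]:, L[2]:] - s[:-L[0], L[1]:, L[2]:]
                - s[L[0]:, :-L[1], L[2]:] - s[L[0]:, L[1]:, :-L[2]]
                + s[:-L[0], :-L[1], L[2]:] + s[:-L[0], L[1]:, :-L[2]]
                + s[L[0]:, :-L[1], :-L[2]] - s[:-L[0], :-L[1], :-L[2]])

    Sg, Sg2 = box(table(g)), box(table(g * g))
    n = f.size
    Q = f.mean() * Sg                 # Q(u, v, w)
    G = Sg2 - Sg ** 2 / n             # G(u, v, w)
    F = ((f - f.mean()) ** 2).sum()   # F: computed once per sampling point
    return (P - Q) / np.sqrt(F * G)
```

The position of the maximum of the returned array gives the integer-voxel displacement of step 5).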
The measuring method of the present invention is further described below by way of specific examples.
The first embodiment is as follows: as shown in fig. 3, the artificially pre-applied vertical uniaxial uniform compressive strains are assumed to be 2.0%, 5.0% and 7.0%, respectively. Figs. 4a to 4c and figs. 5a and 5b show the three-dimensional fluorescence-labeled agarose deformed digital volume images captured by a laser scanning confocal microscope and the full-field three-dimensional displacement fields calculated by the three-dimensional digital volume image deformation measurement method of the present invention. The errors between the artificially pre-applied strains and the actually calculated strains are listed in table 1.
TABLE 1  Mean errors and standard deviations between the artificial pre-strain fields and the strain fields calculated by the three-dimensional digital volume image deformation measurement method of the present invention
[Table 1 is reproduced only as an image in the original document.]
After the three-dimensional summation table method is adopted, the speed-up ratio (i.e., the ratio of computational efficiencies) of the triple-summation operations in the three-dimensional zero-mean normalized cross-correlation coefficient is as shown in table 2.

TABLE 2  Speed-up ratio of the triple-summation operations after adopting the summation tables
[Table 2 is reproduced only as an image in the original document.]
Therefore, the three-dimensional digital volume image deformation measurement method of the present invention can carry out digital volume image correlation operations efficiently and accurately acquire the three-dimensional deformation field of a digital volume image, comprising mainly the three-dimensional displacement field and the three-dimensional strain field.

The above embodiments are only for illustrating the present invention and do not limit its scope; on the basis of the technical method of the present invention, any modification or equivalent replacement of individual steps made according to the principle of the present invention shall not be excluded from the protection scope of the present invention.

Claims (5)

1. A three-dimensional digital volume image deformation measurement method comprises the following steps:
(1) taking the three-dimensional digital volume image without deformation as a reference volume image F (x, y, z), and taking the three-dimensional digital volume image with deformation as a deformation volume image G (x, y, z); selecting a series of discrete sampling points on a reference volume image F (x, y, z), and setting the coordinates of any one sampling point as (alpha, beta, gamma);
(2) A cuboid of L_x × L_y × L_z voxels centered at each of the sampling points of step (1) is selected as the reference sub-volume f(x,y,z); in the deformed volume image G(x,y,z), a cuboid of M_x × M_y × M_z voxels centered at the sampling-point coordinates (α,β,γ) is selected as the search area g(x,y,z), with M_x = M_y = M_z > L_x = L_y = L_z;
(3) For each sampling point (α, β, γ), performing three-dimensional image correlation operation on the corresponding reference sub-volume f (x, y, z) and the corresponding search area g (x, y, z), wherein an operation result P is:
P = FT_3^{-1}[ FT_3(f) · FT_3^{*}(g) ],

wherein * denotes the complex conjugate; FT_3[·] and FT_3^{-1}[·] denote the three-dimensional fast Fourier forward transform and inverse transform, respectively; f and g denote the gray-level three-dimensional matrix of the reference sub-volume f(x,y,z) image and the gray-level three-dimensional matrix of the search-area g(x,y,z) volume image, respectively;
(4) According to the image gray values of the search area g(x,y,z) corresponding to each sampling point (α,β,γ), a gray-level three-dimensional summation table S_g and an image-energy three-dimensional summation table S_{g^2} of the deformed digital volume image are respectively established by a fast recursive method;
(5) A fast table-lookup operation is performed on the three-dimensional summation tables S_g and S_{g^2} established in step (4), and, using the correlation operation result of step (3), the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u,v,w)]_{(α,β,γ)} between the reference sub-volume f(x,y,z) corresponding to the sampling point (α,β,γ) and the corresponding search area g(x,y,z) is calculated;
(6) Sub-voxel interpolation is performed near the position of the maximum peak of the zero-mean normalized cross-correlation coefficient matrix using a gradient-based three-dimensional sub-voxel displacement localization algorithm, to obtain the accurate position coordinates (U, V, W), in the corresponding deformed volume image, of the sampling point (α, β, γ) of the reference digital volume image, where U = u + Δu, V = v + Δv, W = w + Δw; (u, v, w) is the integer-voxel displacement determined by the maximum element of the three-dimensional zero-mean normalized cross-correlation coefficient matrix [C(u,v,w)]_{(α,β,γ)}, and (Δu, Δv, Δw) is the sub-voxel displacement calculated by the gradient-based three-dimensional sub-voxel displacement localization algorithm;
(7) Steps (3) to (6) are repeated to calculate the accurate positions, in the deformed digital volume image, of all sampling points of the reference volume image, thereby obtaining the three-dimensional displacement field of the whole deformed volume image relative to the reference volume image;
(8) calculating a three-dimensional strain field of the digital volume image according to a Lagrange strain tensor theory in continuous medium mechanics:
$$
\varepsilon_{ij} = \frac{1}{2}\left[\frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} + \sum_{m=1}^{3}\frac{\partial U_m}{\partial x_i}\,\frac{\partial U_m}{\partial x_j}\right],
$$

wherein 1 ≤ i, j, m ≤ 3, and U_1 = U, U_2 = V, U_3 = W.
2. A method of deformation measurement of a three-dimensional digital volume image as defined in claim 1, characterized by: in the step (2), in the cuboid of L_x × L_y × L_z voxels, L_x = L_y = L_z are integers between 30 and 60, in units of voxels; in the cuboid of M_x × M_y × M_z voxels, M_x = M_y = M_z are integers between 35 and 100, in units of voxels; and M_x = M_y = M_z > L_x = L_y = L_z.
3. A method of deformation measurement of a three-dimensional digital volume image as defined in claim 1, characterized by: in the step (4), the three-dimensional summation tables S_g and S_{g^2} are respectively:

s_g(x,y,z) = g(x,y,z) + s_g(x,y-1,z) + s_g(x-1,y,z) + s_g(x-1,y-1,z-1) + s_g(x,y,z-1) - s_g(x,y-1,z-1) - s_g(x-1,y,z-1) - s_g(x-1,y-1,z),

s_{g^2}(x,y,z) = [g(x,y,z)]^2 + s_{g^2}(x,y-1,z) + s_{g^2}(x-1,y,z) + s_{g^2}(x-1,y-1,z-1) + s_{g^2}(x,y,z-1) - s_{g^2}(x,y-1,z-1) - s_{g^2}(x-1,y,z-1) - s_{g^2}(x-1,y-1,z),

wherein the three-dimensional summation tables S_g and S_{g^2} each contain M_x × M_y × M_z elements; g(x,y,z) denotes the gray value of the voxel whose center coordinate is (x,y,z) in the search area; x, y and z are integers, and S_g = S_{g^2} = 0 when x, y or z ≤ 0.
4. A method of deformation measurement of a three-dimensional digital volume image as defined in claim 2, characterized in that: in the step (4), the three-dimensional summation tables S_g and S_{g^2} are respectively:

s_g(x,y,z) = g(x,y,z) + s_g(x,y-1,z) + s_g(x-1,y,z) + s_g(x-1,y-1,z-1) + s_g(x,y,z-1) - s_g(x,y-1,z-1) - s_g(x-1,y,z-1) - s_g(x-1,y-1,z),

s_{g^2}(x,y,z) = [g(x,y,z)]^2 + s_{g^2}(x,y-1,z) + s_{g^2}(x-1,y,z) + s_{g^2}(x-1,y-1,z-1) + s_{g^2}(x,y,z-1) - s_{g^2}(x,y-1,z-1) - s_{g^2}(x-1,y,z-1) - s_{g^2}(x-1,y-1,z),

wherein the three-dimensional summation tables S_g and S_{g^2} each contain M_x × M_y × M_z elements; g(x,y,z) denotes the gray value of the voxel whose center coordinate is (x,y,z) in the search area; x, y and z are integers, and S_g = S_{g^2} = 0 when x, y or z ≤ 0.
5. A method of deformation measurement of a three-dimensional digital volume image as defined in claim 1, 2, 3 or 4, characterized by: in step (5), the three-dimensional zero-mean normalized cross-correlation coefficient matrix $[C(u,v,w)]_{(\alpha,\beta,\gamma)}$ is:

$$[C(u,v,w)]_{(\alpha,\beta,\gamma)} = \frac{P(u,v,w) - Q(u,v,w)}{\sqrt{F}\,\sqrt{G(u,v,w)}},$$

wherein

$$P(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\, g(x+u,\, y+v,\, z+w),$$

$$Q(u,v,w) = \bar{f}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,y+v,z+w) = \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right]\times\left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right],$$

$$G(u,v,w) = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\left[g(x+u,y+v,z+w) - \bar{g}_{uvw}\right]^2 = \sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g^2(x,y,z) - \frac{1}{L_x L_y L_z}\left[\sum_{x=1+u}^{L_x+u}\sum_{y=1+v}^{L_y+v}\sum_{z=1+w}^{L_z+w} g(x,y,z)\right]^2,$$

$$F = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z}\left[f(x,y,z) - \bar{f}\right]^2 = \sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f^2(x,y,z) - \frac{1}{L_x L_y L_z}\left[\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z)\right]^2,$$

and

$$\bar{f} = \frac{1}{L_x \times L_y \times L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} f(x,y,z); \qquad \bar{g}_{uvw} = \frac{1}{L_x \times L_y \times L_z}\sum_{x=1}^{L_x}\sum_{y=1}^{L_y}\sum_{z=1}^{L_z} g(x+u,y+v,z+w).$$

$P(u,v,w)$ is rapidly computed by the three-dimensional fast Fourier transform method of step (3); the triple summation terms in $Q(u,v,w)$ and $G(u,v,w)$ are obtained by the fast table look-up of step (5); $F$ is computed by direct arithmetic.
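As an illustrative sketch of how steps (3)–(5) fit together (assuming NumPy; `zncc_volume` is a hypothetical helper, not a name from the patent), the term $P$ can be evaluated for all integer offsets at once with a zero-padded 3-D FFT correlation, while the $g$-dependent sums in $Q$ and $G$ come from cumulative-sum tables:

```python
import numpy as np

def zncc_volume(f, g):
    """ZNCC of a reference sub-volume f (shape L) against a search region g
    (shape M), for every integer offset (u, v, w) with 0 <= u <= M - L, etc.

    P is computed by one 3-D FFT cross-correlation; the running sums of g
    and g^2 over each L-sized window come from summed-volume tables.
    """
    f = np.asarray(f, float)
    g = np.asarray(g, float)
    L, M = np.array(f.shape), np.array(g.shape)
    n = float(L.prod())

    # P[u,v,w] = sum_x f(x) g(x+u): zero-padding f to M makes the circular
    # FFT correlation exact for all offsets 0..M-L.
    F_ = np.fft.fftn(f, s=g.shape)
    P = np.fft.ifftn(np.fft.fftn(g) * np.conj(F_)).real

    def window_sums(vol):
        # Summed-volume table, zero-padded so S = 0 for any index <= 0,
        # then inclusion-exclusion over every L-sized window.
        S = vol.cumsum(0).cumsum(1).cumsum(2)
        S = np.pad(S, ((1, 0),) * 3)
        a, b, c = L
        return (S[a:, b:, c:] - S[:-a, b:, c:] - S[a:, :-b, c:] - S[a:, b:, :-c]
                + S[:-a, :-b, c:] + S[:-a, b:, :-c] + S[a:, :-b, :-c]
                - S[:-a, :-b, :-c])

    sum_g = window_sums(g)        # triple sums of g in Q and G
    sum_g2 = window_sums(g * g)   # triple sum of g^2 in G

    fbar = f.mean()
    Fden = ((f - fbar) ** 2).sum()                 # F, by direct arithmetic
    valid = tuple(slice(0, m - l + 1) for m, l in zip(M, L))
    Q = fbar * sum_g
    Gden = sum_g2 - sum_g ** 2 / n
    return (P[valid] - Q) / np.sqrt(Fden * np.maximum(Gden, 1e-12))
```

The maximum of the returned matrix gives the integer-voxel displacement of the sampling point; the sub-voxel refinement of step (6) would then be applied around that peak.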
CN 201010520673 2010-10-20 2010-10-20 Three-dimensional digital volume image distortion measuring method Pending CN101980304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010520673 CN101980304A (en) 2010-10-20 2010-10-20 Three-dimensional digital volume image distortion measuring method

Publications (1)

Publication Number Publication Date
CN101980304A true CN101980304A (en) 2011-02-23

Family

ID=43600806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010520673 Pending CN101980304A (en) 2010-10-20 2010-10-20 Three-dimensional digital volume image distortion measuring method

Country Status (1)

Country Link
CN (1) CN101980304A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101813693A (en) * 2010-05-06 2010-08-25 北京大学 Cell in-situ active deformation measurement method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. Franck et al., "Three-dimensional Full-field Measurements of Large Deformations in Soft Materials Using Confocal Microscopy and Digital Volume Correlation," Experimental Mechanics, vol. 47, no. 3, 2007, pp. 429-430; relevant to claims 1-5 *
B. A. Roeder et al., "Local, Three-Dimensional Strain Measurements Within Largely Deformed Extracellular Matrix Constructs," Journal of Biomechanical Engineering, vol. 126, 2004, pp. 701-702; relevant to claims 1-5 *
A. Rack et al., "Analysis of Spatial Cross-Correlations in Multi-Constituent Volume Data," Journal of Microscopy, vol. 232, 2008; relevant to claims 1-5 *
J. Huang et al., "High-Efficiency Cell-Substrate Displacement Acquisition Via Digital Image Correlation Method," Optics and Lasers in Engineering, vol. 48, 2010; relevant to claims 1-5 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129686A (en) * 2011-03-24 2011-07-20 西北工业大学 Method for detecting sub-voxel surface based on voxel level outline rough positioning
CN102129686B (en) * 2011-03-24 2013-02-20 西北工业大学 Method for detecting sub-voxel surface based on voxel level outline rough positioning
CN103591904A (en) * 2013-09-18 2014-02-19 中国矿业大学(北京) Method for measuring three-dimensional deformation field inside object by using two steps of three-dimensional Fourier transformation
CN103591904B (en) * 2013-09-18 2016-08-17 中国矿业大学(北京) A kind of method of two step three-dimensional Fourier transform Measuring Object interior three-dimensional deformation fields
CN103743621A (en) * 2014-01-03 2014-04-23 东南大学 Ectopic digital volume correlation method based on image registration
CN105232087A (en) * 2015-11-05 2016-01-13 无锡祥生医学影像有限责任公司 Ultrasonic elastic imaging real-time processing system
CN105232087B (en) * 2015-11-05 2018-01-09 无锡祥生医疗科技股份有限公司 Ultrasonic elastograph imaging real time processing system
CN109385460A (en) * 2018-11-12 2019-02-26 中国科学技术大学 Method and system based on single fluorescent particle sizing cell three-dimensional tractive force
CN110751620A (en) * 2019-08-28 2020-02-04 宁波海上鲜信息技术有限公司 Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN110751620B (en) * 2019-08-28 2021-03-16 宁波海上鲜信息技术有限公司 Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN112082496A (en) * 2020-09-08 2020-12-15 西安建筑科技大学 Concrete internal deformation measurement method and system based on improved digital volume image correlation method
CN114549614A (en) * 2021-12-21 2022-05-27 北京大学 Digital volume correlation method, device, equipment and medium based on deep learning
CN114549614B (en) * 2021-12-21 2024-11-05 北京大学 Digital volume correlation method, device, equipment and medium based on deep learning
CN114777709A (en) * 2022-05-05 2022-07-22 东南大学 DVC (dynamic voltage waveform) microcrack characterization method based on daughter block separation
CN114777709B (en) * 2022-05-05 2024-04-19 东南大学 DVC microcrack characterization method based on sub-block separation
CN115100100A (en) * 2022-05-07 2022-09-23 高速铁路建造技术国家工程实验室 Phase-dependent displacement field acquisition method, electronic device, and storage medium
CN115100100B (en) * 2022-05-07 2024-08-20 高速铁路建造技术国家工程实验室 Phase-related displacement field acquisition method, electronic device and storage medium


Legal Events

Code Title
C06 / PB01 Publication
C10 / SE01 Entry into substantive examination (request for substantive examination in force)
C12 / RJ01 Rejection of the patent application after its publication (application publication date: 2011-02-23)