CN1920882A - System and method for salient region feature based 3d multi modality registration of medical images - Google Patents
System and method for salient region feature based 3d multi modality registration of medical images
- Publication number
- CN1920882A, CNA2006101262079A, CN200610126207A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- theta
- joint
- similarity measure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
A method for aligning a pair of images includes providing a pair of images, identifying (60) salient feature regions in both a first image and a second image, wherein each region is associated with a spatial scale, representing (64) feature regions by a center point of each region, registering (65) the feature points of one image with the feature points of the other image based on local intensities, ordering the feature pairs by a similarity measure, and optimizing a joint correspondence set of feature pairs by refining the center points to sub-pixel accuracy.
Description
Technical field
The present invention relates to the registration of three-dimensional digital medical images.
Background
This application claims the priority of U.S. Provisional Application No. 60/710,834, entitled "Method and System for Robust Salient Region Based Registration of 3D Medical Images," filed by Xu et al. on August 24, 2005, the contents of which are incorporated herein by reference.
In medical image processing, registration has become a fundamental task for establishing a mapping between the spatial locations of two or more images and is used in a variety of applications. The principal requirement of the aligning transformation is an optimal correspondence of the covered image content. Existing registration methods can be classified as feature-based (e.g., landmarks, edges, markers), intensity-based, or hybrid methods that combine aspects of both. Landmark-based methods can lead to poor alignment in regions far from the features. Methods based solely on image intensity tend, because of the nature of the optimization process, to become trapped in local optima far from the ideal solution. Hybrid techniques combine several such attributes and are particularly well suited to registering images from different modalities, where neither image intensity nor geometric information alone provides an accurate basis for the measure. For example, pairing mutual information with an additional information channel composed of region labeling information can improve the registration results for MR and PET images. Hybrid registration techniques are known, for example, for low-contrast abdominal regions, gel electrophoresis, and protein imaging. Other applications include the creation of atlases or standard databases suitable for image or object analysis, allowing physicians to gain insight into disease progression within the same patient or across patients, or to perform time-based follow-up during cancer treatment.
Examining the same subject with different imaging systems yields additional information, but this in turn requires multi-modality registration techniques for proper interpretation. The additional information is exploited by a variety of medical imaging systems, which can be roughly divided into two classes: anatomical imaging, which extracts morphological information (e.g., X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US)), and functional imaging, which visualizes the metabolic information of the underlying anatomy (e.g., single photon emission computed tomography (SPECT), positron emission tomography (PET), functional MRI (fMRI)). In multi-modality image registration, the combination of different image types is advantageous to the physician. For example, CT images are characterized by good spatial resolution, while PET images describe the function of the underlying tissue. By fusing a CT image with the corresponding PET image, which itself lacks spatial resolution, the missing functional information in the CT image can therefore be compensated.
Summary of the invention
Exemplary embodiments of the present invention as described herein generally include methods and systems for a registration method that automatically extracts 3D regions from two images, finds correspondences between the two images, and establishes a rigid registration transformation so that the fused result can be visualized in medical applications. The scale, translation, and rotation invariance of these inherent features makes them suitable, in three dimensions, for estimating the underlying transformation between mono-modal or multi-modal 3D medical images. A method according to an embodiment of the invention combines the advantageous aspects of feature-based and intensity-based methods and comprises the automatic extraction of a set of 3D salient region features on each image, the estimation of their correspondence set, and its sub-pixel accurate refinement, including the rejection of outliers. A region-growing-based method is used to extract the 3D salient region features, which reduces the complexity of feature clustering and of the correspondence search space.
An algorithm according to an embodiment of the invention is characterized by fast clustering of the salient regions using a kD-tree structure, and uses a local-intensity-driven registration of the 3D salient feature regions to improve the optimization. Additional features of an algorithm according to an embodiment of the invention include an initial pose estimate computed with the iterative closest point algorithm on the centroids of the two sets of salient regions, and a local refinement of the centroids to sub-pixel accuracy using a registration based on local intensities.
An algorithm according to an embodiment of the invention is a fully automatic, robust, multi-modality rigid image registration algorithm that can successfully register 3D images in arbitrary poses. The images are modeled by scale-invariant 3D features subject to geometric configuration constraints. The joint correspondence between multiple 3D salient features is established by fully exploiting the relative configuration constraints among features on the same image. Compared with consistency between individual features, the strict geometric constraints imposed by the joint correspondence make false matches extremely unlikely. Joint correspondences are added incrementally until the global image alignment quality can no longer be improved by adding new ones, so that, given suitable convergence, the transformation estimated from the joint correspondence set converges to the global optimum.
According to an aspect of the invention, there is provided a method for aligning a pair of images, comprising the steps of: providing a pair of images comprising a first image and a second image, wherein each image comprises a plurality of intensities corresponding to a domain of pixels in three-dimensional space; identifying salient feature regions in both the first image and the second image, wherein each region is associated with a spatial scale; representing each feature region by its center point; registering the feature points of one image with the feature points of the other image based on local intensities; ordering the feature pairs by a similarity measure; and optimizing a joint correspondence set of feature pairs by refining the center points to sub-pixel accuracy.
According to a further aspect of the invention, the method comprises: representing the salient feature region center points of one image in a kD-tree; querying the kD-tree for each feature to find a group of nearest neighboring features; and removing from the tree those neighboring features that have a lower saliency value and whose center points lie within the scale of the respective feature, whereby a substantially uniform distribution of salient feature regions over the image is obtained.
According to a further aspect of the invention, the spatial scale is the radius of a sphere containing the feature region.
According to a further aspect of the invention, the kD-tree uses the image pixel indices of the salient feature region center points as leaves, and the distances from a feature region to neighboring feature regions are in units of image indices.
According to a further aspect of the invention, registering the feature points based on local intensities further comprises: estimating an initial registration using an iterative closest point transformation between the first image and the second image; transforming all features of the second image into the coordinate space of the first image; storing the transformed features in a kD-tree and querying the tree for each feature in the first image, according to a predetermined selection criterion, to select neighboring features in the second image; and testing the translation invariance, rotation invariance, and global image similarity measure of the selected feature pairs in the first image and the second image, wherein the selected feature pairs are ordered by their global image similarity measure values.
According to a further aspect of the invention, the iterative closest point transformation minimizes the mean squared error between the two sets of feature points.
According to a further aspect of the invention, testing the translation invariance comprises estimating a translation t_{i,j} from the difference of the center position coordinates p_i and p_j of the i-th first-image feature and the j-th second-image feature in physical space.
According to a further aspect of the invention, testing the rotation invariance comprises estimating the entropy correlation coefficient

ECC(f_i, f_j) = 2 − 2·H(f_i, f_j) / (H(f_i) + H(f_j)),

where (f_i, f_j) denotes a feature pair from the first image and the second image, respectively, where H denotes the entropy of the image intensity values within the spherical neighboring feature region f_s of spatial scale s around voxel location x, defined as

H(s, x) = −∫ p(i, s, x) log₂ p(i, s, x) di,

where p(i, s, x) is the probability density function of the image intensity values i contained in f, and where H(f_i, f_j) is the joint differential entropy, defined as

H(f_i, f_j) = −∫∫ p(f_i, f_j) log₂ p(f_i, f_j) dI dJ,

where p(f_i, f_j) is the joint probability density of the image intensities in feature regions f_i and f_j, and I and J take values in the sets of possible intensity values in the first and second images, respectively.
According to a further aspect of the invention, testing the global image similarity measure comprises estimating

L_global(c_{i,j}) = 2 − 2·H(I_r, T(I_t)) / (H(I_r) + H(T(I_t))),

where I_r denotes the first image and T(I_t) denotes the second image transformed into the coordinate space of the first image, where H denotes the entropy of the image intensity values in one of the images within the spherical neighborhood of spatial scale s around voxel location x, defined as

H(s, x) = −∫ p(i, s, x) log₂ p(i, s, x) di,

where p(i, s, x) is the probability density function of the image intensity values i contained in the image, where H(I_r, T(I_t)) is the joint differential entropy, defined as

H(I_r, T(I_t)) = −∫∫ p(I_r, T(I_t)) log₂ p(I_r, T(I_t)) dI dJ,

where p(I_r, T(I_t)) is the joint probability density of the image intensities in I_r and T(I_t), and I and J take values in the sets of possible intensity values in the first and second images, respectively, and where L_global is evaluated over the entire overlapping domain of the first and second images.
According to a further aspect of the invention, optimizing the joint correspondence set of feature pairs further comprises: initializing the joint correspondence set with the feature pairs that are most similar according to the similarity measure; evaluating the similarity measure for the union of the joint correspondence set with each feature pair not contained in the joint correspondence set; and selecting the feature pair that maximizes the similarity measure of the union, wherein, if the similarity measure of the union of the maximizing feature pair with the joint correspondence set is greater than the similarity measure of the joint correspondence set, the maximizing feature pair is registered with sub-pixel accuracy using a local rigid transformation and added to the joint correspondence set.
According to a further aspect of the invention, the similarity measure is maximized by a registration transformation between the features calculated using an iterative closest point procedure.
According to a further aspect of the invention, if the similarity measure of the union of the maximizing feature pair with the joint correspondence set is less than or equal to the similarity measure of the joint correspondence set, a registration transformation is provided, the registration transformation being calculated from the registration transformations between the feature pairs that maximize the similarity measure.
According to another aspect of the invention, there is provided a computer-readable program storage device tangibly embodying a program of instructions executable by a computer to perform the method steps for aligning a pair of images.
Description of drawings
Figs. 1(a)-(e) illustrate exemplary sets of salient feature regions according to an embodiment of the invention.
Figs. 2(a)-(d) illustrate slices of a CT volume (left portions) that were translated, rotated, and overlaid on the original slices, according to an embodiment of the invention.
Fig. 3 is a flowchart of an EM-type algorithm for optimizing the joint correspondence set and the registration transformation, according to an embodiment of the invention.
Figs. 4(a)-(c) are tables showing all measured distances and standard deviations, given in cm in the x, y, and z directions, for the PET-CT, CT-CT, and SPECT-CT volume pairs, respectively, according to an embodiment of the invention.
Figs. 5(a)-(c) illustrate slices from three fused registration result images obtained using an algorithm according to an embodiment of the invention, from a PET-CT image pair, a CT-CT image pair with intensity artifacts, and a SPECT-CT image pair, respectively.
Fig. 6 is a flowchart of a 3D registration process according to an embodiment of the invention.
Fig. 7 is a block diagram of an exemplary computer system for implementing a 3D registration process according to an embodiment of the invention.
Detailed description
Exemplary embodiments of the present invention as described herein generally include systems and methods for configural matching of automatically extracted scale-, rotation-, and translation-invariant 3D salient region features. Accordingly, while the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed; on the contrary, the invention covers all modifications, equivalents, and alternatives falling within its spirit and scope.
As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2D images and voxels for 3D images). The image may be, for example, a medical image of a subject collected by computed tomography, magnetic resonance imaging, ultrasound, or any other medical imaging system known to one of skill in the art. The image may also be provided from non-medical contexts, such as remote sensing systems, electron microscopy, etc. Although an image can be thought of as a function from R³ to R, the methods of the invention are not limited to such images and can be applied to images of any dimension, e.g., 2D pictures or 3D volumes. For a 2- or 3-dimensional image, the domain of the image is typically a 2- or 3-dimensional rectangular array, wherein each pixel or voxel can be addressed with reference to a set of two or three mutually orthogonal axes. The terms "digital" and "digitized" as used herein refer to images or volumes, as appropriate, in a digital or digitized format acquired via a digital acquisition system or via conversion from an analog image.
According to an embodiment of the invention, a saliency description is used to automatically extract features from a 3D image. A flowchart summarizing a 3D registration algorithm according to an embodiment of the invention is presented in Fig. 6; the details of its steps are described below. Referring now to the figure, at step 60, 3D salient region features are detected by forming spherical regions. Each resulting salient region feature has the following attributes: (1) a region center, (2) a region scale (radius), and (3) a saliency value of the region.
At step 61, a substantially uniform distribution of salient region features over the image space is obtained by storing the set of feature region center points in a kD-tree structure that allows fast spatial queries. At step 62, the tree is queried for each feature to find its neighboring features. Because a region is described by its scale (radius), at step 63, those features that have a lower saliency and whose center points lie within the scale of the current feature are removed. The resulting subset of clustered salient features is removed from the whole set, so that no salient region feature lies within the scale of a salient region feature with a higher saliency value. This avoids the clustering of salient region features that would otherwise occur in image regions with large, locally maximal saliency values; such clusters could bias the solution and lead to errors in the subsequent image registration.
The parameters of the rotation invariance are estimated by a local, intensity-driven registration of the 3D salient feature regions; the parameter search space of this local transformation is restricted to the rotation parameters. Finding correspondences between the salient region features contained in two unrelated sets by testing every combination has a high computational complexity. Given the two sets of 3D salient region features, at step 64, the features are reduced to their center points. At step 65, the two resulting point clouds are registered using the iterative closest point (ICP) algorithm, which minimizes the mean squared error between the sets. At step 66, the resulting transformation is used to map the feature set of the image to be aligned into the reference image space. At step 67, the transformed alignment features can be stored in a kD-tree structure to allow fast queries for the N transformed alignment features nearest to a given reference image salient region feature. Under the hypothesis, derived from the initial ICP transformation, of a stable mapping between the point sets, the test of hypothesized correspondences can be reduced from the whole set of alignment-image salient feature regions to the N neighboring features of each reference feature.
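A minimal sketch of the center-point ICP initialization of steps 64-67 is given below. It assumes NumPy and SciPy are available and that the feature centers are already given as N×3 arrays in physical coordinates; the function names (`best_rigid_transform`, `icp_rigid`) are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_rigid(template_pts, reference_pts, n_iter=50, tol=1e-6):
    """Align template feature centers to reference feature centers by iterative closest point."""
    tree = cKDTree(reference_pts)
    moved = template_pts.copy()
    prev_mse = np.inf
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        dist, idx = tree.query(moved)            # closest reference center for each template center
        R, t = best_rigid_transform(template_pts, reference_pts[idx])
        moved = template_pts @ R.T + t
        mse = np.mean(dist ** 2)
        if abs(prev_mse - mse) < tol:
            break
        prev_mse = mse
    return R, t, moved
```

The transformed template centers (`moved`) could then be stored in a second kD-tree so that, for every reference feature, its N nearest transformed alignment features can be retrieved quickly, as in step 67.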
To achieve greater precision in the correspondences between salient feature regions, at step 68 the feature centers are aligned with sub-pixel accuracy by a registration driven by the local intensities of the salient feature regions. In this case, the parameter search space for the expectation-maximization optimization process comprises translation, rotation, and scale parameters.
The principle of a salient region feature is that it represents a high degree of local unpredictability or signal complexity with respect to a certain scale, where the scale refers to the radius of a spherical region around a voxel. The approach distinguishes points of interest by the Shannon entropy of spherical regions at different scales, and assumes that voxels from corresponding anatomical or functional structures have similar saliency values. The local nature of the saliency description offers a major advantage for image registration: corresponding salient region features are invariant with respect to the gross spatial transformation between the images, even if the images do not overlap.
Compared with the corresponding CT images, it is observed in SPECT and PET images that the locations of the local saliency maxima are often locally translated within the corresponding structures of interest. This is addressed by a local rigid registration step, which moves the region centers with sub-pixel accuracy exactly to the corresponding positions based on the correlation of the local intensities, without deforming the image content. This step preserves the basic assumption, mentioned above, of similar saliency values for corresponding features.
For an image intensity range D, the saliency is defined as

A_D(s_p, x) = H_D(s_p, x) · W_D(s_p, x),    (1)

where H_D denotes the entropy of the image intensity values i ∈ D within the spherical neighborhood R_s of scale (radius) s around voxel location x:

H_D(s, x) = −∫_D p(i, s, x) log₂ p(i, s, x) di.

Here, p(i, s, x) is the probability density function (PDF) of the image intensity descriptor i within R_s. W_D(s, x) is a measure of the dissimilarity of the PDFs with respect to scale and increases with increasing dissimilarity of the PDFs:

W_D(s, x) = s · ∫_D |∂p(i, s, x)/∂s| di.

The scale s_p at which H_D attains a local peak at x is given by

s_p = { s : ∂H_D(s, x)/∂s = 0, ∂²H_D(s, x)/∂s² < 0 }.    (2)

After equation (1) has been solved at each voxel, the result is two intermediate images of the same size as the input image that have to be analyzed: one image containing the actual saliency values, and another containing the scale values from equation (2). Using a region-growing search method, the locally optimal and most descriptive salient region points can be extracted from the saliency image. To reduce the search space, a global saliency threshold δ is used as a lower bound. According to an embodiment of the invention, the global saliency threshold δ is set to half of the mean saliency; this empirical setting gave good results for excluding meaningless regions.
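As an illustration of equations (1) and (2), the sketch below evaluates the entropy-based saliency of a single voxel over a set of candidate scales. It is a minimal discrete surrogate (histogram PDFs, finite differences over scale), assuming NumPy; the function names are ours, not the patent's, and a practical implementation would vectorize this over the whole volume.

```python
import numpy as np

def _sphere_offsets(radius):
    """Integer voxel offsets inside a sphere of the given radius."""
    r = int(np.ceil(radius))
    g = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    return g[np.sum(g ** 2, axis=1) <= radius ** 2]

def saliency_at_voxel(volume, x, scales, bins=32):
    """Entropy-based saliency A = H(s_p) * W(s_p) at voxel x (discrete surrogate for Eqs. (1)-(2))."""
    lo, hi = float(volume.min()), float(volume.max())
    pdfs, entropies = [], []
    for s in scales:
        offs = _sphere_offsets(s) + np.asarray(x)
        ok = np.all((offs >= 0) & (offs < np.asarray(volume.shape)), axis=1)
        vals = volume[tuple(offs[ok].T)]
        counts, _ = np.histogram(vals, bins=bins, range=(lo, hi))
        p = counts / max(counts.sum(), 1)
        pdfs.append(p)
        nz = p[p > 0]
        entropies.append(-np.sum(nz * np.log2(nz)))
    entropies = np.asarray(entropies)
    k = int(np.argmax(entropies))              # scale index where the entropy peaks, cf. Eq. (2)
    if 0 < k < len(scales) - 1:
        # inter-scale dissimilarity weight W: scale times the L1 rate of change of the PDF
        w = scales[k] * np.sum(np.abs(pdfs[k + 1] - pdfs[k - 1])) / (scales[k + 1] - scales[k - 1])
    else:
        w = 0.0
    return float(entropies[k] * w), float(scales[k])
```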
A closest-point algorithm based on a kD-tree structure over the region features determines the locations of the local saliency maxima, which form a list of voxel locations sorted by their saliency values. This approach avoids the clustering of local maxima that occurs, for example, when a global threshold is applied and features are extracted only in descending order of saliency value. The kD-tree is created using the indices of the extracted local-saliency-maximum region centers as leaves. By querying the region center index of a feature, the K nearest neighbors of that particular feature can be found efficiently. The distances to the returned features are thus in units of image indices rather than physical units. The scale parameter can be used as a minimal distance requirement: returned features whose distance is less than or equal to the scale of the queried feature and that have a lower saliency are removed from the feature set. This restriction can be applied to the whole set in order to remove clustered regions. If a specific size of the result set is required, features with lower saliency that satisfy the distance criterion can be added to the list. A feature is retained in the set if its center is not located within the region of a feature with a higher saliency value. The resulting set of 3D salient region features is evenly distributed and provides the initial set of all pairs for the subsequent feature correspondence search.
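A sketch of this kD-tree based suppression of clustered region features (the criterion described above: a feature is dropped if its center lies within the radius of a more salient feature) might look as follows. It assumes SciPy's cKDTree and uses illustrative names; it is not the patent's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def prune_salient_regions(centers, scales, saliencies):
    """Remove region features whose center lies within the scale (radius) of a more salient feature.

    centers    : (N, 3) array of region centers in image-index units
    scales     : (N,) array of region radii (same units as centers)
    saliencies : (N,) array of saliency values
    Returns a boolean mask of the features that are kept.
    """
    order = np.argsort(-saliencies)              # most salient first
    tree = cKDTree(centers)
    keep = np.ones(len(centers), dtype=bool)
    for i in order:
        if not keep[i]:
            continue
        # neighbors whose centers fall inside the radius of feature i
        for j in tree.query_ball_point(centers[i], r=scales[i]):
            if j != i and saliencies[j] <= saliencies[i]:
                keep[j] = False                   # suppress the less salient neighbor
    return keep
```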
Figs. 1(a)-(e) illustrate exemplary sets of salient feature regions. The salient feature regions are indicated by white circles in Figs. 1(a)-(b). Fig. 1(a) illustrates the effect of clustering on a set of salient region features, while Fig. 1(b) illustrates the salient region features selected by a method according to an embodiment of the invention. Figs. 1(c), 1(d), and 1(e) illustrate the most important 3D salient region features as extracted from a CT image, a PET image, and an MR image, respectively, and then visualized. The salient region features appear as spherical bulbs in Figs. 1(c)-(e). The volume is clipped with a specific transfer function so that the feature positions can be visualized in 3D, whereas the extraction itself is performed over the whole intensity range.
The next step of the 3D registration, referred to as the region feature matching step, estimates the set of hypothesized correspondences between the features of the two images. Let I_r denote the reference image and I_t the template image, and let N_r and N_t be the numbers of features extracted from I_r and I_t, respectively. The set of all hypothesized feature correspondences is C = {c_{i,j}}, where i ∈ [1, N_r], j ∈ [1, N_t], |C| = N_r × N_t, and c_{i,j} = (f_i, f_j) is a pair consisting of feature f_i in I_r and feature f_j in I_t.
A parameter set Θ defines the transformation T that aligns the two images and can be estimated from the translation, scale, and rotation invariance between f_i and f_j. The translational part between f_i and f_j can be estimated directly from the difference of the feature centers p_i and p_j, where p_i and p_j are the centers of the i-th reference feature and the j-th template feature in physical space. Scale invariance is optional in this case, because for the 3D medical images according to an embodiment of the invention the voxel size is provided in the DICOM (Digital Imaging and Communications in Medicine) header. To achieve rotation invariance, the rotation parameters are estimated by a local rigid registration of the 3D salient feature regions based on their intensity values. The optimization is restricted to the rotation parameter subspace Θ_R and is driven by an intensity similarity measure, the entropy correlation coefficient, a particular form of normalized mutual information, determined by

ECC(A, B) = 2 − 2·H(A, B) / (H(A) + H(B)),

where the joint differential entropy H(A, B) can be defined by

H(A, B) = −∫∫ p(A, B) log₂ p(A, B) dI dJ,

where the integration domain is over the regions R_i and R_j^Θ, p(A, B) is the joint probability density of the image intensities in regions A and B, and I and J take values in the sets of possible intensity values in I_f and I_m, respectively. This coefficient provides improved stability with respect to the overlap domain, along with some other favorable properties: increasing values indicate increasing correlation between the images and, vice versa, decreasing values indicate decreasing correlation. Rotation invariance can therefore be formulated as an optimization problem over Θ_R:

Θ_R^{i,j} = argmax over Θ_R of ECC(f_i, f_j^{Θ_R}).
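A discrete, histogram-based version of the entropy correlation coefficient used in this optimization can be sketched as follows. The name `ecc` is ours, and the two inputs are assumed to be intensity samples taken at corresponding locations of the two regions (or of the two images over their overlap); it is not the patent's implementation.

```python
import numpy as np

def ecc(a, b, bins=64):
    """Entropy correlation coefficient ECC(A, B) = 2 - 2 * H(A, B) / (H(A) + H(B))."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1)                 # marginal of A
    pb = pab.sum(axis=0)                 # marginal of B

    def h(p):
        nz = p[p > 0]
        return -np.sum(nz * np.log2(nz))

    denom = h(pa) + h(pb)
    if denom == 0.0:                     # constant images carry no information
        return 0.0
    return 2.0 - 2.0 * h(pab) / denom
```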
A global image similarity measure L_global is used to evaluate the quality of each of the M pairs:

L_global(c_{i,j}) = ECC(I_r, T_{i,j}(I_t)),

where T_{i,j} is the transformation derived from the correspondence c_{i,j}, and where L_global is evaluated over the entire overlapping domain of the two images rather than only over the local feature regions.
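Assuming a candidate correspondence has already been turned into a rigid transform, L_global can be evaluated roughly as in the sketch below, which reuses the `ecc` helper from the previous sketch. Note that `scipy.ndimage.affine_transform` expects the output-to-input (pull-back) mapping, so `rotation` and `translation` here denote that inverse mapping; the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def global_similarity(reference, template, rotation, translation, bins=64):
    """Evaluate L_global: resample the template into the reference space with the
    candidate rigid transform and compute the ECC over the overlapping domain."""
    warped = ndimage.affine_transform(template.astype(float), rotation,
                                      offset=translation,
                                      output_shape=reference.shape,
                                      order=1, cval=np.nan)
    overlap = ~np.isnan(warped)          # voxels actually covered by the warped template
    if not np.any(overlap):
        return 0.0
    return ecc(reference[overlap], warped[overlap], bins=bins)
```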
Feature pairs separated by large spatial distances are unlikely to be correspondences and can be removed, which reduces the joint correspondence search space. The correspondence search space can thus be reduced from all pair combinations to combinations between locally neighboring features only. The neighboring sets can be estimated by computing the ICP transformation between the reference and template region feature sets, treating each set as a point cloud of region center positions. The ICP algorithm aligns the feature sets by minimizing the local mean squared error (MSE). The result is used to transform all template features into the coordinate space of the reference image, and the transformed template features are stored in a new kD-tree. Then, for each salient feature in the reference image, the approximately nearest neighboring features can be determined by a fast query of the tree. The number N_n of neighboring transformed template features is much smaller than the full cardinality of the set, N_n << N_t, and these neighbors are combined with each reference feature. Under the hypothesis that the initial ICP transformation is a good approximation of the actual aligning transformation, and that features separated by larger distances are less likely to correspond, this reduces the complexity to N_r × N_n. The N_r × N_n feature pairs are tested for their correspondence quality on the basis of their translation invariance t_{i,j}, rotation invariance Θ_R^{i,j}, and global image similarity measure L_global(c_{i,j}), which can be evaluated by applying the local feature transformations to the entire aligned image.
In experiments performed according to an embodiment of the invention, a neighborhood size of N_n equal to one tenth of N_t was successfully used to establish the initial search space for the joint correspondences. In addition, the hypothesized correspondences are sorted by the global similarity measure L_global, which leads to fewer outliers in the estimated correspondence set.
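A sketch of this neighborhood-restricted candidate generation is given below. It assumes the template centers have already been mapped into the reference space by the initial ICP transform (e.g., the `moved` output of the `icp_rigid` sketch above), and the resulting pairs would subsequently be ranked by L_global; the names are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_pairs(ref_centers, tmpl_centers_in_ref, n_neighbors):
    """Restrict the correspondence search to the n_neighbors nearest ICP-transformed
    template features of each reference feature."""
    tree = cKDTree(tmpl_centers_in_ref)
    _, nbrs = tree.query(ref_centers, k=n_neighbors)
    nbrs = np.asarray(nbrs).reshape(len(ref_centers), -1)   # handle k == 1 as well
    return [(i, int(j)) for i in range(len(ref_centers)) for j in nbrs[i]]
```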
Figs. 2(a)-(d) illustrate slices of a CT volume (left portions) that were translated, rotated, and overlaid on the original slices. Circles indicate salient feature regions with a particular scale. The correspondences were ordered by L_global, and the whole set contains very few outliers. For clarity, only the first four correspondences are shown for each method.
The hypothesized correspondence set of size M, computed by the feature correspondence search of the algorithm, is used to estimate the transformation T between the two images. This transformation is not very accurate, because its parameters are computed from features constrained to discrete image grid positions. As mentioned earlier, some features are not placed at exactly corresponding spatial locations. The resulting set may therefore contain outliers that bias the transformation in opposing directions. In the following step, Θ and C are refined with sub-pixel accuracy in an iterative process in order to achieve a more accurate alignment.

It is desirable to optimize a joint correspondence set J ⊆ C with n ≤ M elements that contains feature pairs aligned with sub-pixel accuracy and, ideally, no outliers. The elements of the optimized joint correspondence set are used as input to the ICP algorithm in order to compute the transformation that maximizes the global image similarity. To keep the number of feature pairs low and the registration efficient, an expectation-maximization (EM)-type algorithm with a limited number of iteration steps is used. In each iteration, the transformation T_{J_k} is computed from the progressively refined joint correspondence set J_k ⊆ J, and L_global is used as the convergence criterion of the refinement process. Once a region feature pair has been locally registered to sub-pixel accuracy, subsequent registrations of this particular pair do not improve the quality of the correspondence and can be omitted. Therefore, during each iteration step only the feature pair added by that iteration is refined, which saves computation time.
Fig. 3 shows a flowchart of the EM-type algorithm for optimizing the joint correspondence set and the registration transformation. The algorithm is initialized with a joint correspondence set J* containing the two foremost pairs of C, typically the best two feature pairs of the ordered set obtained as described above. A local rigid registration is used to refine the correspondence of the salient region features with sub-pixel accuracy. Referring now to the figure, at step 31, the current set of sub-pixel refined feature correspondences is provided. At step 32, the estimation step, the global similarity measure L_global(J* ∪ c_{i,j}) is computed for all c_{i,j} ∈ C with c_{i,j} ∉ J*. At step 33, the maximization step, the pair c*_{i,j} that maximizes L_global(J* ∪ c_{i,j}) is selected. Then, at step 34, if the maximum L_global(J* ∪ c_{i,j}) ≤ L_global(J*), the transformation T_{J*} is returned and the algorithm terminates at step 35. Otherwise, at step 36, the maximizing feature pair c*_{i,j} is registered with sub-pixel accuracy using a local rigid transformation in order to refine the feature centers. At step 37, the refined feature pair is added to the set J*, i.e., J* := J* ∪ {c*_{i,j}}, and the transformation T_{J*} is recomputed at step 38. Steps 32-38 are then repeated until convergence.
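The loop of steps 31-38 can be expressed compactly as the following sketch. The callables `similarity_of_set` (estimate T from the set by ICP and evaluate L_global) and `refine_pair` (local rigid sub-pixel registration of one pair) stand in for routines described elsewhere in this text; they are placeholders we introduce for illustration, not functions defined by the patent.

```python
def optimize_joint_correspondence(candidates, similarity_of_set, refine_pair, init_pairs):
    """Greedy EM-type growth of the joint correspondence set J* (Fig. 3, steps 31-38)."""
    joint = list(init_pairs)                          # step 31: current correspondence set J*
    remaining = [c for c in candidates if c not in joint]
    best_score = similarity_of_set(joint)
    while remaining:
        # step 32 (estimation): score the union of J* with every unused candidate
        scores = [similarity_of_set(joint + [c]) for c in remaining]
        k = max(range(len(scores)), key=scores.__getitem__)   # step 33 (maximization)
        if scores[k] <= best_score:                   # step 34: no further improvement, stop
            break
        refined = refine_pair(remaining.pop(k))       # step 36: local rigid sub-pixel refinement
        joint.append(refined)                         # step 37: J* := J* ∪ {c*}
        best_score = similarity_of_set(joint)         # step 38: recompute T_{J*} and its score
    return joint
```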
An algorithm according to an embodiment of the invention was tested on various 3D medical images of the same patients. Measurements were performed on 11 PET-CT volume pairs, 3 CT volume pairs from different treatment stages acquired at different times, and 10 SPECT-CT volume pairs from a hybrid scanner. A method according to an embodiment of the invention had to contend with the different modalities, image intensity artifacts, noise, varying fields of view, and, in some of the PET-CT pairs, slices with differing intensity scales that were not corrected during import. Figs. 4(a)-(c) are tables showing all measured distances and standard deviations, given in cm in the x, y, and z directions, for the PET-CT, CT-CT, and SPECT-CT volume pairs, respectively. The quality of the PET-CT and CT-CT registrations was evaluated by medical experts by measuring the distances between several points of interest: the right and left lung tips, the apex of the heart, the round end of the liver, and the upper-left, upper-right, lower-left, and lower-right kidney edges. Because the 10 SPECT-CT image pairs were acquired by a state-of-the-art hybrid scanner, a physician manually de-registered the SPECT images with translations varying from 10 to 50 mm in each of the x, y, and z directions and rotations varying from 5 to 60 degrees. After registration, several identifiable landmark points were selected by the medical experts on the CT and SPECT images.
The tests included real medical images containing noise or artifacts caused by varying intensity calibration between slices. These problems were not corrected before registration, so that an algorithm according to an embodiment of the invention could be tested on such data. Figs. 5(a)-(c) illustrate slices from three fused registration result images obtained using an algorithm according to an embodiment of the invention, from a PET-CT image pair, a CT-CT image pair with intensity artifacts, and a SPECT-CT image pair, respectively. Although the CT images of the latter pair were acquired with a limited field of view and the images contain considerable noise, the proposed registration achieved acceptable accuracy. Remaining mismatches can be addressed by embodiments of the invention with non-rigid transformations.
The results were evaluated by medical experts using dedicated visualization and measurement software. For this assessment, the medical experts could choose between using the centroids of 3D regions of interest and directly marked landmark positions. The task was supported by integrating fusion visualization and several additional measurement tools into the rendering software. In the PET-CT case, a higher standard deviation in the z direction is apparent. The reason for this may lie in the differences between the acquisition models: the CT image represents a snapshot of respiration, whereas the PET image is acquired over many respiratory cycles and more or less depicts an average respiratory motion. Because of this motion of the diaphragm, some organs in the abdominal region are raised and lowered, which causes the larger deviations seen in the data samples. The algorithm used for this experiment according to an embodiment of the invention models only rigid transformations and does not model such local deformations. For the CT-CT data this effect no longer dominates, because ideally the patient inhales identically in both acquisitions. The SPECT-CT data are inherently well matched, and the user-defined rigid transformations applied to the SPECT images introduce no local deformations, so good registration results can be expected for these cases.
In all results, a certain measurement error is introduced because the medical experts had to specify positions manually by clicking locations in the various slice views. However, in evaluation experiments of this kind performed previously, the mean difference of the distances between specified points of interest over several measurement passes (inter-observer and intra-observer) did not exceed 3 mm.
It is to be understood that the present invention can be implemented in various forms of hardware, software, firmware, special-purpose processes, or a combination thereof. In one embodiment, the present invention can be implemented in software as an application program tangibly embodied on a computer-readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture.
Fig. 7 is a block diagram of an exemplary computer system for implementing a 3D registration process according to an embodiment of the invention. Referring now to Fig. 7, a computer system 71 for implementing the present invention can comprise, inter alia, a central processing unit (CPU) 72, a memory 73, and an input/output (I/O) interface 74. The computer system 71 is generally coupled through the I/O interface 74 to a display 75 and various input devices 76 such as a mouse and a keyboard. The support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus. The memory 73 can include random access memory (RAM), read-only memory (ROM), disk drives, tape drives, etc., or a combination thereof. The present invention can be implemented as a routine 77 that is stored in the memory 73 and executed by the CPU 72 to process a signal from a signal source 78. As such, the computer system 71 is a general-purpose computer system that becomes a specific-purpose computer system when executing the routine 77 of the present invention.
The computer system 71 also includes an operating system and microinstruction code. The various processes and functions described herein can be either part of the microinstruction code or part of an application program (or a combination thereof) executed via the operating system. In addition, various other peripheral devices, such as additional data storage devices and printing devices, can be connected to the computer platform.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the system components (or the process steps) may differ depending on the manner in which the present invention is programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
While the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will appreciate that various modifications and substitutions can be made thereto without departing from the spirit and scope of the invention as set forth in the appended claims.
Claims (27)
1. A method of aligning a pair of images, the method comprising the steps of:
providing a pair of images comprising a first image and a second image, wherein each image comprises a plurality of intensities corresponding to a domain of pixels in three-dimensional space;
identifying salient feature regions in both the first image and the second image, wherein each region is associated with a spatial scale;
representing each feature region by its center point;
registering the feature points of one image with the feature points of the other image based on local intensities;
ordering the feature pairs by a similarity measure; and
optimizing a joint correspondence set of feature pairs by refining the center points to sub-pixel accuracy.
2. The method according to claim 1, further comprising: representing the salient feature region center points of one image in a kD-tree; querying the kD-tree for each feature to find a group of nearest neighboring features; and removing from the tree those neighboring features that have a lower saliency value and whose center points lie within the scale of the respective feature, whereby a substantially uniform distribution of salient feature regions over the image is obtained.
3. The method according to claim 1, wherein the spatial scale is the radius of a sphere containing the feature region.
4. The method according to claim 2, wherein the kD-tree uses the image pixel indices of the salient feature region center points as leaves, and wherein the distances from a feature region to neighboring feature regions are in units of image indices.
5. The method according to claim 1, wherein registering the feature points based on local intensities further comprises:
estimating an initial registration using an iterative closest point transformation between the first image and the second image;
transforming all features of the second image into the coordinate space of the first image;
storing the transformed features in a kD-tree and querying the tree for each feature in the first image, according to a predetermined selection criterion, to select neighboring features in the second image; and
testing the translation invariance, rotation invariance, and global image similarity measure of the selected feature pairs in the first image and the second image, wherein the selected feature pairs are ordered by their global image similarity measure values.
6. The method according to claim 5, wherein the iterative closest point transformation minimizes the mean squared error between the two sets of feature points.
7. The method according to claim 5, wherein testing the translation invariance comprises estimating a translation from the difference of the center position coordinates p_i and p_j of the i-th first-image feature and the j-th second-image feature in physical space.
8. The method according to claim 5, wherein testing the rotation invariance comprises estimating the entropy correlation coefficient

ECC(f_i, f_j) = 2 − 2·H(f_i, f_j) / (H(f_i) + H(f_j)),

where (f_i, f_j) denotes a feature pair from the first image and the second image, respectively, where H denotes the entropy of the image intensity values within the spherical neighboring feature region f_s of spatial scale s around voxel location x, defined as

H(s, x) = −∫ p(i, s, x) log₂ p(i, s, x) di,

where p(i, s, x) is the probability density function of the image intensity values i contained in f, and where H(f_i, f_j) is the joint differential entropy, defined as

H(f_i, f_j) = −∫∫ p(f_i, f_j) log₂ p(f_i, f_j) dI dJ,

where p(f_i, f_j) is the joint probability density of the image intensities in feature regions f_i and f_j, and I and J take values in the sets of possible intensity values in the first and second images, respectively.
9. The method according to claim 5, wherein testing the global image similarity measure comprises estimating

L_global(c_{i,j}) = 2 − 2·H(I_r, T(I_t)) / (H(I_r) + H(T(I_t))),

where I_r denotes the first image and T(I_t) denotes the second image transformed into the coordinate space of the first image, where H denotes the entropy of the image intensity values in one of the images within the spherical neighborhood of spatial scale s around voxel location x, defined as

H(s, x) = −∫ p(i, s, x) log₂ p(i, s, x) di,

where p(i, s, x) is the probability density function of the image intensity values i contained in the image, and where H(I_r, T(I_t)) is the joint differential entropy, defined as

H(I_r, T(I_t)) = −∫∫ p(I_r, T(I_t)) log₂ p(I_r, T(I_t)) dI dJ.
10. The method according to claim 1, wherein optimizing the joint correspondence set of feature pairs further comprises:
initializing the joint correspondence set with the feature pairs that are most similar according to the similarity measure;
evaluating the similarity measure for the union of the joint correspondence set with each feature pair not contained in the joint correspondence set; and
selecting the feature pair that maximizes the similarity measure of the union, wherein, if the similarity measure of the union of the maximizing feature pair with the joint correspondence set is greater than the similarity measure of the joint correspondence set, the maximizing feature pair is registered with sub-pixel accuracy using a local rigid transformation and added to the joint correspondence set.
11. The method according to claim 10, wherein the similarity measure is maximized by a registration transformation between the features calculated using an iterative closest point procedure.
12. The method according to claim 10, wherein, if the similarity measure of the union of the maximizing feature pair with the joint correspondence set is less than or equal to the similarity measure of the joint correspondence set, a registration transformation is provided, the registration transformation being calculated from the registration transformations between the feature pairs that maximize the similarity measure.
13. A method of aligning a pair of images, the method comprising the steps of:
providing a pair of images comprising a first image and a second image, wherein each image comprises a plurality of intensities corresponding to a domain of pixels in three-dimensional space;
identifying salient feature regions in both the first image and the second image, wherein each region is associated with a spatial scale;
estimating an initial registration using an iterative closest point transformation between the first image and the second image;
transforming all features of the second image into the coordinate space of the first image;
storing the transformed features in a kD-tree and querying the tree for each feature in the first image, according to a predetermined selection criterion, to select neighboring features in the second image;
testing the translation invariance, rotation invariance, and global image similarity measure of the selected feature pairs in the first image and the second image; and
ordering the selected feature pairs by their global image similarity measure values.
14. The method according to claim 13, further comprising: representing each feature region by its center point; storing the feature region center points of one image in a kD-tree; querying the kD-tree for each feature to find a group of neighboring features; and removing from the tree those neighboring features that have a lower saliency value and whose center points lie within the scale of the respective feature, whereby a substantially uniform distribution of salient feature regions over the image is obtained.
15. The method according to claim 13, further comprising:
initializing a joint correspondence set with the feature pairs that are most similar according to the similarity measure;
evaluating the similarity measure for the union of the joint correspondence set with each feature pair not contained in the joint correspondence set; and
selecting the feature pair that maximizes the similarity measure of the union, wherein, if the similarity measure of the union of the maximizing feature pair with the joint correspondence set is greater than the similarity measure of the joint correspondence set, the maximizing feature pair is registered with sub-pixel accuracy using a local rigid transformation and added to the joint correspondence set;
wherein the global image similarity measure is defined as

L_global(c_{i,j}) = 2 − 2·H(I_r, T(I_t)) / (H(I_r) + H(T(I_t))),

where I_r denotes the first image and T(I_t) denotes the second image transformed into the coordinate space of the first image, where H denotes the entropy of the image intensity values in one of the images within the spherical neighborhood of spatial scale s around voxel location x, defined as

H(s, x) = −∫ p(i, s, x) log₂ p(i, s, x) di,

where p(i, s, x) is the probability density function of the image intensity values i contained in the image, and where H(I_r, T(I_t)) is the joint differential entropy, defined as

H(I_r, T(I_t)) = −∫∫ p(I_r, T(I_t)) log₂ p(I_r, T(I_t)) dI dJ.
16. A computer-readable program storage device tangibly embodying a program of instructions executable by a computer to perform method steps for aligning a pair of images, the method comprising the steps of:
providing a pair of images comprising a first image and a second image, wherein each image comprises a plurality of intensities corresponding to a domain of pixels in three-dimensional space;
identifying salient feature regions in both the first image and the second image, wherein each region is associated with a spatial scale;
representing each feature region by its center point;
registering the feature points of one image with the feature points of the other image based on local intensities;
ordering the feature pairs by a similarity measure; and
optimizing a joint correspondence set of feature pairs by refining the center points to sub-pixel accuracy.
17. The computer-readable program storage device according to claim 16, the method further comprising: representing the salient feature region center points of one image in a kD-tree; querying the kD-tree for each feature to find a group of neighboring features; and removing from the tree those neighboring features that have a lower saliency value and whose center points lie within the scale of the respective feature, whereby a substantially uniform distribution of salient feature regions over the image is obtained.
18. The computer-readable program storage device according to claim 16, wherein the spatial scale is the radius of a sphere containing the feature region.
19. The computer-readable program storage device according to claim 17, wherein the kD-tree uses the image pixel indices of the salient feature region center points as leaves, and wherein the distances from a feature region to neighboring feature regions are in units of image indices.
20. The computer-readable program storage device according to claim 16, wherein registering the feature points based on local intensities further comprises:
estimating an initial registration using an iterative closest point transformation between the first image and the second image;
transforming all features of the second image into the coordinate space of the first image;
storing the transformed features in a kD-tree and querying the tree for each feature in the first image, according to a predetermined selection criterion, to select neighboring features in the second image; and
testing the translation invariance, rotation invariance, and global image similarity measure of the selected feature pairs in the first image and the second image, wherein the selected feature pairs are ordered by their global image similarity measure values.
21. The computer-readable program storage device of claim 20, wherein said iterative closest point transformation minimizes the squared error between each set of feature points.
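The iterative closest point step of claims 20-21 can be illustrated with a standard rigid ICP that alternates nearest-neighbor matching with a least-squares (SVD-based) rigid update; this is a generic sketch using NumPy and SciPy, not the patent's code, and all names are illustrative:

```python
# Rigid ICP: repeatedly match each moving feature center to its closest fixed
# center and solve for the rotation/translation that minimizes the squared error.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(moving, fixed, n_iter=30):
    """Rigidly align 'moving' feature centers to 'fixed' feature centers."""
    tree = cKDTree(fixed)
    R, t = np.eye(3), np.zeros(3)
    pts = moving.copy()
    for _ in range(n_iter):
        _, nn = tree.query(pts)              # closest fixed point for each moving point
        dR, dt = best_rigid(pts, fixed[nn])
        pts = pts @ dR.T + dt
        R, t = dR @ R, dR @ t + dt           # accumulate the rigid transform
    return R, t

# Example: recover a known rotation/translation from transformed feature centers.
rng = np.random.default_rng(1)
fixed = rng.uniform(0, 100, (80, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moving = (fixed - fixed.mean(0)) @ R_true.T + fixed.mean(0) + [5.0, -3.0, 2.0]
R, t = icp(moving, fixed)
print(np.round(R, 3), np.round(t, 2))
```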
22. The computer-readable program storage device of claim 20, wherein testing said translation invariance comprises estimating the distance
d(i, j) = ‖p_i − p_j‖,
wherein p_i and p_j are the center position coordinates, in physical space, of the i-th feature of the first image and the j-th feature of the second image.
23. The computer-readable program storage device of claim 20, wherein testing said rotation invariance comprises estimating
MI(f_i, f_j) = H(f_i) + H(f_j) − H(f_i, f_j),
wherein (f_i, f_j) denotes a feature pair from said first image and said second image, respectively, wherein H denotes the entropy of the image intensity values within a spherical feature-region neighborhood f_s about a voxel location x and said spatial scale s, defined as
H(s, x) = −∫_f p(i, s, x) log₂ p(i, s, x) di,
wherein p(i, s, x) is the probability density function of the image intensity values i contained in f, and wherein H(f_i, f_j) is the joint differential entropy, defined as
H(f_i, f_j) = −∫∫ p(I, J) log₂ p(I, J) dI dJ,
wherein p(I, J) is the joint probability density of the image intensities in feature regions f_i and f_j, and I and J take values in the sets of possible intensity values in said first and second images, respectively.
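The entropy-based feature-pair similarity of claim 23 can be approximated by estimating the densities with intensity histograms; the sketch below makes that discretization assumption (bin count, histogram estimator) and uses illustrative names for the sampled region intensities:

```python
# Histogram-based estimate of the entropies and the mutual-information style
# similarity H(a) + H(b) - H(a, b) between two sampled feature regions.
import numpy as np

def entropy(samples, bins=32):
    p, _ = np.histogram(samples, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_entropy(samples_a, samples_b, bins=32):
    p, _, _ = np.histogram2d(samples_a, samples_b, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pair_similarity(region_a, region_b, bins=32):
    """Similarity of two feature regions from their intensity samples."""
    return (entropy(region_a, bins) + entropy(region_b, bins)
            - joint_entropy(region_a, region_b, bins))

# Example: a correlated region pair scores higher than an unrelated pair.
rng = np.random.default_rng(2)
a = rng.normal(100, 20, 5000)
print(pair_similarity(a, a + rng.normal(0, 5, 5000)),   # related regions
      pair_similarity(a, rng.normal(100, 20, 5000)))    # unrelated regions
```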
24. The computer-readable program storage device of claim 20, wherein testing said global image similarity measure comprises estimating
L_global(I_r, T(I_f)) = H(I_r) + H(T(I_f)) − H(I_r, T(I_f)),
wherein I_r denotes said first image and T(I_f) denotes said second image transformed into the coordinate space of said first image, wherein H denotes the entropy of the image intensity values in one of said images about a voxel location x and said spatial scale s, defined as
H(s, x) = −∫_I p(i, s, x) log₂ p(i, s, x) di,
wherein p(i, s, x) is the probability density function of the image intensity values i contained in I, wherein H(I_r, T(I_f)) is the joint differential entropy, defined as
H(I_r, T(I_f)) = −∫∫ p(I, J) log₂ p(I, J) dI dJ,
wherein p(I, J) is the joint probability density of the image intensities in I_r and T(I_f), and I and J take values in the sets of possible intensity values in said first and second images, respectively, and wherein L_global is evaluated over the whole overlapping domain of said first and second images.
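A sketch of evaluating the global measure of claim 24 over the overlap of the first image and the resampled second image; the NaN-padding trick used to build the overlap mask, the rigid resampling call, and the histogram-based mutual information estimate are assumptions for illustration only:

```python
# Resample the moving image onto the fixed grid, mask the overlapping voxels,
# and score them with a joint-histogram mutual information estimate.
import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32):
    pij, _, _ = np.histogram2d(a, b, bins=bins)
    pij = pij / pij.sum()
    pi, pj = pij.sum(1, keepdims=True), pij.sum(0, keepdims=True)
    nz = pij > 0
    return np.sum(pij[nz] * np.log2(pij[nz] / (pi @ pj)[nz]))

def global_similarity(fixed, moving, matrix, offset):
    """Score the fixed image against the transformed moving image on their overlap."""
    resampled = ndimage.affine_transform(moving.astype(float), matrix, offset,
                                         order=1, mode='constant', cval=np.nan)
    overlap = ~np.isnan(resampled)            # voxels covered by both images
    return mutual_information(fixed[overlap], resampled[overlap])

# Example: a small translation of a smoothed random volume against itself.
rng = np.random.default_rng(3)
vol = ndimage.gaussian_filter(rng.random((48, 48, 48)), 2)
print(global_similarity(vol, vol, np.eye(3), offset=(2.0, 0.0, 0.0)))
```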
25. The computer-readable program storage device of claim 16, wherein optimizing the joint correspondence set of feature pairs further comprises:
initializing said joint correspondence set with the feature pair that is most similar according to said similarity measure;
estimating said similarity measure for the union of said joint correspondence set and each feature pair not included in said joint correspondence set; and
selecting the feature pair that maximizes the similarity measure of said union, wherein, if the similarity measure of the union of said maximizing feature pair and said joint correspondence set is greater than the similarity measure of said joint correspondence set, said maximizing feature pair is registered to sub-pixel accuracy using a local rigid transformation and is added to said joint correspondence set.
26. The computer-readable program storage device of claim 25, wherein said similarity measure is maximized by computing the registration transformation between features using an iterative closest point procedure.
27. The computer-readable program storage device of claim 25, wherein, if the similarity measure of the union of said maximizing feature pair and said joint correspondence set is less than or equal to the similarity measure of said joint correspondence set, a registration transformation is provided, said registration transformation being computed from the registration transformations between the feature pairs that maximize said similarity measure.
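The greedy construction of the joint correspondence set in claims 25-27 can be outlined as follows; the score callback stands in for the joint similarity measure together with the per-pair sub-pixel rigid refinement, and the toy example at the end is illustrative only:

```python
# Greedy joint-correspondence optimization: start from the single most similar
# feature pair, then keep adding the pair whose union with the current set scores
# best, stopping when no addition improves the score.
from typing import Callable, List, Tuple

Pair = Tuple[int, int]   # (feature index in first image, feature index in second image)

def optimize_joint_correspondence(pairs: List[Pair],
                                  score: Callable[[List[Pair]], float]) -> List[Pair]:
    remaining = list(pairs)
    best_first = max(remaining, key=lambda p: score([p]))   # most similar single pair
    chosen = [best_first]
    remaining.remove(best_first)

    while remaining:
        # Evaluate the similarity of the union of the current set with each candidate.
        candidate = max(remaining, key=lambda p: score(chosen + [p]))
        if score(chosen + [candidate]) > score(chosen):
            chosen.append(candidate)          # in the patent this pair would also be
            remaining.remove(candidate)       # re-registered to sub-pixel accuracy here
        else:
            break                             # no pair improves the joint set any more
    return chosen

# Example with a toy score: pairs near the diagonal agree, the outlier does not.
pairs = [(0, 0), (1, 1), (2, 2), (3, 9)]
toy_score = lambda subset: -sum(abs(i - j) for i, j in subset) + len(subset)
print(optimize_joint_correspondence(pairs, toy_score))
```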
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US71083405P | 2005-08-24 | 2005-08-24 | |
US60/710834 | 2005-08-24 | ||
US11/380673 | 2006-04-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN1920882A true CN1920882A (en) | 2007-02-28 |
Family
ID=37778600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2006101262079A Pending CN1920882A (en) | 2005-08-24 | 2006-08-24 | System and method for salient region feature based 3d multi modality registration of medical images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1920882A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100460813C (en) * | 2007-05-10 | 2009-02-11 | 上海交通大学 | Three-D connection rod curve matching rate detection method |
CN101763633B (en) * | 2009-07-15 | 2011-11-09 | 中国科学院自动化研究所 | Visible light image registration method based on salient region |
CN102395999A (en) * | 2009-04-15 | 2012-03-28 | 皇家飞利浦电子股份有限公司 | Quantification of medical image data |
CN102436670A (en) * | 2010-09-29 | 2012-05-02 | 株式会社日立制作所 | Computer system and method of matching for images and graphs |
CN102629376A (en) * | 2011-02-11 | 2012-08-08 | 微软公司 | Image registration |
CN102883651A (en) * | 2010-01-28 | 2013-01-16 | 宾夕法尼亚州研究基金会 | Image-based global registration system and method applicable to bronchoscopy guidance |
CN102938013A (en) * | 2011-08-15 | 2013-02-20 | 株式会社东芝 | Medical image processing apparatus and medical image processing method |
CN103679699A (en) * | 2013-10-16 | 2014-03-26 | 南京理工大学 | Stereo matching method based on translation and combined measurement of salient images |
TWI494083B (en) * | 2012-05-31 | 2015-08-01 | Univ Nat Yunlin Sci & Tech | Magnetic resonance measurement of knee cartilage with ICP and KD-TREE alignment algorithm |
CN106725564A (en) * | 2015-11-25 | 2017-05-31 | 东芝医疗系统株式会社 | Image processing apparatus and image processing method |
CN107920722A (en) * | 2013-05-29 | 2018-04-17 | 卡普索影像公司 | Rebuild for the image captured from capsule cameras by object detection |
CN110447053A (en) * | 2017-03-27 | 2019-11-12 | 微软技术许可有限责任公司 | The selectivity of sparse peripheral display based on element conspicuousness is drawn |
CN111161234A (en) * | 2019-12-25 | 2020-05-15 | 北京航天控制仪器研究所 | Discrete cosine transform measurement basis sorting method |
CN115597494A (en) * | 2022-12-15 | 2023-01-13 | 安徽大学绿色产业创新研究院(Cn) | Precision detection method and system for prefabricated part preformed hole based on point cloud |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100460813C (en) * | 2007-05-10 | 2009-02-11 | 上海交通大学 | Three-D connection rod curve matching rate detection method |
CN102395999A (en) * | 2009-04-15 | 2012-03-28 | 皇家飞利浦电子股份有限公司 | Quantification of medical image data |
CN101763633B (en) * | 2009-07-15 | 2011-11-09 | 中国科学院自动化研究所 | Visible light image registration method based on salient region |
CN102883651B (en) * | 2010-01-28 | 2016-04-27 | 宾夕法尼亚州研究基金会 | Can be applicable to the global registration system and method based on image that bronchoscope guides |
CN102883651A (en) * | 2010-01-28 | 2013-01-16 | 宾夕法尼亚州研究基金会 | Image-based global registration system and method applicable to bronchoscopy guidance |
CN102436670B (en) * | 2010-09-29 | 2013-11-06 | 株式会社日立制作所 | Computer system and position alignment method |
CN102436670A (en) * | 2010-09-29 | 2012-05-02 | 株式会社日立制作所 | Computer system and method of matching for images and graphs |
CN102629376B (en) * | 2011-02-11 | 2016-01-20 | 微软技术许可有限责任公司 | Image registration |
US9710730B2 (en) | 2011-02-11 | 2017-07-18 | Microsoft Technology Licensing, Llc | Image registration |
CN102629376A (en) * | 2011-02-11 | 2012-08-08 | 微软公司 | Image registration |
CN102938013A (en) * | 2011-08-15 | 2013-02-20 | 株式会社东芝 | Medical image processing apparatus and medical image processing method |
TWI494083B (en) * | 2012-05-31 | 2015-08-01 | Univ Nat Yunlin Sci & Tech | Magnetic resonance measurement of knee cartilage with ICP and KD-TREE alignment algorithm |
CN107920722B (en) * | 2013-05-29 | 2020-01-24 | 卡普索影像公司 | Reconstruction by object detection for images captured from a capsule camera |
CN107920722A (en) * | 2013-05-29 | 2018-04-17 | 卡普索影像公司 | Rebuild for the image captured from capsule cameras by object detection |
CN103679699B (en) * | 2013-10-16 | 2016-09-21 | 南京理工大学 | A kind of based on notable figure translation and the solid matching method of combined measure |
CN103679699A (en) * | 2013-10-16 | 2014-03-26 | 南京理工大学 | Stereo matching method based on translation and combined measurement of salient images |
CN106725564A (en) * | 2015-11-25 | 2017-05-31 | 东芝医疗系统株式会社 | Image processing apparatus and image processing method |
CN110447053A (en) * | 2017-03-27 | 2019-11-12 | 微软技术许可有限责任公司 | The selectivity of sparse peripheral display based on element conspicuousness is drawn |
CN110447053B (en) * | 2017-03-27 | 2023-08-11 | 微软技术许可有限责任公司 | Selective rendering of sparse peripheral display based on elemental saliency |
CN111161234A (en) * | 2019-12-25 | 2020-05-15 | 北京航天控制仪器研究所 | Discrete cosine transform measurement basis sorting method |
CN111161234B (en) * | 2019-12-25 | 2023-02-28 | 北京航天控制仪器研究所 | Discrete cosine transform measurement basis sorting method |
CN115597494A (en) * | 2022-12-15 | 2023-01-13 | 安徽大学绿色产业创新研究院(Cn) | Precision detection method and system for prefabricated part preformed hole based on point cloud |
CN115597494B (en) * | 2022-12-15 | 2023-03-10 | 安徽大学绿色产业创新研究院 | Precision detection method and system for prefabricated part preformed hole based on point cloud |
Similar Documents
Publication | Title |
---|---|
CN1920882A (en) | System and method for salient region feature based 3d multi modality registration of medical images | |
US7583857B2 (en) | System and method for salient region feature based 3D multi modality registration of medical images | |
Sharp et al. | Vision 20/20: perspectives on automated image segmentation for radiotherapy | |
Oliveira et al. | Medical image registration: a review | |
US8150132B2 (en) | Image analysis apparatus, image analysis method, and computer-readable recording medium storing image analysis program | |
US9454823B2 (en) | Knowledge-based automatic image segmentation | |
Rundo et al. | Multimodal medical image registration using particle swarm optimization: A review | |
CN111311655B (en) | Multi-mode image registration method, device, electronic equipment and storage medium | |
KR102394321B1 (en) | Systems and methods for automated distortion correction and/or co-registration of 3D images using artificial landmarks along bones | |
Schreibmann et al. | Multiatlas segmentation of thoracic and abdominal anatomy with level set‐based local search | |
CN1408102A (en) | Automated image fusion/alignment system and method | |
CN104021547A (en) | Three dimensional matching method for lung CT | |
CN104637024A (en) | Medical image processing apparatus and medical image processing method | |
CN1862596A (en) | System and method for fused PET-CT visualization for heart unfolding | |
CN1918601A (en) | Apparatus and method for registering images of a structured object | |
CN1299642C (en) | Multiple modality medical image registration method based on mutual information sensitive range | |
CN104281856A (en) | Image preprocessing method and system for brain medical image classification | |
Pan et al. | Medical image registration using modified iterative closest points | |
Wang et al. | Dual-modality multi-atlas segmentation of torso organs from [18 F] FDG-PET/CT images | |
CN106709867A (en) | Medical image registration method based on improved SURF and improved mutual information | |
Kumar et al. | A graph-based approach to the retrieval of volumetric PET-CT lung images | |
Zheng et al. | Adaptive segmentation of vertebral bodies from sagittal MR images based on local spatial information and Gaussian weighted chi-square distance | |
CN114159085B (en) | PET image attenuation correction method and device, electronic equipment and storage medium | |
Murphy et al. | Fast, simple, accurate multi-atlas segmentation of the brain | |
Jiang et al. | Regions of interest extraction from SPECT images for neural degeneration assessment using multimodality image fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |