CN115908515B - Image registration method, training method and device of image registration model

Info

Publication number
CN115908515B
Authority
CN
China
Prior art keywords
image
sample
registration
registered
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211413390.6A
Other languages
Chinese (zh)
Other versions
CN115908515A (en)
Inventor
尚方信
杨叶辉
王晓荣
黄海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211413390.6A
Publication of CN115908515A
Application granted
Publication of CN115908515B
Legal status: Active


Landscapes

  • Image Analysis (AREA)

Abstract

The disclosure provides an image registration method and a training method and apparatus for an image registration model, relating to the technical fields of image processing and artificial intelligence (AI), and in particular to deep learning and AI-assisted medicine. The specific implementation scheme is as follows: acquire an image to be registered and a registration reference image, each containing a target object; extract the masks corresponding to the image to be registered and the registration reference image respectively; and generate the target registration image of the image to be registered according to the corresponding masks. Because registration is driven by the masks, the method does not strongly depend on single-modality images and is applicable to multi-modality image registration scenes.

Description

Image registration method, training method and device of image registration model
Technical Field
The present disclosure relates to the technical fields of image processing and artificial intelligence (AI), in particular to deep learning and AI-assisted medicine, and more particularly to an image registration method, a training method for an image registration model, and a training apparatus for the image registration model.
Background
Image registration can be applied to different scenes, such as medical image registration, and how to improve its accuracy has become an urgent problem to be solved.
In some embodiments, image registration may be achieved by training a neural network model, e.g., training an image registration model, to achieve image registration based on the image registration model.
Disclosure of Invention
The disclosure provides an image registration method, a training method and a training device of an image registration model, which are used for improving the accuracy and reliability of image registration.
According to a first aspect of the present disclosure, there is provided an image registration method, including:
acquiring an image to be registered and a registration reference image, wherein the image to be registered and the registration reference image respectively comprise a target object;
respectively extracting masks corresponding to the image to be registered and the registration reference image;
and generating a target registration image of the image to be registered according to the respective corresponding masks.
According to a second aspect of the present disclosure, there is provided a training method of an image registration model, including:
obtaining a sample data set, wherein the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image respectively comprise a target object;
respectively extracting sample masks corresponding to the sample image to be registered and the sample registration reference image;
and training to obtain an image registration model according to the respective corresponding sample masks, wherein the image registration model is used for determining a target registration image of the image to be registered.
According to a third aspect of the present disclosure, there is provided an image registration apparatus, comprising:
a first acquisition unit, used for acquiring an image to be registered and a registration reference image, wherein the image to be registered and the registration reference image each comprise a target object;
the first extraction unit is used for respectively extracting masks corresponding to the image to be registered and the registration reference image;
and a generating unit, used for generating the target registration image of the image to be registered according to the respective corresponding masks.
According to a fourth aspect of the present disclosure, there is provided a training apparatus for an image registration model, including:
the second acquisition unit is used for acquiring a sample data set, wherein the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image respectively comprise a target object;
a second extraction unit, used for respectively extracting sample masks corresponding to the sample image to be registered and the sample registration reference image;
and a training unit, used for training to obtain an image registration model according to the respective corresponding sample masks, wherein the image registration model is used for determining a target registration image of an image to be registered.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first or second aspect.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the first or second aspect.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the method of the first or second aspect.
The image registration method and the training method and apparatus for an image registration model provided by the disclosure comprise: acquiring an image to be registered and a registration reference image, each of which comprises a target object; respectively extracting the masks corresponding to the image to be registered and the registration reference image; and generating the target registration image of the image to be registered according to the respective corresponding masks. Because the target registration image is determined on the basis of the corresponding masks, there is no need to depend on single-modality (that is, same-modality or same-source) images, so the method is applicable to multi-modality image registration scenes. This improves the flexibility and diversity of applying the image registration method to different registration scenes, reduces resource consumption in multi-modality scenes, improves registration efficiency and accuracy, realizes registration from the dimension of attending to global information, and improves the comprehensiveness, effectiveness, and reliability of image registration.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic illustration of the effect of image registration according to the present disclosure;
FIG. 4 is a second effect diagram of image registration according to the present disclosure;
FIG. 5 is a mask schematic diagram one according to the present disclosure;
FIG. 6 is a second mask schematic according to the present disclosure;
FIG. 7 is an effect diagram III of image registration according to the present disclosure;
FIG. 8 is an effect diagram of image registration according to the present disclosure;
FIG. 9 is a schematic diagram of a third embodiment of the present disclosure;
FIG. 10 is a schematic diagram of an image registration method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 12 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 13 is a schematic diagram according to a sixth embodiment of the disclosure;
FIG. 14 is a schematic diagram according to a seventh embodiment of the present disclosure;
FIG. 15 is a schematic diagram according to an eighth embodiment of the present disclosure;
FIG. 16 is a schematic diagram according to a ninth embodiment of the disclosure;
FIG. 17 is a schematic diagram according to a tenth embodiment of the present disclosure;
Fig. 18 is a block diagram of an electronic device used to implement the image registration method, training method of the image registration model of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
For the reader to understand the implementation principle of the present disclosure more deeply, at least some of the terms involved in the embodiments of the present disclosure are now explained as follows:
A voxel (short for volume element) is a concept in three-dimensional space; it is the smallest unit of data in the division of three-dimensional space.
A pixel is the corresponding concept in two-dimensional space; it is the smallest unit of data in the division of a two-dimensional image. Intuitively, if an image is divided into many small tiles, each tile is a pixel.
The foreground region refers to the scene elements in front of the subject, and can be understood as the region of the scene lying before the subject.
The background region refers to the scene elements behind the subject, and can be understood as the region of the scene lying behind the subject.
Image registration: one image is subjected to a spatial transformation (or a series of them) so that it becomes spatially consistent with the corresponding points of another image. "Consistent" here means that the same anatomical point of the human body has the same spatial position in both images.
Accordingly, medical image registration is image registration in the medical field: one medical image is subjected to a spatial transformation (or a series of them) so that it becomes consistent with the corresponding points of another medical image, in the same sense as above.
A mask is a binary image whose values are 0 or 1. In a mask, the 1-valued region, which may also be called the processed region, participates in the image registration process; the 0-valued region, which may also be called the masked region, does not.
The region of interest (Region of Interest, ROI) of the image is a region selected from the image, which is the focus of interest in analyzing the image.
A loss function measures, during model training, the degree of inconsistency between the model's predicted value (i.e., its output) and the true value (e.g., a pre-calibrated ground truth). In general, the smaller the loss, the more robust the model.
Computed tomography (CT) is a modern medical imaging technique in which X-ray scans of body cross-sections are reconstructed into medical images by computer processing.
Magnetic resonance imaging (MRI), also known as nuclear magnetic resonance imaging, records the behavior of hydrogen nuclei in tissues and organs under a strong magnetic field; medical images of the tissues and organs are obtained after computation and processing.
With the development of science and technology, medical images based on computed tomography, magnetic resonance, and other technologies have become an indispensable means of examination in clinical diagnosis and treatment. For example, a user may be imaged at multiple points in time with a variety of devices (CT, MRI, X-ray, ultrasound, etc.), producing two-dimensional (2D) or three-dimensional (3D) medical images.
The spatial position of the same human tissue or organ can differ across medical images, owing to factors such as body posture and device parameters at the time of imaging. To facilitate follow-up and observation of the course of a disease, these medical images may be registered, i.e., mapped into the same spatial coordinate system so that each spatial grid cell (a 2D pixel or 3D voxel) corresponds to the same physical location of a specific tissue or organ.
In some embodiments, image registration may be achieved based on registration methods of image feature matching. Exemplary:
Algorithm researchers pre-select specific image parameters for modeling, so that image registration is performed by the model thus obtained.
The image parameters may be parameters of the image itself or parameters extracted from image features, such as parameters provided by the imaging device that formed the image, or rotation and translation parameters of the image relative to physical space. Alternatively, they may be information extracted from the image such as a "velocity field" (a flow field composed of the velocities at discrete points of the image) or a "coupling field" (which may be realized by multi-physical-field coupled imaging techniques).
For example, the image registration model may employ a "simplex search" to find the best match between two sets of images; the simplex search method is a direct method of unconstrained optimization.
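For illustration, the sketch below uses SciPy's Nelder-Mead ("simplex") optimizer to search for the translation that best aligns two images; the pure-translation model and the mean-squared-error criterion are illustrative assumptions, not taken from the patent:

```python
# A minimal sketch of registration by "simplex search", assuming a pure
# translation model and mean-squared error as the matching criterion.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def mse_after_shift(offset, moving, fixed):
    # Translate the moving image by (dy, dx) and compare with the fixed image.
    return np.mean((shift(moving, offset, order=1) - fixed) ** 2)

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = shift(fixed, (3.0, -2.0), order=1)      # known ground-truth offset

result = minimize(mse_after_shift, x0=[0.0, 0.0],
                  args=(moving, fixed), method="Nelder-Mead")
print(result.x)  # close to (-3.0, 2.0): the shift that re-aligns the images
```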
However, the above registration methods are constrained by the selection and modeling of image parameters. It is difficult for a fixed set of image parameters to adapt to images of all modalities, so such methods also lack cross-modality, general-purpose registration capability. "Modality" here may be understood as the type of imaging device: single-modality means the two images were acquired with the same type of imaging device, while multi-modality means they were acquired with different types.
That is, one set of image parameters can hardly adapt to images acquired by different types of imaging devices, and is therefore difficult to apply to registration across such images.
In other embodiments, image registration may be achieved by a registration method based on a deep learning model, such as the VoxelMorph learning framework for deformable image registration. Exemplary:
First, a modality conversion generator may be trained with a generative adversarial network on images of two modalities (e.g., CT medical images based on computed tomography and MRI medical images based on magnetic resonance), so that CT medical images can be converted into MRI-like medical images (referred to as converted CT medical images) by the generator.
Then, a single-modality registration network based on a neural network model may be applied to output a registration confidence (which may be understood as a consistency measure) between the two same-modality medical images (the MRI medical image and the converted CT medical image produced by the modality conversion generator).
Finally, the registration confidence, the MRI medical image, and the converted CT medical image obtained from the modality conversion generator may be fed together into a multi-modality registration network, which outputs the registration result.
However, this pipeline is long, and errors accumulated across its serial stages readily lead to an unsatisfactory final registration result.
It should be understood that the above examples are merely exemplary illustrations taking images as medical images and are not to be construed as limiting the content of the images.
For example, the content of the image may be different for different application image registration scenes of the image registration method. For example, if the image registration method is applied to a medical image registration scene, the image may be a medical image; if the image registration method is applied to other image registration scenes, such as an autopilot image scene, the image may be a driving environment image, etc., which are not listed here.
In order to avoid at least one of the above technical problems, the present disclosure provides an inventive technical idea: for two images acquired for a target object, extracting masks corresponding to the two images respectively so as to realize image registration based on the masks corresponding to the two images respectively.
Based on the technical conception, the disclosure provides an image registration method, a training method and a training device of an image registration model, which are applied to the technical fields of image processing and artificial intelligence, and particularly relate to deep learning and AI medical treatment so as to improve the accuracy and reliability of image registration.
Fig. 1 is a schematic diagram of a first embodiment of the present disclosure, and as shown in fig. 1, an image registration method of an embodiment of the present disclosure includes:
s101: and acquiring an image to be registered and a registration reference image.
The image to be registered and the registration reference image respectively comprise a target object.
For example, the execution body of the embodiment may be an image registration device (hereinafter referred to as a registration device), and the registration device may be a server, a computer, a terminal device, a processor, a chip, or the like, which are not listed here.
If the registration device is a server, the registration device may be a local server, a cloud server, an independent server, or a server cluster, which is not limited in this embodiment.
In combination with the above analysis, the image registration method of the present disclosure may be applied to different image registration scenes, and accordingly, in different image registration scenes, the content of the target object is different.
For example, if the image registration method of the present disclosure is applied to a medical image registration scenario, the target object may be a tissue organ, such as a brain, a lung, and the like, which is not limited in this embodiment.
Correspondingly, if the target object is a brain, the image to be registered comprises the brain, and the registration reference image also comprises the brain.
As another example, if the image registration method of the present disclosure is applied to an image registration scene of autopilot, the target object may be an obstacle on a road surface, such as a traffic light.
Correspondingly, if the target object is a traffic light, the image to be registered comprises the traffic light, and the registration reference image also comprises the traffic light.
If the image registration method of the disclosure is applied to a medical image registration scene, the image to be registered and the registration reference image may be two medical images from the same source or from different sources, where "source" may refer to the device (i.e., the imaging device) that acquired the two medical images, or to the user from whom the two medical images were taken.
Taking the device source as an example, the image to be registered and the registration reference image may be two medical images acquired by the same type of imaging device (may be referred to as two medical images of a single modality), or may be two medical images acquired by different types of imaging devices (may be referred to as two medical images of multiple modalities).
For example, in combination with the above example, if the image to be registered and the registration reference image are two medical images acquired by the same type of imaging device, the image to be registered and the registration reference image may be CT medical images acquired by CT devices based on computer tomography, any one of the two CT medical images is the image to be registered, and the other of the two CT medical images is the registration reference image.
If the image to be registered and the registration reference image are two medical images acquired by different types of imaging equipment, the image to be registered can be a CT medical image acquired by CT equipment based on computer tomography, and the registration reference image can be an MRI medical image acquired by equipment based on magnetic resonance technology; alternatively, the image to be registered may be an MRI medical image acquired by a device based on a magnetic resonance technique, and the registration reference image may be a CT medical image acquired by a CT device based on computer tomography.
Taking a user source as an example, the image to be registered and the registration reference image may be two medical images of the same user or two medical images of different users.
For example, in combination with the above example, if the image to be registered and the registration reference image are two medical images of the same user, the image to be registered and the registration reference image may be two medical images of the user a.
In combination with the above analysis, the two medical images of the user a may be two medical images acquired by the same type of imaging device, or may be two medical images acquired by different types of imaging devices, which will not be described herein.
If the image to be registered and the registration reference image are two medical images of different users, say user A and user B, then the image to be registered may be the medical image of user A and the registration reference image the medical image of user B; alternatively, the image to be registered may be the medical image of user B and the registration reference image the medical image of user A.
Similarly, this embodiment does not limit the imaging time, imaging spatial position (e.g., where the image to be registered and/or the registration reference image was obtained), and the like.
That is, the imaging sources, imaging times, imaging spatial positions, and the like of the image to be registered and the registration reference image are not limited in this embodiment and may be determined from requirements, history, experiments, and the like; it suffices that the image to be registered and the registration reference image each include the target object, for example the same tissue or organ.
The following examples may be used for acquiring the image to be registered and the registration reference image:
in one example, the registration device may be coupled to the image acquisition device and receive the image to be registered and the registration reference image transmitted by the image acquisition device.
In another example, the registration device may provide an image-loading tool through which a user may transmit an image to be registered and a registration reference image to the registration device.
The image loading tool may be an interface for connecting to an external device, such as an interface to another storage device, through which the image to be registered and the registration reference image transmitted by the external device are obtained. The image loading tool may also be a display device: for example, the registration device may present an interface for the image loading function on the display device, through which a user imports the image to be registered and the registration reference image into the registration device.
S102: and respectively extracting masks corresponding to the image to be registered and the registration reference image.
The sequence of extracting the mask by the registration device is not limited in this embodiment. For example, the registration device may first extract a mask corresponding to the image to be registered, and then extract a mask corresponding to the registration reference image; the registration device can also extract the mask corresponding to the registration reference image firstly and then extract the mask corresponding to the image to be registered; the registration device can also simultaneously extract masks corresponding to the images to be registered and the registration reference images.
The method of extracting the mask is not limited in this embodiment, and for example, the mask may be extracted by distinguishing the foreground region and the background region.
For example, the foreground region and the background region of the image to be registered may be distinguished, and the mask corresponding to the image to be registered may be extracted according to the foreground region and the background region of the image to be registered.
Similarly, the foreground region and the background region of the registration reference image can be distinguished, and the mask corresponding to the registration reference image is extracted according to the foreground region and the background region of the registration reference image.
For example, the mask may be extracted based on a neural network model.
For example, a segmentation network model may be trained in advance, a foreground region and a background region of the image to be registered may be determined based on the segmentation network model, and a mask corresponding to the image to be registered may be extracted according to the foreground region and the background region of the image to be registered.
Similarly, the foreground region and the background region of the registration reference image can be determined based on the segmentation network model, and the mask corresponding to the registration reference image is extracted according to the foreground region and the background region of the registration reference image.
The segmentation network model may be obtained by iteratively optimizing a base network model on acquired sample images; this embodiment does not limit the training method.
S103: and generating target registration images of the images to be registered according to the corresponding masks.
Because images of different modalities have different pixel value distribution characteristics, the related art described above trains a modality conversion generator in advance to convert images of different modalities into images of the same modality, so that image registration is realized on a same-modality basis.
It should be noted that although the modality conversion generator can convert between modalities, for example converting an image to be registered acquired by a computed tomography device into the modality of a registration reference image acquired by a magnetic resonance device, the essence is that one of the two differing-modality images is first converted into the same modality as the other, so that registration is performed within a single modality.
That is, image registration in that scheme strongly depends on same-modality images; images of different modalities must first be converted to the same modality before registration can be performed on that basis.
In this embodiment, the masks corresponding to the image to be registered and the registration reference image are extracted to determine the target registration image based on the extracted masks corresponding to each other, so that the two images for image registration can be images of different modes without being strongly dependent on the same mode, thereby realizing cross-mode universality of image registration, avoiding a complicated mode conversion process, and improving efficiency and accuracy of image registration.
In this embodiment, the mask is extracted to convert the focus on the local information of the target object into the focus on the global information of the target object, so that the excessive fitting of the local information is avoided, and the accuracy and reliability of image registration are improved.
It should be understood that the above description of modalities is merely exemplary and is not to be construed as limiting the possible modalities.
Based on the above analysis, the disclosure provides an image registration method, which includes: the method comprises the steps of obtaining an image to be registered and a registration reference image, wherein the image to be registered and the registration reference image respectively comprise target objects, respectively extracting masks corresponding to the image to be registered and the registration reference image respectively, and generating a target registration image of the image to be registered according to the masks corresponding to the respective images to be registered.
In order for the reader to more fully understand the principles of implementation of the present disclosure, a detailed description of the medical image registration method of the present disclosure will now be provided in connection with fig. 2. Fig. 2 is a schematic diagram according to a second embodiment of the disclosure, and as shown in fig. 2, an image registration method according to an embodiment of the disclosure includes:
s201: and acquiring an image to be registered and a registration reference image.
The image to be registered and the registration reference image respectively comprise a target object.
It should be understood that, in order to avoid the cumbersome statement, the technical features of this embodiment that are the same as those of the above embodiment are not repeated.
S202: and extracting a mask to be registered of the target object in the image to be registered and a registration reference mask of the target object in the registration reference image.
Illustratively, the target object is a lung lobe, the image to be registered comprises a lung lobe, and the registration reference image comprises a lung lobe. The mask of the lung lobes is extracted from the image to be registered, and is called as the mask to be registered for the convenience of distinguishing. From the registration reference image, a mask of lung lobes is extracted, which is referred to as a registration reference mask for ease of distinction.
Wherein the mask characterizes the image region of interest. For example, brain tissue, lung lobes, lung airways and other organs marked by a mask are foreground regions, and pixel/voxel points not marked by a mask represent background regions where no attention is paid.
For example, an intensity window for mask extraction may be preset; it may be understood as the size of the window used for mask extraction, or as the value region within which the mask is extracted. Within the intensity window, grid points (pixels/voxels) with gray values greater than 0 are set as the foreground region, and grid points with gray values less than or equal to 0 as the background region. The intensity window may, for example, cover the area of the target object.
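For illustration, a minimal sketch of this threshold rule; the window bounds and toy values are assumptions:

```python
# A minimal sketch of intensity-window mask extraction as described above.
import numpy as np

def extract_mask(image, low, high):
    # Restrict attention to the preset intensity window, then mark grid
    # points with gray value > 0 as foreground (1), the rest as background (0).
    windowed = np.where((image >= low) & (image <= high), image, 0.0)
    return (windowed > 0).astype(np.uint8)

ct_slice = np.array([[-1000.0, 40.0], [120.0, 3000.0]])  # toy 2x2 "image"
print(extract_mask(ct_slice, low=-100.0, high=400.0))
# [[0 1]
#  [1 0]]
```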
As shown in fig. 3, the target object is a lung lobe. Taking the viewing angle of fig. 3 as reference, the image on the left of fig. 3 is the image before mask extraction, i.e., the image to be registered, and the image on the right of fig. 3 is the result after mask extraction, i.e., the mask to be registered.
Accordingly, as shown in fig. 4 and with its viewing angle as reference, the image on the left of fig. 4 is the image before mask extraction, i.e., the registration reference image, and the image on the right of fig. 4 is the result after mask extraction, i.e., the registration reference mask.
As can be seen from fig. 3 and fig. 4, when masks are extracted from the image to be registered and the registration reference image respectively and medical image registration is performed on the basis of the corresponding masks, the focus on local information of the lung lobes is converted into focus on their global information; for example, registration that over-attends to one or a few pixel/voxel grid points in the image to be registered and the registration reference image is avoided, which improves the accuracy, reliability, and effectiveness of medical image registration.
Based on the above analysis, in other embodiments, the masks corresponding to the image to be registered and the registration reference image may be extracted by the segmentation network model, and in the corresponding masks different gray values may be used to characterize the lung lobes.
Illustratively, the target object is a lung lobe, the mask to be registered of the image to be registered extracted based on the segmentation network model is shown in fig. 5, and the registration reference mask of the registration reference image extracted based on the segmentation network model is shown in fig. 6.
S203: and determining a target deformation field according to the mask to be registered and the registration reference mask. The target deformation field is used for representing target movement information of pixel points in the images to be registered.
For example, image registration may be understood as aligning one image to another such that the two images are as similar as possible.
Accordingly, in this embodiment, image registration may be understood as aligning one mask to another mask, so that the two aligned masks are as similar as possible, and thus the images corresponding to the two masks are as similar as possible.
The mask to be registered may be understood as the floating image and the registration reference mask as the fixed image. A mapping from the floating image to the fixed image can be predicted; this mapping may be called a deformation field (also a flow field, registration field, or the like), and for ease of distinguishing it from other deformation fields it is here called the target deformation field.
The target deformation field has the property of being diffeomorphic (a "differential homeomorphism"); that is, under it, the degree of overlap between the mask to be registered and the registration reference mask is relatively high.
In some embodiments, S203 may include the steps of:
a first step of: generating an initial deformation field according to the mask to be registered and the registration reference mask. The initial deformation field is used for representing initial movement information of the pixel points in the image to be registered.
and a second step of: predicting, according to the initial deformation field, a target deformation field under which the similarity between the mask to be registered and the registration reference mask is greater than a preset first threshold.
The first threshold may be determined based on a requirement, a history, a test, and the like, which is not limited in this embodiment.
For example, the first threshold may be a relatively large value for relatively high precision image registration scenes, whereas the first threshold may be a relatively small value for relatively low precision image registration scenes.
For example, initial movement information of the pixel points in the image to be registered may be determined from the mask to be registered and the registration reference mask, and prediction may be performed on the basis of that initial movement information to obtain movement information giving a higher overlap between the mask to be registered and the registration reference mask, and so on, until the target deformation field is obtained.
In this embodiment, by first determining an initial deformation field and then predicting on its basis, a target deformation field with a relatively high overlap between the mask to be registered and the registration reference mask can be obtained, improving the accuracy and reliability of the resulting target deformation field.
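For illustration, a minimal sketch of this refinement loop, assuming Dice overlap as the similarity measure and a hypothetical predict_next_field step (the patent fixes neither); warp applies a deformation field to a mask, and a concrete sketch of it is given under S204 below:

```python
import torch

def dice(a, b, eps=1e-6):
    # Overlap between two binary masks; 1.0 means perfect agreement.
    inter = (a * b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def refine(field, moving_mask, fixed_mask, first_threshold=0.95):
    # Keep predicting a better deformation field until the warped mask and
    # the reference mask are similar enough; predict_next_field is a
    # hypothetical stand-in for the prediction step described above.
    while dice(warp(moving_mask, field), fixed_mask) < first_threshold:
        field = predict_next_field(field, moving_mask, fixed_mask)
    return field
```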
S204: and carrying out moving operation on the pixel points of the image to be registered according to the target deformation field to obtain a target registration image.
In combination with the above analysis, since the target deformation field has higher accuracy and reliability, when the target registration image is determined based on the target deformation field, the target registration image can be made to have higher accuracy and reliability.
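For illustration, the moving operation can be realized as resampling the image at the positions given by the deformation field. The sketch below assumes a 2D PyTorch layout, with images as (N, C, H, W) tensors and the field as a per-pixel displacement of shape (N, 2, H, W); this representation is an assumption, not prescribed by the patent:

```python
import torch
import torch.nn.functional as F

def warp(moving, displacement):
    # Resample `moving` at (identity grid + displacement), i.e. move each
    # pixel to the position indicated by the deformation field.
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    new_locs = grid + displacement                            # (N, 2, H, W)
    # grid_sample expects (x, y) coordinates normalized to [-1, 1].
    new_locs[:, 0] = 2.0 * new_locs[:, 0] / (w - 1) - 1.0
    new_locs[:, 1] = 2.0 * new_locs[:, 1] / (h - 1) - 1.0
    return F.grid_sample(moving, new_locs.permute(0, 2, 3, 1),
                         align_corners=True)
```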
As shown in fig. 7, the method according to the embodiment may obtain the target registration image based on the image to be registered and the registration reference image, where the target registration image has higher integrity and reliability than the image to be registered.
For example, as shown in fig. 7 and with its viewing angle as reference, the target registration image includes content that is missing from the upper-left corner of the image to be registered.
Similarly, in combination with the above analysis, the method of this embodiment is implemented based on a mask extraction method, as shown in fig. 8, if a target registration mask is obtained based on the mask to be registered and the registration reference mask, the target registration mask has higher integrity and reliability than the mask to be registered.
For example, as shown in fig. 8 and with its viewing angle as reference, the target registration mask includes content that is missing from the upper-left corner of the mask to be registered.
It should be noted that, in some embodiments, in order to further improve the effectiveness and reliability of image registration, the image to be registered, the registration reference image, and their corresponding masks are each preprocessed images.
For example, after the image to be registered and the registration reference image are acquired, they may be preprocessed to obtain a preprocessed image to be registered and a preprocessed registration reference image.
Correspondingly, when extracting the masks, the masks corresponding to the preprocessed image to be registered and the preprocessed registration reference image are extracted.
Accordingly, after the target registration image is obtained, an operation opposite to the preprocessing needs to be performed on the target registration image.
Similarly, after extracting the mask to be registered and the registration reference mask, the mask to be registered and the registration reference mask may be preprocessed respectively, so as to obtain a preprocessed mask to be registered and a preprocessed registration reference mask.
In some embodiments, the preprocessing includes one or more of preset pixel value interval processing, image black edge cutting processing, region of interest extraction processing, and resolution adjustment processing.
The pixel value may also be referred to as the gray value. Preset pixel value interval processing may be understood as constraining gray values to a set interval, for example handling gray values below a certain lower threshold (which may be determined from requirements, history, experiments, and the like) and gray values above a certain upper threshold (determined likewise), such as gray values above 255.
Image black edge cutting, which may also be called image black edge removal, can be understood as removing the black borders of an image. The implementation is not limited; for example, it may be realized by a custom removal method or a removal tool, which are not listed here.
The process of extracting the region of interest can be understood as extracting a part of the region of interest, such as a tissue organ region, from the image.
Resolution adjustment processing adjusts the resolution of each image so that the two images are of better quality, for example with improved definition.
In this embodiment, preprocessing the image to be registered, the registration reference image, the mask to be registered, and the registration reference mask in the above manner gives the four preprocessed images the same size and better quality, with the target object (e.g., a tissue or organ) occupying a relatively larger proportion of the image, which facilitates registration and post-registration analysis.
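For illustration, a sketch of such a preprocessing chain for a 2D grayscale image; the interval bounds, the all-black-border crop rule, and the target size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(image, low=0.0, high=255.0, target_shape=(256, 256)):
    image = np.clip(image, low, high)          # preset pixel value interval
    keep_rows = np.any(image > low, axis=1)    # cut all-black border rows
    keep_cols = np.any(image > low, axis=0)    # ... and columns
    image = image[keep_rows][:, keep_cols]
    factors = [t / s for t, s in zip(target_shape, image.shape)]
    return zoom(image, factors, order=1)       # resolution adjustment
```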
In other embodiments, image registration may also be implemented by using a neural network model, for example, an image registration model may be trained in advance to determine a target registration image of the images to be registered based on the image registration model. An exemplary description will now be given in connection with fig. 9.
Fig. 9 is a schematic diagram of a third embodiment of the disclosure, and as shown in fig. 9, an image registration method of an embodiment of the disclosure includes:
s901: and acquiring an image to be registered and a registration reference image.
The image to be registered and the registration reference image respectively comprise a target object.
Similarly, in order to avoid the tedious statement, the technical features of this embodiment that are the same as those of the above embodiment are not repeated.
Illustratively, the image to be registered is denoted by I_mov and the registration reference image by I_fix; correspondingly, as shown in fig. 10, I_mov and I_fix are acquired.
S902: and extracting a mask to be registered of the target object in the image to be registered and a registration reference mask of the target object in the registration reference image.
Illustratively, the mask to be registered is denoted by M_mov and the registration reference mask by M_fix; correspondingly, as shown in fig. 10, M_mov and M_fix are extracted.
S903: and inputting the mask to be registered and the registration reference mask into a pre-trained image registration model, and outputting a target deformation field.
Illustratively, in combination with the above analysis, after M_mov and M_fix are extracted they may be preprocessed; accordingly, fig. 10 shows a data preprocessing step following the extraction of M_mov and M_fix.
As shown in fig. 10, the preprocessed M_mov and M_fix are input to the image registration model, which outputs the target deformation field.
The image registration model is trained based on a sample data set, wherein the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image comprise target objects.
For example, the base network model may be trained based on the sample image to be registered and the sample registration reference image to train the base network model to have the ability to predict the target deformation field, thereby obtaining an image registration model having the ability to predict the target deformation field.
Accordingly, when the mask to be registered and the registration reference mask are input to the image registration model, the image registration model can predict and output the target deformation field based on its ability to predict the target deformation field.
In this embodiment, the target deformation field is determined by combining with the neural network model, so that the efficiency and the intellectualization of determining the target deformation field can be improved.
In some embodiments, the image registration model includes a convolutional neural network and a deformation field transformation layer; s903 may include the steps of:
a first step of: and inputting the mask to be registered and the registration reference mask into a convolutional neural network, and outputting an initial deformation field. The initial deformation field is used for representing initial movement information of the pixel point.
And a second step of: the initial deformation field is input to the deformation field transformation layer, and the target deformation field is output.
Illustratively, the convolutional neural network is a U-shaped convolutional neural network (U-shape network): the mask to be registered and the registration reference mask pass through the U-shaped convolutional neural network, which outputs the initial deformation field, and the initial deformation field then passes through the deformation field transformation layer, which outputs the target deformation field.
For example, the mask to be registered is a two-dimensional pixel grid of shape (H, W, C) or a three-dimensional voxel grid of shape (D, H, W, C), where D, H, and W are the spatial dimensions (depth, height, and width) and C is the number of channels.
Correspondingly, the deformation field for the mask to be registered has shape (H, W, 2) or (D, H, W, 3). It can be understood as follows: for each point on the pixel/voxel grid, it indicates the position coordinate to which that point is to be moved. For a pixel grid this coordinate consists of 2 values; for a voxel grid, of 3 values.
The target deformation field output by the deformation field transformation layer is diffeomorphic ("differentially homeomorphic"). That is, if the overlap between the mask to be registered as adjusted by the initial deformation field and the registration reference mask is called the first overlap, and the overlap between the mask to be registered as adjusted by the target deformation field and the registration reference mask is called the second overlap, then the second overlap is higher than the first.
The implementation principle of the deformation field transformation layer can be understood as follows: from the initial deformation field, predict a next deformation field that brings the mask to be registered closer to the registration reference mask; compute the similarity loss between the mask to be registered and the registration reference mask under that field; and so on, until the target deformation field is obtained, under which the similarity loss between the mask to be registered and the registration reference mask approaches 0.
In this embodiment, combining the convolutional neural network with the deformation field transformation layer yields a target deformation field under which the similarity loss between the mask to be registered and the registration reference mask approaches 0, so the target deformation field has higher accuracy and effectiveness.
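Layers with this diffeomorphic property are commonly implemented by integrating a stationary velocity field through "scaling and squaring"; the patent does not commit to a particular implementation, so the sketch below (reusing the warp helper from the earlier sketch) is one plausible reading:

```python
import torch

def scaling_and_squaring(velocity, steps=7):
    # Treat the network output as a stationary velocity field and integrate
    # it into a displacement field by repeated self-composition; the result
    # is (approximately) diffeomorphic, i.e. smooth and invertible.
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)
    return disp
```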
In some embodiments, the first step may comprise the following sub-steps:
a first substep: and stacking the mask to be registered and the registration reference mask in the dimension of the image channel of the image registration model to obtain stacking information.
Illustratively, the mask to be registered and the registration reference mask are stacked along the image channel dimension of the image registration model to obtain stacking information that comprises the image features of both masks.
This embodiment does not limit the stacking manner. For example, it may be implemented by concatenation: the image feature of the mask to be registered (called the first image feature for ease of distinction) and the image feature of the registration reference mask (called the second image feature) are concatenated, and the concatenated image feature is the stacking information.
A second substep: the stacking information is input to a convolutional neural network, and an initial deformation field is output.
In this embodiment, two masks (i.e., a mask to be registered and a registration reference mask) are stacked in the image channel dimension to obtain stacking information, so as to determine an initial deformation field based on the stacking information, which is equivalent to determining the initial deformation field from the image features corresponding to the two masks, so that the determined initial deformation field has higher accuracy and reliability.
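For illustration, a minimal sketch of the channel-dimension stacking, with a single convolution standing in for the U-shaped network, whose internal structure the patent does not detail:

```python
import torch
import torch.nn as nn

moving_mask = torch.rand(1, 1, 256, 256)       # mask to be registered
fixed_mask = torch.rand(1, 1, 256, 256)        # registration reference mask

# Stack the two masks along the image-channel dimension.
stacked = torch.cat((moving_mask, fixed_mask), dim=1)   # (1, 2, 256, 256)

to_field = nn.Conv2d(2, 2, kernel_size=3, padding=1)    # stand-in for the U-Net
initial_field = to_field(stacked)   # (1, 2, H, W): a (dx, dy) per pixel
```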
S904: and carrying out moving operation on the pixel points of the image to be registered according to the target deformation field to obtain a target registration image.
Illustratively, the target registration image is denoted by I_moved; correspondingly, as shown in fig. 10, the target deformation field is combined with I_mov to output I_moved.
The implementation principle of this step may refer to the above embodiment, and will not be described herein.
In combination with the above analysis, the image registration model may be pre-trained to achieve image registration based on the image registration model. In order for the reader to understand the principles of training an image registration model, an exemplary illustration of the training method of the image registration model of the present disclosure will now be described in connection with fig. 11. Fig. 11 is a schematic diagram according to a fourth embodiment of the disclosure, and as shown in fig. 11, a training method of an image registration model according to an embodiment of the disclosure includes:
S1101: a sample dataset is acquired.
The sample data set comprises a sample to-be-registered image and a sample registration reference image, and the sample to-be-registered image and the sample registration reference image comprise a target object.
The execution subject of the training method of the image registration model in this embodiment may be a training device of the image registration model (hereinafter simply referred to as a training device), and the training device and the registration device may be the same device or different devices, which is not limited in this embodiment.
For example, if the training device and the registration device are different devices, a communication link may be established between the training device and the registration device, after the training device trains to obtain the image registration model, the image registration model may be transmitted to the registration device through the communication link, and the registration device receives the image registration model transmitted by the training device, thereby implementing the image registration method described in any embodiment above.
It should be appreciated that the number of samples in the sample dataset is not limited in this embodiment, and may be determined based on requirements, history, and experimentation. That is, the number of images to be registered of the sample and the number of reference images registered of the sample are not limited in this embodiment.
Similarly, regarding the technical features of the present embodiment that are the same as or similar to those of the above embodiment, the description of the present embodiment is omitted.
For example, the target objects may be the same tissue or organ; as another example, the imaging time, imaging source, imaging spatial position, and the like of the sample image to be registered and the sample registration reference image are not limited in this embodiment; and so on, not listed here.
S1102: and respectively extracting sample masks corresponding to the sample to-be-registered image and the sample registration reference image.
Regarding the manner of extracting the sample mask, reference may be made to the implementation principle of extracting the mask in the above embodiment, which is not described herein.
S1103: and training to obtain an image registration model according to the respective corresponding sample masks.
The image registration model is used for determining a target registration image of the image to be registered.
Similarly, in this embodiment, sample masks corresponding to the sample image to be registered and the sample registration reference image are extracted, and the image registration model is trained on the basis of those masks, without strongly depending on same-modality images; for example, the two medical images used for registration may be of different modalities. This realizes cross-modality generality of image registration, avoids a cumbersome modality conversion process, and improves the efficiency, accuracy, and reliability of registration performed with the model.
In this embodiment, the respective sample masks are extracted so that the image registration model is trained on them; this avoids the defect that, when the data volume in the sample data set is relatively small, the model over-attends to local features of the sample image to be registered and the sample registration reference image while ignoring global features, and the defect that effective registration cannot be achieved because the model merely pursues minimization of local pixel value differences, thereby improving the effectiveness and reliability of training.
Correspondingly, through the training method of the image registration model, a dedicated image registration model can be trained for each tissue organ, so that image registration is both targeted and accurate, and registration between arbitrary modalities can be achieved.
To help the reader gain a deeper understanding of the implementation principle of the training method of the image registration model of the present disclosure, a detailed description is given with reference to fig. 12. Fig. 12 is a schematic diagram according to a fifth embodiment of the disclosure; as shown in fig. 12, a training method of an image registration model according to an embodiment of the disclosure includes:
S1201: a sample dataset is acquired.
The sample data set comprises a sample to-be-registered image and a sample registration reference image, and the sample to-be-registered image and the sample registration reference image respectively comprise a target object.
Similarly, regarding the technical features of the present embodiment that are the same as or similar to those of the above embodiment, the description of the present embodiment is omitted.
In some embodiments, the image registration model is applied to a medical image registration scene, and the target object is a tissue organ.
S1202: and extracting a sample to-be-registered mask of the target object in the sample to-be-registered image and a sample registration reference mask of the target object in the sample registration reference image.
Regarding the implementation principle of S1202, reference may be made to the implementation principle of S202, which is not described herein.
Similarly, in this embodiment, masks are extracted for the sample image to be registered and the sample registration reference image respectively, so that the image registration model is trained based on the respective sample masks; attention is thus shifted from local information of the target object (for example, lung lobes) to its global information, avoiding excessive focus on registering one or a few pixel/voxel grids in the two sample images and improving the accuracy, reliability, and effectiveness of training of the image registration model.
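As an illustrative sketch only (the disclosure refers back to the earlier mask extraction step for the actual procedure, and the segmentation network assumed below is not specified by the patent), a binary mask of the target object might be obtained by binarizing the output of a segmentation model:

```python
import torch

def extract_organ_mask(image, seg_model, organ_channel):
    # Assumed helper: seg_model is any pretrained segmentation network that
    # returns per-class logits of shape (N, C, H, W); organ_channel selects
    # the class of the target object (e.g., lung lobes).
    with torch.no_grad():
        probs = torch.softmax(seg_model(image), dim=1)
    # Binarize the probability map of the target class into a mask.
    return (probs[:, organ_channel:organ_channel + 1] > 0.5).float()
```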
S1203: and determining a sample target deformation field according to the sample mask to be registered and the sample registration reference mask.
The sample target deformation field is used for representing sample target movement information of pixel points in the sample mask to be registered.
Regarding the implementation principle of S1203, reference may be made to the implementation principle of S203, which is not described herein.
In some embodiments, S1203 may include the following steps:
A first step: generating a sample initial deformation field according to the sample mask to be registered and the sample registration reference mask, wherein the sample initial deformation field is used for representing sample initial movement information of pixel points in the sample mask to be registered.
A second step: predicting, according to the sample initial deformation field, a sample target deformation field under which the similarity between the sample mask to be registered and the sample registration reference mask is larger than a preset second threshold.
Similarly, the preset second threshold may be determined based on a requirement, a history, and a test, which is not limited in this embodiment. In some embodiments, the preset second threshold may be equal to the preset first threshold.
Regarding the implementation principles of the first step and the second step in S1203, reference may be made to the implementation principles of the first step and the second step in S203, which are not described herein.
Similarly, in this embodiment, by first determining the sample initial deformation field and then predicting the sample target deformation field based on it, a sample target deformation field under which the sample mask to be registered and the sample registration reference mask coincide to a relatively high degree can be obtained, improving the accuracy and reliability of the obtained sample target deformation field.
In some embodiments, the initial network model includes an initial convolutional neural network model and an initial deformation field transformation layer; generating the sample initial deformation field according to the sample mask to be registered and the sample registration reference mask may include the following substeps:
A first substep: inputting the sample mask to be registered and the sample registration reference mask into the initial convolutional neural network model, and outputting the sample initial deformation field.
In some embodiments, the first substep may include the following refinement steps:
A first refinement step: stacking the sample mask to be registered and the sample registration reference mask along the image channel dimension of the image registration model to obtain sample stacking information.
A second refinement step: inputting the sample stacking information into the initial convolutional neural network model, and outputting the sample initial deformation field.
A second substep: inputting the sample initial deformation field into the initial deformation field transformation layer, and outputting the sample target deformation field.
Regarding the implementation principles of the first substep and the second substep, reference may be made to the third embodiment for the principles of outputting the initial deformation field based on the convolutional neural network model and outputting the target deformation field based on the deformation field transformation layer, which are not described herein again.
Similarly, in this embodiment, by combining the initial convolutional neural network and the initial deformation field transformation layer, a sample target deformation field under which the similarity loss between the sample mask to be registered and the sample registration reference mask approaches 0 can be obtained, so that the sample target deformation field has higher accuracy and effectiveness.
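A minimal sketch of the combination described above, written in PyTorch for the 2-D case: the two sample masks are stacked along the channel dimension, a small convolutional network predicts the sample initial deformation field, a deformation field transformation layer produces the sample target deformation field, and a differentiable warp moves the pixels of the mask to be registered. The layer sizes are assumptions, and sketching the transformation layer as a fixed smoothing step is likewise an assumption, since the disclosure does not fix its form:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    """Sketch of the initial network model: CNN + deformation field
    transformation layer. Layer sizes and the smoothing-based
    transformation layer are assumptions, not fixed by the disclosure."""

    def __init__(self):
        super().__init__()
        # Predicts the initial deformation field from the stacked masks;
        # 2 output channels hold the (dx, dy) displacement of each pixel.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )
        # Deformation field transformation layer, sketched here as smoothing.
        self.transform = nn.AvgPool2d(3, stride=1, padding=1)

    def forward(self, moving_mask, fixed_mask):
        # Stack the two masks along the image channel dimension.
        stacked = torch.cat([moving_mask, fixed_mask], dim=1)   # (N, 2, H, W)
        initial_field = self.cnn(stacked)                       # sample initial deformation field
        target_field = self.transform(initial_field)            # sample target deformation field
        warped = self.warp(moving_mask, target_field)           # moving operation
        return warped, target_field

    @staticmethod
    def warp(image, field):
        # Move each pixel by its (dx, dy) displacement using differentiable
        # bilinear sampling.
        n, _, h, w = image.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32, device=image.device),
            torch.arange(w, dtype=torch.float32, device=image.device),
            indexing="ij",
        )
        gx = 2.0 * (xs + field[:, 0]) / (w - 1) - 1.0  # normalize to [-1, 1]
        gy = 2.0 * (ys + field[:, 1]) / (h - 1) - 1.0
        grid = torch.stack([gx, gy], dim=-1)           # (N, H, W, 2)
        return F.grid_sample(image, grid, align_corners=True)
```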
S1204: and constructing a loss function according to the sample target deformation field, the sample mask to be registered and the sample registration reference mask, and optimizing the initial network model based on the loss function until convergence to obtain an image registration model.
In this embodiment, since the sample target deformation field has higher accuracy and reliability, and the sample mask to be registered and the sample registration reference mask focus relatively more on the global information of the image, the image registration model trained by combining the sample target deformation field, the sample mask to be registered, and the sample registration reference mask has higher accuracy and reliability.
In some embodiments, the loss function L may be represented by equation 1:

L = L_sim(M_mov ∘ φ, M_fix) + α · L_smooth(φ)    (equation 1)

wherein φ is the sample target deformation field; M_mov ∘ φ denotes the moving operation of moving the pixels in the sample mask to be registered M_mov according to the sample target deformation field; M_fix is the sample registration reference mask; L_sim measures the dissimilarity between the moved mask and M_fix; L_smooth is a smoothness regularization term on the deformation field; and α is a preset weight coefficient (similarly, α may be determined based on requirements, history, and experiments, for example, α is 0.01).
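A minimal sketch of a loss of the form of equation 1, assuming mean squared error as the similarity term and a finite-difference gradient penalty as the smoothness term (the disclosure does not fix these choices):

```python
import torch.nn.functional as F

def registration_loss(warped_mask, fixed_mask, target_field, alpha=0.01):
    """Sketch of equation 1: similarity between the moved sample mask and
    the sample registration reference mask, plus an alpha-weighted
    smoothness penalty on the deformation field."""
    sim = F.mse_loss(warped_mask, fixed_mask)
    # Finite differences of the (N, 2, H, W) displacement field.
    dx = target_field[:, :, :, 1:] - target_field[:, :, :, :-1]
    dy = target_field[:, :, 1:, :] - target_field[:, :, :-1, :]
    smooth = dx.pow(2).mean() + dy.pow(2).mean()
    return sim + alpha * smooth
```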
In some embodiments, optimizing the initial network model based on the loss function until convergence to obtain the image registration model includes:
adjusting, based on the loss function, the parameters of the initial convolutional neural network model and the initial deformation field transformation layer until a preset number of iterations is reached or the loss function is smaller than a preset third threshold value, to obtain the image registration model.
Similarly, the preset number of iterations and the preset third threshold may be determined based on requirements, history, tests, and the like, which is not limited in this embodiment.
For example, if the loss function is smaller than the preset third threshold, it indicates that the degree of overlap between the moved sample mask to be registered and the sample registration reference mask is high, and that the image registration model has a strong capability of predicting the target deformation field, so that when the target registration image is determined based on the image registration model, the target registration image and the registration reference image are highly overlapped.
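Combining the pieces above, a training loop with the two stopping criteria (a preset number of iterations, or the loss falling below the preset third threshold) might look as follows; it reuses the RegistrationNet and registration_loss sketches from earlier, and the Adam optimizer and learning rate are assumptions:

```python
import itertools
import torch

def train_registration_model(model, loader, max_iters=10000,
                             loss_threshold=1e-4, alpha=0.01):
    """Optimize the initial network model (CNN + deformation field
    transformation layer) until the preset iteration count is reached
    or the loss drops below the preset third threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step, (moving_mask, fixed_mask) in enumerate(itertools.cycle(loader), 1):
        warped, target_field = model(moving_mask, fixed_mask)
        loss = registration_loss(warped, fixed_mask, target_field, alpha)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step >= max_iters or loss.item() < loss_threshold:
            break
    return model
```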
Similarly, in order to further improve the effectiveness and reliability of training of the image registration model, the sample image to be registered, the sample registration reference image, and sample masks corresponding to the sample image to be registered and the sample registration reference image are respectively preprocessed images.
The preprocessing comprises one or more of preset pixel value interval processing, image black edge cutting processing, region of interest extraction processing and resolution adjustment processing.
Regarding the implementation principle of the preprocessing, reference may be made to the implementation principle of the preprocessing in the second embodiment, which is not described herein.
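For illustration, the listed operations might be sketched as follows; the intensity interval (a CT-style window), the output resolution, and nearest-neighbor resampling are assumptions rather than values fixed by the disclosure, and region-of-interest extraction (typically mask-driven) is omitted for brevity:

```python
import numpy as np

def preprocess(image, lo=-1000.0, hi=400.0, out_size=(256, 256)):
    # Preset pixel value interval: clip intensities, then normalize to [0, 1].
    img = np.clip(image.astype(np.float32), lo, hi)
    img = (img - lo) / (hi - lo)
    # Image black edge cutting: crop to the bounding box of non-black pixels.
    nonzero = np.argwhere(img > 0)
    if nonzero.size:
        (y0, x0), (y1, x1) = nonzero.min(axis=0), nonzero.max(axis=0) + 1
        img = img[y0:y1, x0:x1]
    # Resolution adjustment via nearest-neighbor index mapping.
    ys = np.linspace(0, img.shape[0] - 1, out_size[0]).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, out_size[1]).astype(int)
    return img[np.ix_(ys, xs)]
```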
According to another aspect of the present disclosure, the present disclosure also provides an image registration apparatus.
Referring to fig. 13, fig. 13 is a schematic diagram of a sixth embodiment of the disclosure; as shown in fig. 13, an image registration apparatus 1300 according to an embodiment of the disclosure includes:
a first obtaining unit 1301 is configured to obtain an image to be registered and a registration reference image, where the image to be registered and the registration reference image respectively include a target object.
The first extracting unit 1302 is configured to extract masks corresponding to the image to be registered and the registration reference image respectively.
And the generating unit 1303 is configured to generate a target registration image of the image to be registered according to the respective corresponding masks.
Referring to fig. 14, fig. 14 is a schematic diagram of a seventh embodiment of the disclosure; as shown in fig. 14, an image registration apparatus 1400 according to an embodiment of the disclosure includes:
a first obtaining unit 1401 is configured to obtain an image to be registered and a registration reference image, where the image to be registered and the registration reference image respectively include a target object.
A first extracting unit 1402, configured to extract masks corresponding to the image to be registered and the registration reference image respectively.
In some embodiments, the first extracting unit 1402 is configured to extract a mask to be registered of the target object in the image to be registered and a registration reference mask of the target object in the registration reference image.
Wherein the respective corresponding masks include the mask to be registered and the registration reference mask.
A generating unit 1403 is configured to generate a target registration image of the image to be registered according to the respective corresponding masks.
As can be seen in conjunction with fig. 14, in some embodiments, the generation unit 1403 may include:
a first determining subunit 14031, configured to determine a target deformation field according to the mask to be registered and the registration reference mask, where the target deformation field is used to characterize target movement information of a pixel point in the image to be registered.
In some embodiments, the first determining subunit 14031 comprises:
the first generation module is used for generating an initial deformation field according to the mask to be registered and the registration reference mask, wherein the initial deformation field is used for representing initial movement information of pixel points in the image to be registered.
And the first prediction module is used for predicting the target deformation field, wherein the similarity between the mask to be registered and the registration reference mask is larger than a preset first threshold value, according to the initial deformation field.
In some embodiments, the first determining subunit 14031 is configured to input the mask to be registered and the registration reference mask to a pre-trained image registration model, and output the target deformation field.
The image registration model is trained based on a sample data set, the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image respectively comprise the target object.
In some embodiments, the image registration model includes a convolutional neural network and a deformation field transformation layer; the first determining subunit 14031 includes:
The first input module is used for inputting the mask to be registered and the registration reference mask into the convolutional neural network and outputting an initial deformation field, wherein the initial deformation field is used for representing initial movement information of pixel points in the image to be registered.
In some embodiments, the first input module comprises:
and the stacking sub-module is used for stacking the mask to be registered and the registration reference mask in the dimension of the image channel of the image registration model to obtain stacking information.
And the first input sub-module is used for inputting the stacking information into the convolutional neural network and outputting the initial deformation field.
And the second input module is used for inputting the initial deformation field into the deformation field transformation layer and outputting the target deformation field.
And the moving subunit 14032 is configured to perform a moving operation on the pixel points of the image to be registered according to the target deformation field, so as to obtain the target registration image.
In some embodiments, the image to be registered, the registration reference image, and the respective corresponding masks are each a preprocessed image.
Correspondingly, the image registration apparatus 1400 further comprises: an operation unit 1404 is configured to perform an operation opposite to the preprocessing on the target registration image.
In some embodiments, the preprocessing includes one or more of preset pixel value interval processing, image black edge cutting processing, region of interest extraction processing, and resolution adjustment processing.
In some embodiments, the image registration method is applied to a medical image registration scene, the target object being a tissue organ.
According to another aspect of the disclosure, the disclosure further provides a training device for an image registration model.
Referring to fig. 15, fig. 15 is a schematic diagram of an eighth embodiment of the disclosure; as shown in fig. 15, a training apparatus 1500 of an image registration model according to an embodiment of the disclosure includes:
a second obtaining unit 1501 is configured to obtain a sample dataset, where the sample dataset includes a sample to-be-registered image and a sample registration reference image, and the sample to-be-registered image and the sample registration reference image include a target object respectively.
The second extracting unit 1502 is configured to extract sample masks corresponding to the sample to-be-registered image and the sample registration reference image respectively.
And the training unit 1503 is configured to train to obtain an image registration model according to the respective corresponding sample masks, where the image registration model is used to determine a target registration image of the images to be registered.
Referring to fig. 16, fig. 16 is a schematic diagram of a ninth embodiment of the disclosure; as shown in fig. 16, a training apparatus 1600 of an image registration model according to an embodiment of the disclosure includes:
a second acquiring unit 1601, configured to acquire a sample data set, where the sample data set includes a sample to-be-registered image and a sample registration reference image, and the sample to-be-registered image and the sample registration reference image respectively include a target object.
A second extracting unit 1602, configured to extract sample masks corresponding to the image to be registered of the sample and the reference image to be registered of the sample respectively.
In some embodiments, the second extracting unit 1602 is configured to extract a sample mask to be registered of the target object in the sample to-be-registered image and a sample registration reference mask of the target object in the sample registration reference image, where the respective corresponding sample masks include the sample mask to be registered and the sample registration reference mask.
In some embodiments, the image to be registered of the sample, the reference image registered of the sample, and the respective corresponding sample mask are each a preprocessed image.
And, the second extracting unit 1602 is configured to extract sample masks corresponding to the preprocessed sample to-be-registered image and the preprocessed sample registration reference image respectively.
The training unit 1603 is configured to train to obtain an image registration model according to the respective corresponding sample masks, where the image registration model is used to determine a target registration image of the images to be registered.
As can be seen in conjunction with fig. 16, in some embodiments, training unit 1603 comprises:
a second determining subunit 16031, configured to determine a sample target deformation field according to the sample mask to be registered and the sample registration reference mask, where the sample target deformation field is used to characterize sample target movement information of the pixel points in the sample mask to be registered.
In some embodiments, the second determining subunit 16031 comprises:
and the second generation module is used for generating a sample initial deformation field according to the sample mask to be registered and the sample registration reference mask, wherein the sample initial deformation field is used for representing sample initial movement information of pixel points in the sample mask to be registered.
And the second prediction module is used for predicting the sample target deformation field according to the initial deformation field of the sample, wherein the similarity between the sample to-be-registered mask and the sample registration reference mask is larger than a preset second threshold value.
In some embodiments, the initial network model includes an initial convolutional neural network model and an initial deformation field transformation layer; the second generation module is used for inputting the sample mask to be registered and the sample registration reference mask into the initial convolutional neural network model and outputting the sample initial deformation field.
And the second prediction module is used for inputting the initial deformation field of the sample into the initial deformation field transformation layer and outputting the target deformation field of the sample.
In some embodiments, the second generating module comprises:
The second stacking sub-module is used for stacking the sample mask to be registered and the sample registration reference mask along the image channel dimension of the image registration model to obtain sample stacking information.
The second input submodule is used for inputting the sample stacking information into the initial convolutional neural network model and outputting the sample initial deformation field.
A construction subunit 16032 is configured to construct a loss function according to the sample target deformation field, the sample mask to be registered, and the sample registration reference mask.
And the optimizing subunit 16033 is configured to optimize the initial network model based on the loss function until convergence, so as to obtain the image registration model.
In some embodiments, the optimizing subunit 16033 is configured to adjust parameters corresponding to the initial convolutional neural network model and the initial deformation field transformation layer respectively based on the loss function until a preset iteration number is reached or the loss function is smaller than a preset third threshold value, so as to obtain the image registration model.
In some embodiments, the image registration model is applied to a medical image registration scene, the target object being a tissue organ.
Fig. 17 is a schematic diagram according to a tenth embodiment of the present disclosure, as shown in fig. 17, an electronic device 1700 in the present disclosure may include: a processor 1701 and a memory 1702.
A memory 1702 for storing a program; the memory 1702 may include a volatile memory, such as a random-access memory (RAM), for example a static random-access memory (SRAM) or a double data rate synchronous dynamic random-access memory (DDR SDRAM); the memory may also include a non-volatile memory, such as a flash memory. The memory 1702 is used to store computer programs (e.g., application programs and functional modules that implement the methods described above), computer instructions, and data, which may be stored in partitions in one or more memories 1702 and may be invoked by the processor 1701.
A processor 1701 for executing a computer program stored in the memory 1702 to implement the steps of the method according to the above embodiment.
Reference may be made in particular to the relevant description of the previous method embodiments.
The processor 1701 and the memory 1702 may be separate structures or may be integrated structures integrated together. When the processor 1701 and the memory 1702 are separate structures, the memory 1702 and the processor 1701 may be coupled by a bus 1703.
The electronic device in this embodiment may execute the technical scheme in the above method, and the specific implementation process and the technical principle are the same, which are not described herein again.
It should be noted that, especially when the image registration method and the training method of the image registration model of the present disclosure are applied to a medical image registration scene, the images in the embodiments are not specific to any particular user and cannot reflect the personal information of any particular user; the images in the embodiments come from public data sets.
In the technical scheme of the disclosure, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information (such as the image comprising the target object) of the user are in accordance with the regulations of related laws and regulations, and the public order is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 18 illustrates a schematic block diagram of an example electronic device 1800 that may be used to implement embodiments of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 18, the apparatus 1800 includes a computing unit 1801 that can perform various appropriate actions and processes according to computer programs stored in a Read Only Memory (ROM) 1802 or computer programs loaded from a storage unit 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data required for the operation of the device 1800 may also be stored. The computing unit 1801, ROM 1802, and RAM 1803 are connected to each other by a bus 1804. An input/output (I/O) interface 1805 is also connected to the bus 1804.
Various components in the device 1800 are connected to I/O interfaces 1805, including: an input unit 1806 such as a keyboard, a mouse, and the like; an output unit 1807 such as various types of displays, speakers, and the like; a storage unit 1808 such as a magnetic disk, an optical disk, or the like; and a communication unit 1809 such as a network card, modem, wireless communication transceiver, and the like. The communication unit 1809 allows the device 1800 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1801 performs the respective methods and processes described above, such as the image registration method and the training method of the image registration model. For example, in some embodiments, the image registration method and the training method of the image registration model may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1808. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 1800 via the ROM 1802 and/or the communication unit 1809. When the computer program is loaded into the RAM 1803 and executed by the computing unit 1801, one or more steps of the image registration method and the training method of the image registration model described above may be performed. Alternatively, in other embodiments, the computing unit 1801 may be configured to perform the image registration method and the training method of the image registration model in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (28)

1. An image registration method, comprising:
acquiring an image to be registered and a registration reference image, wherein the image to be registered and the registration reference image respectively comprise a target object;
respectively extracting masks corresponding to the image to be registered and the registration reference image; the masks corresponding to each other comprise a mask to be registered and a registration reference mask;
generating a target registration image of the image to be registered according to the respective corresponding mask;
Generating the target registration image of the image to be registered according to the masks corresponding to each other, including:
inputting the mask to be registered and the registration reference mask into a pre-trained image registration model, and outputting a target deformation field, wherein the target deformation field is used for representing target movement information of pixel points in the image to be registered;
performing moving operation on the pixel points of the image to be registered according to the target deformation field to obtain the target registration image;
the image registration model comprises a convolutional neural network and a deformation field transformation layer;
inputting the mask to be registered and the registration reference mask into a pre-trained image registration model, and outputting the target deformation field, wherein the method comprises the following steps:
inputting the mask to be registered and the registration reference mask into the convolutional neural network, and outputting an initial deformation field, wherein the initial deformation field is used for representing initial movement information of pixel points in the image to be registered;
and inputting the initial deformation field into the deformation field transformation layer, and outputting the target deformation field.
2. The method according to claim 1, wherein the extracting masks respectively corresponding to the image to be registered and the registration reference image includes:
And extracting a mask to be registered of the target object in the image to be registered and a registration reference mask of the target object in the registration reference image.
3. The method of claim 2, wherein the image registration model is trained based on a sample dataset comprising a sample image to be registered and a sample registration reference image, the sample image to be registered and the sample registration reference image comprising the target object, respectively.
4. A method according to claim 3, wherein said inputting the mask to be registered and the registration reference mask to the convolutional neural network outputs an initial deformation field, comprising:
stacking the mask to be registered and the registration reference mask in the dimension of the image channel of the image registration model to obtain stacking information;
and inputting the stacking information into the convolutional neural network, and outputting the initial deformation field.
5. The method of any of claims 1-4, wherein the image to be registered, the registration reference image, the respective corresponding mask are each a preprocessed image;
and after generating the target registration image of the image to be registered according to the masks corresponding to each other, further including: and performing an operation opposite to the preprocessing on the target registration image.
6. The method of claim 5, wherein the preprocessing includes one or more of preset pixel value interval processing, image black edge cutting processing, region of interest extraction processing, and resolution adjustment processing.
7. The method of any of claims 1-4,6, wherein the image registration method is applied to a medical image registration scene, the target object being a tissue organ.
8. A training method of an image registration model, comprising:
obtaining a sample data set, wherein the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image respectively comprise a target object;
respectively extracting sample masks corresponding to the sample to-be-registered image and the sample registration reference image; the sample masks corresponding to the sample masks respectively comprise a sample mask to be registered and a sample registration reference mask;
determining a sample target deformation field according to the sample mask to be registered and the sample registration reference mask, wherein the sample target deformation field is used for representing sample target movement information of pixel points in the sample mask to be registered;
Constructing a loss function according to the sample target deformation field, the sample mask to be registered and the sample registration reference mask, and optimizing an initial network model based on the loss function until convergence to obtain an image registration model, wherein the image registration model is used for determining a target registration image of the image to be registered, and the initial network model comprises an initial convolutional neural network model and an initial deformation field transformation layer;
the determining a sample target deformation field according to the sample mask to be registered and the sample registration reference mask comprises the following steps:
inputting the sample mask to be registered and the sample registration reference mask into the initial convolutional neural network model, and outputting a sample initial deformation field, wherein the sample initial deformation field is used for representing sample initial movement information of pixel points in the sample mask to be registered;
and inputting the initial deformation field of the sample into the initial deformation field conversion layer, and outputting the target deformation field of the sample.
9. The method of claim 8, wherein the extracting sample masks respectively corresponding to the sample to-be-registered image and the sample registration reference image comprises:
And extracting a sample to-be-registered mask of the target object in the sample to-be-registered image and a sample registration reference mask of the target object in the sample registration reference image.
10. The method of claim 9, wherein the optimizing the initial network model based on the loss function until convergence results in the image registration model, comprising:
and based on the loss function, adjusting parameters corresponding to the initial convolutional neural network model and the initial deformation field transformation layer respectively until the preset iteration times are reached or the loss function is smaller than a preset third threshold value, so as to obtain the image registration model.
11. The method of claim 9 or 10, wherein the inputting the sample mask to be registered and the sample registration reference mask into the initial convolutional neural network model and outputting the sample initial deformation field comprises:
stacking the sample mask to be registered and the sample registration reference mask in the dimension of an image channel of the image registration model to obtain sample stacking information;
and inputting the sample stacking information into the initial convolutional neural network model, and outputting the sample initial deformation field.
12. The method according to any one of claims 8-10, wherein the sample image to be registered, the sample registration reference image, and the respective corresponding sample masks are each preprocessed images;
and the extracting sample masks respectively corresponding to the sample to-be-registered image and the sample registration reference image comprises: extracting sample masks corresponding to the preprocessed sample to-be-registered image and the preprocessed sample registration reference image respectively.
13. The method of any of claims 8-10, wherein the image registration model is applied to a medical image registration scene, the target object being a tissue organ.
14. An image registration apparatus comprising:
the device comprises a first acquisition unit, a second acquisition unit and a first registration unit, wherein the first acquisition unit is used for acquiring an image to be registered and a registration reference image, and the image to be registered and the registration reference image respectively comprise a target object;
the first extraction unit is used for respectively extracting masks corresponding to the image to be registered and the registration reference image; the masks corresponding to each other comprise a mask to be registered and a registration reference mask;
the generating unit is used for generating a target registration image of the image to be registered according to the corresponding masks;
The generation unit includes:
the first determining subunit is used for inputting the mask to be registered and the registration reference mask into a pre-trained image registration model and outputting a target deformation field, wherein the target deformation field is used for representing target movement information of pixel points in the image to be registered;
the moving subunit is used for carrying out moving operation on the pixel points of the image to be registered according to the target deformation field to obtain the target registration image;
the image registration model comprises a convolutional neural network and a deformation field transformation layer;
the first determining subunit includes:
the first input module is used for inputting the mask to be registered and the registration reference mask into the convolutional neural network and outputting an initial deformation field, wherein the initial deformation field is used for representing initial movement information of pixel points in the image to be registered;
and the second input module is used for inputting the initial deformation field into the deformation field transformation layer and outputting the target deformation field.
15. The apparatus according to claim 14, wherein the first extraction unit is configured to extract a mask to be registered of the target object in the image to be registered and a registration reference mask of the target object in the registration reference image.
16. The apparatus of claim 15, wherein the image registration model is trained based on a sample dataset comprising a sample image to be registered and a sample registration reference image, the sample image to be registered and the sample registration reference image comprising the target object, respectively.
17. The apparatus of claim 16, wherein the first input module comprises:
the stacking sub-module is used for stacking the mask to be registered and the registration reference mask in the dimension of the image channel of the image registration model to obtain stacking information;
and the first input sub-module is used for inputting the stacking information into the convolutional neural network and outputting the initial deformation field.
18. The apparatus of any of claims 14-17, wherein the image to be registered, the registration reference image, the respective corresponding mask are each a preprocessed image;
and, the apparatus further comprises: and the operation unit is used for executing an operation opposite to the preprocessing on the target registration image.
19. The apparatus of claim 18, wherein the preprocessing comprises one or more of preset pixel value interval processing, image black edge cutting processing, region of interest extraction processing, and resolution adjustment processing.
20. The apparatus according to any one of claims 14-17, 19, wherein the image registration apparatus is applied to a medical image registration scene, the target object being a tissue organ.
21. A training device for an image registration model, comprising:
the second acquisition unit is used for acquiring a sample data set, wherein the sample data set comprises a sample image to be registered and a sample registration reference image, and the sample image to be registered and the sample registration reference image respectively comprise a target object;
the second extraction unit is used for respectively extracting sample masks corresponding to the sample to-be-registered image and the sample registration reference image; the sample masks corresponding to the sample masks respectively comprise a sample mask to be registered and a sample registration reference mask;
A training unit, comprising:
a second determining subunit, configured to determine a sample target deformation field according to the sample mask to be registered and the sample registration reference mask, where the sample target deformation field is used to characterize sample target movement information of a pixel point in the sample mask to be registered;
a construction subunit, configured to construct a loss function according to the sample target deformation field, the sample mask to be registered, and the sample registration reference mask;
The optimizing subunit is used for optimizing the initial network model based on the loss function until convergence to obtain an image registration model, wherein the image registration model is used for determining a target registration image of an image to be registered, and the initial network model comprises an initial convolutional neural network model and an initial deformation field transformation layer;
the second determining subunit includes:
the second generation module is used for inputting the sample mask to be registered and the sample registration reference mask into the initial convolutional neural network model and outputting a sample initial deformation field, wherein the sample initial deformation field is used for representing sample initial movement information of pixel points in the sample mask to be registered;
and the second prediction module is used for inputting the initial deformation field of the sample into the initial deformation field transformation layer and outputting the target deformation field of the sample.
22. The apparatus of claim 21, wherein the second extraction unit is configured to extract a sample mask to be registered of the target object in the sample to-be-registered image and a sample registration reference mask of the target object in the sample registration reference image.
23. The apparatus of claim 22, wherein the optimizing subunit is configured to adjust, based on the loss function, parameters corresponding to the initial convolutional neural network model and the initial deformation field transform layer respectively until a preset number of iterations is reached or the loss function is less than a preset third threshold, to obtain the image registration model.
24. The apparatus of claim 22 or 23, wherein the second generation module comprises:
the second stacking sub-module is used for stacking the sample mask to be registered and the sample registration reference mask in the dimension of the image channel of the image registration model to obtain sample stacking information;
and the second input submodule is used for inputting the sample stacking information into the initial convolutional neural network model and outputting the sample initial deformation field.
25. The apparatus of any of claims 21-23, wherein the sample to-be-registered image, the sample registration reference image, and the respective corresponding sample masks are each preprocessed images;
and the second extraction unit is used for respectively extracting sample masks corresponding to the preprocessed sample to-be-registered image and the preprocessed sample registration reference image.
26. The apparatus of any of claims 21-23, wherein the image registration model is applied to a medical image registration scene, the target object being a tissue organ.
27. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7; or to enable the at least one processor to perform the method of any one of claims 8-13.
28. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7; alternatively, the computer instructions are for causing the computer to perform the method according to any one of claims 8-13.
CN202211413390.6A 2022-11-11 2022-11-11 Image registration method, training method and device of image registration model Active CN115908515B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211413390.6A CN115908515B (en) 2022-11-11 2022-11-11 Image registration method, training method and device of image registration model

Publications (2)

Publication Number Publication Date
CN115908515A CN115908515A (en) 2023-04-04
CN115908515B true CN115908515B (en) 2024-02-13

Family

ID=86473732

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118229543A (en) * 2024-02-01 2024-06-21 首都医科大学附属北京朝阳医院 Spinal multi-mode image fusion method based on CT and MRI images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489940B2 (en) * 2017-09-05 2019-11-26 Cardiovascular Imaging Technolgies, L.L.C. System and computer-implemented method for improving image quality

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226837A (en) * 2013-05-21 2013-07-31 南方医科大学 Method for generating distribution image used for observing cervix tumour radiotherapy total dose
CN109767460A (en) * 2018-12-27 2019-05-17 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN111210467A (en) * 2018-12-27 2020-05-29 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109767461A (en) * 2018-12-28 2019-05-17 上海联影智能医疗科技有限公司 Medical image registration method, device, computer equipment and storage medium
CN110992411A (en) * 2019-12-04 2020-04-10 图玛深维医疗科技(北京)有限公司 Training method and device of image registration model
CN113610752A (en) * 2021-06-15 2021-11-05 上海联影智能医疗科技有限公司 Mammary gland image registration method, computer device and storage medium
CN113506331A (en) * 2021-06-29 2021-10-15 武汉联影智融医疗科技有限公司 Method, apparatus, computer device and storage medium for registering tissue and organ
CN113989110A (en) * 2021-11-19 2022-01-28 武汉联影智融医疗科技有限公司 Lung image registration method and device, computer equipment and storage medium
CN114565554A (en) * 2022-01-11 2022-05-31 浙江工业大学 X-ray image registration method and device based on ultrasonic coronal plane image
CN114549594A (en) * 2022-02-22 2022-05-27 上海联影智能医疗科技有限公司 Image registration method and device and electronic equipment
CN115222780A (en) * 2022-07-28 2022-10-21 西安电子科技大学 Cross-mode large-deformation image registration method based on semantic mask

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lung respiratory motion estimation in 4D CT images; Su Po, Xue Zhong, Yang Jianhua; Journal of Data Acquisition and Processing (No. 05); full text *
Research on three-dimensional medical image registration based on unsupervised learning; Ma Yingjun; China Master's Theses Full-text Database; full text *

Similar Documents

Publication Publication Date Title
US11954863B2 (en) Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
US12112483B2 (en) Systems and methods for anatomic structure segmentation in image analysis
US10867436B2 (en) Systems and methods for reconstruction of 3D anatomical images from 2D anatomical images
US8358819B2 (en) System and methods for image segmentation in N-dimensional space
US7995810B2 (en) System and methods for image segmentation in n-dimensional space
CN110458939A (en) The indoor scene modeling method generated based on visual angle
KR20210002606A (en) Medical image processing method and apparatus, electronic device and storage medium
CN111105424A (en) Lymph node automatic delineation method and device
Khan et al. A methodological review of 3D reconstruction techniques in tomographic imaging
JP7214434B2 (en) MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING PROGRAM
CN111798424B (en) Medical image-based nodule detection method and device and electronic equipment
EP3555850A1 (en) System and method for image segmentation using a joint deep learning model
CN113450396A (en) Three-dimensional/two-dimensional image registration method and device based on bone features
CN115908515B (en) Image registration method, training method and device of image registration model
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
WO2024169341A1 (en) Registration method for multimodality image-guided radiotherapy
US11783501B2 (en) Method and apparatus for determining image depth information, electronic device, and media
CN114787867A (en) Organ deformation compensation for medical image registration
CN113129297B (en) Diameter automatic measurement method and system based on multi-phase tumor image
CN109872353A (en) Based on the white light data and CT Registration of Measuring Data method for improving iteration closest approach algorithm
CN114419375A (en) Image classification method, training method, device, electronic equipment and storage medium
US11295451B2 (en) Robust pulmonary lobe segmentation
CN114514558A (en) Segmenting tubular features
Jin et al. Mumford-shah on the move: region-based segmentation on deforming manifolds with application to 3-D reconstruction of shape and appearance from multi-view images
KR102689375B1 (en) Skeleton estimate apparatus using multiple x-ray views and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant