CN113129342A - Multi-modal fusion imaging method, device and storage medium - Google Patents
Multi-modal fusion imaging method, device and storage medium
- Publication number
- CN113129342A (application CN201911416974.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- registration
- landmarks
- modality
- ultrasonic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
- G06T2207/10136—3D ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to the technical field of ultrasound fusion imaging, and in particular to a multi-modal fusion imaging method, device and storage medium. The multi-modal fusion imaging method comprises the following steps: acquiring a first modality image of a target organ to be scanned of an examination object; acquiring an ultrasound image of the target organ through an ultrasound probe; determining at least two registration landmarks in the first modality image, and determining, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image; acquiring position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system; and guiding the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image. The invention can fuse and display CT, MR and PET images with ultrasound images, combining the real-time performance and high resolution of the different images to assist doctors in locating and diagnosing lesions.
Description
Technical Field
The invention relates to the technical field of ultrasound fusion imaging, and in particular to a multi-modal fusion imaging method, device and storage medium.
Background
Currently, medical imaging apparatuses of various modalities are widely used in clinical diagnosis and medical research. The imaging technologies involved mainly include positron emission tomography (PET), computed tomography (CT), magnetic resonance imaging (MR), ultrasound imaging (US), and the like.
Unlike ultrasound images, magnetic resonance (MR) or computed tomography (CT) images enable the operator to clearly identify organs and diseases. However, since MR or CT images cannot be acquired in real time during surgery or puncture, they cannot reflect the real-time state of the patient's target organ during the procedure. Disease diagnosis therefore needs to exploit both the high resolution of CT or MR and the real-time nature of ultrasound.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and to provide a multi-modal fusion imaging method, device and storage medium that improve both the resolution and the real-time performance of auxiliary diagnostic images.
As a first aspect of the present invention, there is provided a multi-modal fusion imaging method including:
acquiring a first modality image of a target organ to be scanned of an examination object;
acquiring an ultrasound image of the target organ through an ultrasound probe;
determining at least two registration landmarks in the first modality image, and determining, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
acquiring position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
and guiding the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
Further, the determining of the initial landmarks corresponding to the registration landmarks in the ultrasound image comprises:
inputting the first modality image and the ultrasound image into a trained recognition neural network model for processing, to obtain the initial landmarks corresponding to the registration landmarks in the ultrasound image.
Further, the inputting of the first modality image and the ultrasound image into the trained recognition neural network model for processing includes:
inputting the first modality image into a first convolutional neural network of the recognition neural network model for processing, and determining position information and angle information of the registration landmarks in the first modality image;
and inputting the ultrasound image into a second convolutional neural network of the recognition neural network model for processing, to obtain position information and angle information of the initial landmarks corresponding to the registration landmarks in the ultrasound image.
Further, the guiding of the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image, comprises:
determining a registration transformation matrix according to the position information and angle information of the registration landmarks and the initial landmarks;
planning a guide path according to the registration transformation matrix;
guiding the ultrasound probe to move along the guide path so that the initial landmarks in the ultrasound image coincide with the registration landmarks;
and superimposing and fusing the ultrasound image and the first modality image.
Further, the method further includes:
acquiring a real-time position of the ultrasound probe;
and determining, according to the real-time position of the ultrasound probe, whether the probe deviates from the guide path, and if so, updating the guide path according to the real-time position.
Further, operation prompt information is provided while the ultrasound probe is guided to move along the guide path, the operation prompt information including one or more of voice prompts, visual prompts, and tactile prompts.
Further, the first modality image includes: a CT image, an MR image, a PET image, or a three-dimensional ultrasound image.
Further, the registration landmark is located at a tissue or organ contour, a blood vessel intersection, or a lesion center point.
As a second aspect of the present invention, the present invention also provides a multi-modality fusion imaging apparatus including:
a first acquisition unit, configured to acquire a first modality image of a target organ to be scanned of an examination object;
a second acquisition unit, configured to acquire an ultrasound image of the target organ through an ultrasound probe;
a determining unit, configured to determine at least two registration landmarks in the first modality image and to determine, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
a third acquisition unit, configured to acquire position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
and a registration fusion unit, configured to guide the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
As a third aspect of the present invention, there is provided a computer storage medium,
in which a computer program is stored; the computer program, when executed by a processor, implements the steps of the multi-modal fusion imaging method described in any one of the above.
The multi-modal fusion imaging method of the invention can determine at least two registration landmarks in the first modality image and determine, according to the registration landmarks, the corresponding initial landmarks in the ultrasound image; the ultrasound probe is then guided to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the probe is matched and fused with the first modality image. This improves both the resolution and the real-time performance of the auxiliary diagnostic images, and thereby the diagnostic accuracy of the clinician.
Further, the registration landmarks and the corresponding initial landmarks of the invention need not be selected manually by a doctor; they are identified and determined automatically by a trained neural network model, which is fast.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of the operation of the multi-modal fusion imaging method of the present invention.
Fig. 2 is a flow chart of the operation of guiding the movement of the ultrasonic probe according to the present invention.
Fig. 3 is a schematic structural diagram of a first convolutional neural network according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a second convolutional neural network according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of the multi-modality fusion imaging apparatus of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, in which like elements in different embodiments bear like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. In some instances, certain operations related to the present application are not shown or described in detail in order to avoid obscuring the core of the application with excessive description; a detailed description of these operations is not necessary for those skilled in the art, who can fully understand them from the description in the specification and the general knowledge in the art. Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments, and the steps or actions in the method descriptions may be reordered or interchanged in ways apparent to those of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and do not imply a required order unless a particular order is expressly stated to be necessary.
Auxiliary images such as magnetic resonance (MR) or computed tomography (CT) images enable an operator to clearly identify organs and diseases. However, MR or CT images cannot be acquired in real time during a surgical operation or puncture (for CT in particular, the radiation would damage the body of the scanned subject), so they cannot reflect the real-time state of the patient's target organ during the procedure. The ultrasound image acquired by ultrasound equipment, although real-time and radiation-free, has low resolution and places high demands on the expertise and clinical experience of the clinician.
As a first aspect of the present invention, as shown in fig. 1, a multi-modal fusion imaging method is provided to address the clinical pain point that auxiliary images cannot offer high resolution and real-time performance simultaneously. The method includes:
step S100, acquiring a first modal image of a target organ to be scanned of an inspection object;
before an examination object needs to be subjected to surgical puncture, a first modality image of a target organ to be scanned is acquired, and the first modality image is a non-real-time image. The imaging may be performed by any one of a Computed Tomography (CT) imaging device, a Magnetic Resonance (MR) imaging device, an X-ray imaging device, a Single Photon Emission Computed Tomography (SPECT) imaging device, and a Positron Emission Tomography (PET) imaging device. In the following description, for convenience of description, the first modality image is the first modality image including: a CT image, an MR image, a PET image, or a three-dimensional ultrasound image, or other three-dimensional body examination device acquired modality image, exemplary embodiments are not limited thereto. It is to be understood that the term "first" is used purely as a label and is not intended to require a numerical requirement for their modification.
Step S200, acquiring an ultrasound image of the target organ through an ultrasound probe;
the clinician operates the ultrasound probe or operates the ultrasound probe through the robotic arm to acquire an ultrasound image distinct from the first modality image. The ultrasound probe is used for transmitting and receiving ultrasound waves, and the ultrasound probe is excited by a transmission pulse, transmits the ultrasound waves to a target tissue (for example, an organ, a tissue, a blood vessel, and the like in a human body or an animal body), receives an ultrasound echo with information of the target tissue reflected from a target region after a certain time delay, and converts the ultrasound echo back into an electric signal to obtain an ultrasound image. However, the ultrasound image acquired by the ultrasound probe may contain noise therein, and thus it may be difficult to identify the contour, internal structure, or disease of an organ.
Specifically, information about the target organ of the examination object to be scanned is input before fusion imaging; the target organ information may be an entered target organ name or an indication icon of the target organ on the ultrasound device. The target organ information can be input through an input unit on the ultrasound equipment, so that the equipment knows which target organ of the examination object is to be scanned and can adjust imaging parameters such as the transmit frequency and gain of the ultrasound probe. The input unit can be a keyboard, a trackball, a mouse, a touch pad, or the like, or a combination thereof; it may also be a voice recognition input unit, a gesture recognition input unit, or the like. It should be understood that the target organ to be scanned by the ultrasound probe can also be identified by machine vision or by a trained recognition network model. The ultrasound device may load the recognition neural network model of the corresponding organ according to the target organ information.
Step S300, determining at least two registration landmarks in the first modality image and determining, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
it is to be understood that the ultrasound image and the first modality image need to be registered prior to fusion imaging of the ultrasound image and the first modality image. The conventional registration method is:
acquiring a frame of ultrasound image through an ultrasound probe; selecting a first modality image with high similarity to that frame; manually selecting anatomical landmark points on the first modality image and the ultrasound image respectively; performing an initial rigid registration of the first modality image and the ultrasound image based on the selected set of anatomical landmark points, to obtain a transformation matrix from the ultrasound image coordinate system to the first modality image coordinate system; and then performing superposition fusion.
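As an illustration of this conventional step, the following is a minimal sketch of landmark-based rigid registration; the Kabsch (SVD) solution used here is one standard way to obtain such a transformation matrix and is an assumption for illustration, not a method prescribed by the patent:

```python
import numpy as np

def rigid_registration(us_points: np.ndarray, fm_points: np.ndarray) -> np.ndarray:
    """Estimate a 4x4 rigid transform mapping ultrasound-space landmarks
    (N, 3) onto first-modality-space landmarks (N, 3)."""
    mu_us, mu_fm = us_points.mean(axis=0), fm_points.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (us_points - mu_us).T @ (fm_points - mu_fm)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_fm - R @ mu_us
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Usage: at least three non-collinear landmark pairs are needed.
us_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
Rz90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
fm_pts = us_pts @ Rz90.T + np.array([5.0, 2.0, 1.0])
print(rigid_registration(us_pts, fm_pts).round(3))
```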
Different from this existing registration method, the invention inputs the first modality image and the ultrasound image into a trained recognition neural network model for processing, which automatically extracts the position information and angle information of the registration landmarks used for registration, together with the position information and angle information of the initial landmarks, instead of having them selected manually. The recognition neural network model comprises a first convolutional neural network and a second convolutional neural network, the first being a three-dimensional convolutional neural network and the second a two-dimensional convolutional neural network. In particular, as shown in fig. 3,
inputting the first modality image into the first convolutional neural network of the recognition neural network model for processing, and determining the position information and angle information of the registration landmarks in the first modality image;
In an embodiment, the first convolutional neural network is implemented as a three-dimensional fully convolutional network, consisting mainly of a number of three-dimensional convolutions, activation functions, pooling layers, and fully connected layers. Three convolution layers are appended after the pooling stages: the first predicts the category of each pixel of the current feature map, the second predicts the rectangle (bounding-box) coordinates of each pixel, and the third predicts the key-point coordinates of each pixel. These coordinates are relative to the current feature map; when computing the loss during training and when running actual inference, they must be multiplied by the corresponding pooling factors to be mapped back to the original image. The category loss uses cross entropy, the rectangle and key-point losses use smooth L1, and the total loss is a weighted sum of the three. The input of the first convolutional neural network is the first modality image; the outputs are the rectangle coordinates of a position in the first modality image, the probability (category) that this position is the target organ, and the registration landmark coordinates of the target organ at this position, where the landmark coordinates comprise the position information and angle information of the registration landmarks. The predictions output after each pooling stage are filtered by non-maximum suppression to remove large overlapping rectangular boxes, yielding the final rectangle coordinates and categories as well as the position information and angle information of the registration landmarks. Training data: a number of first modality images containing the target organ, in which medical professionals have marked the registration landmarks of the target organ.
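A hedged sketch of the three prediction heads and the weighted loss described above; the channel counts, the box parameterization, the six-dimensional landmark output (position plus angles), and the loss weights are illustrative assumptions, not values given by the patent:

```python
import torch
import torch.nn as nn

class DetectionHeads3D(nn.Module):
    def __init__(self, in_ch=64, num_classes=2):
        super().__init__()
        self.cls_head = nn.Conv3d(in_ch, num_classes, kernel_size=1)  # per-voxel category
        self.box_head = nn.Conv3d(in_ch, 6, kernel_size=1)            # box (x, y, z, d, h, w)
        self.kpt_head = nn.Conv3d(in_ch, 6, kernel_size=1)            # landmark pose (x, y, z, ax, ay, az)

    def forward(self, feat):
        return self.cls_head(feat), self.box_head(feat), self.kpt_head(feat)

def total_loss(cls_logits, boxes, kpts, cls_gt, box_gt, kpt_gt,
               w_cls=1.0, w_box=1.0, w_kpt=1.0):
    # Cross entropy on the per-voxel class map; smooth L1 for both regressions.
    l_cls = nn.functional.cross_entropy(cls_logits, cls_gt)
    l_box = nn.functional.smooth_l1_loss(boxes, box_gt)
    l_kpt = nn.functional.smooth_l1_loss(kpts, kpt_gt)
    return w_cls * l_cls + w_box * l_box + w_kpt * l_kpt

feat = torch.randn(1, 64, 8, 8, 8)  # a pooled feature map
heads = DetectionHeads3D()
cls_logits, boxes, kpts = heads(feat)
loss = total_loss(cls_logits, boxes, kpts,
                  torch.randint(0, 2, (1, 8, 8, 8)),
                  torch.randn_like(boxes), torch.randn_like(kpts))
print(loss.item())
```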
The ultrasound image is input into the second convolutional neural network of the recognition neural network model for processing, to obtain the position information and angle information of the initial landmarks corresponding to the registration landmarks in the ultrasound image.
As shown in fig. 4, the second convolutional neural network of the invention is implemented as a two-dimensional fully convolutional network, consisting mainly of a number of convolutions, activation functions, pooling layers, and fully connected layers. Three convolution layers are appended after the pooling stages: the first predicts the category of each pixel of the current feature map, the second predicts the rectangle (bounding-box) coordinates of each pixel, and the third predicts the key-point coordinates of each pixel. Note that these coordinates are relative to the current feature map; when computing the loss during training and when running actual inference, they must be multiplied by the corresponding pooling factors to be mapped back to the original image. The category loss uses cross entropy, the rectangle and key-point losses use smooth L1, and the total loss is a weighted sum of the three. The input of the second convolutional neural network is the ultrasound image, and the output is the coordinates of the initial landmarks corresponding to the registration landmarks in the ultrasound image, where the landmark coordinates comprise position information and angle information.
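Both networks produce dense per-pixel rectangle predictions that are thinned by non-maximum suppression, as described above for the first network. A minimal sketch, with the IoU threshold an assumed value:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); returns indices of kept boxes."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter + 1e-8)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]
```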
When the training set is annotated, the registration landmarks are placed on tissue or organ contours, blood vessel intersections, or lesion center points. It is to be understood that the locations at which registration landmarks are identified correspond to the image features at the locations marked during training.
Because the registration landmarks are obtained automatically by the trained recognition neural network model, their selection is highly accurate, which effectively prevents the errors introduced when a clinician marks them manually, errors that would otherwise degrade the quality and accuracy of the fusion of the first modality image and the ultrasound image. The speed and accuracy of registration between images of different modalities are thereby improved.
Step S400, acquiring position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
The coordinate systems of the ultrasound image and the first modality image are different, so the position information and angle information of the registration landmarks and of the initial landmarks need to be mapped into the same coordinate system. When the ultrasound or first modality image is acquired, a spatial coordinate system may be established by a magnetic field transmitter, or a world coordinate system may be established by an optical positioning device such as a camera.
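A minimal sketch of this mapping, assuming the positioning device yields a 4x4 probe-to-world transform and that an image-to-probe calibration matrix is known; both matrices below are illustrative placeholders:

```python
import numpy as np

def to_world(points_img: np.ndarray, T_probe_world: np.ndarray,
             T_img_probe: np.ndarray) -> np.ndarray:
    """Map (N, 3) image-space points into the world coordinate system."""
    pts_h = np.hstack([points_img, np.ones((len(points_img), 1))])
    return (T_probe_world @ T_img_probe @ pts_h.T).T[:, :3]

# Placeholders: identity calibration, probe translated 10 mm along x.
T_img_probe = np.eye(4)
T_probe_world = np.eye(4)
T_probe_world[0, 3] = 10.0
print(to_world(np.array([[0.0, 0.0, 0.0]]), T_probe_world, T_img_probe))  # [[10. 0. 0.]]
```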
Step S500, guiding the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
The ultrasound image to be acquired is matched and fused with the first modality image. It is understood that acquiring an ultrasound image corresponding to the acquired first modality image requires moving the ultrasound probe to the slice of the target organ that corresponds to the first modality image. When the slice currently imaged by the ultrasound probe is not that target slice, the pose of the probe is adjusted, based on the position information and angle information of the registration landmarks and the initial landmarks, so as to move the probe to the target slice. Here, the position information and angle information of the registration landmarks and the initial landmarks means the position information and angle information of the registration landmarks together with the position information and angle information of the initial landmarks. As shown in fig. 2, the method specifically includes:
step S510, determining a registration transformation matrix according to the position information and the angle information of the registration mark and the initial mark;
The position information and angle information are six-degree-of-freedom coordinates (x, y, z, ax, ay, az), where ax, ay, and az are the rotation angles about the x, y, and z axes.
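A sketch of turning such six-degree-of-freedom coordinates into a 4x4 homogeneous pose and of deriving the relative transform between the initial pose and the registration pose; the XYZ Euler-angle convention is an assumption, as the patent does not fix one:

```python
import numpy as np

def pose_to_matrix(x, y, z, ax, ay, az):
    """Build a 4x4 homogeneous transform from (x, y, z, ax, ay, az)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = (x, y, z)
    return T

# Transform taking the probe from its initial pose to the pose where the
# initial landmark coincides with the registration landmark.
T_init = pose_to_matrix(0, 0, 0, 0, 0, 0)
T_reg = pose_to_matrix(10, 5, 0, 0, 0, np.pi / 6)
T_delta = T_reg @ np.linalg.inv(T_init)
print(T_delta.round(3))
```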
Step S520, planning a guide path according to the registration transformation matrix;
step S530, guiding the ultrasonic probe to move according to the guide path so as to enable an initial punctuation in the ultrasonic image to coincide with the registration punctuation;
A real-time position of the ultrasound probe is acquired, and whether the probe deviates from the guide path is determined from this real-time position; if so, the guide path is updated according to the real-time position.
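A minimal sketch of this deviation check, assuming the guide path is represented as a sequence of points and using an assumed distance threshold:

```python
import numpy as np

def off_path(probe_pos: np.ndarray, path_points: np.ndarray, tol_mm: float = 5.0) -> bool:
    """True if the probe is farther than tol_mm from every point on the path."""
    d = np.linalg.norm(np.asarray(path_points) - probe_pos, axis=1)
    return d.min() > tol_mm

path = np.array([[0, 0, 0], [5, 0, 0], [10, 0, 0]], float)
if off_path(np.array([6.0, 7.0, 0.0]), path):
    print("deviation detected: re-plan guide path from the current position")
```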
In an embodiment, acquiring the real-time position of the ultrasound probe includes acquiring an environment image containing at least the examination object and the ultrasound probe, and identifying the probe's real-time position with a trained tracking neural network model. Specifically, this includes: acquiring a model image of the ultrasound probe; inputting the model image and the environment image into a shared fully convolutional neural network, which outputs a first feature corresponding to the model image and a second feature corresponding to the environment image; using the first feature as a convolution kernel and convolving it with the second feature to obtain a spatial response map; feeding the spatial response map to a linear interpolation layer to obtain the real-time position of the ultrasound probe in the environment image; and mapping this real-time position (position information and angle information) into the same coordinate system as the registration landmarks and the initial landmarks.
It should be understood that the model image of the ultrasound probe is preset in the ultrasound device and can be called up through the input unit; the input unit may be a keyboard, trackball, mouse, touch pad, or the like, or a combination thereof, and may also be a voice recognition input unit, a gesture recognition input unit, or the like. It is likewise to be understood that the target organ information may be the name of the target organ, or a target organ icon displayed on the display and selected through the input unit. The spatial response map contains the response intensity of the first feature at each position of the second feature, with values between 0 and 1, that is, the similarity between the model image and each position in the environment image.
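A hedged sketch of this template-matching step, in the style of Siamese trackers: the model-image feature acts as a convolution kernel over the environment-image feature, yielding a spatial response map with values in [0, 1]. The backbone shown is a stand-in, since the patent does not specify the shared network's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(  # shared fully convolutional feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

model_img = torch.randn(1, 3, 32, 32)    # probe model image (template)
env_img = torch.randn(1, 3, 128, 128)    # environment image

f_model = backbone(model_img)            # first feature: (1, 32, 32, 32)
f_env = backbone(env_img)                # second feature: (1, 32, 128, 128)

# Cross-correlate: the template feature acts as the kernel.
response = F.conv2d(f_env, f_model)      # (1, 1, 97, 97)
response = torch.sigmoid(response)       # response intensities in (0, 1)

# Upsample by linear interpolation back to the environment image size,
# then take the argmax as the probe's position in the environment image.
response_up = F.interpolate(response, size=env_img.shape[-2:],
                            mode='bilinear', align_corners=False)
pos = torch.nonzero(response_up[0, 0] == response_up.max())[0]
print(pos.tolist())
```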
Operation prompt information is provided in the process of guiding the ultrasound probe to move along the guide path; the operation prompt information includes one or more of voice prompts, visual prompts, and tactile prompts. A visual prompt may indicate on the display the direction and angle in which to move the probe, or generate a move/stop indicator icon at the corresponding position on the body surface of the examination object. A tactile prompt is, for example, a vibration of the ultrasound probe when it deviates from the guide path.
Step S540, superimposing and fusing the ultrasound image and the first modality image.
In the superposition fusion, the ultrasound image and the first modality image may be superimposed according to a preset transparency, and the two images may be displayed with different colors, brightness, or grayscale.
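A minimal sketch of this superposition fusion, assuming the two images are already registered onto the same pixel grid; the alpha value and the red tint given to the ultrasound image are illustrative choices:

```python
import numpy as np

def fuse(us_img: np.ndarray, fm_img: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend two registered single-channel images into an RGB overlay."""
    norm = lambda im: (im - im.min()) / (np.ptp(im) + 1e-8)
    us, fm = norm(us_img.astype(float)), norm(fm_img.astype(float))
    rgb = np.zeros(us.shape + (3,))
    rgb[..., 0] = alpha * us + (1 - alpha) * fm   # ultrasound tinted red
    rgb[..., 1] = (1 - alpha) * fm                # first modality in gray
    rgb[..., 2] = (1 - alpha) * fm
    return rgb

fused = fuse(np.random.rand(64, 64), np.random.rand(64, 64), alpha=0.6)
print(fused.shape, fused.min() >= 0.0, fused.max() <= 1.0)
```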
As a second aspect of the present invention, the present invention also provides a multi-modality fusion imaging apparatus, as shown in fig. 5, including:
a first acquisition unit, configured to acquire a first modality image of a target organ to be scanned of an examination object;
a second acquisition unit, configured to acquire an ultrasound image of the target organ through an ultrasound probe;
a determining unit, configured to determine at least two registration landmarks in the first modality image and to determine, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
a third acquisition unit, configured to acquire position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
and a registration fusion unit, configured to guide the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
A unit may advantageously be configured to reside in an addressable storage medium and to execute on one or more processors. Thus, a unit may include, by way of example, components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided by the components and units may be combined into fewer components and units or further separated into additional components and units.
As a third aspect of the present invention, a computer storage medium is provided,
in which a computer program is stored; the computer program, when executed by a processor, implements the steps of the multi-modal fusion imaging method described in any one of the above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like; the storage medium may also include a combination of the above types of memory.
As a fourth aspect of the present invention, there is also provided an ultrasound apparatus including at least a memory and a processor, the memory having a computer program stored thereon, and the processor implementing the steps of the multi-modal fusion imaging method described in any one of the above when executing the computer program on the memory.
The memory may include a volatile memory, such as a random access memory (RAM); it may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory.
The processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The multi-modal fusion imaging method of the invention can determine at least two registration landmarks in the first modality image and determine, according to the registration landmarks, the corresponding initial landmarks in the ultrasound image; the ultrasound probe is then guided to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the probe is matched and fused with the first modality image. This improves both the resolution and the real-time performance of the auxiliary diagnostic images, and thereby the diagnostic accuracy of the clinician.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.
Claims (10)
1. A multi-modal fusion imaging method, comprising:
acquiring a first modality image of a target organ to be scanned of an examination object;
acquiring an ultrasound image of the target organ through an ultrasound probe;
determining at least two registration landmarks in the first modality image, and determining, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
acquiring position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
and guiding the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
2. The multi-modal fusion imaging method according to claim 1, wherein the determining of the initial landmarks corresponding to the registration landmarks in the ultrasound image comprises:
inputting the first modality image and the ultrasound image into a trained recognition neural network model for processing, to obtain the registration landmarks and the corresponding initial landmarks in the ultrasound image.
3. The multi-modal fusion imaging method according to claim 2, wherein the inputting of the first modality image and the ultrasound image into the trained recognition neural network model for processing comprises:
inputting the first modality image into a first convolutional neural network of the recognition neural network model for processing, and determining position information and angle information of the registration landmarks in the first modality image;
and inputting the ultrasound image into a second convolutional neural network of the recognition neural network model for processing, to obtain position information and angle information of the initial landmarks corresponding to the registration landmarks in the ultrasound image.
4. The multi-modal fusion imaging method according to claim 3, wherein the guiding of the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image, comprises:
determining a registration transformation matrix according to the position information and angle information of the registration landmarks and the initial landmarks;
planning a guide path according to the registration transformation matrix;
guiding the ultrasound probe to move along the guide path so that the initial landmarks in the ultrasound image coincide with the registration landmarks;
and superimposing and fusing the ultrasound image and the first modality image.
5. The multi-modal fusion imaging method according to claim 4, further comprising:
acquiring a real-time position of the ultrasound probe;
and determining, according to the real-time position of the ultrasound probe, whether the probe deviates from the guide path, and if so, updating the guide path according to the real-time position.
6. The multi-modal fusion imaging method according to claim 4, wherein operation prompt information is provided while the ultrasound probe is guided to move along the guide path, the operation prompt information including one or more of voice prompts, visual prompts, and tactile prompts.
7. The multi-modal fusion imaging method according to claim 1, wherein the first modality image includes a CT image, an MR image, a PET image, or a three-dimensional ultrasound image.
8. The multi-modal fusion imaging method according to claim 1, wherein the registration landmark is located at a tissue or organ contour, a blood vessel intersection, or a lesion center point.
9. A multi-modal fusion imaging apparatus, comprising:
a first acquisition unit, configured to acquire a first modality image of a target organ to be scanned of an examination object;
a second acquisition unit, configured to acquire an ultrasound image of the target organ through an ultrasound probe;
a determining unit, configured to determine at least two registration landmarks in the first modality image and to determine, according to the registration landmarks, initial landmarks corresponding to them in the ultrasound image;
a third acquisition unit, configured to acquire position information and angle information of the registration landmarks and the initial landmarks in the same coordinate system;
and a registration fusion unit, configured to guide the ultrasound probe to move according to the position information and angle information of the registration landmarks and the initial landmarks, so that the ultrasound image acquired by the ultrasound probe is matched and fused with the first modality image.
10. A computer storage medium, wherein
a computer program is stored in the computer storage medium; the computer program, when executed by a processor, implements the steps of the multi-modal fusion imaging method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911416974.7A CN113129342A (en) | 2019-12-31 | 2019-12-31 | Multi-modal fusion imaging method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911416974.7A CN113129342A (en) | 2019-12-31 | 2019-12-31 | Multi-modal fusion imaging method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113129342A (en) | 2021-07-16
Family
ID=76769168
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911416974.7A Pending CN113129342A (en) | 2019-12-31 | 2019-12-31 | Multi-modal fusion imaging method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113129342A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113951932A (en) * | 2021-11-30 | 2022-01-21 | 上海深至信息科技有限公司 | Scanning method and device for ultrasonic equipment |
CN116245831A (en) * | 2023-02-13 | 2023-06-09 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999902A (en) * | 2012-11-13 | 2013-03-27 | 上海交通大学医学院附属瑞金医院 | Optical navigation positioning system based on CT (computed tomography) registration result and navigation method thereof |
CN104574329A (en) * | 2013-10-09 | 2015-04-29 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic fusion imaging method and ultrasonic fusion imaging navigation system |
CN104680481A (en) * | 2013-11-28 | 2015-06-03 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic auxiliary scanning method and ultrasonic auxiliary scanning system |
US20160007973A1 (en) * | 2009-07-31 | 2016-01-14 | Korea Advanced Institute Of Science And Technology | Sensor coordinate calibration in an ultrasound system |
CN107595390A (en) * | 2017-10-19 | 2018-01-19 | 青岛大学附属医院 | A kind of real-time matching fusion method of ultrasonic image and CT images |
CN108577940A (en) * | 2018-02-11 | 2018-09-28 | 苏州融准医疗科技有限公司 | A kind of targeting guiding puncture system and method based on multi-modality medical image information |
CN109124764A (en) * | 2018-09-29 | 2019-01-04 | 上海联影医疗科技有限公司 | Guide device of performing the operation and surgery systems |
KR101959438B1 (en) * | 2018-08-06 | 2019-03-18 | 전북대학교 산학협력단 | Medical image diagnosis system using multi-modality image creation technique |
- 2019-12-31: application CN201911416974.7A filed in CN; published as CN113129342A (status: pending)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160007973A1 (en) * | 2009-07-31 | 2016-01-14 | Korea Advanced Institute Of Science And Technology | Sensor coordinate calibration in an ultrasound system |
CN102999902A (en) * | 2012-11-13 | 2013-03-27 | 上海交通大学医学院附属瑞金医院 | Optical navigation positioning system based on CT (computed tomography) registration result and navigation method thereof |
CN104574329A (en) * | 2013-10-09 | 2015-04-29 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic fusion imaging method and ultrasonic fusion imaging navigation system |
CN104680481A (en) * | 2013-11-28 | 2015-06-03 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic auxiliary scanning method and ultrasonic auxiliary scanning system |
CN107595390A (en) * | 2017-10-19 | 2018-01-19 | 青岛大学附属医院 | A kind of real-time matching fusion method of ultrasonic image and CT images |
CN108577940A (en) * | 2018-02-11 | 2018-09-28 | 苏州融准医疗科技有限公司 | A kind of targeting guiding puncture system and method based on multi-modality medical image information |
KR101959438B1 (en) * | 2018-08-06 | 2019-03-18 | 전북대학교 산학협력단 | Medical image diagnosis system using multi-modality image creation technique |
CN109124764A (en) * | 2018-09-29 | 2019-01-04 | 上海联影医疗科技有限公司 | Guide device of performing the operation and surgery systems |
Non-Patent Citations (3)
Title |
---|
CAROLINE EWERTSEN et al.: "Real-Time Image Fusion Involving Diagnostic Ultrasound", Special Articles / Review, vol. 200, no. 3, 27 February 2013 (2013-02-27) *
TANG Fei: "Multimodal neuroimaging fusion methods and advances in their application to the diagnosis and treatment of brain diseases" (in Chinese), International Journal of Biomedical Engineering (国际生物医学工程杂志), vol. 42, no. 4, 31 August 2019 (2019-08-31) *
LIU Jun; XU Lijian; GU Lixu; ZHAN Weiwei: "Clinical feasibility study of registration between two-dimensional liver ultrasound and CT images" (in Chinese), Chinese Computed Medical Imaging (中国医学计算机成像杂志), no. 03, 25 June 2016 (2016-06-25) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113951932A (en) * | 2021-11-30 | 2022-01-21 | 上海深至信息科技有限公司 | Scanning method and device for ultrasonic equipment |
CN116245831A (en) * | 2023-02-13 | 2023-06-09 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
CN116245831B (en) * | 2023-02-13 | 2024-01-16 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101922180B1 (en) | Ultrasonic image processing apparatus and method for processing of ultrasonic image | |
US11264135B2 (en) | Machine-aided workflow in ultrasound imaging | |
CN112469340A (en) | Ultrasound system with artificial neural network for guided liver imaging | |
KR101565311B1 (en) | 3 automated detection of planes from three-dimensional echocardiographic data | |
US10402969B2 (en) | Methods and systems for model driven multi-modal medical imaging | |
CN104346821B (en) | Automatic planning for medical imaging | |
CN110584714A (en) | Ultrasonic fusion imaging method, ultrasonic device, and storage medium | |
CN105046644B (en) | Ultrasonic and CT image registration method and system based on linear correlation | |
JP2019511268A (en) | Determination of rotational orientation in three-dimensional images of deep brain stimulation electrodes | |
CN111816285B (en) | Medical information processing device and medical information processing method | |
CN114845642A (en) | Intelligent measurement assistance for ultrasound imaging and associated devices, systems, and methods | |
US20210100530A1 (en) | Methods and systems for diagnosing tendon damage via ultrasound imaging | |
KR20200080906A (en) | Ultrasound diagnosis apparatus and operating method for the same | |
CN106456253A (en) | Reconstruction-free automatic multi-modality ultrasound registration. | |
CN113129342A (en) | Multi-modal fusion imaging method, device and storage medium | |
US10420532B2 (en) | Method and apparatus for calculating the contact position of an ultrasound probe on a head | |
WO2021034981A1 (en) | Ultrasound guidance dynamic mode switching | |
CN113116384A (en) | Ultrasonic scanning guidance method, ultrasonic device and storage medium | |
US20220183759A1 (en) | Determining a surgical port for a trocar or laparoscope | |
CN113116378A (en) | Multi-modal fusion imaging method, ultrasound apparatus, and storage medium | |
US20240099692A1 (en) | Guided acquisition of a 3d representation of an anatomical structure | |
EP4271277A2 (en) | Ultrasound imaging system, method and a non-transitory computer-readable medium | |
US11452494B2 (en) | Methods and systems for projection profile enabled computer aided detection (CAD) | |
US11928828B2 (en) | Deformity-weighted registration of medical images | |
CN114631841A (en) | Ultrasonic scanning feedback device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |