CN112085829A - Spiral CT image reconstruction method and equipment based on neural network and storage medium - Google Patents
Spiral CT image reconstruction method and equipment based on neural network and storage medium
- Publication number: CN112085829A (application CN201910448427.0A)
- Authority: CN (China)
- Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Classifications
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general)
- G06N3/02 — Neural networks; G06N3/04 — Architecture, e.g. interconnection topology; G06N3/045 — Combinations of networks (G06N — Computing arrangements based on specific computational models; G06N3/00 — Computing arrangements based on biological models)
- G06T2207/10081 — Computed X-ray tomography [CT] (G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10 — Image acquisition modality; G06T2207/10072 — Tomographic images)
- G06T2207/20081 — Training; Learning (G06T2207/20 — Special algorithmic details)
Abstract
The present disclosure provides a neural-network-based helical CT image reconstruction device and method. The device includes: a memory for storing instructions and three-dimensional projection data of an inspected object acquired by a helical CT scanner, the inspected object being modeled as a stack of cross-sections; and a processor configured to execute the instructions so as to reconstruct an image of each cross-section separately. The reconstruction of each cross-section includes feeding the three-dimensional projection data related to the cross-section to be reconstructed into a trained neural network model to obtain a reconstructed cross-sectional image; a three-dimensional reconstructed image is then formed from the reconstructed images of the multiple cross-sections. By combining the strengths of deep neural networks with the particular structure of the helical CT imaging problem, the disclosed device can reconstruct three-dimensional projection data into a three-dimensional image with more information and less noise.
Description
Technical Field

The present disclosure relates to radiation imaging, and in particular to a neural-network-based helical CT image reconstruction method and device, and a storage medium.
Background

X-ray CT (computed tomography) imaging systems are widely used in medicine, security inspection, industrial non-destructive testing, and other fields. A ray source and a detector collect a series of projection data along a prescribed trajectory, and an image reconstruction algorithm recovers the three-dimensional spatial distribution of the object's linear attenuation coefficient at the given ray energy. CT image reconstruction, i.e., recovering the linear attenuation coefficient distribution from the projection data collected by the detector, is the core step of CT imaging. At present, practical applications mainly use analytic reconstruction algorithms such as filtered back-projection (FBP) and the Feldkamp-Davis-Kress (FDK) family, and iterative reconstruction methods such as the Algebraic Reconstruction Technique (ART) and Maximum A Posteriori (MAP) estimation.

As radiation dose receives growing attention, obtaining images of conventional or higher quality under low-dose, fast-scanning conditions has become a hot research topic in the field. Among reconstruction methods, analytic reconstruction is fast but limited to conventional system architectures, and it handles missing data and high noise poorly. Compared with analytic algorithms, iterative reconstruction algorithms apply to a much wider range of system architectures and achieve better results for non-standard scanning trajectories, low-dose high-noise data, missing projection data, and similar problems. However, iterative algorithms typically require many iterations, so reconstruction takes a long time, and they are especially hard to apply in practice to three-dimensional helical CT with its much larger data scale. For helical CT, which is widely used in medicine and industry, increasing the helical pitch shortens the scan time, improves scanning efficiency, and reduces the radiation dose; however, a larger pitch also means less effective data. Images obtained with conventional analytic reconstruction are then of poor quality, while iterative reconstruction is too time-consuming for practical use.

Deep learning has made major advances in computer vision, natural language processing, and other areas. Convolutional neural networks in particular, thanks to their simple network structure, effective feature extraction, and compact parameter space, have become the mainstream network structure for applications such as image classification and detection. However, there has been no prior work applying neural networks to helical CT image reconstruction.
Summary of the Invention

According to embodiments of the present disclosure, a helical CT image reconstruction method and device and a storage medium are provided.

According to one aspect of the present disclosure, a neural-network-based helical CT image reconstruction device is provided, comprising:

a memory for storing instructions and three-dimensional projection data of an inspected object acquired by a helical CT scanner, the inspected object being modeled as a stack of cross-sections; and

a processor configured to execute the instructions so as to:

reconstruct an image of each cross-section separately, the reconstruction of each cross-section including: feeding the three-dimensional projection data related to the cross-section to be reconstructed into a trained neural network model to obtain a reconstructed cross-sectional image; and

form a three-dimensional reconstructed image from the reconstructed images of the multiple cross-sections.
According to another aspect of the present disclosure, a helical CT image reconstruction method is provided, comprising:

modeling the inspected object as a stack of cross-sections;

reconstructing an image of each cross-section separately, the reconstruction of each cross-section including: feeding the three-dimensional projection data related to the cross-section to be reconstructed into a trained neural network model to obtain a reconstructed cross-sectional image; and

forming a three-dimensional reconstructed image from the reconstructed images of the multiple cross-sections.
According to yet another aspect of the present disclosure, a method for training a neural network is provided, the neural network comprising:

a projection-domain sub-network for processing the input helical CT three-dimensional projection data related to the cross-section to be reconstructed, to obtain two-dimensional projection data;

a domain-transform sub-network for analytically reconstructing the two-dimensional projection data, to obtain an image of the cross-section to be reconstructed; and

an image-domain sub-network for processing the image-domain cross-sectional image, to obtain an accurate reconstructed image of the cross-section to be reconstructed;

wherein the method comprises:

adjusting the parameters of the neural network using a consistency cost function defined over the input three-dimensional projection data, the ground-truth image, and the reconstructed planar image of the selected cross-section.

According to a further aspect of the present disclosure, a computer-readable storage medium is provided, storing computer instructions which, when executed by a processor, implement the helical CT image reconstruction method described above.
By combining the strengths of deep networks with the particular structure of the helical CT imaging problem, the neural-network-based helical CT image reconstruction device of the present disclosure can reconstruct three-dimensional projection data into a fairly accurate three-dimensional image.

Through a purpose-built neural network architecture trained on both simulated and real data, the present disclosure can reliably, effectively, and comprehensively cover the system information and the collective information of the imaged objects, accurately reconstruct object images, and suppress both the noise caused by low dose and the artifacts caused by missing data.

Although training the neural network model of the present disclosure requires large amounts of data and computation, the reconstruction itself requires no iteration: its computational cost is comparable to that of analytic reconstruction and far lower than that of iterative reconstruction algorithms.
Brief Description of the Drawings

For a better understanding of the embodiments of the present disclosure, they are described in detail with reference to the following drawings:

Fig. 1 is a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure;

Fig. 2A is a schematic diagram of the helical trajectory of the detector relative to the inspected object in the helical CT system of Fig. 1; Fig. 2B is a schematic diagram of the three-dimensional projection data corresponding to the signals detected by the detector in the helical CT system;

Fig. 3 is a schematic structural diagram of the control and data processing device in the helical CT system of Fig. 1;

Fig. 4 is a schematic diagram of the principle of a neural-network-based helical CT image reconstruction device according to an embodiment of the present disclosure;

Fig. 5 is a schematic structural diagram of a neural network according to an embodiment of the present disclosure;

Fig. 6 is a visualization of the network structure of the neural network according to an embodiment of the present disclosure;

Fig. 7 shows an exemplary network structure of the projection-domain sub-network;

Fig. 8 is a schematic flow chart of a helical CT image reconstruction method according to an embodiment of the present disclosure.
Detailed Description

Specific embodiments of the present disclosure are described in detail below. It should be noted that the embodiments described here are illustrative only and do not limit the present disclosure. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. It will be apparent to those of ordinary skill in the art, however, that the embodiments can be practiced without these specific details. In other instances, well-known structures, materials, or methods are not described in detail in order to avoid obscuring the embodiments of the present disclosure.

Throughout this specification, references to "one embodiment", "an embodiment", "one example", or "an example" mean that a particular feature, structure, or characteristic described in connection with that embodiment or example is included in at least one embodiment of the present disclosure. Thus, the phrases "in one embodiment", "in an embodiment", "one example", or "an example" appearing in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. In addition, those of ordinary skill in the art will understand that the term "and/or" as used here includes any and all combinations of one or more of the associated listed items.
For helical CT, which is widely used in medicine and industry, increasing the helical pitch shortens the scan time, improves scanning efficiency, and reduces the radiation dose. However, a larger pitch also means less effective data: images obtained with conventional analytic reconstruction are of poor quality, while iterative reconstruction is too time-consuming for practical use.

From a deep-learning perspective, the present disclosure proposes a reconstruction method based on convolutional neural networks for helical CT scanners operating at large pitch. By deeply mining the data and incorporating the physical laws of the helical CT system, a distinctive network architecture and training method are designed, so that higher-quality images can be reconstructed in a shorter time.

Embodiments of the present disclosure provide a neural-network-based helical CT image reconstruction method and device and a storage medium, in which a neural network processes the three-dimensional projection data of the inspected object acquired by a helical CT scanner to obtain the volumetric distribution of the object's linear attenuation coefficient. The neural network may include a projection-domain sub-network, a domain-transform sub-network, and an image-domain sub-network. The projection-domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain-transform sub-network analytically reconstructs the two-dimensional projection data to obtain the image of the selected cross-section in the image domain. The image-domain sub-network takes the cross-sectional image as input and, through a convolutional neural network of several layers, captures image-domain features of the data, further extracts and couples these features, and outputs an accurate reconstructed image of the selected cross-section. With the schemes of the above embodiments, higher-quality results can be reconstructed from the three-dimensional projection data of the object inspected by the helical CT scanner.
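The three-stage cascade just described can be sketched as a simple composition of callables. The function names and the per-slice organization below are illustrative only, not taken from the patent:

```python
import numpy as np

def reconstruct_slice(P_slice, projection_net, domain_transform, image_net):
    """Reconstruct one cross-section with the three cascaded sub-networks.

    projection_net   : maps the slice-related 3-D helical data to a 2-D sinogram
    domain_transform : analytic reconstruction of the 2-D sinogram
    image_net        : image-domain refinement of the coarse reconstruction
    """
    p2d = projection_net(P_slice)       # projection-domain sub-network
    mu_coarse = domain_transform(p2d)   # domain-transform sub-network
    return image_net(mu_coarse)         # image-domain sub-network

def reconstruct_volume(P_slices, projection_net, domain_transform, image_net):
    """Stack the per-slice reconstructions into a 3-D volume."""
    return np.stack([
        reconstruct_slice(P, projection_net, domain_transform, image_net)
        for P in P_slices
    ])
```

The point of this structure is that each stage can be trained and replaced independently, while reconstruction of the volume stays an embarrassingly parallel loop over slices.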
Fig. 1 is a schematic structural diagram of a helical CT system according to an embodiment of the present disclosure. As shown in Fig. 1, the helical CT system according to this embodiment includes an X-ray source 20, a mechanical motion device 30, and a detector and data acquisition system 10, and performs a helical CT scan of the inspected object 60.

The X-ray source 10 may be, for example, an X-ray tube, whose focal spot size can be chosen according to the desired imaging resolution. In other embodiments, instead of an X-ray tube, a linear accelerator or the like may be used to generate the X-ray beam.

The mechanical motion device includes a stage 60 and a gantry 30. The stage can move along the axial direction of the cross-sections (the direction perpendicular to the page), and the gantry 30 can rotate, carrying the detector and the X-ray source 10 with it. In this embodiment, the stage is translated while the detector and X-ray source rotate synchronously, so that the detector moves helically relative to the inspected object.

The detector and data acquisition system 10 includes an X-ray detector, data acquisition circuitry, and so on. The X-ray detector may be a solid-state detector, a gas detector, or another type of detector; embodiments of the present disclosure are not limited in this respect. The data acquisition circuitry includes readout circuitry, acquisition trigger circuitry, data transmission circuitry, and the like. The detector usually acquires analog signals, which the acquisition circuitry converts into digital signals. In one example, the detector may consist of a single row or multiple rows; for a multi-row detector, different row spacings may be used.

The control and data processing device 60 contains, for example, a control program and a neural-network-based helical CT image reconstruction device. It controls the operation of the helical CT system, including mechanical rotation, electrical control, and safety interlocks; it trains the neural network (i.e., the machine-learning process); and it reconstructs CT images from the projection data using the trained network.
Fig. 2A is a schematic diagram of the helical trajectory of the detector relative to the inspected object in the helical CT system of Fig. 1. As shown in Fig. 2A, the stage can translate back and forth (the direction perpendicular to the page in Fig. 1), carrying the inspected object with it; at the same time, the detector moves in a circle around the central axis of the stage. The relative motion between the detector and the selected cross-section corresponding to the image μ to be reconstructed is therefore a helical motion of the detector around that cross-section.

Fig. 3 is a schematic structural diagram of the control and data processing device 60 of Fig. 1. As shown in Fig. 3, the data acquired by the detector and data acquisition system 10 are stored in a storage device 310 through an interface unit 370 and a bus 380. A read-only memory (ROM) 320 stores configuration information and programs for the computer data processor. A random access memory (RAM) 330 temporarily stores various data while the processor 350 is working. The storage device 310 also stores computer programs for data processing, such as a program for training the neural network and a program for reconstructing CT images. An internal bus 380 connects the storage device 310, the ROM 320, the RAM 330, an input device 340, the processor 350, a display device 360, and the interface unit 370. The neural-network-based helical CT image reconstruction device in embodiments of the present disclosure shares the storage device 310, internal bus 380, ROM 320, display device 360, processor 350, and so on with the control and data processing device 60 in order to carry out helical CT image reconstruction.

After the user enters an operation command through an input device 340 such as a keyboard and mouse, the instruction code of the computer program commands the processor 350 to execute the algorithm for training the neural network and/or the algorithm for reconstructing CT images. After the reconstruction result is obtained, it is displayed on a display device 360 such as an LCD monitor, or output directly in hard-copy form, for example by printing.
According to an embodiment of the present disclosure, the system described above performs a helical CT scan of the inspected object to obtain the raw attenuation signal. This attenuation signal is three-dimensional; denote it P, a matrix of size C × R × A, where C is the number of detector columns (the column direction marked in Figs. 2A and 2B), R is the number of detector rows (the row direction marked in Figs. 2A and 2B, corresponding to the rows of a multi-row detector), and A is the number of projection angles acquired by the detector (the dimension marked in Fig. 2B). In other words, the helical CT projection data are organized into matrix form. The raw attenuation signal is preprocessed into three-dimensional projection data (see Fig. 2B); for example, the helical CT system may apply a negative-logarithm transform and other preprocessing to obtain the projection data. The processor 350 in the control device then executes the reconstruction program: it processes the projection data with the trained neural network to obtain two-dimensional projection data for the selected cross-section, analytically reconstructs the two-dimensional projection data to obtain the image-domain cross-sectional image, and then further processes that image to obtain the planar reconstructed image of the selected cross-section. For example, a trained convolutional neural network (e.g., a U-net-type network) can process the image to obtain feature maps at different scales, which are then merged to produce the result.
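The negative-logarithm preprocessing mentioned above follows from the Beer-Lambert law: the detector measures I = I0 · exp(−∫μ), so the line integrals are recovered as −log(I / I0). A minimal sketch (the function name and the air-scan normalization are assumptions; the patent gives no implementation details):

```python
import numpy as np

def preprocess_projections(raw_signal, air_scan):
    """Convert raw attenuation measurements into line-integral projection data P.

    raw_signal : ndarray of shape (C, R, A) -- detector columns x rows x angles
    air_scan   : scalar or ndarray broadcastable to raw_signal; the unattenuated
                 (I0) detector reading, e.g. from an air calibration scan.

    Returns the negative-log-transformed data, p = -log(I / I0).
    """
    ratio = np.clip(raw_signal / air_scan, 1e-12, None)  # guard against log(0)
    return -np.log(ratio)
```

With this convention P is linear in the object's attenuation line integrals, which is the form assumed by both the analytic and the learned reconstruction stages.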
In a specific example, the convolutional neural network may include convolutional layers, pooling layers, and fully connected layers. The convolutional layers identify characteristic representations of the input data set, each convolutional layer being followed by a nonlinear activation function. Pooling layers refine the feature representation; typical operations include average pooling and max pooling. One or more fully connected layers carry out higher-order nonlinear combinations of the signal, likewise followed by nonlinear activation functions. Commonly used nonlinear activation functions include Sigmoid, Tanh, and ReLU.
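As a concrete illustration of these building blocks, here is a minimal NumPy sketch of a 2-D convolution, a ReLU activation, and max pooling. This is generic deep-learning machinery, not the patent's specific network:

```python
import numpy as np

def relu(x):
    """Rectified linear unit, one of the activation functions mentioned above."""
    return np.maximum(x, 0.0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation (what deep-learning frameworks call
    convolution); the output shrinks by kernel_size - 1 along each axis."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(img, size=2):
    """Non-overlapping max pooling; trailing rows/columns that do not fill a
    full window are dropped."""
    H, W = img.shape
    H2, W2 = H // size, W // size
    return img[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))
```

A convolutional layer in the sense of the text is then `relu(conv2d_valid(x, k) + b)` for each learned kernel `k` and bias `b`, with pooling interleaved to reduce scale.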
During helical CT scanning, increasing the pitch shortens the scan time, improves scanning efficiency, and reduces the radiation dose, at the cost of less effective data. One may therefore choose to interpolate the helical CT projection data, i.e., fill in the missing data along the detector row direction; interpolation methods include, but are not limited to, linear interpolation and cubic spline interpolation.
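A sketch of such detector-row interpolation, under assumed conventions (the data layout and function name are not from the patent). Linear interpolation is the default; a cubic-spline interpolator such as `scipy.interpolate.CubicSpline` can be passed in through the `interp` factory in the same way:

```python
import numpy as np

def interpolate_rows(P, known_rows, target_rows, interp=None):
    """Interpolate detector rows that are missing at large helical pitch.

    P           : array of shape (C, R_known, A), measured only at `known_rows`
                  positions along the detector row axis
    known_rows  : increasing coordinates of the measured rows
    target_rows : row coordinates at which values are wanted
    interp      : factory (x, y) -> callable; defaults to linear interpolation
                  via np.interp. E.g. pass lambda x, y: CubicSpline(x, y)
                  for cubic-spline interpolation.

    Returns an array of shape (C, len(target_rows), A).
    """
    if interp is None:
        interp = lambda x, y: (lambda t: np.interp(t, x, y))
    C, _, A = P.shape
    out = np.empty((C, len(target_rows), A))
    for c in range(C):
        for a in range(A):
            out[c, :, a] = interp(known_rows, P[c, :, a])(target_rows)
    return out
```

Interpolation is applied independently per detector column and per projection angle, matching the text's description of filling data along the row direction only.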
Referring again to Fig. 1, X-rays emitted by the X-ray source 20 at different positions pass through the inspected object 60, are received by the detector, converted into electrical signals and then into digital signals representing attenuation values, and, after preprocessing, serve as the projection data to be reconstructed by the computer.
Fig. 4 is a schematic diagram of the principle of a neural-network-based helical CT image reconstruction device according to an embodiment of the present disclosure. As shown in Fig. 4, in this device the three-dimensional projection data are fed into a trained neural network model to obtain the reconstructed image of the selected cross-section of the inspected object. The neural network model is trained so that its parameters are optimized. The network learns from a training set through both training and generalization: the parameters of the model are trained and optimized on simulated and/or real data, and the optimized parameters are then generalized using a portion of the real data, where generalization includes fine-tuning the parameters.

Fig. 5 is a schematic structural diagram of a neural network according to an embodiment of the present disclosure. As shown in Fig. 5, the neural network of this embodiment may include three cascaded sub-networks, each an independent neural network: a projection-domain sub-network, a domain-transform sub-network, and an image-domain sub-network. Fig. 6 is a visualization of the network structure, from which the types of data before and after each sub-network can be seen. The three sub-networks are described in detail below with reference to Figs. 5 and 6.
The projection-domain sub-network takes as input the three-dimensional projection data received by the detector of the helical CT system. As the first part of the network, it converts the three-dimensional helical projection into a two-dimensional in-plane projection: its input is the helical projection data related to a given cross-section of the object to be reconstructed (the selected cross-section, i.e., the cross-section of the image to be reconstructed). In one example, the projection-domain sub-network may consist of several convolutional layers; after the helical projection data pass through this convolutional network, the equivalent two-dimensional fan-beam (or parallel-beam) projection data of the object are output. This part of the network is intended to extract features of the raw helical CT projection data with a convolutional neural network and thereby estimate fan-beam (or parallel-beam) projections that are mutually independent between cross-sections. Its main purpose is to reduce the high-complexity helical CT projection problem to a two-dimensional in-plane projection, which not only removes the influence of the cone-angle effect but also simplifies the subsequent reconstruction: two-dimensional reconstruction requires far fewer resources and far less computation than helical CT reconstruction.

Fig. 7 shows an exemplary network structure of the projection-domain sub-network. As shown in Fig. 7, for the selected cross-sectional image to be reconstructed, the projection data related to it are selected from the projection data P described above and rearranged; the result, denoted P', is the input of the projection-domain sub-network. The specific operation is as follows: centered on the axial coordinate of the cross-section to be reconstructed, select the data covering a helical scan range of 180 degrees before and after it, find the detector rows corresponding to that cross-section at each scan angle, and rearrange them into a matrix of size C × A' × R', where A' is the number of helical projection angles chosen (i.e., spanning 360 degrees) and R' is the maximum number of corresponding detector rows over all angles.
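The rearrangement of P into P' can be sketched as a gather-and-pad operation. The computation of which detector rows correspond to the slice at each angle depends on the scanner geometry and is assumed to be done elsewhere; everything below is an illustrative sketch, not the patent's exact procedure:

```python
import numpy as np

def rearrange_for_slice(P, angle_indices, rows_per_angle):
    """Gather the helical data relevant to one cross-section into P'.

    P              : (C, R, A) helical projection data
    angle_indices  : the A' scan-angle indices covering 180 degrees of helix
                     before and after the slice's axial coordinate
    rows_per_angle : for each selected angle, the detector-row indices that
                     correspond to the reconstruction slice at that angle
                     (geometry-dependent; assumed precomputed)

    Returns P' of shape (C, A', R'), where R' is the maximum number of rows
    over all selected angles; shorter row lists are zero-padded.
    """
    C = P.shape[0]
    A_prime = len(angle_indices)
    R_prime = max(len(rows) for rows in rows_per_angle)
    P_sel = np.zeros((C, A_prime, R_prime))
    for i, (a, rows) in enumerate(zip(angle_indices, rows_per_angle)):
        P_sel[:, i, :len(rows)] = P[:, rows, a]
    return P_sel
```

Zero-padding to a common R' gives the sub-network a fixed-size input even though the number of intersecting detector rows varies with the scan angle.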
In addition, the projection data of the linear attenuation coefficient distribution of the set section (the plane to be reconstructed) under fan-beam projection is denoted p, where p is a matrix of size C×A'. Before training the network, an analytical reconstruction method, including but not limited to PI-original, may first be used to reconstruct a section image (denoted here as μ0) corresponding to the input helical projection data. Let H denote the system matrix of the fan-beam scan, so that Hμ0 can serve as the residual term of the projection-domain sub-network. As shown in FIG. 8, a U-net-type neural network structure may, without limitation, be adopted as the projection-domain sub-network. It takes the rearranged projection data P' as input, and its task is to estimate the fan-beam projection p of the linear attenuation coefficient μ within the two-dimensional section. This part of the network consists of multiple convolutional layers, configured with two-dimensional convolution kernels at K scales. A two-dimensional kernel has two dimensions; here the first dimension is defined as the detector direction and the second as the scan-angle direction. The kernel lengths in the two dimensions need not be equal; for example, kernels of size 3×1, 3×5, or 7×3 may be used.
Multiple kernels may be set at each scale, and all kernels are network parameters to be determined. In the pooling part of the network, pooling between convolutional layers reduces the image scale layer by layer; in the upsampling part, upsampling between convolutional layers restores the image scale layer by layer. To retain more image detail, the network outputs of equal scale from before the pooling part and after the upsampling part are concatenated along the third dimension, as detailed in FIG. 7. Let φ_P-net(P') denote the operator corresponding to the projection-domain sub-network; the output of the last convolutional layer is used in residual fashion, so that the estimate of p is p̂ = φ_P-net(P') + Hμ0. If the projection-domain sub-network is first trained separately, the cost function can be set to an l-norm; taking l = 2 as an example: Σ_k ||φ_P-net(P'(k)) + Hμ0(k) − p*(k)||₂².
Here k is the index of the training sample and p* is the projection label. Since such labels cannot be obtained in practical applications, the fan-beam projection of a reconstruction from complete data acquired at small pitch, or of a reconstruction by a state-of-the-art iterative method in the field, can be used as the projection label.
Although FIG. 7 illustrates the projection-domain sub-network with a specific U-shaped network structure, those skilled in the art will appreciate that the technical solution of the present disclosure can also be implemented with networks of other structures. Likewise, those skilled in the art may conceive of using other networks as the image-domain network, such as an auto-encoder or a fully convolutional neural network, which can equally implement the technical solution of the present disclosure.
The domain-conversion sub-network takes as input the two-dimensional projection data output by the projection-domain sub-network and performs an analytical reconstruction to obtain the set-section image in the image domain. As the second part of the neural network structure, it performs the domain conversion from the projection domain to the image domain, i.e., the operation mapping two-dimensional fan-beam (or parallel-beam) projection-domain data to an image-domain cross-sectional image. The weight coefficients between its network nodes (neurons) can be determined from the scan geometry of the two-dimensional fan-beam (or parallel-beam) CT scan. The input of this layer is the fan-beam (or parallel-beam) projection data output by the first part, and its output is a preliminary CT reconstruction (i.e., the image-domain set-section image). Since the first sub-network has already reduced the reconstruction problem to two dimensions, this domain-conversion network can be implemented directly with the matrix operator of a two-dimensional analytical reconstruction. Its operator can also be realized as a fully connected network, trained on pairs of simulated or real projection data and reconstructed images. The output of this part can serve as the final result, or it can be output after further processing by the image-domain sub-network.
In an exemplary embodiment, the domain-conversion sub-network obtains the image-domain output by performing the inverse computation from the projection domain to the image domain on the above p. The projection matrix is computed with Siddon's method or another method available in the field, and the elements of this system matrix provide the connection weights of the analytical-reconstruction connection layer. Taking FBP fan-beam analytical reconstruction as an example, the reconstruction can be written as an operator composition of the form μ = B·F·W·p, where W completes the weighting of the projection-domain data, F corresponds to a ramp-filter convolution operation, and B completes the weighted backprojection.
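A minimal sketch of this fixed FBP-style domain-conversion layer follows, assuming the standard closed-form discrete Ram-Lak ramp kernel and treating the weighting W and the weighted backprojection B as caller-supplied pieces (the actual layer in the disclosure derives its weights from the Siddon system matrix, which is not reproduced here):

```python
import numpy as np

def ramp_filter(n_det):
    # Discrete ramp (Ram-Lak) kernel in the spatial domain, standard closed
    # form: h[0] = 1/4, h[k] = 0 for even k, h[k] = -1/(pi*k)^2 for odd k.
    k = np.arange(-n_det + 1, n_det)
    h = np.zeros(k.shape, dtype=float)
    h[k == 0] = 0.25
    odd = (k % 2 == 1)
    h[odd] = -1.0 / (np.pi * k[odd]) ** 2
    return h

def fbp_layer(p, weights, backproject):
    """One forward pass of the fixed domain-conversion layer (mu = B F W p):
    weight the fan-beam sinogram, ramp-filter each view along the detector
    direction, then backproject.

    p           : ndarray (C, A) -- detector channels x view angles
    weights     : ndarray (C, A) -- geometry-dependent pre-weights (W)
    backproject : callable implementing the weighted backprojection (B)
    """
    q = weights * p                                   # W: pre-weighting
    h = ramp_filter(p.shape[0])
    filt = lambda v: np.convolve(v, h, mode="full")[v.size - 1: 2 * v.size - 1]
    q = np.apply_along_axis(filt, 0, q)               # F: ramp filtering
    return backproject(q)                             # B: weighted backprojection
```

The central slice of the full convolution keeps the filtered view at its original length, which is what a convolutional filtering layer of fixed size would produce.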
The image-domain sub-network takes the image-domain set-section image as input and, after further extraction and fusion of image features, forms the planar reconstructed image of the set section. This image-domain sub-network is the third part of the network. With the image-domain set-section image output by the domain-conversion sub-network as input, a convolutional neural network comprising several layers collects the features of the data in the image domain and, with the target image as the learning target, further extracts the image features and couples them with one another, thereby optimizing image quality in the image domain. The output of this part is the final output of the entire network.
In an illustrative example, the image-domain sub-network adopts a U-net-type neural network structure similar to the first part, taking the initial image output by the domain-conversion sub-network (denoted here as μ_in) as input; its role is image-domain optimization. As in the first part of the network, in the first half the image scale is reduced layer by layer through pooling between convolutional layers, and in the second half it is restored layer by layer through upsampling. This part of the network can be trained in residual fashion: the output of the last convolutional layer plus μ_in equals the estimate μ of the two-dimensional reconstructed image. As with the projection-domain sub-network, the image-domain sub-network may, without limitation, use 3×3 convolution kernels, with pooling and upsampling both of size 2×2, and ReLU as the activation function.
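The residual training form used by both sub-networks amounts to adding the network body's output back onto its input estimate, so the convolutional layers only need to learn the correction term; a one-line sketch:

```python
def residual_output(net_body, x):
    """Residual form: final estimate = input estimate + learned correction.
    `net_body` stands in for the stacked convolutional layers; `x` is the
    input estimate (e.g. the initial image from the domain-conversion layer).
    """
    return x + net_body(x)
```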
In embodiments of the present disclosure, a cost function may be used as the objective function to be optimized. The cost function of the overall network may be, without limitation, one of those commonly used in the field, such as an l-norm, RRMSE, or SSIM, or a combination of several cost functions.
Taking SSIM as an example, for each image pair it has the standard form SSIM(f_i, f_i*) = (2 μ_fi μ_fi* + c1)(2 σ_fifi* + c2) / ((μ_fi² + μ_fi*² + c1)(σ_fi² + σ_fi*² + c2)), where f = {f1, f2, …, fn} is the set of output images, and f_i and f_i* are the i-th output image and the i-th target image, respectively; μ_fi and μ_fi* are their means, σ_fi² and σ_fi*² their variances, σ_fifi* their covariance, and c1, c2 are constants.
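A direct NumPy transcription of the global SSIM formula above (the constant values c1 and c2 below are illustrative placeholders, not values from the disclosure):

```python
import numpy as np

def ssim(f, f_star, c1=1e-4, c2=9e-4):
    """Global SSIM between one output image f and its target f*:
    (2*mu_f*mu_g + c1)(2*cov + c2) /
    ((mu_f^2 + mu_g^2 + c1)(var_f + var_g + c2)).
    """
    mu_f, mu_g = f.mean(), f_star.mean()
    var_f, var_g = f.var(), f_star.var()
    cov = ((f - mu_f) * (f_star - mu_g)).mean()     # covariance of the pair
    return ((2 * mu_f * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_f ** 2 + mu_g ** 2 + c1) * (var_f + var_g + c2))
```

A cost built from this would typically use 1 − SSIM averaged over the n image pairs, so that identical images give zero cost.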
In embodiments of the present disclosure, the neural network parameters can be trained, the training data including both simulated and real data. For simulated data, a basic mathematical model of the scanned object is established and helical projection data are generated by modeling the actual system; after preprocessing, these serve as the network input, with the ground-truth images of the scanned object as labels for training the network parameters. For example, the simulated data may be lung phantom data comprising 30 cases of 100 slices each, 3000 samples in total, with data augmentation applied; augmentation includes, but is not limited to, rotation and flipping. For real data, objects can be scanned on the actual system to obtain helical projection data, which after preprocessing are input to the network to obtain a preliminary reconstruction. These preliminary reconstructions are then subjected to targeted image processing, for example local smoothing of regions known to be locally smooth, to obtain label images; label images can also be obtained by reconstruction with a state-of-the-art iterative method in the field. The network is further trained on these label images to refine and fine-tune its parameters. In some embodiments, the sub-network that converts the helical projection to the two-dimensional in-plane projection may be trained first and the network then trained as a whole, or the whole network may be trained directly.
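The rotation/flip augmentation mentioned above might look like this minimal sketch (the exact augmentation set is not specified in the disclosure beyond rotation and flipping; the five transforms below are one plausible choice):

```python
import numpy as np

def augment(slices):
    """Augment 2-D training slices: each original plus its 90/180/270-degree
    rotations and a left-right flip."""
    out = []
    for s in slices:
        out.extend([s,
                    np.rot90(s, 1), np.rot90(s, 2), np.rot90(s, 3),
                    np.fliplr(s)])
    return out
```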
The projection-domain sub-network and the image-domain sub-network can each be trained separately; the parameters of the domain-conversion sub-network can be computed in advance without subsequent training, or they too can be trained.
If the projection-domain sub-network is first trained separately, the cost function is: Σ_k ||φ_P-net(P'(k)) + Hμ0(k) − p*(k)||₂².
Here k is the index of the training sample and p* is the projection label. Since such labels cannot be obtained in practical applications, we use the fan-beam projection of reconstructions from complete data acquired at small pitch, or of reconstructions by state-of-the-art iterative methods in the field, as projection labels.
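The separate-training cost above is a plain sum of squared l2 errors over the training samples; a small sketch, with the network outputs and projection labels passed in as already-computed arrays:

```python
import numpy as np

def l2_cost(outputs, labels):
    """Sum over training samples k of ||output^(k) - label^(k)||_2^2.
    outputs, labels : sequences of equally shaped ndarrays (e.g. the
    projection-domain estimates and the fan-beam projection labels).
    """
    return sum(float(np.sum((o - t) ** 2)) for o, t in zip(outputs, labels))
```

The same function serves for the image-domain cost below, with image estimates and image labels in place of projections.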
If the image-domain sub-network is trained separately, the cost function is defined as a 2-norm: Σ_k ||μ(k) − μ*(k)||₂², where μ(k) is the network's image estimate for sample k.
Here k is the index of the training sample and μ* is the image label. Since such labels cannot be obtained in practical applications, we use reconstructions from complete data acquired at small pitch, or reconstructions by state-of-the-art iterative methods in the field, as labels. If high-quality images are available by other means, other labels can also be used.
According to an embodiment of the present invention, a direct training approach may be adopted. In direct training, the convolution-kernel weights of the projection-domain and image-domain sub-networks are randomly initialized and trained on an actually acquired data set; after training is complete, another set of actually acquired data serves as the test set to verify the training effect of the network.
In an actual CT scan, the acquired data are fed into the network trained by the above process (its parameters having been machine-learned at this point) to obtain the reconstructed image.
FIG. 8 is a schematic flowchart describing a helical CT image reconstruction method according to an embodiment of the present disclosure. As shown in FIG. 8, in step S10, three-dimensional projection data are input; in step S20, the neural network model takes the three-dimensional projection data and produces the planar reconstructed image of the set section of the inspected object, wherein the neural network model has been trained.
A neural network according to an embodiment of the present disclosure may include a projection-domain sub-network, a domain-conversion sub-network, and an image-domain sub-network. The projection-domain sub-network processes the input three-dimensional projection data to obtain two-dimensional projection data. The domain-conversion sub-network analytically reconstructs the two-dimensional projection data to obtain the image-domain set-section image. The image-domain sub-network takes this image-domain section image as input and, through a convolutional neural network comprising several layers, extracts the features of the data in the image domain and further couples the image features to obtain the planar reconstructed image of the set section. With the solutions of the above embodiments of the present disclosure, higher-quality reconstructions can be obtained from the three-dimensional projection data of an object inspected by a helical CT device.
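The three-stage composition summarized above can be sketched as a simple function composition; the three callables below are stand-ins for the trained projection-domain sub-network, the fixed domain-conversion layer, and the image-domain sub-network:

```python
def reconstruct_slice(P, projection_net, domain_layer, image_net):
    """End-to-end sketch of the three-stage network described above:
    helical projections -> equivalent 2-D fan-beam sinogram -> initial
    slice image -> refined slice image."""
    p = projection_net(P)     # stage 1: 3-D helix -> 2-D fan-beam projections
    mu0 = domain_layer(p)     # stage 2: analytic projection-to-image conversion
    return image_net(mu0)     # stage 3: image-domain refinement

# usage with trivial stand-ins for the three stages
result = reconstruct_slice([1, 2],
                           lambda P: sum(P),    # toy "projection sub-network"
                           lambda p: p * 2,     # toy "domain conversion"
                           lambda m: m + 1)     # toy "image sub-network"
```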
Machine learning according to embodiments of the present disclosure may include: training and optimizing the parameters of the neural network model on simulated and/or real data; and generalizing the optimized parameters using a portion of the real data, the generalization including refinement and fine-tuning of the parameters.
The method of the present disclosure can be flexibly applied to different CT scanning modes and system architectures, and can be used in the fields of medical diagnosis, industrial non-destructive testing, and security inspection.
The foregoing detailed description has set forth numerous embodiments of methods and devices for training neural networks by means of schematic diagrams, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation therein may be implemented, individually and/or collectively, by a wide variety of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, portions of the subject matter described in embodiments of the present disclosure may be implemented via application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein can be equivalently implemented, in whole or in part, in integrated circuits; as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems); as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors); as firmware; or as virtually any combination thereof; and that, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein can be distributed as a program product in a variety of forms, and that the exemplary embodiments of the subject matter described herein apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable-type media, such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tapes, and computer memory; and transmission-type media, such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
Although the embodiments of the present disclosure have been described with reference to several exemplary embodiments, it is to be understood that the terms used are terms of description and illustration rather than of limitation. Since the embodiments of the present disclosure can be embodied in many forms without departing from their spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details, but should be construed broadly within the spirit and scope defined by the appended claims; all changes and modifications that fall within the scope of the claims or their equivalents are therefore intended to be covered by the appended claims.
Claims (18)
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910448427.0A | 2019-05-27 | 2019-05-27 | Spiral CT image reconstruction method and equipment based on neural network and storage medium |
| PCT/CN2019/103038 | 2019-05-27 | 2019-08-28 | Neural network-based spiral CT image reconstruction method and device, and storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910448427.0A | 2019-05-27 | 2019-05-27 | Spiral CT image reconstruction method and equipment based on neural network and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN112085829A (en) | 2020-12-15 |
Family

ID=73552051

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910448427.0A | Spiral CT image reconstruction method and equipment based on neural network and storage medium | 2019-05-27 | 2019-05-27 |

Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN112085829A (en) |
| WO (1) | WO2020237873A1 (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114018962A (en) * | 2021-11-01 | 2022-02-08 | 北京航空航天大学宁波创新研究院 | Synchronous multi-spiral computed tomography method based on deep learning |
| CN117611750A (en) * | 2023-12-05 | 2024-02-27 | 北京思博慧医科技有限公司 | Method and device for constructing three-dimensional imaging model, electronic equipment and storage medium |
Families Citing this family (10)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113192155B (en) * | 2021-02-04 | 2023-09-26 | 南京安科医疗科技有限公司 | Spiral CT cone beam scanning image reconstruction method, scanning system and storage medium |
| CN113689545B (en) * | 2021-08-02 | 2023-06-27 | 华东师范大学 | 2D-to-3D end-to-end ultrasound or CT medical image cross-modal reconstruction method |
| CN114004929A (en) * | 2021-10-28 | 2022-02-01 | 内蒙航天动力机械测试所 | Three-dimensional rapid reconstruction system for double-view-angle X-ray perspective imaging |
| CN113963132B (en) * | 2021-11-15 | 2025-01-14 | 广东电网有限责任公司 | A method for reconstructing three-dimensional distribution of plasma and related device |
| CN114359317A (en) * | 2021-12-17 | 2022-04-15 | 浙江大学滨江研究院 | Blood vessel reconstruction method based on small sample identification |
| CN114255296B (en) * | 2021-12-23 | 2024-04-26 | 北京航空航天大学 | CT image reconstruction method and device based on single X-ray image |
| CN114742771B (en) * | 2022-03-23 | 2024-04-02 | 中国科学院高能物理研究所 | An automated non-destructive measurement method for the hole size on the back of a circuit board |
| CN115690255B (en) * | 2023-01-04 | 2023-05-09 | 浙江双元科技股份有限公司 | CT image artifact removal method, device and system based on convolutional neural network |
| CN116612206B (en) * | 2023-07-19 | 2023-09-29 | 中国海洋大学 | A method and system for reducing CT scanning time using convolutional neural networks |
| CN117351482B (en) * | 2023-12-05 | 2024-02-27 | 国网山西省电力公司电力科学研究院 | A data set augmentation method, system, electronic device and storage medium for electric power visual recognition model |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103714578A (en) * | 2014-01-24 | 2014-04-09 | 中国人民解放军信息工程大学 | Single-layer rearrangement filtered backprojection reconstruction method for half-cover helical cone-beam CT |
| CN105093342A (en) * | 2014-05-14 | 2015-11-25 | 同方威视技术股份有限公司 | Spiral CT system and reconstruction method |
| CN109171793A (en) * | 2018-11-01 | 2019-01-11 | 上海联影医疗科技有限公司 | Angle detection and correction method, device, equipment and medium |
| CN109300167A (en) * | 2017-07-25 | 2019-02-01 | 清华大学 | Method and device for reconstructing CT image and storage medium |
| CN109300166A (en) * | 2017-07-25 | 2019-02-01 | 同方威视技术股份有限公司 | Method and device for reconstructing CT image and storage medium |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108898642B (en) * | 2018-06-01 | 2022-11-11 | 安徽工程大学 | A sparse angle CT imaging method based on convolutional neural network |
| CN109102550B (en) * | 2018-06-08 | 2023-03-31 | 东南大学 | Full-network low-dose CT imaging method and device based on convolution residual error network |

Timeline:
- 2019-05-27: CN application CN201910448427.0A filed (publication CN112085829A; status: active, pending)
- 2019-08-28: WO application PCT/CN2019/103038 filed (publication WO2020237873A1; application filing)
Non-Patent Citations (1)

| Title |
|---|
| HONG-KAI YANG et al.: "Slice-wise reconstruction for low-dose cone-beam CT using a deep residual convolutional neural network", Nucl. Sci. Tech., vol. 30, pages 1-9 |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114018962A (en) * | 2021-11-01 | 2022-02-08 | 北京航空航天大学宁波创新研究院 | Synchronous multi-spiral computed tomography method based on deep learning |
| CN114018962B (en) * | 2021-11-01 | 2024-03-08 | 北京航空航天大学宁波创新研究院 | Synchronous multi-spiral computed tomography imaging method based on deep learning |
| CN117611750A (en) * | 2023-12-05 | 2024-02-27 | 北京思博慧医科技有限公司 | Method and device for constructing three-dimensional imaging model, electronic equipment and storage medium |
Also Published As

| Publication number | Publication date |
|---|---|
| WO2020237873A1 (en) | 2020-12-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
CN112085829A (en) | Spiral CT image reconstruction method and equipment based on neural network and storage medium | |
CN110660123B (en) | Three-dimensional CT image reconstruction method and device and storage medium based on neural network | |
CN110047113B (en) | Neural network training method and device, image processing method and device, and storage medium | |
CN110544282B (en) | Three-dimensional multi-energy spectrum CT reconstruction method and equipment based on neural network and storage medium | |
CN109300166B (en) | Method and device for reconstructing CT image and storage medium | |
EP3435334B1 (en) | Method and device for reconstructing ct image and storage medium | |
EP3608877B1 (en) | Iterative image reconstruction framework | |
JP7187476B2 (en) | Tomographic reconstruction based on deep learning | |
JP7455622B2 (en) | Medical image processing device and learning image acquisition method | |
CN111492406B (en) | Method for training machine learning algorithm, image processing system and image reconstruction method | |
EP1861824B1 (en) | Method and device for the iterative reconstruction of tomographic images | |
KR20190138292A (en) | Method for processing multi-directional x-ray computed tomography image using artificial neural network and apparatus therefor | |
CN104240270A (en) | CT imaging method and system | |
Chen et al. | A new data consistency condition for fan‐beam projection data | |
Banjak | X-ray computed tomography reconstruction on non-standard trajectories for robotized inspection | |
CN112001978B (en) | A method and device for reconstructing images from dual-energy dual-90°CT scans based on generative confrontation networks | |
CN116188615A (en) | A Sparse Angle CT Reconstruction Method Based on Sine Domain and Image Domain | |
Miao | Comparative studies of different system models for iterative CT image reconstruction | |
JP7187131B2 (en) | Image generation device, X-ray computed tomography device and image generation method | |
CN117197349A (en) | A CT image reconstruction method and device | |
Mora et al. | New pixellation scheme for CT algebraic reconstruction to exploit matrix symmetries | |
Buzmakov et al. | Efficient and effective regularised ART for computed tomography | |
Chapdelaine et al. | A joint segmentation and reconstruction algorithm for 3D Bayesian Computed Tomography using Gaus-Markov-Potts Prior Model | |
Cierniak et al. | An Original Continuous-to-Continuous | |
Valat et al. | Sinogram Inpainting with Generative Adversarial Networks and Shape Priors. Tomography 2023, 9, 1137–1152 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |