CN114897693A - Microscopic image super-resolution method based on mathematical imaging theory and a generative adversarial network - Google Patents
- Publication number
- CN114897693A (application CN202210494049.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- network
- resolution
- microscopic
- imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06N3/045 — Neural network architectures: combinations of networks
- G06N3/048 — Neural network architectures: activation functions
- G06N3/08 — Neural networks: learning methods
Abstract
The invention relates to a microscopic image super-resolution method based on mathematical imaging theory and a generative adversarial network (GAN), belonging to the fields of image processing and computer vision. Based on the imaging models of a wide-field microscope and a confocal microscope, the invention predicts high-resolution microscopic images using mathematical theory and an improved super-resolution GAN: high- and low-resolution image pairs are generated by simulation with accurately derived point spread functions and serve as the data set required for network training, without any image registration. In the network design, an improved GAN is proposed by combining U-Net with a residual network, and the generator and discriminator are redefined. The generator collects image details with 16 residual modules, each consisting of a 4-layer convolutional network and a skip connection; the discriminator is optimized by deepening the CNN layers and fusing feature maps, which addresses the difficulty of judging whether the generated predicted image is real.
Description
Technical Field
The invention belongs to the field of biological microscopic image processing, and mainly relates to a microscopic image super-resolution method based on mathematical imaging theory and a generative adversarial network.
Background
Microscopy is a powerful tool for observing microscopic structures and dynamic processes within cells. Conventional wide-field (WF) microscopes image quickly but at low resolution. The confocal microscope (CM) adopts point illumination and point detection, rejecting out-of-focus background light and collecting only signals from the focal plane; its resolution is higher than that of a wide-field microscope and can be improved by a factor of about 1.4 when the pinhole size is 1 AU (Airy unit). In microscopic imaging there is therefore a trade-off between imaging speed and resolution, and it is difficult to optimize both simultaneously.
Exploring the dynamic changes of a sample at the cellular level requires continuous measurement and quantitative analysis of both the overall structure and local details at the microscopic scale, so large-field, high-resolution, long-term imaging is a standing demand in the life sciences. Existing super-resolution microscopic imaging methods struggle to balance these three indicators, and deep learning has solved the problem to some extent. Studies have shown that deep learning is remarkably effective in restoring fluorescence images with low signal-to-noise ratio or low resolution. However, to make deep-learning predictions accurate, these methods require a large number of registered high- and low-resolution image pairs during data set preparation, so the preparation cost is high. Furthermore, it is difficult to acquire many fluorescence image pairs with the same field of view, especially in in-vivo imaging, where the sample may change morphology over time, so the field of view may shift while switching objective lenses. In addition, optical distortion and chromatic aberration of different objectives are inevitable, making perfectly aligned images difficult to obtain, which greatly degrades deep-learning training. Finally, most deep-learning reconstruction methods adopt a simple end-to-end strategy that ignores the physical laws of the imaging process; this makes the preparation of training data very challenging and easily produces false structures or artifacts from a specific data set.
Therefore, a technical problem urgently needing to be solved by those skilled in the art is: when predicting super-resolution images by deep learning, how to generate, while reducing the number of actually acquired fluorescence images, high- and low-resolution image pairs that conform to the intrinsic physical laws of the actual imaging process to serve as the training data set, and at the same time to improve on the traditional deep-learning network model, so as to realize a low-cost, high-quality and efficient super-resolution image prediction method.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a microscopic image super-resolution method based on mathematical imaging theory and an improved generative adversarial network. The method does not need to acquire real microscopic images: an accurate point spread function is calculated by establishing a mathematical model of the microscopic imaging process, and paired low- and high-resolution images are obtained by simulation to serve as the deep-learning data set. Because the data set is produced by imaging-model simulation, the low- and high-resolution image pairs conform to the actual imaging relation. After network training is finished, high-resolution microscopic images can be predicted.
In order to achieve the purpose, the invention adopts the following technical scheme:
Based on mathematical models of the wide-field and confocal microscopic imaging processes, high- and low-resolution image pairs are generated by simulation and serve as the network training data set; real wide-field and confocal images need not be acquired, the data set can be prepared from open-source image data, and no further image-alignment preprocessing is required. For network optimization, an improved generative adversarial network is proposed. The network combines U-Net with a residual network and redefines the generator and discriminator. The generator collects image details with 16 residual modules, each consisting of a 4-layer convolutional network and a skip connection; two sub-pixel convolutional layers magnify the input 128 × 128-pixel low-resolution wide-field image into a 512 × 512-pixel predicted image, and the target image is further optimized with a mean-square-error loss function. The discriminator consists of 10 convolutional layers, each followed by batch normalization and ReLU activation. The number of feature maps increases from 64 to 2048 and then decreases to 512. Finally, the output tensor is fed into a flattening layer and a fully connected layer and passed through a Sigmoid activation to obtain the final judgment. The discriminator is optimized by deepening the CNN layers and fusing feature maps, which addresses the difficulty of judging whether the generated predicted image is real.
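The sub-pixel upsampling used by the generator can be sketched with a plain depth-to-space rearrangement (this is the operation behind sub-pixel convolution; the function name, toy channel counts and random input below are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange (C*r^2, H, W) feature maps into (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split the channel axis into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Two x2 sub-pixel stages take 128x128 to 512x512, as in the generator.
feat = np.random.rand(4, 128, 128).astype(np.float32)  # pretend conv output, 1 * 2^2 channels
up1 = pixel_shuffle(feat, 2)                           # (1, 256, 256)
feat2 = np.repeat(up1, 4, axis=0)                      # stand-in for the second conv's 4 channels
up2 = pixel_shuffle(feat2, 2)                          # (1, 512, 512)
```

In a real generator each stage would be a learned convolution producing r² times the channels, followed by this rearrangement.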
Specifically, the deep-learning super-resolution microscopic image prediction method based on the physical imaging model mainly comprises the following steps:
step one, establishing wide-field and confocal microscopic imaging system models, and deriving the point spread functions of the two microscopic imaging systems;
step two, performing deconvolution calculation on an open-source image data set with the point spread functions of the imaging systems established in step one, adding white Gaussian noise to obtain a simulated wide-field image and a simulated confocal microscopic image respectively, and augmenting the generated simulated images by translation, rotation, and flipping with 50% probability;
step three, proposing an improved generative adversarial network that combines U-Net with a residual network to redefine the generator and discriminator, and inputting the simulated 128 × 128-pixel low-resolution wide-field microscopic images into the network as the training data set;
step four, training the generator and discriminator of the network until the loss function reaches its minimum, at which point network training is finished. A 128 × 128-pixel low-resolution wide-field microscopic image input to the network then yields a predicted 512 × 512-pixel high-resolution microscopic image.
Step one concerns the establishment of the point spread functions of the microscopic imaging systems: a wide-field imaging model and a confocal microscopic imaging model are established based on the Richards-Wolf vector diffraction integral and Tony Wilson's confocal imaging theory, and accurate point spread functions are derived for each.
in the second step, based on the imaging model, the point spread function derived by the mathematical imaging model established in the first step is utilized, the same image is calculated by a deconvolution method, a simulated wide-field microscopic image and a simulated confocal microscopic image are respectively obtained and serve as a low-resolution image pair and a high-resolution image pair, and the low-resolution image pair and the high-resolution image pair conform to the internal relation in the physical imaging process. In the simulation process, the system noise is set to be white Gaussian noise by combining the imaging parameters of the actual microscopic imaging system. The generation of the simulation image can utilize open source data, and a large number of real microscopic images do not need to be acquired. And because the low-resolution image and the high-resolution image are generated by the same image simulation, preprocessing such as image alignment is not needed to be carried out in the input network training model, and the training process is simplified. To expand the data set, the image was rotated and flipped by 50%.
Step three concerns the improved generative adversarial network, which combines U-Net with a residual network to redefine the generator and discriminator. The generator collects image details with 16 residual modules, each consisting of a 4-layer convolutional network and a skip connection; two sub-pixel convolutional layers magnify the input 128 × 128-pixel low-resolution wide-field image into a 512 × 512-pixel predicted image, and the target image is further optimized with a mean-square-error loss function. Each network module uses 3 × 3 convolution kernels for high-dimensional description and deepens the network to improve model training. The discriminator consists of 10 convolutional layers, each followed by batch normalization and ReLU activation. The number of feature maps increases from 64 to 2048 and then decreases to 512. Finally, the output tensor is fed into a flattening layer and a fully connected layer and passed through a Sigmoid activation to obtain the final judgment. The discriminator is optimized by deepening the CNN layers and fusing feature maps, which addresses the difficulty of judging whether the generated predicted image is real.
The method of the invention differs from existing methods that generate predicted microscopic images with a GAN in the following respects:
1. The image data sets are generated differently
The invention derives accurate point spread functions by establishing rigorous wide-field and confocal microscopic imaging models, then obtains simulated wide-field and confocal microscopic images by deconvolution. Open-source image data sets can thus be used to generate, by simulation, a large number of low/high-resolution image pairs that conform to the actual physical imaging relation, without acquiring a large number of real microscopic images. And because each low/high-resolution pair is generated from the same image, no precise image registration is required before input to the network. This simulation-driven way of generating the image data set reduces experimental cost, simplifies data set construction, and is completely different from the traditional way of preparing data sets.
2. The form of the GAN is different
The invention proposes an improved generative adversarial network combining U-Net with a residual network, and redefines the generator and discriminator: the generator forms residual modules from convolutional layers with skip connections, and the target image is further optimized with a mean-square-error loss function. This provides rich feature information for the training model; drawing on the characteristics of the U-Net, the 16 residual modules are arranged in a U structure containing 6 convolutional layers, in which the skip connections effectively avoid the extra information loss caused by pooling and upsampling. The discriminator network uses 10 convolutional layers; its convolutional layers use skip connections and superpose the corresponding feature layers, so the discriminator is optimized with high-dimensional feature information, addressing the difficulty of judging whether the generated super-resolution image is real.
3. Network compatibility
Because simulated images are used for network training, the network learns during training the imaging principle relating the low- and high-resolution image pairs, and can therefore predict super-resolution images of types different from the simulated data set. The prediction accuracy is good, the generalization performance is high, and the network compatibility is strong.
The invention has the beneficial effects that:
the method integrates the physical imaging model and the deep learning network, and compared with the traditional deep learning method, the method does not need to acquire a large number of real microscopic images and does not need the work of image alignment and the like, thereby reducing the experimental cost and realizing the super-resolution confocal microscopic image prediction method with low cost, high quality and high efficiency.
Drawings
FIG. 1 is the overall flow chart of the present invention;
FIG. 2 shows the illumination system and detection system in the microscopic imaging model of the present invention;
FIG. 3 is the generator network of the present invention;
FIG. 4 is the discriminator network of the present invention;
FIG. 5 is a comparison of results using the method of the present invention;
FIG. 6 shows the distribution of structural similarity values for 200 network input and output images using the method of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. The specific implementation comprises the following:
As shown in fig. 1, this embodiment gives the overall flow of the microscopic image super-resolution method based on mathematical imaging theory and a generative adversarial network, which mainly includes four parts: data set preparation, image augmentation, network training and network output. As shown in fig. 2, the illumination system and the detection system of the microscopic imaging model are established respectively, and the point spread functions of the wide-field and confocal microscopic imaging models are derived by combining the Richards-Wolf vector diffraction integral with confocal imaging theory. The derivation is as follows:
As shown in fig. 2(a), incident light is focused onto the illumination focal plane by objective lens 1. The electric field of the incident light is expressed in cylindrical coordinates (ρ, φ, z), with the beam propagating in the positive z direction. At the focal plane, i.e. z = 0, the Cartesian components of the electric field at any point P near the focus, with coordinates (r_P, θ_P, φ_P), are given by the Richards-Wolf diffraction integrals (the integral expressions appear as figures in the original), wherein

cos ε = cos θ cos θ_P + sin θ sin θ_P cos(φ − φ_P),

α is the half-aperture angle of the objective with α = arcsin(NA/n), n is the refractive index of the imaging medium, and A is a constant. The excitation point spread function can thus be expressed as

PSF_exc = |E_x|^2 + |E_y|^2 + |E_z|^2.
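Numerically, once the three Cartesian field components have been evaluated on a grid, the excitation PSF follows directly from this formula. The toy focal fields below are illustrative stand-ins for the Richards-Wolf integrals (their shapes and weights are assumptions, chosen only to mimic a linearly polarized focus with a dominant E_x and a weaker longitudinal E_z):

```python
import numpy as np

ax = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(ax, ax)

# Toy focal fields: dominant transverse E_x, tiny E_y, odd-symmetric longitudinal E_z.
Ex = np.exp(-(X**2 + Y**2))
Ey = 0.01 * X * Y * np.exp(-(X**2 + Y**2))
Ez = 0.3 * X * np.exp(-(X**2 + Y**2))

# PSF_exc = |E_x|^2 + |E_y|^2 + |E_z|^2
psf_exc = np.abs(Ex)**2 + np.abs(Ey)**2 + np.abs(Ez)**2
psf_exc /= psf_exc.max()   # normalize the peak to 1 at the focus
```

The same summation applies unchanged when the field components come from the full vector diffraction integrals.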
The point spread function of a confocal microscopic imaging system is affected by both the excitation light and the detection light, and is expressed as the product of the excitation and detection point spread functions. FIG. 2(b) shows the imaging model of the detection system: light emitted from the source point (spherical coordinates (φ_1, θ_1)) passes through lens 2 and objective lens 2 and is focused onto the area-array detection plane (detector center at spherical coordinates (φ_2, θ_2)). Unlike wide-field imaging, confocal imaging uses a pinhole for point detection, so the pinhole must be taken into account when calculating the final detection point spread function. The Cartesian components of the detector-plane electric field in the focal region are given by expressions that appear as figures in the original, where (r_d, φ_d, z_d) are the cylindrical coordinates on the detector plane, k_d is the wave number with k_d = 2π/λ, and (p_x, p_y, p_z)^T is the Cartesian component of the electric dipole moment. In the expressions of the functions K and O, (φ_1, θ_1) and (φ_2, θ_2) are the spherical coordinates at the excitation and detection points respectively, α_2 = arcsin(NA/n_2), and n_2 is the refractive index of the imaging medium in the detection region. The detection point spread function PSF_det, which is also the point spread function of the wide-field imaging system, can be expressed as

PSF_WF = PSF_det = |E_dx|^2 + |E_dy|^2 + |E_dz|^2.

Taking into account the illumination point spread function, the detection point spread function and the pinhole, the point spread function of the confocal system can be expressed as

PSF_CM = PSF_exc · (PSF_det ⊗ p(s)),

where p(s) is the transmission function of the pinhole and ⊗ denotes convolution.
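A 1-D numerical sketch of this composition (toy Gaussian PSFs and an ideal finite-radius pinhole; all parameters are illustrative) shows the expected effect: the confocal PSF is narrower than the wide-field detection PSF, i.e. the resolution is higher.

```python
import numpy as np

x = np.linspace(-4, 4, 801)
dx = x[1] - x[0]
psf_exc = np.exp(-x**2)                       # toy 1-D excitation PSF
psf_det = np.exp(-x**2)                       # toy detection (= wide-field) PSF
pinhole = (np.abs(x) <= 0.5).astype(float)    # p(s): ideal pinhole of finite radius

# PSF_CM = PSF_exc * (PSF_det convolved with the pinhole)
det_eff = np.convolve(psf_det, pinhole, mode="same") * dx
psf_cm = psf_exc * det_eff

def fwhm(y):
    """Full width at half maximum on the grid x (sketch-quality estimate)."""
    half = y.max() / 2
    idx = np.where(y >= half)[0]
    return (idx[-1] - idx[0]) * dx
```

Comparing `fwhm(psf_cm)` with `fwhm(psf_det)` confirms the narrowing that the pinhole-plus-product model predicts.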
In order to make the simulated images closer to real images captured by a microscope, noise must be added to the low-resolution side of the imaging model. The literature indicates that additive white Gaussian noise best matches the noise of real imaging conditions and allows the real image to be recovered effectively. Therefore, white Gaussian noise was added to the wide-field imaging model during simulation.
Based on the point spread functions of the wide-field and confocal microscopic imaging systems derived above, the MCF-7 breast cancer cell images in the open-source image data set BBBC021v were downloaded as the original data set, and the simulated wide-field and confocal microscopic images obtained by deconvolution calculation serve as the data set for network training.
The wide-field and confocal microscopic images obtained by simulating the same image with the two different point spread functions form a low/high-resolution image pair, and no image registration is required; the generation of the images conforms to the physical imaging laws governing the formation of real microscopic images. After image segmentation, rotation and flipping, the simulated images are input into the subsequent network for training.
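The rotation/flip augmentation can be sketched as below. Applying the same random transform to both images of a pair keeps them registered; the 50% flip probability follows the text, everything else (function names, the toy 4 × 4 image) is illustrative:

```python
import numpy as np

def augment(img, rng):
    """Random 90-degree rotation plus 50%-probability vertical and horizontal flips."""
    img = np.rot90(img, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        img = np.flipud(img)
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return img

rng = np.random.default_rng(1)
tile = np.arange(16.0).reshape(4, 4)

# Augment the LR/HR pair with the SAME seeded transform so they stay registered.
seed = rng.integers(1 << 31)
lr_aug = augment(tile, np.random.default_rng(seed))
hr_aug = augment(tile * 2, np.random.default_rng(seed))
```

Since both images receive identical geometry, `hr_aug` remains exactly twice `lr_aug`, mirroring how a registered pair must be augmented jointly.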
In the network training stage, training and prediction are completed with the improved generative adversarial network provided by the invention, which redefines the structure of the generator and discriminator compared with a conventional GAN. Figs. 3 and 4 are the structural diagrams of the generator and discriminator, respectively. Convolutional layer parameters are given in the form k-s-n, where k is the convolution kernel size, s the stride, and n the number of feature maps.
The generator acquires image details with a 4-layer convolutional network and skip connections, and further optimizes the target image with a mean-square-error loss function. To provide rich feature information for the training model, 16 residual modules are designed, in which the skip connections effectively avoid the extra information loss caused by pooling and upsampling. The discriminator network uses 10 convolutional layers; its convolutional layers use skip connections and superpose the corresponding feature layers, so the discriminator is optimized with high-dimensional feature information, addressing the difficulty of judging whether the generated super-resolution image is real. The trained network takes a low-resolution image as input and produces a high-resolution predicted image. Fig. 5 shows a comparison of results: panel (a) is the network input image, panel (b) the network output image, and panel (c) the ground truth. The resolution of the output image is significantly improved over the input and is close to the ground-truth image. Fig. 6 shows the distribution of structural similarity values for 200 network input and output images, where hollow circles represent input images and solid circles output images. The structural similarity of the output images is markedly higher than that of the input images, verifying the effectiveness of the method.
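The structural-similarity comparison of fig. 6 can be approximated with a single-window SSIM. The full evaluation presumably uses the standard windowed SSIM; this global variant is a simplification, and the noisy stand-in images below are illustrative:

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Single-window (global) structural similarity index: a simplified SSIM sketch."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(2)
gt = rng.random((64, 64))
blurry = gt + rng.normal(0, 0.2, gt.shape)   # stand-in for the low-resolution input
close = gt + rng.normal(0, 0.02, gt.shape)   # stand-in for the network output
```

Comparing `ssim_global(close, gt)` with `ssim_global(blurry, gt)` reproduces, in miniature, the input-versus-output SSIM gap plotted in fig. 6.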
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and it is apparent that those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (2)
1. A microscopic image super-resolution method based on an imaging model and an improved generative adversarial network, characterized in that super-resolution microscopic image prediction is realized by training the generative adversarial network on simulated images produced by a physical imaging model, comprising the following steps:
step 1, establishing wide-field and confocal microscopic imaging system models based on the physical imaging process, and deriving the point spread functions of the two microscopic imaging systems;
step 2, performing deconvolution calculation on an initial image data set with the point spread functions established in step 1, obtaining a simulated wide-field microscopic image and a simulated confocal microscopic image that serve as the low/high-resolution image pairs required for network training;
step 3, in the network training stage, designing an improved super-resolution generative adversarial network by combining the U-Net structure with a residual network;
step 4, training the generator and discriminator until the loss function reaches its minimum, at which point network training is finished; a low-resolution wide-field microscopic image input to the network then yields a predicted high-resolution microscopic image.
2. The microscopic image super-resolution method based on an imaging model and an improved generative adversarial network according to claim 1, wherein in step 3, the improved generative adversarial network combines U-Net with a residual network and redefines the generator and discriminator. The generator collects image details with 16 residual modules, each consisting of a 4-layer convolutional network and a skip connection; two sub-pixel convolutional layers magnify the input 128 × 128-pixel low-resolution wide-field image into a 512 × 512-pixel predicted image, and the target image is further optimized with a mean-square-error loss function. The discriminator consists of 10 convolutional layers, each followed by batch normalization and ReLU activation; the number of feature maps increases from 64 to 2048 and then decreases to 512; finally the output tensor is fed into a flattening layer and a fully connected layer and passed through a Sigmoid activation to obtain the final judgment. The discriminator is optimized by deepening the CNN layers and fusing feature maps, addressing the difficulty of judging whether the generated predicted image is real.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210494049.1A CN114897693A (en) | 2022-05-01 | 2022-05-01 | Microscopic image super-resolution method based on mathematical imaging theory and generation countermeasure network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897693A true CN114897693A (en) | 2022-08-12 |
Family
ID=82721581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210494049.1A Pending CN114897693A (en) | 2022-05-01 | 2022-05-01 | Microscopic image super-resolution method based on mathematical imaging theory and generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897693A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117094888A (en) * | 2023-07-31 | 2023-11-21 | Shenzhen Research Institute of Northwestern Polytechnical University | Image super-resolution method, image super-resolution device, electronic equipment and storage medium |
- 2022-05-01 CN CN202210494049.1A patent/CN114897693A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
de Haan et al. | Deep-learning-based image reconstruction and enhancement in optical microscopy | |
JP6845327B2 (en) | Data augmentation for defect inspection based on convolutional neural networks | |
CN112465701B (en) | Deep learning super-resolution reconstruction method of microscopic image, medium and electronic equipment | |
JP2021500740A (en) | Multi-step image alignment method for large offset die / die inspection | |
CN111429562B (en) | Wide-field color light slice microscopic imaging method based on deep learning | |
CN112633248B (en) | Deep learning full-in-focus microscopic image acquisition method | |
CN113917677B (en) | Three-dimensional super-resolution light sheet microscopic imaging method and microscope | |
CN111221118B (en) | Microscopic imaging method based on phase coding single lens | |
CN114897693A (en) | Microscopic image super-resolution method based on mathematical imaging theory and generation countermeasure network | |
Zhang et al. | Large depth-of-field ultra-compact microscope by progressive optimization and deep learning | |
CN114387196A (en) | Method and device for generating undersampled image of super-resolution microscope | |
Zhang et al. | Conformal convolutional neural network (CCNN) for single-shot sensorless wavefront sensing | |
CN116259053B (en) | Medical microscopic image imaging focus prediction method based on convolutional neural network | |
CN113433130A (en) | Method and device for generating confocal imaging by wide-field imaging | |
KR20200015804A (en) | Generation of high resolution images from low resolution images for semiconductor applications | |
CN111476125A (en) | Three-dimensional fluorescence microscopic signal denoising method based on generation countermeasure network | |
CN116912086A (en) | Dual-path fusion-based image resolution improvement method and system | |
CN116540394A (en) | Light sheet microscope single-frame self-focusing method based on structured light illumination and deep learning | |
Chen et al. | Superresolution microscopy imaging based on full-wave modeling and image reconstruction | |
Jiang et al. | Focus prediction of medical microscopic images based on lightweight densely connected with squeeze-and-excitation network | |
Zhang et al. | Deep learning-enhanced fluorescence microscopy via confocal physical imaging model | |
Zhang et al. | High-throughput, high-resolution registration-free generated adversarial network microscopy | |
Hou et al. | Evaluating the resolution of conventional optical microscopes through point spread function measurement | |
Sun et al. | Hybrid deep learning and physics-based neural network for programmable illumination computational microscopy | |
CN116721017B (en) | Self-supervision microscopic image super-resolution processing method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||