LU503281B1 - Digital image denoising method - Google Patents
Digital image denoising method
- Publication number
- LU503281B1 (application LU503281A)
- Authority
- LU
- Luxembourg
- Prior art keywords
- image
- digital
- images
- digital images
- subset
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20216—Image averaging
Abstract
The invention proposes a method for denoising digital images, and in particular digital deep sky images. The invention relies on the effect of image stacking, wherein a pixel-wise average of corresponding images is computed. After generating corresponding training data, a machine learning algorithm is trained to learn a stacking transformation, which is then applied to a noisy image. The machine learning algorithm is capable of generating a de-noised version of the input image by simulating the effect of stacking additional images to the input image.
Description
DIGITAL IMAGE DENOISING METHOD
The present invention lies in the field of image processing. In particular, it relates to the denoising of images, which may for example depict deep space objects. As such, the invention also relates to electronically assisted astronomy.
Electronically assisted astronomy allows near-real-time generation of enhanced views of deep sky objects like nebulae and galaxies. This approach is ideal for observers with poor visual acuity, for public viewing sessions or for public outreach activities. By capturing images directly from a telescope coupled to a digital imaging sensor, this approach allows for the generation of enhanced views of observed targets that can be displayed in near real-time. While astrophotography aims at producing detailed and visually appealing images after numerous hours of post-processing of long exposure images, electronically assisted astronomy aims at getting results by stacking on-the-fly raw digital images to accumulate the faint signal.
Noise tends to be random between different digital images, while the desired signal is constant in each digital image taken using a similar setup. When a set of digital images representing the same image content is stacked, the pixel values of the individual digital images are averaged pixel by pixel, which causes the random noise to decrease in the resulting stacked image, while the signal remains constant. As a result, stacking short-exposure digital images yields an improved signal-to-noise ratio, which translates into a sharper and more detailed final stacked image.
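The noise-reduction effect of pixel-wise averaging can be sketched with NumPy. The Gaussian noise model, the signal level and the noise standard deviation below are illustrative assumptions, not values from the description:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scene: a constant "signal" frame corrupted, in each capture,
# by independent Gaussian noise of standard deviation 10.
signal = np.full((64, 64), 100.0)
frames = [signal + rng.normal(0.0, 10.0, signal.shape) for _ in range(25)]

# Stacking: pixel-wise average of all frames.
stacked = np.mean(frames, axis=0)

# The residual noise shrinks roughly by sqrt(N): ~10 for one frame,
# ~2 for a stack of 25, while the signal level stays at ~100.
single_noise = np.std(frames[0] - signal)
stacked_noise = np.std(stacked - signal)
assert stacked_noise < single_noise / 3
```

This illustrates why the PSNR of the stacked result keeps improving as more frames are added, as the description notes in the following paragraph.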
It has been observed that the results, in terms of Peak Signal-to-Noise Ratio, PSNR, generally improve when the number of stacked digital images is increased. The signal that is imaged, i.e. stars or nebulae, is present in each digital image, even though it may not be visible in an individual captured image.
However, obtaining a large number of digital images using a telescope and a coupled image sensor is a time-consuming operation, and it may not be physically possible to get the required number of digital images for post-processing by stacking. Indeed, the target position in the sky may become obstructed over time, meteorological conditions may change during the capture of a large number of digital images, or the nights for capturing the images may be too short depending on the season. It should be noted that it is also possible to capture more signal information in a digital image by increasing the exposure time of an individual digital image, but this approach also amplifies the noise content of the captured digital images.
Technical problem to be solved
It is an objective to present a method and a device which overcome at least some of the disadvantages of the prior art.
In accordance with a first aspect of the invention, a digital image denoising method is provided. The method comprises the steps of i) providing an initial set of digital images representing a same scene; ii) for a first subset of digital images, computing, using data processing means, a corresponding stacked image, wherein a stacked image is obtained by averaging corresponding pixel values of the digital images in the first subset; and forming a second subset of digital images by adding the resulting stacked image to a subset of digital images that is disjoint from the first subset of digital images; iii) repeating step ii) by replacing the first subset by the second subset of digital images, until each digital image in the initial set has been used once for stacking;
iv) using data processing means, training a machine learning model so that the trained machine learning model is enabled to transform any input image representing the same scene into an output stacked image, by using the digital images of the initial set and the computed stacked images as training data; v) using a noisy digital image representing said scene as input image, generating a corresponding denoised digital image using the trained machine learning model.
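Steps i) to iii) can be sketched as follows. The subset sizes (a first subset of three images, then the running stack plus two fresh images) and the plain running-average formulation are illustrative assumptions chosen to mirror the I1-3 / I1-3-5 example of the detailed description:

```python
import numpy as np

def iterative_stack(images, k=3):
    """Sketch of steps ii)-iii): stack a first subset of k images, then
    repeatedly form a new subset from the running stack plus k-1 unused
    images, until every image of the initial set has been used once."""
    stacks = []
    subset = images[:k]                  # first subset, e.g. I1, I2, I3
    stacked = np.mean(subset, axis=0)    # pixel-wise average -> I1-3
    stacks.append(stacked)
    used = k
    while used < len(images):
        fresh = images[used:used + k - 1]   # disjoint images, e.g. I4, I5
        subset = [stacked] + list(fresh)    # second subset: stack + fresh
        stacked = np.mean(subset, axis=0)   # -> I1-3-5, and so forth
        stacks.append(stacked)
        used += len(fresh)
    return stacks                        # training targets I1-3, I1-3-5, ...

rng = np.random.default_rng(1)
imgs = [rng.normal(100.0, 10.0, (32, 32)) for _ in range(7)]
targets = iterative_stack(imgs, k=3)
```

The returned list, together with the original images, corresponds to the training data gathered in step iv).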
Preferably each of the digital images in the initial set of images may comprise noise.
Preferably, each digital image in the initial set of digital images may have been acquired using the same or similar imaging means at different times.
The noisy digital input image may preferably be obtained by stacking all the digital images in the initial set.
Preferably, a scene may comprise a deep sky object, and digital images depicting the same scene depict the same deep sky object under comparable imaging conditions.
Preferably, the digital images may be deep space images representing deep space objects.
Prior to computing a stacked image, all images in the corresponding subset may preferably be registered to a common reference digital image and cropped to the same dimensions.
Preferably, the first and second subsets of digital images may have the same cardinality.
Steps ii) and iii) may preferably be repeated with first and second subsets of digital images having a different cardinality at each repetition, thereby generating more training data.
Preferably, the machine learning model may comprise a deep learning model.
The machine learning model may preferably comprise a neural network of the residual dense network type.
Preferably, the digital image denoising method may comprise the pre-processing step of reducing the brightness of bright objects depicted in the images of the initial digital image set prior to computing stacked images.
In accordance with a further aspect of the invention, a computing device comprising a memory element and data processing means is provided. The data processing means are configured to: i) load an initial set of digital images representing the same scene into the memory element; ii) for a first subset of digital images, compute a corresponding stacked image, wherein a stacked image is obtained by averaging corresponding pixel values of the digital images in the first subset, and to form a second subset of digital images by adding the resulting stacked image to a subset of digital images that is disjoint from the first subset of digital images;
iii) repeat step ii) by replacing the first subset by the second subset of digital images, until each digital image in the initial set has been used once for stacking;
iv) train a machine learning model so that the trained machine learning model is enabled to transform any input image into an output stacked image, by using the digital images of the initial set and the computed stacked images as training data; v) using a noisy digital image representing said scene as input image, generate a corresponding denoised digital image using the trained machine learning model.
Preferably, the data processing means may further be configured to execute the method steps in accordance with aspects of the invention.
According to another aspect of the invention, a computing device for digital image denoising is provided. The device comprises a memory element in which a trained machine learning model is provided, which is enabled to transform any input image representing a same scene into an output stacked image. The device further comprises data processing means configured to generate a denoised digital image using the trained machine learning model, by using a noisy digital image representing the same scene as an input.
In accordance with yet another aspect of the invention, a computer program comprising computer readable code means is provided, which, when run on a computer, causes the computer to carry out the method in accordance with aspects of the invention.
In accordance with a final aspect of the invention, a computer program product is provided, comprising a computer-readable medium on which the computer program according to an aspect of the invention is stored.
By using the present invention, it becomes possible to reap the benefits of image stacking in terms of denoising, without requiring a large number of real digital image captures. Using preferred embodiments of the invention, lightweight implementations are possible, which allow the method to be performed using limited processing and memory resources. From a set of real captured digital images and corresponding stacked images of a given scene, e.g. of a celestial nebula, a machine learning model is trained to learn the image transform that turns a real captured digital image into a corresponding stacked image, wherein the stacked image is computed from a given number of real captured digital images. The trained machine learning model is therefore capable of simulating the stacking result that would be obtained if additional virtual images of the same nebula were available. The proposed method is a denoising method, which learns a denoising transform that manifests itself during the stacking operation of real captured digital images. The method may for example be used to estimate the PSNR improvement one would obtain by capturing a number of additional digital images and adding them to a given stack. The visible gain of capturing and stacking additional digital images decreases with the total integration time: at a certain point, it takes a long time to get perceivable improvements, so that a preview of what the resulting image could look like after an additional hour of capturing digital images is useful in an astronomical setup. The method may further be used to enhance astronomical images for which it was not possible to capture sufficient digital images.
Several embodiments of the present invention are illustrated by way of figures, which do not limit the scope of the invention, wherein: - figure 1 provides a workflow illustrating the main steps of a preferred embodiment of the method in accordance with the invention; - figure 2 schematically illustrates a method and device in accordance with a preferred embodiment of the invention.
This section describes aspects of the invention in further detail based on preferred embodiments and on the figures. The figures do not limit the scope of the invention. Details that are described in the context of a particular embodiment are applicable to other embodiments, unless otherwise stated.
Figure 1 illustrates the main steps of a preferred embodiment of the method in accordance with the invention, with reference to claim 1. Reference will be made to figures 1 and 2 in order to describe a preferred embodiment of the invention in what follows. While the denoising method is not limited to specific image content, the description focuses on the example of deep sky imaging, as such images are prone to comprise noise due to prolonged exposure times.
At a first step i), an initial set of N digital images I1, I2, ..., IN representing a same scene is provided in a memory element 120 of a computing device 100. Preferably, all N images are captured using the same imaging setup, comprising a telescope and an image sensor, at different observation times.
While the invention is not limited to this example, the exposure times used to capture each digital image in the initial set of N images may be substantially identical. In a practical scenario, hundreds or thousands of images may be used in the initial set, while a reduced number is depicted in figure 2 for the sake of clarity.
The computing device 100 comprises data processing means 110, such as a processor, which is configured to apply the described method steps using appropriately formulated computer code instructions. At step ii), a first subset of digital images 121 is used to compute a corresponding stacked image, by computing a pixel-wise average of the images in the first subset 121. In the depicted example, original digital images I1, I2 and I3 are averaged pixel by pixel to result in the stacked image I1-3. It is noted that the operation of stacking transforms image I1 into image I1-3 using images I1, I2 and I3. Similarly, the operation of stacking transforms image I2 into image I1-3 using images I2, I1 and I3, and so forth.
Then, a second subset of digital images 122 is formed, by adding the resulting stacked image I1-3 from the previous step to a subset of digital images from the initial set that is disjoint from the first subset of digital images. In the depicted example, stacked image I1-3 is added to the digital images I4 and I5 of the initial digital image set. Neither of these images I4 and I5 was used earlier in a stacking operation.
At step iii), the stacking operation of step ii) is repeated by using the second subset 122 of digital images as an input. The result is a stacked digital image I1-3-5, in which the original digital images I1, I2, I3, I4 and I5 have been averaged pixel by pixel. This stacking operation generates further input-output transform pairs: image I1-3 is transformed into image I1-3-5 using two additional digital images I4, I5 from the initial set; image I1 is transformed into image I1-3-5 using a total of four additional digital images from the initial set, and so forth. This step is repeated until each one of the digital images of the initial set has been used once in a stacking operation. The digital images of the initial set and the stacked images I1-3, I1-3-5, … are all stored in a memory element and form a training data set 130.
By varying the cardinality of the subsets, different stacking transforms may be obtained, i.e. capturing the effect of adding 5, 10 or 100 digital images to obtain a given stacked digital image.
One goal of the proposed invention is to transform a stacked image of a deep sky object (stacked from numerous captured digital images of the initial set) into the stacked image that would result if the capture of additional digital images were continued. In other words, the proposed method simulates the addition of virtual new digital images in order to improve the signal-to-noise ratio of the resulting stacked image.
In order to achieve this, a machine learning model needs to be trained to learn the transformation between: - INPUT: a stacked image of a given deep sky object obtained with X captured digital images (each digital image being ideally obtained with an exposure time of Z seconds); - OUTPUT: a stacked image of the same deep sky object obtained with X+T digital images.
There are two parameters: the step T and the exposure time Z of each digital image.
It is appreciated that after performing steps ii) and iii) as described hereabove, the information that is required for learning these transformations is stored in the training data set 130.
By way of an example, one may consider an image sequence with a step of 5 (T=5) and an exposure time of 10 seconds (Z=10) for each captured digital image. The training data set 130 will comprise the following input/expected output data pairs: - Input: Orion nebula image stacked with 1 digital image -> Expected output: Orion nebula stacked with 6 digital images; - Input: Orion nebula image stacked with 6 digital images -> Expected output: Orion nebula stacked with 11 digital images; - Input: Orion nebula image stacked with 11 digital images -> Expected output: Orion nebula stacked with 16 digital images; and so forth.
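The pairing schedule of this example can be enumerated with a small helper; the function name and the stopping condition are illustrative assumptions:

```python
def pair_schedule(total_frames, step):
    """Enumerate (input_stack_size, output_stack_size) training pairs for
    a stacking step T, as in the Orion nebula example with T=5:
    1 -> 6, 6 -> 11, 11 -> 16, ..."""
    pairs = []
    x = 1
    while x + step <= total_frames:
        pairs.append((x, x + step))
        x += step
    return pairs

print(pair_schedule(16, 5))  # [(1, 6), (6, 11), (11, 16)]
```

Each tuple identifies one input/expected-output pair of stacked images drawn from the training data set 130.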
At step iv), as shown in figure 2, a machine learning model 140 is therefore trained so that the resulting trained machine learning model 150 is enabled to transform any input image representing the same scene into an output stacked image, by using the digital images of the initial set I1, I2, ..., IN and the computed stacked images I1-3, I1-3-5, … as training data 130.
At step v), a noisy digital image 01 representing said scene is used as input image to the trained machine learning model 150, which generates a corresponding denoised digital image 10 and stores it in a memory element. The PSNR improvement between the digital images 01 and 10 may then for example be evaluated and presented to a user of the device 100.
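The PSNR evaluation mentioned above can be sketched as follows; the peak value of 255 and the reference-based mean-squared-error formulation are standard conventions assumed here, not prescribed by the description:

```python
import numpy as np

def psnr(reference, image, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    (noisy or denoised) image of the same shape."""
    mse = np.mean((reference.astype(np.float64) - image.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, (64, 64))
noisy = ref + rng.normal(0, 10, ref.shape)      # stand-in for image 01
denoised = ref + rng.normal(0, 2, ref.shape)    # stand-in for image 10
# A successful denoising step raises the PSNR relative to the reference.
assert psnr(ref, denoised) > psnr(ref, noisy)
```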
To obtain an efficient trained machine learning model 150, a large training dataset 130 is preferred. By way of an example, more than 250 deep sky objects have been captured from Luxembourg with an automated telescope and corresponding stacked images have been computed, amounting to several hundred digital images for each observed object.
It should be noted that during stacking, the stacked image size is progressively and slightly cropped due to apparent sky rotation. A stacked image at step X and a stacked image at step X+T may not be perfectly aligned, and their sizes may differ slightly. As a result, it may be required to ensure that input and expected output images in the training data set 130 are aligned and have the same size. To do so, existing image registration techniques, see for example "Astroalign: A Python module for astronomical image registration", Beroiz, M., Cabral, J. B., & Sanchez, B., Astronomy and Computing, Volume 32, July 2020, 100384, which is hereby incorporated by reference in its entirety, may be used, and the input image is cropped to ensure it has the same size as the stacked output image.
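While the registration itself can be delegated to a library such as Astroalign, the crop-to-common-size step can be sketched as follows; the helper name is illustrative:

```python
import numpy as np

def crop_to_common(images):
    """After registration, stacked images at different steps may differ
    slightly in size; crop every image to the largest common window so
    that input/expected-output pairs align pixel for pixel."""
    h = min(im.shape[0] for im in images)
    w = min(im.shape[1] for im in images)
    return [im[:h, :w] for im in images]

# Two registered stacks whose sizes drifted slightly during stacking.
a = np.zeros((512, 510))
b = np.zeros((509, 512))
cropped = crop_to_common([a, b])
```

A real pipeline would first call the registration routine on each pair (Astroalign's registration returns the aligned image) and only then apply the common crop.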
It has also been observed that during stacking, bright objects such as stars appear to “swell” and become larger during the stacking operation. To mitigate this effect in the trained machine learning model 150, a solution consists in modifying the training data set by processing star sizes in each expected output stacked image, for example by applying a brightness thresholding method. As a result, the machine learning model learns to simulate stacking while avoiding the effect of swelling stars.
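A minimal sketch of such a brightness thresholding step is given below. The description does not fix the exact method or its parameters; the clip-based formulation, the helper name and the 0.9 threshold are all assumptions for illustration:

```python
import numpy as np

def shrink_bright_stars(image, threshold=0.9):
    """Illustrative pre-processing sketch: clip pixel values above a
    fraction of the image peak, so that bright stars contribute less to
    the expected-output stacks and the model does not learn the
    star-swelling effect."""
    peak = image.max()
    return np.minimum(image, threshold * peak)

frame = np.array([[0.1, 0.5], [0.8, 1.0]])
processed = shrink_bright_stars(frame, threshold=0.9)  # clips 1.0 to 0.9
```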
To learn the transformation, the machine learning model 140 may for example be based on a Generative Adversarial Network. The Generative Adversarial Network, GAN, model is designed to remove noise from input images. A GAN model is composed of two deep learning models: a generator that ingests an image and provides another image as output, and a discriminator which guides the generator during training by detecting real/fake images. The Python implementation that has been used to implement the invention is based on the Pix2Pix approach (Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017) "Image-to-image translation with conditional adversarial networks" in Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134), which is hereby incorporated by reference in its entirety). Pix2Pix is generally used to transform an image into another form (e.g. https://phillipi.github.io/pix2pix/), but here it has been used to remove something from the image. The resolution of the input/output images may for example be 512x512 pixels. Lower resolutions may be considered without leaving the scope of the present invention, as they lead to a more lightweight GAN (i.e., a lighter generator and a lighter discriminator).
In accordance with a particularly preferred embodiment of the invention, the machine learning model 140 is a deep neural network inspired by Residual Dense Network, RDN, techniques that are usually applied for image super-resolution, see for example Zhang, Y., Tian, Y., Kong, Y., Zhong, B., & Fu, Y. (2018) "Residual dense network for image super-resolution" in Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2472-2481), which is hereby incorporated by reference in its entirety. As these known architectures allow details in images to be recreated by increasing their resolution, they can also be used to transform input images, as shown for example in Huang, Y., Lu, Z., Shao, Z., Ran, M., Zhou, J., Fang, L., & Zhang, Y. (2019) "Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network", Optics Express, 27(9), 12289-12307, for tomography images. Nevertheless, image super-resolution algorithms are resource-hungry and lead to machine learning models with several million parameters. In the context of the present invention, an increase in image resolution is however not necessary. In practice, a model inspired by Residual Dense Networks, using Residual Dense Blocks and convolutional layers with skip connections but without an upscaling layer, has been designed.
As a result, a preferred machine learning model 140 is a deep learning model with 454 000 parameters, whereas a basic x2 image super-resolution model contains 2 344 968 parameters and a Pix2Pix model contains 54 431 363 parameters. The resulting model therefore requires only limited memory and computational resources for training, and it can be used in limited-memory IoT devices like the Raspberry Pi™ device or in microcontrollers, for example.
This machine learning model 140 has been trained using 10 000 high-resolution digital images in a training set, and 500 images to compute accuracy. After training, the model 150 was capable of achieving a denoising performance leading to a PSNR value of approximately 43.5 dB.
In order to process a given real noisy image 01 with the trained machine learning model 150, it is split into image patches. The model 150 transforms the input patches into corresponding denoised output image patches, and the final denoised digital image 10 is reconstituted from the output image patches.
While various patch dimensions may be chosen without departing from the invention, a patch size of 256x256 pixels appeared to provide a good trade-off.
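The patch-based processing can be sketched as follows. The non-overlapping tiling and the assumption that the image dimensions are multiples of the patch size are simplifications; a real implementation would also pad or overlap the borders:

```python
import numpy as np

def split_patches(image, p=256):
    """Split an image into non-overlapping p x p patches, row by row."""
    h, w = image.shape[:2]
    return [image[y:y + p, x:x + p]
            for y in range(0, h - p + 1, p)
            for x in range(0, w - p + 1, p)]

def join_patches(patches, shape, p=256):
    """Reassemble (denoised) patches into the final image."""
    out = np.zeros(shape, dtype=patches[0].dtype)
    idx = 0
    for y in range(0, shape[0] - p + 1, p):
        for x in range(0, shape[1] - p + 1, p):
            out[y:y + p, x:x + p] = patches[idx]
            idx += 1
    return out

img = np.arange(1024 * 512, dtype=np.float64).reshape(1024, 512)
patches = split_patches(img)           # 4 x 2 = 8 patches of 256 x 256
restored = join_patches(patches, img.shape)
```

In the method, each patch would be passed through the trained model 150 between the split and the join.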
Ideally, a machine learning model 140, 150 should be linked to a dedicated imaging setup, i.e., digital camera sensor, optical instrument and mount. In practice, this means that the training set 130 should ideally only contain stacked images built from digital images captured with a single setup, and the trained model 150 should only be used on images 01 captured with a similar setup for best performance.
It should be noted that features described for a specific embodiment described herein may be combined with the features of other embodiments unless the contrary is explicitly mentioned. Based on the description and on the figures that have been provided, a person with ordinary skills in the art will be enabled to develop a computer program for implementing the described methods without undue burden and without requiring additional inventive skill.
It should be understood that the detailed description of specific preferred embodiments is given by way of illustration only, since various changes and modifications within the scope of the invention will be apparent to the person skilled in the art. The scope of protection is defined by the following set of claims.
Claims (15)
1. A digital image denoising method comprising the steps of i) providing an initial set of digital images (I1, I2, ..., IN) representing a same scene; ii) for a first subset (121) of digital images, computing, using data processing means (110), a corresponding stacked image (I1-3), wherein a stacked image is obtained by averaging corresponding pixel values of the digital images in the first subset; and forming a second subset (122) of digital images by adding the resulting stacked image (I1-3) to a subset of digital images (I4, I5) that is disjoint from the first subset of digital images; iii) repeating step ii) by replacing the first subset (121) by the second subset (122) of digital images, until each digital image in the initial set has been used once for stacking; iv) using data processing means (110), training a machine learning model (140) so that the trained machine learning model (150) is enabled to transform any input image representing the same scene into an output stacked image, by using the digital images of the initial set (I1, I2, ..., IN) and the computed stacked images (I1-3, I1-3-5) as training data (130); v) using a noisy digital image (01) representing said scene as input image, generating a corresponding denoised digital image (10) using the trained machine learning model (150).
2. The digital image denoising method according to claim 1, wherein each digital image (I1, I2, ..., IN) in the initial set of digital images has been acquired using the same imaging means at different times.
3. The digital image denoising method in accordance with any of the previous claims, wherein the noisy digital input image (01) is obtained by stacking all the digital images (I1, I2, ..., IN) in the initial set.
4. The digital image denoising method in accordance with any of the previous claims, wherein the digital images are deep space images representing deep space objects.
5. The digital image denoising method in accordance with claim 4, wherein prior to computing a stacked image, all images in the corresponding subset (121, 122) are registered to a common reference digital image and cropped to the same dimensions.
6. The digital image denoising method in accordance with any of the previous claims, wherein the first (121) and second subsets (122) of digital images have the same cardinality.
7. The digital image denoising method in accordance with any of the previous claims, wherein steps ii) and iii) are repeated with first (121) and second subsets (122) of digital images having a different cardinality at each repetition, thereby generating more training data (130).
8. The digital image denoising method in accordance with any of the previous claims, wherein the machine learning model (140) comprises a deep learning model.
9. The digital image denoising method in accordance with any of the previous claims, wherein the machine learning model (140) comprises a neural network of the residual dense network type.
10. The digital image denoising method in accordance with any of the previous claims, comprising the pre-processing step of reducing the brightness of bright objects depicted in the images of the initial digital image set prior to computing stacked images.
11. A computing device (100) comprising a memory element (120) and data processing means (110), wherein the data processing means are configured to: i) load an initial set of digital images (I1, I2, ..., IN) representing the same scene into the memory element (120); ii) for a first subset of digital images (121), compute a corresponding stacked image (I1-3), wherein a stacked image is obtained by averaging corresponding pixel values of the digital images in the first subset, and to form a second subset of digital images (122) by adding the resulting stacked image (I1-3) to a subset of digital images (I4, I5) that is disjoint from the first subset of digital images; iii) repeat step ii) by replacing the first subset (121) by the second subset (122) of digital images, until each digital image in the initial set has been used once for stacking;
iv) train a machine learning model (140) so that the trained machine learning model (150) is enabled to transform any input image into an output stacked image, by using the digital images of the initial set and the computed stacked images as training data; v) using a noisy digital image (01) representing said scene as input image, generate a corresponding denoised digital image (10) using the trained machine learning model (150).
12. The computing device according to claim 11, wherein the data processing means are further configured to execute the method steps in accordance with any of claims 2 to 10.
13. A computing device for digital image denoising, comprising a memory element in which a trained machine learning model is provided, which is enabled to transform any input image representing a same scene into an output stacked image, and data processing means configured to generate a denoised digital image using the trained machine learning model, by using a noisy digital image representing the same scene as an input.
14. A computer program comprising computer readable code means, which, when run on a computer, causes the computer to carry out the method in accordance with any of claims 1 to 10.
15. A computer program product comprising a computer-readable medium on which the computer program according to claim 14 is stored.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
LU503281A LU503281B1 (en) | 2022-12-30 | 2022-12-30 | Digital image denoising method |
PCT/EP2023/086513 WO2024141323A1 (en) | 2022-12-30 | 2023-12-19 | Digital image denoising method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
LU503281A LU503281B1 (en) | 2022-12-30 | 2022-12-30 | Digital image denoising method |
Publications (1)
Publication Number | Publication Date |
---|---|
LU503281B1 (en) | 2024-07-01 |
Family
ID=85036782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
LU503281A LU503281B1 (en) | 2022-12-30 | 2022-12-30 | Digital image denoising method |
Country Status (2)
Country | Link |
---|---|
LU (1) | LU503281B1 (en) |
WO (1) | WO2024141323A1 (en) |
Events
- 2022-12-30: LU application LU503281A granted as LU503281B1 (active, IP Right Grant)
- 2023-12-19: PCT application PCT/EP2023/086513 published as WO2024141323A1
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200051260A1 (en) * | 2018-08-07 | 2020-02-13 | BlinkAI Technologies, Inc. | Techniques for controlled generation of training data for machine learning enabled image enhancement |
WO2020165196A1 (en) * | 2019-02-14 | 2020-08-20 | Carl Zeiss Meditec Ag | System for oct image translation, ophthalmic image denoising, and neural network therefor |
Non-Patent Citations (6)
Title |
---|
ANTONIA VOJTEKOVA ET AL: "Learning to Denoise Astronomical Images with U-nets", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 13 November 2020 (2020-11-13), XP081813426 * |
BEROIZ, M., CABRAL, J. B., SANCHEZ, B.: "Astroalign: A Python module for astronomical image registration", ASTRONOMY AND COMPUTING, vol. 32, July 2020 (2020-07-01), pages 100384 |
CLAUDIO GHELLER ET AL: "Convolutional Deep Denoising Autoencoders for Radio Astronomical Images", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 October 2021 (2021-10-16), XP091077609 * |
HUANG, Y., LU, Z., SHAO, Z., RAN, M., ZHOU, J., FANG, L., ZHANG, Y.: "Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network", OPTICS EXPRESS, vol. 27, no. 9, 2019, pages 12289 - 12307, XP055676129, DOI: 10.1364/OE.27.012289 |
ISOLA, P., ZHU, J. Y., ZHOU, T., EFROS, A. A.: "Image-to-image translation with conditional adversarial networks", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2017, pages 1125 - 1134, XP055972425, DOI: 10.1109/CVPR.2017.632 |
ZHANG, Y., TIAN, Y., KONG, Y., ZHONG, B., FU, Y.: "Residual dense network for image super-resolution", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2018, pages 2472 - 2481, XP033476213, DOI: 10.1109/CVPR.2018.00262 |
Also Published As
Publication number | Publication date |
---|---|
WO2024141323A1 (en) | 2024-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Molini et al. | Deepsum: Deep neural network for super-resolution of unregistered multitemporal images | |
Xu et al. | Learning to restore low-light images via decomposition-and-enhancement | |
Chen et al. | Hdrunet: Single image hdr reconstruction with denoising and dequantization | |
Zhang et al. | One-two-one networks for compression artifacts reduction in remote sensing | |
WO2019186407A1 (en) | Systems and methods for generative ensemble networks | |
An et al. | Single-shot high dynamic range imaging via deep convolutional neural network | |
CN111986084A (en) | Multi-camera low-illumination image quality enhancement method based on multi-task fusion | |
Rasheed et al. | LSR: Lightening super-resolution deep network for low-light image enhancement | |
Ibrahim et al. | 3DRRDB: Super resolution of multiple remote sensing images using 3D residual in residual dense blocks | |
Kim et al. | Joint demosaicing and deghosting of time-varying exposures for single-shot hdr imaging | |
Fu et al. | Raw image based over-exposure correction using channel-guidance strategy | |
LU503281B1 (en) | Digital image denoising method | |
CN110852947B (en) | Infrared image super-resolution method based on edge sharpening | |
Noor et al. | Multi-frame super resolution with deep residual learning on flow registered non-integer pixel images | |
CN116309066A (en) | Super-resolution imaging method and device | |
WO2024072250A1 (en) | Neural network training method and apparatus, image processing method and apparatus | |
Zhang et al. | An effective image restorer: Denoising and luminance adjustment for low-photon-count imaging | |
CN118134766B (en) | Infrared video super-resolution reconstruction method, device and equipment | |
Guo et al. | Single-Image HDR Reconstruction Based on Two-Stage GAN Structure | |
Song et al. | Super-resolution imaging quality enhancement method for distributed array infrared camera | |
US20240303783A1 (en) | A method of training a neural network, apparatus and computer program for carrying out the method | |
Peng et al. | Structure Prior Guided Deep Network for Compressive Sensing Image Reconstruction from Big Data | |
CN117670650A (en) | Image processing method and device | |
Khan | Project Definition Document | |
Ahn et al. | Multi-scale Adaptive Residual Network Using Total Variation for Real Image Super-Resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FG | Patent granted |
Effective date: 20240701 |