CN107240084B - Method and device for removing rain from single image - Google Patents
Method and device for removing rain from a single image
- Publication number
- CN107240084B CN107240084B CN201710447576.6A CN201710447576A CN107240084B CN 107240084 B CN107240084 B CN 107240084B CN 201710447576 A CN201710447576 A CN 201710447576A CN 107240084 B CN107240084 B CN 107240084B
- Authority
- CN
- China
- Prior art keywords
- rain
- image
- frequency
- dictionary
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/10 — Image enhancement or restoration using non-spatial domain filtering
- G06T5/73 — Image enhancement or restoration: deblurring; sharpening
- G06T7/10 — Image analysis: segmentation; edge detection
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/40 — Image or video recognition or understanding: extraction of image or video features
- G06V10/513 — Image or video features: sparse representations
Abstract
The invention provides a method and a device for removing rain from a single image. A pure rain area in the image to be processed is first identified and used as input for dictionary learning and sparse representation, yielding a rain dictionary. The image to be processed is then sparsely reconstructed based on the rain dictionary to obtain a rain mark mask. Finally, the image to be processed is de-rained through the rain mark mask to obtain a target image, i.e. an image free of rain marks. Because the method processes only rain pixels, non-rain pixels are preserved. The de-rained image therefore contains fewer rain residues, richer edges and textures are retained in the target image, and de-raining accuracy is greatly improved.
Description
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for removing rain from a single image.
Background
Under rainy weather conditions, part of the texture and detail information in images captured by outdoor cameras is easily occluded by rain streaks, causing problems such as over-bright local areas and a blurred background. This rain-induced degradation of image quality severely limits outdoor intelligent vision systems such as visual surveillance, visual navigation, and target tracking. Moreover, raindrops vary in state, and rain streaks differ in direction and thickness under different conditions, so recovering high-quality images from the various kinds of rain-degraded images has great research and application value.
Image de-raining comprises de-raining of video and de-raining of a single image. Unlike video, a single image provides no temporal information to exploit, which makes the task harder. The study of single-image de-raining has therefore drawn wide attention from scholars at home and abroad. Most existing single-frame de-raining algorithms are based on the following model [1]:
I = B + R    (1)
where I is the rainy image, B is the background image, and R is the rain-streak layer. As Eq. (1) shows, the rainy image can be regarded as a linear superposition of the background image and the rain layer. The essence of image de-raining is to separate the rain layer from the rainy image, thereby improving the visibility of the background image.
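As a concrete illustration of Eq. (1), the following NumPy sketch (entirely synthetic; the array sizes and streak pattern are invented for the example) superposes a rain layer on a background, then recovers the background by subtracting the rain layer back out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Background image B and a sparse rain-streak layer R, both with values in [0, 1].
B = rng.uniform(0.2, 0.6, size=(64, 64))
R = np.zeros((64, 64))
R[:, ::7] = 0.3               # toy periodic "streaks" standing in for rain lines

# Per Eq. (1), the rainy image is a linear superposition of the two layers.
I = B + R

# De-raining amounts to estimating the rain layer and subtracting it out;
# with R known exactly, the background is recovered perfectly.
B_est = I - R
assert np.allclose(B_est, B)
```

In practice only I is observed, so the whole difficulty lies in estimating R (or, equivalently, the rain-pixel mask) from a single frame.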
Details (edges) of the background and the rain-streak components are both distributed in the high-frequency part of the image, so most related work first decomposes the rainy image into a low-frequency image and a high-frequency image, de-rains the high-frequency image, and then fuses the de-rained high-frequency image with the low-frequency image to obtain a rain-free image. Single-image de-raining algorithms can be roughly divided into filtering-based methods and machine-learning-based methods.
One prior-art machine-learning de-raining algorithm works as follows: its framework realizes sparse representation of the high-frequency image through dictionary learning, classifies the atoms of the learned dictionary using Histogram of Oriented Gradients (HOG) features to obtain a rain dictionary and a rain-free dictionary, obtains the rain and rain-free components of the high-frequency image through sparse reconstruction, and superimposes the rain-free component of the high-frequency image onto the low-frequency image to obtain the de-rained image.
Existing image de-raining algorithms achieve a certain effect but still suffer from the following problems: (1) rain streaks in the rainy image overlap with background texture, so background texture with a streak-like structure is easily misjudged as rain, causing over-smoothing in the de-rained image; (2) rain streaks are misjudged as background, leaving the rain incompletely removed; (3) the rain-free image and the rain layer are sparsely represented with the same dictionary, so complete separation of rain atoms from non-rain atoms cannot be guaranteed, and artificial edges easily appear after sparse reconstruction.
In summary, existing image de-raining methods cannot effectively distinguish rain pixels from non-rain pixels, cannot achieve both fast restoration and high restored-image quality, and therefore have certain limitations.
Disclosure of Invention
The invention provides a method and a device for removing rain from a single image, which are used for improving the rain removing effect of an image with rain marks.
A first aspect provides a method of removing rain from a single image, comprising:
identifying a pure rain area in an image to be processed;
taking the pure rain area as input to carry out dictionary learning and sparse representation thereof, and obtaining a rain dictionary;
performing sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask;
and carrying out rain removing treatment on the image to be treated according to the rain mark mask to obtain a target image.
Optionally, the image to be processed is a high-frequency grayscale image.
Optionally, the identifying a pure rain area in the image to be processed includes:
and extracting the pure rain area in the high-frequency gray image according to the gradient feature of the rain drop edge pixel.
Optionally, before the extracting the pure rain region in the high-frequency grayscale image according to the gradient feature of the rain drop edge pixel, the method further includes:
realizing decomposition of the image to be processed according to propagation filtering and guided filtering to obtain a low-frequency image I_LF and a high-frequency image I_HF;
performing graying processing on the high-frequency image I_HF to obtain the high-frequency grayscale image G.
Optionally, the performing sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask includes:
performing sparse reconstruction of the high-frequency image I_HF using the rain dictionary to obtain a high-frequency rain image, and setting a threshold to obtain the rain mark mask M; optimizing the rain mark mask M to obtain an optimized rain mark mask M̄.
Optionally, the performing rain removing processing on the image to be processed according to the rain drop mask to obtain a target image includes:
defining improved bilateral filtering according to the optimized rain mark mask M̄ and the brightness characteristics of rain marks in the image to be processed;
according to the improved bilateral filtering, rain removal of the high-frequency image is achieved, and a high-frequency rain-free image is obtained;
fusing the high-frequency rain-free image with the low-frequency image I_LF to obtain the target image.
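The patent does not spell out the weight definition of its improved bilateral filter here, so the sketch below shows one plausible reading: the optimized mask zeroes the weight of rain pixels, so each rain pixel in the high-frequency layer is re-estimated from non-rain neighbours only. The helper name and all parameter values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def mask_guided_bilateral(I_HF, M_opt, r=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral-style filter on the high-frequency layer: pixels flagged as
    rain by the optimized mask M_opt get zero weight, so every rain pixel is
    replaced by a weighted average of its non-rain neighbours."""
    out = I_HF.copy()
    ys, xs = np.mgrid[-r:r+1, -r:r+1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    Ipad = np.pad(I_HF, r, mode='edge')
    Mpad = np.pad(M_opt, r, mode='edge')
    for i, j in zip(*np.nonzero(M_opt)):          # only rain pixels are rewritten
        patch = Ipad[i:i + 2*r + 1, j:j + 2*r + 1]
        keep = 1.0 - Mpad[i:i + 2*r + 1, j:j + 2*r + 1]   # exclude rain neighbours
        rng_w = np.exp(-(patch - I_HF[i, j])**2 / (2 * sigma_r**2))
        w = spatial * rng_w * keep
        if w.sum() > 0:
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# A lone bright "rain" pixel in an otherwise flat high-frequency layer.
I_HF = np.zeros((16, 16))
I_HF[8, 8] = 1.0
M_opt = np.zeros((16, 16), dtype=np.uint8)
M_opt[8, 8] = 1
I_HF_clean = mask_guided_bilateral(I_HF, M_opt)
# The final target image would then be I_HF_clean + I_LF, with I_LF taken
# from the earlier decomposition step.
```

Because only masked pixels are rewritten, non-rain pixels pass through untouched, which matches the claim that edges and textures outside the rain streaks are preserved.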
Optionally, realizing the image decomposition according to the propagation filtering and the guided filtering to obtain the low-frequency image I_LF and the high-frequency image I_HF comprises the following steps:
carrying out propagation filtering on the image to be processed, and taking the filtered image as a guide map;
performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF;
subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
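A rough NumPy sketch of this decomposition step. Propagation filtering is not reproduced here, so a simple box blur stands in for it as the guide map (an assumption of this sketch); a standard gray-scale guided filter then produces the low-frequency layer, and the high-frequency layer is the residual, so the two layers sum back to the input exactly:

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r via summed-area tables (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))               # zero row/col for window sums
    k = 2 * r + 1
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2

def guided_filter(guide, src, r=4, eps=1e-3):
    """Gray-scale guided filter: output is a local linear transform of guide."""
    mg, ms = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mg * ms
    var = box(guide * guide, r) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return box(a, r) * guide + box(b, r)

rng = np.random.default_rng(1)
I = rng.uniform(0.0, 1.0, (32, 32))

guide = box(I, 2)                 # stand-in for the propagation-filtered image
I_LF = guided_filter(guide, I)    # low-frequency layer
I_HF = I - I_LF                   # high-frequency layer: edges and rain streaks
```

Defining I_HF as the residual guarantees a lossless split: whatever the filter removes from the low-frequency layer lands in the high-frequency layer, where the rain content is then processed.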
Optionally, the method further comprises:
acquiring the gradient amplitude and the gradient direction of the high-frequency gray level image G;
the extracting the pure rain area in the high-frequency gray level image according to the gradient feature of the rain drop edge pixel comprises the following steps:
acquiring image areas in a sliding-window manner, counting in each image area the pixel points whose gradient amplitude is greater than T1, and extracting the pure rain areas according to the gradient directions of those pixel points.
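One way to read this step in code. The patent names only the threshold T1; the window size, the minimum count of strong pixels, and the direction-spread test below are assumed illustrative choices. The underlying idea: rain streaks in a pure rain region are near-parallel, so the strong-gradient pixels of such a window share a dominant gradient direction, unlike ordinary background texture.

```python
import numpy as np

def pure_rain_regions(G, win=8, T1=0.15, dir_tol=np.deg2rad(10)):
    """Slide a win x win window over the high-frequency gray image G and keep
    windows whose strong-gradient pixels (amplitude > T1) share one dominant
    gradient direction, i.e. look like parallel rain streaks."""
    gy, gx = np.gradient(G)                   # axis-0 then axis-1 derivatives
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    regions = []
    H, W = G.shape
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            strong = mag[i:i + win, j:j + win] > T1
            if strong.sum() < win:            # too few edge pixels to judge
                continue
            if np.std(ang[i:i + win, j:j + win][strong]) < dir_tol:
                regions.append((i, j))        # edges share one orientation
    return regions

# Toy image with a constant-direction gradient everywhere: every window
# passes both tests, mimicking a frame full of parallel streaks.
G = np.tile(np.linspace(0.0, 4.0, 16), (16, 1))
regions = pure_rain_regions(G)
```

Note that the angle-spread test above ignores angular wrap-around at ±π; a production version would use a circular statistic instead.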
Optionally, the performing dictionary learning and sparse representation thereof with the pure rain area as an input to obtain a rain dictionary includes:
obtaining p1 image blocks of the pure rain area as training samples, and performing dictionary learning to obtain the rain dictionary D_R;
The sparse reconstruction of the image to be processed according to the rain dictionary to obtain a rain mark mask comprises the following steps:
performing sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image;
performing sparse reconstruction according to the rain dictionary D_R and the sparse representation coefficients to obtain the high-frequency rain image, and setting a threshold T2 to obtain the rain mark mask M.
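A heavily simplified stand-in for the dictionary-learning and sparse-reconstruction steps. The patent would train D_R properly (e.g. with K-SVD) and use a real sparse coder; here, as labelled assumptions, normalized random training patches serve directly as atoms and a 1-sparse matching pursuit serves as the coder, which is enough to show the D_R → coefficients → reconstruction → threshold-T2 → mask pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# p1 flattened 8x8 training patches from the "pure rain" region (synthetic).
p1 = 64
rain_patches = rng.normal(size=(p1, 64))
rain_patches /= np.linalg.norm(rain_patches, axis=1, keepdims=True)

# Dictionary-learning stand-in: one unit-norm atom per column of D_R.
D_R = rain_patches.T                      # shape (64, p1)

def sparse_code(D, x):
    """1-sparse matching pursuit: pick the single best-matching atom."""
    corr = D.T @ x
    k = np.argmax(np.abs(corr))
    alpha = np.zeros(D.shape[1])
    alpha[k] = corr[k]
    return alpha

# A high-frequency patch that is mostly one rain atom.
x = D_R[:, 3] * 0.9
alpha = sparse_code(D_R, x)               # sparse representation coefficients
x_rain = D_R @ alpha                      # sparse reconstruction of rain content

# Threshold T2 binarizes the reconstructed rain energy into the mask M.
T2 = 0.05
M = (np.abs(x_rain) > T2).astype(np.uint8)
```

Because D_R is learned only from pure rain regions, the reconstruction D_R @ alpha captures rain-like content, and thresholding it flags the rain pixels.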
Optionally, optimizing the rain mark mask M to obtain the optimized rain mark mask M̄ comprises the following steps:
in the high-frequency color image, calculating the distance from each pixel point whose rain mark mask coefficient value is 1 to the diagonal, and when the distance is greater than a set threshold T3, correcting the corresponding rain mark mask coefficient value to 0 to obtain the optimized rain mark mask M̄.
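The diagonal here can be read as the achromatic R=G=B line of RGB space (consistent with FIG. 11(a)): rain streaks are nearly colorless, so a masked pixel lying far from that line is likely misdetected background texture. A sketch under that assumption; T3 and the two sample pixels are invented for the example:

```python
import numpy as np

def refine_mask(M, rgb, T3=0.1):
    """Zero out mask entries whose RGB pixel lies farther than T3 from the
    R=G=B diagonal: rain is near-achromatic, so a chromatic pixel flagged
    as rain is treated as a false detection."""
    u = np.ones(3) / np.sqrt(3.0)                 # unit vector along the diagonal
    proj = rgb @ u                                # projection length onto diagonal
    dist = np.linalg.norm(rgb - proj[..., None] * u, axis=-1)
    M_opt = M.copy()
    M_opt[(M == 1) & (dist > T3)] = 0
    return M_opt

rgb = np.array([[[0.6, 0.6, 0.6],     # gray: plausible rain pixel, kept
                 [0.9, 0.2, 0.1]]])   # saturated red: background texture, dropped
M = np.array([[1, 1]], dtype=np.uint8)
M_opt = refine_mask(M, rgb)
```

This refinement only ever removes mask entries, so it can suppress false positives but never reintroduces missed rain pixels.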
A second aspect provides a single image rain removal device comprising:
the identification module is used for identifying a pure rain area in the image to be processed;
the learning module is used for performing dictionary learning and sparse representation by taking the pure rain area as input to obtain a rain dictionary;
the reconstruction module is used for performing sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask;
and the rain removing processing module is used for removing rain from the image to be processed according to the rain mark mask to obtain a target image.
Optionally, the image to be processed is a high-frequency grayscale image.
Optionally, the identification module is specifically configured to:
and extracting the pure rain area in the high-frequency gray image according to the gradient feature of the rain drop edge pixel.
Optionally, the identification module is further configured to, before the pure rain region in the high-frequency grayscale image is extracted according to the gradient features of rain-streak edge pixels, realize decomposition of the image to be processed according to propagation filtering and guided filtering to obtain a low-frequency image I_LF and a high-frequency image I_HF; and to perform graying processing on the high-frequency image I_HF to obtain the high-frequency grayscale image G.
Optionally, the reconstruction module is specifically configured to:
performing sparse reconstruction of the high-frequency image I_HF using the rain dictionary to obtain a high-frequency rain image, and setting a threshold to obtain the rain mark mask M; optimizing the rain mark mask M to obtain an optimized rain mark mask M̄.
Optionally, the rain removing processing module is specifically configured to:
defining improved bilateral filtering according to the optimized rain mark mask M̄ and the luminance characteristics of rain marks in said image to be processed; realizing rain removal of the high-frequency image according to the improved bilateral filtering to obtain a high-frequency rain-free image; and fusing the high-frequency rain-free image with the low-frequency image I_LF to obtain the target image.
Optionally, the rain removal processing module is further configured to:
carrying out propagation filtering on the image to be processed and taking the filtered image as a guide map; performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF; and subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
Optionally, the identification module is further configured to:
acquiring the gradient amplitude and gradient direction of the high-frequency grayscale image G; acquiring image areas in a sliding-window manner, counting in each image area the pixel points whose gradient amplitude is greater than T1, and extracting the pure rain areas according to the gradient directions of those pixel points.
Optionally, the learning module is specifically configured to:
obtaining p1 image blocks of the pure rain area as training samples and performing dictionary learning to obtain the rain dictionary D_R; performing sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image; performing sparse reconstruction according to the rain dictionary D_R and the sparse representation coefficients to obtain the high-frequency rain image; and setting a threshold T2 to obtain the rain mark mask M.
Optionally, the learning module is specifically configured to:
in the high-frequency color image, calculating the distance from each pixel point whose rain mark mask coefficient value is 1 to the diagonal, and when the distance is greater than a set threshold T3, correcting the corresponding rain mark mask coefficient value to 0 to obtain the optimized rain mark mask M̄.
The invention provides a method and a device for removing rain from a single image. A pure rain area in the image to be processed is first identified and used as input for dictionary learning and sparse representation, yielding a rain dictionary. The image to be processed is then sparsely reconstructed based on the rain dictionary to obtain a rain mark mask. Finally, the image to be processed is de-rained through the rain mark mask to obtain a target image, i.e. an image free of rain marks. Because the method processes only rain pixels, non-rain pixels are preserved. The de-rained image therefore contains fewer rain residues, richer edges and textures are retained in the target image, and de-raining accuracy is greatly improved.
Drawings
Fig. 1 is a schematic flow chart of a method for removing rain from a single image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another method for removing rain from a single image according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of another method for removing rain from a single image according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of another method for removing rain from a single image according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a single image rain removing device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a universal terminal according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of a single image rain removing method according to the present invention;
FIG. 8 is an exploded view of a rained image according to the present invention;
FIG. 9(a) is a schematic diagram of the gradient amplitude of a high frequency grayscale image;
fig. 9(b) is a schematic view of the gradient direction of the rain mark;
FIG. 9(c) is a diagram illustrating the gradient direction statistics of rain drop;
FIG. 10(a) is a schematic diagram of a high frequency grayscale image;
FIG. 10(b) is a schematic view of a pure rain zone;
FIG. 10(c) is a schematic diagram of the learned rain dictionary D_R;
FIG. 10(d) is a schematic of a reconstructed high frequency rain image;
fig. 10(e) is a schematic view of a rain mark mask M;
FIG. 11(a) is a schematic diagram of an RGB color space;
FIG. 11(b) is a schematic view of a rain mark mask M;
FIG. 11(c) is a schematic diagram of the optimized rain mark mask M̄;
FIG. 11(d) is a diagram illustrating the contents of the blocks in FIG. 11 (b);
FIG. 11(e) is a diagram illustrating the contents of the boxes in FIG. 11 (c);
FIG. 12 is a schematic diagram of a test artwork and its details;
fig. 12(a) shows a test original 1;
FIG. 12(b) is a diagram illustrating the content of the box of the test original 1;
fig. 12(c) shows the test original image 2;
FIG. 12(d) is a diagram illustrating the content of the box of the test artwork 2;
fig. 12(e) shows the test original image 3;
FIG. 12(f) is a diagram illustrating the content of the box of the test artwork 3;
FIG. 13 is a schematic diagram of the rain removal results of the composite rain map of the original test image 1 by different methods;
fig. 13(a) is a schematic view of a synthesized rain map of the test original 1;
FIG. 13(b) is a schematic diagram showing the result of rain removal by the Chen et al method;
FIG. 13(c) is a schematic diagram showing the result of rain removal by Ding et al;
FIG. 13(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 13(e) is a schematic representation of the rain removal results of the Luo et al process;
FIG. 13(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 14 is a schematic view of a detail of FIG. 13;
FIG. 14(a) is a schematic diagram of the contents of the block of FIG. 13 (a);
FIG. 14(b) is a schematic diagram of the contents of the block of FIG. 13 (b);
FIG. 14(c) is a schematic diagram of the contents of the block of FIG. 13 (c);
FIG. 14(d) is a schematic diagram of the contents of the block of FIG. 13 (d);
FIG. 14(e) is a schematic diagram of the contents of block in FIG. 13 (e);
FIG. 14(f) is a schematic diagram of the contents of block in FIG. 13 (f);
FIG. 15 is a schematic diagram of the rain removal results of the composite rain map of the original test image 2 using different methods;
fig. 15(a) is a schematic view of a composite rain map of the test original 2;
FIG. 15(b) is a schematic diagram of the rain removal results of the Chen et al. method;
FIG. 15(c) is a schematic diagram of the rain removal results of Ding et al;
FIG. 15(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 15(e) is a schematic representation of the rain removal results of the Luo et al process;
FIG. 15(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 16 is a schematic view of a detail of FIG. 15;
FIG. 16(a) is a schematic diagram of the contents of the block of FIG. 15 (a);
FIG. 16(b) is a schematic diagram of the contents of the block of FIG. 15 (b);
FIG. 16(c) is a schematic diagram of the contents of the block of FIG. 15 (c);
FIG. 16(d) is a schematic diagram of the contents of the block of FIG. 15 (d);
FIG. 16(e) is a schematic diagram of the contents of the block in FIG. 15 (e);
FIG. 16(f) is a schematic diagram of the contents of the block in FIG. 15 (f);
FIG. 17 is a schematic diagram of the rain removal results of the composite rain map of the original test image 3 by different methods;
fig. 17(a) is a schematic view of a synthesized rain map of the test original image 3;
FIG. 17(b) is a schematic diagram showing the result of rain removal by the method of Chen et al;
FIG. 17(c) is a schematic diagram showing the result of rain removal by Ding et al;
FIG. 17(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 17(e) is a schematic representation of the rain removal results of the Luo et al process;
FIG. 17(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 18 is a schematic view of a detail of FIG. 17;
FIG. 18(a) is a schematic diagram showing the contents of the block of FIG. 17 (a);
FIG. 18(b) is a schematic diagram of the contents of the block of FIG. 17 (b);
FIG. 18(c) is a schematic diagram of the contents of the block of FIG. 17 (c);
FIG. 18(d) is a schematic representation of the contents of the block of FIG. 17 (d);
FIG. 18(e) is a schematic diagram of the contents of the block in FIG. 17 (e);
FIG. 18(f) is a schematic representation of the contents of the block in FIG. 17 (f);
FIG. 19 is a schematic illustration of the de-raining results of a different method on a real rain map test image 4;
FIG. 19(a) is a schematic view of a real rain map test image 4;
FIG. 19(b) is a schematic diagram showing the result of rain removal by the Chen et al method;
FIG. 19(c) is a schematic diagram of the rain removal results of Ding et al;
FIG. 19(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 19(e) is a schematic representation of the rain removal results of the Luo et al process;
FIG. 19(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 20 is a schematic view of a detail of FIG. 19;
FIG. 20(a) is a schematic diagram of the contents of the block of FIG. 19 (a);
FIG. 20(b) is a schematic diagram of the contents of the block of FIG. 19 (b);
FIG. 20(c) is a schematic diagram of the contents of the block of FIG. 19 (c);
FIG. 20(d) is a schematic representation of the contents of the block of FIG. 19 (d);
FIG. 20(e) is a schematic diagram of the contents of the block in FIG. 19 (e);
FIG. 20(f) is a schematic representation of the contents of the block in FIG. 19 (f);
FIG. 21 is a schematic illustration of the de-raining results of different methods on a real rain map test image 5;
FIG. 21(a) is a schematic view of a real rain map test image 5;
FIG. 21(b) is a schematic diagram showing the result of rain removal by the method of Chen et al;
FIG. 21(c) is a schematic diagram showing the result of rain removal by Ding et al;
FIG. 21(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 21(e) is a schematic diagram showing the result of rain removal by the Luo et al method;
FIG. 21(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 22 is a schematic view of a detail of FIG. 21;
FIG. 22(a) is a schematic diagram showing the contents of the block of FIG. 21 (a);
FIG. 22(b) is a schematic diagram of the contents of the block of FIG. 21 (b);
FIG. 22(c) is a schematic diagram showing the contents of the block of FIG. 21 (c);
FIG. 22(d) is a schematic representation of the contents of the block of FIG. 21 (d);
FIG. 22(e) is a schematic diagram showing the contents of the block of FIG. 21 (e);
FIG. 22(f) is a schematic representation of the contents of the block of FIG. 21 (f);
FIG. 23 is a schematic illustration of the de-raining results of a different method on a real rain map test image 6;
FIG. 23(a) is a schematic view of a real rain map test image 6;
FIG. 23(b) is a schematic diagram showing the result of rain removal by the method of Chen et al;
FIG. 23(c) is a schematic diagram showing the result of rain removal by Ding et al;
FIG. 23(d) is a schematic representation of the rain removal results of Kim et al;
FIG. 23(e) is a schematic diagram showing the result of rain removal by the Luo et al method;
FIG. 23(f) is a schematic illustration of the rain removal results of an embodiment of the present invention;
FIG. 24 is a schematic view of a detail of FIG. 23;
FIG. 24(a) is a schematic diagram showing the contents of the block of FIG. 23 (a);
FIG. 24(b) is a schematic diagram showing the contents of the block of FIG. 23 (b);
FIG. 24(c) is a schematic diagram showing the contents of the block of FIG. 23 (c);
FIG. 24(d) is a schematic representation of the contents of the block of FIG. 23 (d);
FIG. 24(e) is a schematic diagram showing the contents of the block in FIG. 23 (e);
FIG. 24(f) is a diagram showing the contents of the block in FIG. 23 (f).
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Fig. 1 is a schematic flowchart of a method for removing rain from a single image according to an embodiment of the present invention. The method may be executed by a terminal device such as a PC, smartphone, or tablet computer. Referring to fig. 1, the method comprises:
100, identifying a pure rain area in an image to be processed;
101, performing dictionary learning and sparse representation with the pure rain area as input to obtain a rain dictionary;
102, performing sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask;
and 103, carrying out rain removal processing on the image to be processed according to the rain mark mask to obtain a target image.
According to the single-image rain removal method provided by the embodiment of the invention, the pure rain area in the image to be processed is first identified and used as input for dictionary learning and sparse representation, yielding a rain dictionary. Sparse reconstruction of the image to be processed based on the rain dictionary then yields a rain mark mask. Finally, the image to be processed is de-rained through the rain mark mask to obtain the target image, i.e. an image free of rain marks. Because the method processes only rain pixels, non-rain pixels are preserved. The de-rained image therefore contains fewer rain residues, richer edges and textures are retained in the target image, and de-raining accuracy is greatly improved.
Optionally, the image to be processed in step 100 is a high-frequency grayscale image.
Optionally, the rain dictionary includes the structural information of rain marks. The rain mark mask can be obtained by binarizing the high-frequency rain image produced by the sparse reconstruction.
Further, for step 100, one possible implementation is:
and 100a, extracting the pure rain area in the high-frequency gray level image according to the gradient feature of the rain drop edge pixel.
On the basis of fig. 1, fig. 2 is a schematic flow chart of another method for removing rain from a single image according to an embodiment of the present invention, and referring to fig. 2, before step 100, the method further includes:
Further, one possible implementation of step 102:
102-1, performing sparse reconstruction of the high-frequency image I_HF using the rain dictionary to obtain a high-frequency rain image, and setting a threshold to obtain the rain mark mask M;
On the basis of fig. 1, fig. 3 is a schematic flow chart of another single image rain removing method according to an embodiment of the present invention, and referring to fig. 3, a possible implementation manner of step 103:
step 103-1, defining improved bilateral filtering according to the optimized rain mark mask M̄ and the brightness characteristics of rain marks in the image to be processed;
103-2, removing rain from the high-frequency image according to the improved bilateral filtering to obtain a high-frequency rain-free image;
step 103-3, fusing the high-frequency rain-free image with the low-frequency image I_LF to obtain the target image.
On the basis of fig. 2, fig. 4 is a schematic flow chart of another single image rain removing method according to an embodiment of the present invention, and referring to fig. 4, a possible implementation manner of step 104:
104-1, performing propagation filtering on the image to be processed, and taking the filtered image as a guide map;
step 104-2, performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF;
step 104-3, subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
Further, an embodiment of the present invention further provides an implementation manner for acquiring a pure rain area:
acquiring the gradient magnitude and gradient direction of the high-frequency grayscale image G;
acquiring image regions in a sliding-window manner, counting the pixels whose gradient magnitude is greater than T1 in each region, and extracting the pure rain region according to the gradient directions of those pixels.
Optionally, one possible implementation manner of step 101 is:
step 101-1, acquiring p1 image blocks from the pure rain region as training samples and performing dictionary learning to obtain the rain dictionary D_R;
Another possible implementation of step 102:
step 102-3, performing sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image;
step 102-4, performing sparse reconstruction according to the rain dictionary D_R and the sparse representation coefficients to obtain the high-frequency rain image, and setting a threshold T2 to obtain the rain mark mask M.
Further, one possible implementation of step 102-2:
in the high-frequency color image, calculating the distance from each pixel whose rain mark mask coefficient is 1 to the diagonal, and, when the distance is greater than a set threshold T3, correcting the corresponding rain mark mask coefficient to 0 to obtain the optimized rain mark mask.
In order to implement the single image rain removing method provided by the above embodiments, the invention provides a single image rain removing device for performing the steps of the above embodiments to obtain corresponding technical effects. Specifically, fig. 5 is a schematic structural diagram of a single image rain removing device according to an embodiment of the present invention, and referring to fig. 5, the device includes: an identification module 200, a learning module 201, a reconstruction module 202 and a rain removal processing module 203;
the identification module 200 is used for identifying a pure rain area in the image to be processed;
a learning module 201, configured to perform dictionary learning and sparse representation with the pure rain region as input, so as to obtain a rain dictionary;
a reconstruction module 202, configured to perform sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask;
and the rain removing processing module 203 is configured to perform rain removing processing on the image to be processed according to the rain mark mask to obtain a target image.
According to the single-image rain removal device provided by the embodiment of the invention, the identification module identifies a pure rain region in the image to be processed, and the learning module then uses this region as input for dictionary learning and sparse representation to obtain a rain dictionary. The reconstruction module performs sparse reconstruction of the image to be processed based on the rain dictionary to obtain a rain mark mask. Finally, the rain removal processing module uses the rain mark mask to remove rain from the image to be processed and produce the target image, i.e. an image free of rain marks. Because the device only processes rain pixels, rain-free pixels are preserved unchanged. As a result, fewer rain-mark residues remain after rain removal, richer edges and textures are retained in the target image, and the accuracy of rain removal is greatly improved.
Optionally, the image to be processed is a high-frequency grayscale image.
Optionally, the identification module is specifically configured to:
and extracting the pure rain area in the high-frequency gray image according to the gradient feature of the rain drop edge pixel.
Optionally, the identification module is further configured to, before extracting the pure rain region from the high-frequency grayscale image according to the gradient features of rain-mark edge pixels, decompose the image to be processed by propagation filtering and guided filtering to obtain a low-frequency image I_LF and a high-frequency image I_HF, and perform graying processing on the high-frequency image I_HF to obtain the high-frequency grayscale image G.
Optionally, the reconstruction module is specifically configured to:
using the rain dictionary to perform sparse reconstruction on the high-frequency image I_HF, obtaining a high-frequency rain image, and setting a threshold to obtain the rain mark mask M; and optimizing the rain mark mask M to obtain the optimized rain mark mask.
Optionally, the rain removing processing module is specifically configured to:
defining an improved bilateral filter according to the optimized rain mark mask and the brightness characteristics of the rain marks in the image to be processed; removing rain from the high-frequency image by the improved bilateral filter to obtain a high-frequency rain-free image; and fusing the high-frequency rain-free image with the low-frequency image I_LF to obtain the target image.
Optionally, the rain removing processing module is further configured to:
performing propagation filtering on the image to be processed and taking the filtered image as a guide map; performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF; and subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
Optionally, the identification module is further configured to:
acquiring the gradient magnitude and gradient direction of the high-frequency grayscale image G; acquiring image regions in a sliding-window manner, counting the pixels whose gradient magnitude is greater than T1 in each region, and extracting the pure rain region according to the gradient directions of those pixels.
Optionally, the learning module is specifically configured to:
acquiring p1 image blocks from the pure rain region as training samples and performing dictionary learning to obtain the rain dictionary D_R; performing sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image; and performing sparse reconstruction according to the rain dictionary D_R and the sparse representation coefficients to obtain the high-frequency rain image, then setting a threshold T2 to obtain the rain mark mask M.
Optionally, the learning module is specifically configured to:
in the high-frequency color image, calculating the distance from each pixel whose rain mark mask coefficient is 1 to the diagonal, and, when the distance is greater than a set threshold T3, correcting the corresponding rain mark mask coefficient to 0 to obtain the optimized rain mark mask.
Optionally, fig. 6 is a schematic structural diagram of a general terminal provided in the embodiment of the present invention, where the general terminal may include: a processor 300, a camera 301 and a screen 302;
the processor 300 may implement the functions corresponding to the identification module 200, the learning module 201, the reconstruction module 202, and the rain removal processing module 203;
the camera 301 is used for acquiring an image to be processed;
a screen 302 for displaying a target image.
In order to more clearly describe the technical contents of the present invention, the following further description is given in conjunction with specific embodiments. Fig. 7 is a schematic flow chart of a single image rain removing method provided by the present invention, and taking rain removal in fig. 12, fig. 19, fig. 21, and fig. 23 as an example, the image rain removing method provided by the present invention is adopted, and the steps are as follows:
1. establishing a rain image decomposition model:
I=B+R (1)
where I is the rainy image, B is the background image, and R is the rain layer. As equation (1) shows, a rainy image can be regarded as a linear superposition of the background and the rain layer. The essence of image rain removal is to separate the rain layer from the rainy image, thereby improving the visibility of the background.
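The additive model of equation (1) can be illustrated with a short sketch; all data below are synthetic placeholders, not values from the patent:

```python
import numpy as np

# Decomposition model of eq. (1): a rainy image I is the pixel-wise
# sum of a background layer B and a rain-streak layer R.
rng = np.random.default_rng(0)
B = rng.uniform(0.2, 0.8, size=(64, 64))   # background layer
R = np.zeros_like(B)                       # rain-streak layer
R[10:40, 20] = 0.5                         # one synthetic vertical streak
I = np.clip(B + R, 0.0, 1.0)               # rainy image

# De-raining amounts to recovering B from I, i.e. separating out R.
residual = I - B                           # equals R wherever no clipping occurred
```

The residual equals R exactly on pixels where B + R stayed within [0, 1].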
2. Fig. 8 shows the rain-image decomposition flow of the invention; referring to fig. 8, the decomposition is implemented as follows:
the rain image I is first subjected to propagation filtering, which is defined as follows:
where I(y) and I(x) are the intensity values of pixels y and x, Ω(x) denotes the region centered at x, and g(·;σ) is a Gaussian function, with σ_a and σ_r the corresponding Gaussian variances; and
where φ is the set of all pixels on the region-adjacency path connecting pixels x and y.
Then the output image I_p is used as the guide image, and guided filtering is applied to the rainy image:
I_LF = a .* I_p + b (4)
where a and b are the coefficient matrices corresponding to I_p, and I_LF is the low-frequency image obtained after filtering.
To obtain the high-frequency image, the low-frequency image I_LF is subtracted from the rainy image I:
I_HF = I - I_LF (5)
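Equations (4) and (5) can be sketched as follows. The guided filter below follows the classic box-filter formulation of He et al., and a Gaussian blur stands in for the propagation-filtered guide I_p, whose exact kernel is not reproduced in this text; radius and eps are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter: output a*guide + b per local window (cf. eq. 4)."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1, mode="reflect")
    mean_g, mean_s = box(guide), box(src)
    cov = box(guide * src) - mean_g * mean_s     # covariance of guide and source
    var = box(guide * guide) - mean_g * mean_g   # variance of the guide
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)               # I_LF = a .* I_p + b, smoothed coefficients

rng = np.random.default_rng(1)
I = rng.uniform(0, 1, (64, 64))       # stand-in rainy image
I_p = gaussian_filter(I, 2.0)         # stand-in guide; the patent uses propagation filtering
I_LF = guided_filter(I_p, I)          # low-frequency layer (eq. 4)
I_HF = I - I_LF                       # high-frequency layer (eq. 5)
```

By construction the two layers sum back to the input image exactly.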
3. extracting a pure rain area:
calculating the gradient amplitude and the gradient direction of the high-frequency gray level image G, wherein the formula is as follows:
where G(s,t) is the gray value of pixel (s,t) in image G, G_x(s,t) is the horizontal gradient, G_y(s,t) is the vertical gradient, Mag(s,t) is the gradient magnitude of pixel (s,t), and θ(s,t) is its gradient direction. Fig. 9(a) shows the gradient magnitudes of the high-frequency grayscale image: magnitudes are large at texture edges and rain-mark edges, while those of smooth regions approach 0.
As shown in fig. 9(b), a rain mark has a certain width, and the gradient directions of the edge pixels on its two sides lie along the same line but point in opposite directions. Directions differing by π can therefore be grouped into the same class, and the gradient direction is redefined as follows:
as shown in fig. 9(c), 2 pi is divided into 36 sectors each of which is 10 °, and as can be seen from equation (7), the gradient direction differing by pi falls into the same class, for example, the sectors [0,10 ° ] belong to the same class as the sectors [ -170 °, -180 ° ]. The raindrop is distributed in the whole image scene, therefore, if the corresponding gradient directions of the pixel points (namely texture edges or raindrop edges) meeting the requirement of a certain gradient amplitude value in a certain region belong to the same sector, the region can be considered to only contain the raindrop and have no other obvious textures, and the region is judged to be a pure rain region. The extraction steps are as follows:
1) in the high-frequency grayscale image, select r × r image regions (r = 80) in a sliding-window manner;
2) set a threshold T1, count the pixels whose gradient magnitude exceeds T1 in each window, and tally their gradient directions per sector to obtain a normalized sector histogram, defined as follows:
where n_i^j denotes the number of such pixels falling in the i-th sector of the j-th window, and Σ_i n_i^j the total number of such pixels over all sectors of the j-th window, so that the normalized histogram is η_i^j = n_i^j / Σ_i n_i^j;
3) select the window j with the largest η (i.e. η → 1) as the pure rain region, as shown in fig. 10(b); the formula is as follows:
in the formula, h and w are the height and width of the image.
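The extraction steps 1)–3) can be sketched as follows; the window stride, threshold value, sector count (36 sectors with π-apart directions merged gives 18 classes) and the synthetic test pattern are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def pure_rain_window(G, r=80, T1=0.05, n_sectors=18):
    """Slide r x r windows; pick the one whose strong-gradient pixels share a
    single gradient-direction sector (directions differing by pi merged, cf. eq. 7)."""
    Gy, Gx = np.gradient(G)
    mag = np.hypot(Gx, Gy)
    theta = np.arctan2(Gy, Gx) % np.pi          # fold pi-apart directions together
    sector = np.minimum((theta / np.pi * n_sectors).astype(int), n_sectors - 1)
    best_eta, best_xy = -1.0, (0, 0)
    h, w = G.shape
    for y in range(0, h - r + 1, r // 2):
        for x in range(0, w - r + 1, r // 2):
            sel = mag[y:y + r, x:x + r] > T1    # pixels passing the magnitude test
            if sel.sum() == 0:
                continue
            hist = np.bincount(sector[y:y + r, x:x + r][sel], minlength=n_sectors)
            eta = hist.max() / hist.sum()       # peak of the normalized sector histogram
            if eta > best_eta:
                best_eta, best_xy = eta, (y, x)
    return best_xy, best_eta

# Synthetic check: an image of parallel vertical streaks should give eta near 1,
# since all edge gradients point along one (folded) direction.
G = np.zeros((160, 160))
G[:, ::8] = 1.0
(y0, x0), eta = pure_rain_window(G, r=80)
```

A window of pure parallel streaks concentrates all strong-gradient pixels in one sector, so η approaches 1 exactly as the text describes.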
4. Sparse reconstruction of high-frequency gray images:
From the extracted pure rain region, p1 image blocks of size 8 × 8 are randomly sampled as training samples to learn a rain dictionary; the model can be expressed as follows:
min_{D_R, α} Σ_{k1=1}^{p1} (1/2)·||y_k1 − D_R·α_k1||_2^2 + λ·||α_k1||_1 (10)
where y_k1 is the k1-th image block of the pure rain region, k1 = 1, 2, …, p1; α_k1 is its corresponding sparse representation coefficient; D_R ∈ R^(m×n) is the rain dictionary; and λ is a regularization parameter balancing representation fidelity against sparsity. Equation (10) is solved with an online dictionary learning method to obtain the rain dictionary D_R, as shown in fig. 10(c).
The high-frequency image is likewise divided into blocks, k2 = 1, 2, …, p2, and the sparse representation coefficients of the high-frequency image are obtained with the OMP algorithm:
Combining the rain dictionary D_R with the sparse representation coefficients of the high-frequency image, the high-frequency image blocks are reconstructed to obtain the high-frequency rain image:
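The dictionary-learning and sparse-coding steps can be sketched with scikit-learn, whose `MiniBatchDictionaryLearning` is an online solver of the eq. (10) objective and whose `sparse_encode` performs OMP coding; the random data, atom count, and parameters below are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(2)
region = rng.uniform(-0.5, 0.5, (40, 40))            # stand-in for the pure rain region
patches = extract_patches_2d(region, (8, 8), max_patches=200, random_state=0)
Y = patches.reshape(len(patches), -1)                 # p1 training samples, 8x8 -> 64-d

# Learn the rain dictionary D_R with an online solver (cf. eq. 10).
dl = MiniBatchDictionaryLearning(n_components=32, alpha=0.15, random_state=0)
D_R = dl.fit(Y).components_                           # 32 atoms of dimension 64

# Sparse-code high-frequency patches with OMP (cf. eq. 11) and reconstruct (cf. eq. 12).
X = rng.uniform(-0.5, 0.5, (50, 64))                  # stand-in I_HF patches
codes = sparse_encode(X, D_R, algorithm="omp", n_nonzero_coefs=5)
X_rain = codes @ D_R                                  # reconstructed high-frequency rain patches
```

Averaging the reconstructed patches back into image positions would then give the detected rain layer described in the next step.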
5. Obtaining a rain mark mask:
The reconstructed high-frequency image blocks are averaged to obtain the detected rain mark mask, as shown in fig. 10(d).
A threshold T2 is set: pixels of the rain mark mask whose brightness exceeds T2 are set to 1 and all others to 0, yielding the binary rain mark mask M of the rainy image, as shown in fig. 10(e); the formula is:
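The binarization of eq. (13) is a single thresholding operation; the data and threshold below are placeholders:

```python
import numpy as np

# Threshold the reconstructed high-frequency rain layer into a binary mask:
# pixels brighter than T2 are flagged as rain (1), everything else as background (0).
rng = np.random.default_rng(3)
H_rain = rng.uniform(0, 1, (64, 64))   # stand-in reconstructed high-frequency rain image
T2 = 0.6
M = (H_rain > T2).astype(np.uint8)
```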
6. optimizing a rain mark mask:
In the high-frequency color image, the distance from each pixel whose rain mark mask coefficient is 1 to the diagonal is calculated as follows:
where r, g, b are the pixel's three primary color channel values; when d is greater than the threshold T3, the corresponding mask coefficient is corrected to 0.
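Since rain marks are nearly achromatic, a large distance from a pixel's (r, g, b) vector to the diagonal r = g = b indicates a false detection. The exact form of eq. (14) is not reproduced in this text, so the Euclidean point-to-line distance below is a plausible stand-in; data and T3 are placeholders:

```python
import numpy as np

def diagonal_distance(rgb):
    """Euclidean distance from each pixel's (r, g, b) vector to the
    achromatic line r = g = b (assumed stand-in for eq. 14)."""
    mean = rgb.mean(axis=-1, keepdims=True)    # projection onto the diagonal
    return np.linalg.norm(rgb - mean, axis=-1)

rng = np.random.default_rng(4)
hf_color = rng.uniform(0, 1, (32, 32, 3))                 # stand-in high-frequency color image
M = (rng.uniform(0, 1, (32, 32)) > 0.7).astype(np.uint8)  # stand-in binary rain mask

# Mask pixels far from the diagonal (d > T3) are false detections -> corrected to 0.
T3 = 0.3
d = diagonal_distance(hf_color)
M_opt = np.where((M == 1) & (d > T3), 0, M)
```

A perfectly gray pixel sits on the diagonal, so its distance is zero and it can never be rejected by this test.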
7. Removing rain from the high-frequency color image:
An improved bilateral filter is defined, with the formula:
where Ω(x) denotes the neighborhood centered at x, and I(y), I(x) are the intensity values of the high-frequency image pixels y and x; the mask term eliminates the interference of rain pixels within the neighborhood. Meanwhile, when a neighborhood pixel value is larger than the central pixel, the two pixels come from different targets; the influence of such neighborhood pixels is eliminated during rain removal by introducing a binary function B_xy:
Equation (16) is applied to each of the RGB channels of the high-frequency image to obtain the high-frequency rain-free image.
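The masked bilateral filter can be sketched as follows on a single channel. The exact weighting of eq. (16) is not reproduced in this text, so Gaussian spatial and range kernels are assumed, combined with the non-rain mask term and the binary function B_xy described above; all parameters and data are placeholders:

```python
import numpy as np

def masked_bilateral(I, M, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Sketch of the improved bilateral filter: rain pixels (M == 1) are rebuilt
    from non-rain neighbours that are no brighter than the centre pixel (B_xy)."""
    h, w = I.shape
    out = I.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial Gaussian kernel
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            if M[y, x] == 0:
                continue                       # non-rain pixels are left untouched
            patch = I[y - radius:y + radius + 1, x - radius:x + radius + 1]
            m = M[y - radius:y + radius + 1, x - radius:x + radius + 1]
            range_w = np.exp(-(patch - I[y, x])**2 / (2 * sigma_r**2))
            B = (patch <= I[y, x])             # B_xy: exclude brighter neighbours
            wgt = spatial * range_w * (1 - m) * B
            if wgt.sum() > 0:
                out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(5)
I_HF = rng.uniform(0, 0.3, (32, 32))           # stand-in high-frequency channel
M = np.zeros((32, 32), np.uint8)
M[10:20, 16] = 1
I_HF[10:20, 16] += 0.5                         # bright synthetic rain streak
clean = masked_bilateral(I_HF, M)
```

Only masked pixels change, which is exactly the property the text emphasizes: rain-free pixels pass through unmodified.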
8. Acquiring a rain-free image:
The high-frequency rain-free image and the low-frequency image are added to obtain the final rain-free image.
To verify the effectiveness of the proposed method, its results are compared with those of other methods on a large number of synthetic and natural rain images. Fig. 12 to fig. 24 show experimental results on some of the synthetic and real rain images.
In the synthetic-rain experiments, fig. 12(a), 12(c) and 12(e) are the original rain-free images, and fig. 13(a), 15(a) and 17(a) are the corresponding synthetic rain images containing a large number of rain marks. Fig. 13 to 18(b) show the rain removal results of Chen et al.: although no obvious rain marks remain, the background texture is somewhat blurred. Fig. 13 to 18(c) show the results of Ding et al., in which traces of rain marks remain. Fig. 13 to 18(d) show the results of Kim et al., in which a large amount of rain marks remain. Fig. 13 to 18(e) show the results of Luo et al., in which some rain marks also remain. Fig. 13 to 18(f) show the results of the proposed method: because rain marks are detected accurately and rain removal is applied only to rain pixels, blurring or distortion of rain-free pixels is avoided, and since the differences and relations between rain marks and the background image are fully considered, the detail texture of the background is preserved to the greatest extent. The comparison of local enlargements likewise shows that the proposed method yields more natural and clearer results, retains more image detail, and is closer to the real image. The synthetic-rain results were further evaluated objectively with Visual Information Fidelity (VIF) [16] and Structural Similarity (SSIM) [17], as shown in table 1:
Visual Information Fidelity (VIF) evaluates image quality from the viewpoint of shared information and communication. The formula is as follows:
where the two mutual-information terms represent the information the brain can extract from corresponding subbands of the reference image and the processed image, respectively. The larger the VIF value, the better the image quality.
Structural Similarity (SSIM) measures the similarity between two images using the brightness, contrast and structure of the reference image and the processed image. The formula is as follows:
where C1, C2, C3 are very small constants, u_x, u_y are the means of the reference image and the processed image, and σ_x, σ_y their standard deviations. The larger the SSIM value, the greater the similarity between the two images.
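SSIM can be computed with scikit-image; a minimal check follows (VIF has no comparably standard library implementation and is omitted; the images below are synthetic):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(6)
ref = rng.uniform(0, 1, (64, 64))                              # stand-in reference image
noisy = np.clip(ref + rng.normal(0, 0.1, ref.shape), 0, 1)     # degraded copy

ssim_same = structural_similarity(ref, ref, data_range=1.0)    # identical -> 1.0
ssim_noisy = structural_similarity(ref, noisy, data_range=1.0) # degraded -> lower
```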
Table 1: evaluation index for rain removal
As table 1 shows, the proposed method outperforms the other methods: its visual fidelity is higher, its results are closer to the real image, and the rain removal effect is evident.
Fig. 19 to 24 compare the rain removal results on real rain images and their detail enlargements, where (b) in fig. 19 to 24 shows the results of Chen et al., (c) those of Ding et al., (d) those of Kim et al., (e) those of Luo et al., and (f) those of the proposed method. For example, the texture of the earthen wall in test image 4, the leafy background in test image 5 and the road in test image 6 are clearer than with the other methods and exhibit higher detail fidelity.
The invention provides a new framework for rain mark detection and rain removal in a single image. First, a pure rain region is extracted based on the prior that the gradient directions of the edge pixels on the two sides of a rain mark are aligned but opposite. Then, dictionary learning on the pure rain region yields a rain dictionary, with which the high-frequency image is sparsely reconstructed and a binary rain mark mask is established. Most importantly, bilateral filtering is improved by combining the binary rain mark mask with the brightness characteristics of rain marks, so that only rain pixels are processed while rain-free pixels are preserved. As a result, fewer rain-mark residues remain after rain removal, richer edges and textures are retained in the target image, and the accuracy of rain removal is greatly improved.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for removing rain from a single image is characterized by comprising the following steps:
decomposing the image to be processed by propagation filtering and guided filtering to obtain a low-frequency image I_LF and a high-frequency image I_HF, the image to be processed being a rainy image;
performing graying processing on the high-frequency image I_HF to obtain a high-frequency grayscale image G;
extracting a pure rain region from the high-frequency grayscale image according to the gradient features of rain-mark edge pixels;
taking the pure rain area as input to carry out dictionary learning and sparse representation thereof, and obtaining a rain dictionary;
performing sparse reconstruction on the high-frequency image I_HF using the rain dictionary to obtain a high-frequency rain image, and setting a threshold to obtain a rain mark mask M;
defining an improved bilateral filter according to the optimized rain mark mask and the brightness characteristics of the rain marks in the image to be processed, the improved bilateral filter being defined to satisfy the following formula:
wherein Ω(x) denotes a neighborhood centered at x, and I(y), I(x) represent the intensity values of the high-frequency image pixels y and x; the mask term eliminates the interference of rain pixels within the neighborhood; when a neighborhood pixel value is larger than the central pixel, the two pixels come from different targets, and the influence of such neighborhood pixels is eliminated during rain removal by introducing a binary function B_xy, whose formula is as follows:
removing rain from the high-frequency image by the improved bilateral filter to obtain a high-frequency rain-free image; and fusing the high-frequency rain-free image with the low-frequency image I_LF to obtain a target image.
2. The single-image rain removal method according to claim 1, wherein decomposing the image to be processed by propagation filtering and guided filtering to obtain the low-frequency image I_LF and the high-frequency image I_HF comprises:
carrying out propagation filtering on the image to be processed, and taking the filtered image as a guide map;
performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF;
subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
3. The single-image rain removal method according to claim 1, further comprising:
acquiring the gradient amplitude and the gradient direction of the high-frequency gray level image G;
the extracting the pure rain area in the high-frequency gray level image according to the gradient feature of the rain drop edge pixel comprises the following steps:
acquiring image regions in a sliding-window manner, counting the pixels whose gradient magnitude is greater than T1 in each image region, and extracting the pure rain region according to the gradient directions of those pixels.
4. The single-image rain removal method according to claim 1, wherein performing dictionary learning and sparse representation with the pure rain region as input to obtain a rain dictionary comprises:
acquiring p1 image blocks from the pure rain region as training samples and performing dictionary learning to obtain the rain dictionary D_R;
Performing sparse reconstruction on the image to be processed according to the rain dictionary to obtain a rain mark mask, wherein the method comprises the following steps:
performing sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image.
5. The single-image rain removal method according to claim 1, wherein optimizing the rain mark mask M to obtain the optimized rain mark mask comprises:
in the high-frequency color image, calculating the distance from each pixel whose rain mark mask coefficient is 1 to the diagonal, and, when the distance is greater than a set threshold T3, correcting the corresponding rain mark mask coefficient to 0 to obtain the optimized rain mark mask.
6. A single image rain removal device, comprising:
an identification module, configured to decompose the image to be processed by propagation filtering and guided filtering to obtain a low-frequency image I_LF and a high-frequency image I_HF, the image to be processed being a rainy image; perform graying processing on the high-frequency image I_HF to obtain a high-frequency grayscale image G; and extract a pure rain region from the high-frequency grayscale image according to the gradient features of rain-mark edge pixels;
the learning module is used for performing dictionary learning and sparse representation by taking the pure rain area as input to obtain a rain dictionary;
a reconstruction module, configured to perform sparse reconstruction on the high-frequency image I_HF using the rain dictionary to obtain a high-frequency rain image, set a threshold to obtain a rain mark mask M, and optimize the rain mark mask M to obtain the optimized rain mark mask;
a rain removal processing module, configured to define an improved bilateral filter according to the optimized rain mark mask and the brightness characteristics of the rain marks in the image to be processed; remove rain from the high-frequency image by the improved bilateral filter to obtain a high-frequency rain-free image; and fuse the high-frequency rain-free image with the low-frequency image I_LF to obtain a target image;
the improved bilateral filtering is defined to satisfy the following formula:
wherein Ω(x) denotes a neighborhood centered at x, and I(y), I(x) represent the intensity values of the high-frequency image pixels y and x; the mask term eliminates the interference of rain pixels within the neighborhood; when a neighborhood pixel value is larger than the central pixel, the two pixels come from different targets, and the influence of such neighborhood pixels is eliminated during rain removal by introducing a binary function B_xy, whose formula is as follows:
7. the single image rain removal device of claim 6, wherein the rain removal processing module is further configured to:
performing propagation filtering on the image to be processed and taking the filtered image as a guide map; performing guided filtering on the image to be processed according to the guide map to obtain the low-frequency image I_LF; and subtracting the low-frequency image I_LF from the image to be processed to obtain the high-frequency image I_HF.
8. The single image rain removal device of claim 6, wherein the identification module is further configured to:
acquire the gradient magnitude and gradient direction of the high-frequency grayscale image G; acquire image regions in a sliding-window manner, count the pixels whose gradient magnitude is greater than T1 in each image region, and extract the pure rain region according to the gradient directions of those pixels.
9. The single-image rain removal device according to claim 6, wherein the learning module is specifically configured to:
acquire p1 image blocks from the pure rain region as training samples and perform dictionary learning to obtain the rain dictionary D_R; perform sparse representation of the high-frequency image I_HF to obtain the sparse representation coefficients of the high-frequency image; and perform sparse reconstruction according to the rain dictionary D_R and the sparse representation coefficients to obtain the high-frequency rain image, then set a threshold T2 to obtain the rain mark mask M.
10. The single-image rain removal device according to claim 6, wherein the learning module is specifically configured to: in the high-frequency color image, calculate the distance from each pixel whose rain mark mask coefficient is 1 to the diagonal, and, when the distance is greater than a set threshold T3, correct the corresponding rain mark mask coefficient to 0 to obtain the optimized rain mark mask.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710447576.6A CN107240084B (en) | 2017-06-14 | 2017-06-14 | Method and device for removing rain from single image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107240084A CN107240084A (en) | 2017-10-10 |
CN107240084B true CN107240084B (en) | 2021-04-02 |
Family
ID=59987389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710447576.6A Active CN107240084B (en) | 2017-06-14 | 2017-06-14 | Method and device for removing rain from single image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107240084B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019181096A1 (en) * | 2018-03-19 | 2019-09-26 | ソニー株式会社 | Image processing device, image processing method, and program |
CN109035157A (en) * | 2018-06-25 | 2018-12-18 | 华南师范大学 | A kind of image rain removing method and system based on static rain line |
CN109636738B (en) * | 2018-11-09 | 2019-10-01 | 温州医科大学 | The single image rain noise minimizing technology and device of double fidelity term canonical models based on wavelet transformation |
CN109552255B (en) * | 2018-11-19 | 2020-08-04 | 厦门理工学院 | Wiper blade rubber strip fault detection method and system based on jitter state |
CN110018529B (en) * | 2019-02-22 | 2021-08-17 | 南方科技大学 | Rainfall measurement method, rainfall measurement device, computer equipment and storage medium |
CN109886900B (en) * | 2019-03-15 | 2023-04-28 | 西北大学 | Synthetic rain map rain removing method based on dictionary training and sparse representation |
CN110390654B (en) * | 2019-07-29 | 2022-11-01 | 华侨大学 | Post-processing method for multi-stage iterative collaborative representation of rain removal image |
CN110544217B (en) * | 2019-08-30 | 2021-07-20 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110866879B (en) * | 2019-11-13 | 2022-08-05 | 江西师范大学 | Image rain removing method based on multi-density rain print perception |
CN113538297B (en) * | 2021-08-27 | 2023-08-01 | 四川大学 | Image rain removing method based on gradient priori knowledge and N-S equation |
CN113902931B (en) * | 2021-09-17 | 2022-07-12 | 淮阴工学院 | Image rain removing method based on learning type convolution sparse coding |
CN117152000B (en) * | 2023-08-08 | 2024-05-14 | 华中科技大学 | Rainy day image-clear background paired data set manufacturing method and device and application thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101860651A (en) * | 2010-06-17 | 2010-10-13 | 沈阳理工大学 | System for adding rain to video images |
CN105303526A (en) * | 2015-09-17 | 2016-02-03 | 哈尔滨工业大学 | Ship target detection method based on coastline data and spectral analysis |
CN106327452A (en) * | 2016-08-14 | 2017-01-11 | 曾志康 | Fragmented remote sensing image synthesis method and device for cloudy and rainy region |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101210625B1 (en) * | 2010-12-28 | 2012-12-11 | 주식회사 케이티 | Method for filling common holes and 3D video system thereof
ES2481347B1 (en) * | 2012-12-26 | 2015-07-30 | Universidad De Almeria | Procedure for automatic interpretation of images for the quantification of nuclear tumor markers
Application Events
2017-06-14 | Application CN201710447576.6A filed in China; granted as CN107240084B | Active
Non-Patent Citations (1)
Title |
---|
Single-Frame-Based Rain Removal via Image Decomposition; Yu-Hsiang Fu et al.; IEEE; 2011-07-12; pp. 1454-1456 and Figs. 1-2 *
Also Published As
Publication number | Publication date |
---|---|
CN107240084A (en) | 2017-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107240084B (en) | Method and device for removing rain from single image | |
CN104794688B (en) | Single-image defogging method and device based on depth-information separation of sky regions | |
Negru et al. | Exponential contrast restoration in fog conditions for driving assistance | |
CN111047530A (en) | Underwater image color correction and contrast enhancement method based on multi-feature fusion | |
CN110728640B (en) | Dual-channel single-image fine rain removal method | |
CN112419163B (en) | Single image weak supervision defogging method based on priori knowledge and deep learning | |
CN114219732A (en) | Image defogging method and system based on sky region segmentation and transmissivity refinement | |
Gao et al. | A novel UAV sensing image defogging method | |
Das et al. | A comparative study of single image fog removal methods | |
Kim et al. | Adaptive patch based convolutional neural network for robust dehazing | |
Tangsakul et al. | Single image haze removal using deep cellular automata learning | |
Chen et al. | Visual depth guided image rain streaks removal via sparse coding | |
CN105608683A (en) | Defogging method of single image | |
Thepade et al. | Improved haze removal method using proportionate fusion of color attenuation prior and edge preserving | |
CN113298857A (en) | Bearing defect detection method based on neural network fusion strategy | |
CN109544470A (en) | Boundary-constrained convolutional neural network method for single-image defogging | |
Kim et al. | Single image dehazing of road scenes using spatially adaptive atmospheric point spread function | |
Wang et al. | Fast visibility restoration using a single degradation image in scattering media | |
Chen et al. | Robust video content alignment and compensation for clear vision through the rain | |
CN111932469A (en) | Saliency-weighted fast exposure image fusion method, device, equipment and medium | |
CN111932470A (en) | Image restoration method, device, equipment and medium based on visual selection fusion | |
Saxena et al. | Performance Analysis of Single Image Fog Expulsion Techniques | |
CN114240988B (en) | Image segmentation method based on nonlinear scale space | |
CN113781329B (en) | Fog removing method for remote sensing image | |
CN118505691B (en) | Electronic component detection system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||