MBLLEN: Low-light Image/Video Enhancement Using CNNs
Feifan Lv, Feng Lu, et al.
Beihang University (BUAA)
Abstract
We present a deep learning based method for low-light image enhancement. This
problem is challenging due to the difficulty in handling various factors simultaneously
including brightness, contrast, artifacts and noise. To address this task, we propose the
multi-branch low-light enhancement network (MBLLEN). The key idea is to extract rich
features up to different levels, so that we can apply enhancement via multiple subnets
and finally produce the output image via multi-branch fusion. In this manner, image
quality is improved from different aspects. Through extensive experiments, our proposed
MBLLEN is found to outperform state-of-the-art techniques by a large margin. We additionally show that our method can be directly extended to handle low-light videos.
1 Introduction
Images and videos carry rich and detailed information of the real scenes. By capturing and
processing the image and video data, intelligent systems can be developed for various tasks
such as object detection, classification, segmentation, recognition, scene understanding and
3D reconstruction, and then used in many real applications, e.g., automated driving, video
surveillance and virtual/augmented reality.
However, real systems rely heavily on the quality of the input images/videos. In partic-
ular, they may perform well with high quality input data but perform badly otherwise. One
typical case is to use the images captured in the poorly illuminated environment. When a
camera cannot receive sufficient light during a capture, there will be information loss in the
dark region and unexpected noise, as shown in Figure 1. Using such low quality images due
to low light will certainly reduce the performance of most vision-based algorithms, and thus,
© 2018. The copyright of this document resides with its authors.
It may be distributed unchanged freely in print or electronic forms.
Corresponding Author: Feng Lu
This work is partially supported by NSFC under Grant 61602020 and Grant 61732016.
Figure 1: The proposed MBLLEN can produce high quality images from low-light inputs. It also performs well in suppressing the noise and artifacts in dark regions. Panels: (a) Input, (b) Dong [9], (c) NPE [37], (d) LIME [14], (e) Ying [43], (f) BIMEF [41], (g) Ours (MBLLEN), (h) Ground Truth.
it is highly desirable for real applications to enhance the quality of low-light images without requiring additional, expensive hardware.
Many methods for low-light image enhancement have been proposed in the literature.
They typically focus on restoring the image brightness and contrast, and suppressing the
unexpected visual effects like color distortion. Existing methods can be roughly divided into
two categories, namely, the histogram equalization-based methods, and the Retinex theory-
based methods. Algorithms in the former category optimize the pixel brightness based on the
idea of histogram equalization, while methods in the latter category recover the illumination
map of the scene and enhance different image regions accordingly.
Although remarkable progress has been made, there is still a lot of room for improvement. First, existing methods tend to rely on certain assumptions about pixel statistics or the visual mechanism, which may not be applicable in certain real scenarios. Second,
besides brightness/contrast optimization, other factors such as artifacts in the dark region and
image noise due to low-light capture should be handled more carefully. Finally, developing
effective techniques for low-light video enhancement requires additional efforts.
In this paper, we propose a novel method for low-light image enhancement that builds on the success of the latest deep learning techniques. At the core of our method is the proposed
fully convolutional neural network, namely the multi-branch low-light enhancement network
(MBLLEN). The MBLLEN consists of three types of modules, i.e., the feature extraction
module (FEM), the enhancement module (EM) and the fusion module (FM). The idea is to
learn to 1) extract rich features up to different levels via FEM, 2) enhance the multi-level
features respectively via EM and 3) obtain the final output by multi-branch fusion via FM.
In this manner, the MBLLEN is able to improve the image quality from different aspects and
accomplish the low-light enhancement task to its full extent.
Overall, our contributions are threefold. 1) We propose a novel method for low-light image enhancement based on deep neural networks, which improves both objective and subjective image quality. 2) Our method also works well in suppressing image noise and artifacts in low-light regions. 3) Our method can be directly extended to process low-light videos by exploiting temporal information. These properties make our method superior to existing methods, and both quantitative and qualitative evaluations demonstrate that it outperforms the state of the art by a large margin.
2 Related Work
This section briefly overviews existing techniques for low-light image/video enhancement.
Low-light image enhancement. Methods for low-light image enhancement can be
mainly divided into two categories. The first category is built upon the well-known his-
togram equalization (HE) technique and also uses additional priors and constraints. In partic-
ular, BPDHE [15] tries to preserve image brightness dynamically; Arici et al. [2] propose to
analyze and penalize the unnatural visual effects for better visual quality; DHECI [29] intro-
duces and uses the differential gray-level histogram; CVC [5] uses the interpixel contextual
information; LDR [26] focuses on the layered difference representation of 2D histogram to
try to enlarge the gray-level differences between adjacent pixels.
The other category is based on the Retinex theory [22], which assumes that an image is
composed of reflection and illumination. Typical methods, e.g., MSR [17] and SSR [18],
try to recover and use the illumination map for low-light image enhancement. Recently,
AMSR [24] proposes a weighting strategy based on SSR. NPE [37] balances the enhance-
ment level and image naturalness to avoid over-enhancement. MF [11] processes the illu-
mination map in a multi-scale fashion to improve the local contrast and maintain natural-
ness. SRIE [12] develops a weighted variational model for illumination map estimation.
LIME [14] considers both the illumination map estimation and denoising. BIMEF [41, 42]
proposes a dual-exposure fusion algorithm and Ying et al. [43] use the camera response
model for further enhancement. In general, conventional low-light enhancement methods
rely on certain statistical models and assumptions, which only partially explain real-world scenes.
Deep learning-based methods. Recently, deep learning has achieved great success in
the field of low-level image processing. Powerful tools such as end-to-end networks and
GANs [13] have been employed by various applications, including image super resolu-
tion [4, 23], image denoising [31, 44] and image-to-image translation [16, 45]. There
are also methods proposed for low-light image enhancement. LLNet [28] uses a deep autoencoder for low-light image denoising; however, it does not take advantage of recent developments in deep learning. Other CNN-based methods, such as LLCNN [35] and [34], do not handle brightness/contrast enhancement and image denoising simultaneously.
Low-light video enhancement. There is relatively little research on low-light video enhancement. Some methods [27, 36] also follow the Retinex theory, while others use the gamma correction technique [19] or frameworks similar to those for image dehazing [9, 30]. In order to
suppress artifacts, similar patches from adjacent frames can be used [21]. Although these
methods have achieved certain progress in low-light video enhancement, they still share
some limitations, e.g., temporal information has not been well utilized to avoid flickering.
3 Methodology
The proposed method is introduced in this section with all the necessary details. Due to the
complexity of the image content, it is often difficult for a simple network to achieve high
quality image enhancement. Therefore, we design the MBLLEN in a multi-branch fashion.
It decomposes the image enhancement problem into sub-problems related to different feature
levels, which can be solved respectively to produce the final output via multi-branch fusion.
The input to the MBLLEN is a low-light color image and the output is an enhanced clean
image of the same size. The overall network architecture and data processing flow are shown in Figure 2. The three modules, namely FEM, EM and FM, are described in detail later.
Figure 2: The proposed network with the feature extraction module (FEM), enhancement module (EM) and fusion module (FM). The FEM is a chain of 3×3 convolutions producing W×H×32 feature maps; each EM branch (#1 to #n) passes its feature level through small convolution and deconvolution layers (e.g., 8@3×3, 16@5×5, 8@5×5, 3@5×5) to produce a 3-channel output; the output image is produced via feature fusion in the FM.
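To make the multi-branch design concrete, below is a minimal Keras sketch of such a network. The branch count, channel widths and kernel sizes are assumptions read off Figure 2, and the names used here are illustrative; this is not the authors' released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_mbllen(num_branches=10, fem_channels=32):
    """Minimal multi-branch sketch in the spirit of MBLLEN.

    Branch count, channel widths and kernel sizes are assumptions taken
    from Figure 2, not the authors' released implementation.
    """
    inp = layers.Input(shape=(None, None, 3))

    # FEM: a chain of 3x3 convolutions; every intermediate feature map is
    # tapped and routed to its own enhancement branch.
    feats, x = [], inp
    for _ in range(num_branches):
        x = layers.Conv2D(fem_channels, 3, padding='same', activation='relu')(x)
        feats.append(x)

    # EM: each branch maps its feature level back to a 3-channel image with a
    # small conv/deconv subnet.
    def enhancement_branch(f):
        y = layers.Conv2D(8, 3, padding='same', activation='relu')(f)
        y = layers.Conv2D(16, 5, padding='same', activation='relu')(y)
        y = layers.Conv2DTranspose(16, 5, padding='same', activation='relu')(y)
        y = layers.Conv2DTranspose(8, 5, padding='same', activation='relu')(y)
        return layers.Conv2D(3, 5, padding='same', activation='relu')(y)

    branch_outputs = [enhancement_branch(f) for f in feats]

    # FM: concatenate all branch outputs and fuse them with a 1x1 convolution.
    fused = layers.Concatenate(axis=-1)(branch_outputs)
    out = layers.Conv2D(3, 1, padding='same', activation='relu')(fused)
    return Model(inp, out, name='mbllen_sketch')


model = build_mbllen()
model.summary()
```

Each FEM tap feeds its own branch, so shallow branches see fine detail while deep branches see more abstract context; the 1×1 fusion convolution then learns how to weight the branches.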
Figure 3: Data flow for training. The proposed loss function consists of three parts: the structural loss, the context loss and the region loss.
Structure loss. To preserve image structure, we adopt the structural similarity (SSIM) [39] and its multi-scale variant MS-SSIM [38]. For an enhanced image x and its ground truth y, SSIM is defined as

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where µx and µy are pixel value averages, σx² and σy² are variances, σxy is the covariance, and C1 and C2 are constants that prevent the denominator from being zero. Due to the page limit, we refer readers to [38] for the definition of MS-SSIM. The value ranges of SSIM and MS-SSIM are (−1, 1] and [0, 1], respectively. The final structure loss is defined as LStr = LSSIM + LMS-SSIM.
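As a rough illustration, penalties of this kind can be computed with TensorFlow's built-in image operations; the (1 − SSIM) + (1 − MS-SSIM) form below is a common convention and may differ from the authors' exact definitions of LSSIM and LMS-SSIM.

```python
import tensorflow as tf


def structure_loss(enhanced, ground_truth, max_val=1.0):
    """Structure-loss sketch: penalize structural dissimilarity with
    (1 - SSIM) + (1 - MS-SSIM). Inputs are image batches in [0, max_val];
    MS-SSIM needs inputs of roughly 160x160 or larger at default settings."""
    ssim = tf.reduce_mean(tf.image.ssim(enhanced, ground_truth, max_val=max_val))
    ms_ssim = tf.reduce_mean(
        tf.image.ssim_multiscale(enhanced, ground_truth, max_val=max_val))
    return (1.0 - ssim) + (1.0 - ms_ssim)
```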
Context loss. Metrics such as MSE and SSIM only focus on low-level information in
the image, while it is also necessary to use some kind of higher-level information to improve
the visual quality. Therefore, we refer to the idea in SRGAN [23] and use similar strategies
to guide the training of the network. The basic idea is to employ a content extractor. Then,
if the enhanced image and the ground truth are similar, their corresponding outputs from the
content extractor should also be similar.
A suitable content extractor can be a neural network trained on a large dataset. Because
the VGG network [33] is shown to be well-structured and well-behaved, we choose the VGG
network as the content extractor in our method. In particular, we define the context loss based
on the output of the ReLU activation layers of the pre-trained VGG-19 network. To measure
the difference between the representations of the enhanced image and the ground truth, we
compute their sum of absolute differences. Finally, the context loss is defined as follows:
$$L_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j} C_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \sum_{z=1}^{C_{i,j}} \left\| \phi_{i,j}(E)_{x,y,z} - \phi_{i,j}(G)_{x,y,z} \right\| \qquad (3)$$
where E and G are the enhanced image and the ground truth, and Wi,j, Hi,j and Ci,j describe the dimensions of the respective feature maps within the VGG network. Besides, φi,j denotes the feature map obtained by the j-th convolution layer in the i-th block of the VGG-19 network.
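A minimal sketch of Eq. (3) using a pre-trained VGG-19 from Keras is given below. The specific layer ('block3_conv4') and the input preprocessing are assumptions; the paper only states that the output of a ReLU-activated layer of the pre-trained VGG-19 is used.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# Feature extractor: a pre-trained VGG-19 truncated at one ReLU-activated
# convolution layer. 'block3_conv4' is an arbitrary choice for illustration.
_vgg = VGG19(include_top=False, weights='imagenet')
_features = tf.keras.Model(_vgg.input, _vgg.get_layer('block3_conv4').output)
_features.trainable = False


def context_loss(enhanced, ground_truth):
    """Mean absolute difference between VGG-19 feature maps (cf. Eq. 3).
    Inputs are assumed to be RGB batches in [0, 1]."""
    e = preprocess_input(enhanced * 255.0)
    g = preprocess_input(ground_truth * 255.0)
    return tf.reduce_mean(tf.abs(_features(e) - _features(g)))
```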
Region loss. The above loss functions take the image as a whole. However, for our
low-light enhancement task, we need to pay more attention to those low-light regions. As
a result, we propose the region loss, which balances the degree of enhancement between
low-light and other regions in the image.
To do so, we first propose a simple strategy to separate low-light regions from the rest of the image. Through preliminary experiments, we find that selecting the darkest 40% of pixels gives a good approximation of the low-light regions. More sophisticated dark-region selection schemes exist in the literature and could also be used. Finally, the region loss is defined as follows:
$$L_{Region} = w_L \cdot \frac{1}{m_L n_L} \sum_{i=1}^{m_L} \sum_{j=1}^{n_L} \left\| E_L(i,j) - G_L(i,j) \right\| + w_H \cdot \frac{1}{m_H n_H} \sum_{i=1}^{m_H} \sum_{j=1}^{n_H} \left\| E_H(i,j) - G_H(i,j) \right\|, \qquad (4)$$
where EL and GL are the low-light regions of the enhanced image and the ground truth, and EH and GH are the remaining parts of the images. In our case, we suggest wL = 4 and wH = 1.
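A sketch of Eq. (4) is shown below. How the darkest 40% of pixels is selected in practice is not fully specified, so the per-image luminance quantile on the ground truth used here is an assumed implementation detail.

```python
import tensorflow as tf


def region_loss(enhanced, ground_truth, dark_fraction=0.4, w_low=4.0, w_high=1.0):
    """Region-loss sketch (cf. Eq. 4). Low-light pixels are approximated as
    the darkest 40% of each ground-truth image."""
    luminance = tf.reduce_mean(ground_truth, axis=-1)            # (B, H, W)
    flat = tf.reshape(luminance, [tf.shape(luminance)[0], -1])
    k = tf.cast(tf.cast(tf.shape(flat)[1], tf.float32) * dark_fraction, tf.int32)
    # k-th smallest luminance value per image = threshold for "dark" pixels.
    threshold = -tf.math.top_k(-flat, k=k).values[:, -1]
    mask = tf.cast(luminance <= threshold[:, None, None], tf.float32)[..., None]

    # Per-pixel absolute difference, averaged over color channels.
    diff = tf.reduce_mean(tf.abs(enhanced - ground_truth), axis=-1, keepdims=True)
    low = tf.reduce_sum(diff * mask) / (tf.reduce_sum(mask) + 1e-6)
    high = tf.reduce_sum(diff * (1.0 - mask)) / (tf.reduce_sum(1.0 - mask) + 1e-6)
    return w_low * low + w_high * high
```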
4 Experimental Evaluation
The proposed method is evaluated and compared with existing methods through extensive
experiments. For comparison, we use the published codes of the existing methods.
Overall, we conduct four major sets of experiments. 1) We compare our method with a number of existing methods, including the latest and most representative ones, on the task of low-light image enhancement. 2) We show another set of comparisons
on the task of low-light image enhancement in the presence of Poisson noise. 3) We show
experimental results using real-world low-light images. 4) We conduct further experiments
on low-light video enhancement.
Low-light image synthesis. Low-light images differ from common images due to two
most salient features: low brightness and the presence of noise. For the former feature, we
apply a random gamma adjustment to each channel of the common images to produce the low-light images, similar to [28]. This process can be expressed as Iout = A × Iin^γ, where A is a constant determined by the maximum pixel intensity in the image and γ obeys a uniform distribution U(2, 3.5). As for the noise, although many previous methods ignore
it, we still take it into account. In particular, we add Poisson noise with peak value = 200 to
the low-light images. Finally, we select 16,925 images from the VOC dataset [10] to synthesize the training set, 56 images for the validation set, and 144 images for the test set.
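The synthesis pipeline can be sketched as follows. The exact form of the constant A is not specified beyond being tied to the maximum pixel intensity, so the choice below (preserving the per-channel maximum) is an assumption.

```python
import numpy as np


def synthesize_low_light(image, rng=None):
    """Synthesis sketch: per-channel random gamma darkening followed by
    Poisson noise with peak value 200. `image` is a float array in [0, 1];
    the definition of A used here is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    dark = np.empty_like(image)
    for c in range(image.shape[-1]):
        gamma = rng.uniform(2.0, 3.5)            # gamma ~ U(2, 3.5)
        channel = image[..., c]
        peak_in = channel.max() + 1e-8
        a = peak_in / (peak_in ** gamma)         # constant tied to the max intensity
        dark[..., c] = a * channel ** gamma

    peak = 200.0                                 # Poisson noise, peak value 200
    noisy = rng.poisson(np.clip(dark, 0.0, 1.0) * peak) / peak
    return np.clip(noisy, 0.0, 1.0).astype(image.dtype)
```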
Low-light video synthesis. We choose the e-Lab Video Data Set (e-VDS) [8] to synthesize low-light videos. We cut the original videos into clips (31 × 255 × 255 × 3) to build a dataset of around 20,000 samples, 95% of which form the training set and the rest the test set.
Performance metrics. To evaluate the performance of different methods from different aspects and in a fair way, we use a variety of metrics, including PSNR, SSIM [39], Average Brightness (AB) [6], Visual Information Fidelity (VIF) [32], Lightness Order Error (LOE) as suggested in [41], and TMQI [40]. Note that in the tables below, red, green and blue indicate the best, second-best and third-best results, respectively.
We conduct experiments using the synthetic dataset. Our method is compared with 10 other recent methods, as shown in Table 1. It outperforms all the other methods in all cases, and by a clear margin over the second-best (green) and third-best (blue) results.
Representative results are shown in Figure 4. Upon close inspection, it is clear that our method achieves better visual effects, including good brightness/contrast and fewer artifacts. Readers are encouraged to zoom in to compare the details.
To emphasize our advantage in detail recovery beyond brightness recovery, we scale the image brightness of all methods according to the ground truth so that they have exactly the correct maximum and minimum values, and then compare the results in Table 2. Due to the space limit, we only include the best methods from Table 1. The results show that our method still outperforms all the other methods by a large margin.
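The brightness scaling used for this comparison amounts to a linear min-max rescaling toward the ground truth; a small sketch under that assumption:

```python
import numpy as np


def match_brightness_range(result, ground_truth):
    """Linearly rescale `result` so its min/max match the ground truth's,
    mirroring the brightness scaling used before computing Table 2."""
    r_min, r_max = result.min(), result.max()
    g_min, g_max = ground_truth.min(), ground_truth.max()
    normalized = (result - r_min) / (r_max - r_min + 1e-8)   # map to [0, 1]
    return normalized * (g_max - g_min) + g_min              # map to the GT range
```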
Table 2: Comparison of different methods (MF [11], LIME [14], Ying [43], BIMEF [41], Ours) after brightness scaling according to the ground truth.
Figure 6: Some real-world results (Input, Dong [9], AMSR [25], NPE [37], LIME [14], Ying [43], BIMEF [41], Ours). The image in the first row was captured in a railway station. The remaining images are from the Vassilios Vonikakis dataset, downloaded from the Internet.
A comparison of our MBLLEN, its video version (MBLLVEN) and three other methods is shown in Table 4. We also introduce the AB(var) metric to measure the difference in average-brightness variance between the enhanced video and the ground truth. This metric reflects whether the video has unexpected brightness changes or flicker, and the proposed MBLLVEN achieves the best performance in preserving inter-frame consistency. The enhanced videos are provided in the supplementary material for visual comparison.
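The paper does not give a formula for AB(var), so the following sketch, which compares the variance of per-frame average brightness between the enhanced video and the ground truth, is only one plausible reading.

```python
import numpy as np


def ab_var_difference(enhanced_video, ground_truth_video):
    """AB(var)-style sketch: compare the variance of per-frame average
    brightness between an enhanced video and its ground truth. The exact
    formula is not given in the paper, so this is an assumption."""
    def brightness_variance(video):              # video: (T, H, W, C) in [0, 1]
        per_frame_ab = video.reshape(video.shape[0], -1).mean(axis=1)
        return per_frame_ab.var()

    return abs(brightness_variance(enhanced_video)
               - brightness_variance(ground_truth_video))
```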
5 Conclusion
This paper proposes a novel CNN-based method for low-light enhancement. Existing meth-
ods usually rely on certain assumptions and often ignore additional factors such as image
noise. To address these challenges, we aim to train a powerful and flexible network that handles this task more effectively. Our network consists of three modules, namely the FEM,
EM and FM. It is designed to be able to extract rich features from different layers in FEM,
and enhance them via different sub-nets in EM. By fusing the multi-branch outputs via FM,
it produces high quality results and outperforms the state of the art by a large margin. The
network can also be modified to handle low-light videos effectively.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig
Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow:
Large-scale machine learning on heterogeneous distributed systems. arXiv preprint
arXiv:1603.04467, 2016.
[2] Tarik Arici, Salih Dikbas, and Yucel Altunbasak. A histogram modification frame-
work and its application for image contrast enhancement. IEEE Transactions on image
processing, 18(9):1921–1935, 2009.
[3] Lucio Azzari and Alessandro Foi. Variance stabilization for noisy+estimate combination in iterative Poisson denoising. IEEE Signal Processing Letters, 23(8):1086–1090, 2016.
[4] Jose Caballero, Christian Ledig, Andrew Aitken, Alejandro Acosta, Johannes Totz,
Zehan Wang, and Wenzhe Shi. Real-time video super-resolution with spatio-temporal
networks and motion compensation. In IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017.
[5] Turgay Celik and Tardi Tjahjadi. Contextual and variational contrast enhancement.
IEEE Transactions on Image Processing, 20(12):3431–3441, 2011.
[6] ZhiYu Chen, Besma R Abidi, David L Page, and Mongi A Abidi. Gray-level grouping
(glg): an automatic method for optimized image contrast enhancement-part i: the basic
method. IEEE transactions on image processing, 15(8):2290–2302, 2006.
[8] Eugenio Culurciello and Alfredo Canziani. e-Lab video data set. https://
engineering.purdue.edu/elab/eVDS/, 2017.
[9] Xuan Dong, Guan Wang, Yi Pang, Weixin Li, Jiangtao Wen, Wei Meng, and Yao Lu.
Fast efficient algorithm for enhancement of low lighting video. In Multimedia and
Expo (ICME), 2011 IEEE International Conference on, pages 1–6. IEEE, 2011.
[10] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew
Zisserman. The pascal visual object classes (voc) challenge. International journal of
computer vision, 88(2):303–338, 2010.
[11] Xueyang Fu, Delu Zeng, Yue Huang, Yinghao Liao, Xinghao Ding, and John Paisley.
A fusion-based enhancing method for weakly illuminated images. Signal Processing,
129:82–96, 2016.
[12] Xueyang Fu, Delu Zeng, Yue Huang, Xiao-Ping Zhang, and Xinghao Ding. A weighted
variational model for simultaneous reflectance and illumination estimation. In Pro-
ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
2782–2790, 2016.
[13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In
Advances in neural information processing systems, pages 2672–2680, 2014.
[14] Xiaojie Guo, Yu Li, and Haibin Ling. Lime: Low-light image enhancement via illu-
mination map estimation. IEEE Transactions on Image Processing, 26(2):982–993,
2017.
[15] Haidi Ibrahim and Nicholas Sia Pik Kong. Brightness preserving dynamic histogram
equalization for image contrast enhancement. IEEE Transactions on Consumer Elec-
tronics, 53(4):1752–1758, 2007.
[16] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image trans-
lation with conditional adversarial networks. arXiv preprint, 2017.
[17] Daniel J Jobson, Zia-ur Rahman, and Glenn A Woodell. A multiscale retinex for bridg-
ing the gap between color images and the human observation of scenes. IEEE Trans-
actions on Image processing, 6(7):965–976, 1997.
[18] Daniel J Jobson, Zia-ur Rahman, and Glenn A Woodell. Properties and performance
of a center/surround retinex. IEEE transactions on image processing, 6(3):451–462,
1997.
[19] Minjae Kim, Dubok Park, David K Han, and Hanseok Ko. A novel approach for denois-
ing and enhancement of extremely low-light video. IEEE Transactions on Consumer
Electronics, 61(1):72–80, 2015.
[20] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980, 2014.
[21] Seungyong Ko, Soohwan Yu, Wonseok Kang, Chanyong Park, Sangkeun Lee, and
Joonki Paik. Artifact-free low-light video enhancement using temporal similarity and
guide map. IEEE Transactions on Industrial Electronics, 64(8):6392–6401, 2017.
[22] Edwin H Land. The retinex theory of color vision. Scientific American, 237(6):108–
129, 1977.
[23] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham,
Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al.
Photo-realistic single image super-resolution using a generative adversarial network.
arXiv preprint, 2016.
[24] Chang-Hsing Lee, Jau-Ling Shih, Cheng-Chang Lien, and Chin-Chuan Han. Adaptive
multiscale retinex for image contrast enhancement. In Signal-Image Technology &
Internet-Based Systems (SITIS), 2013 International Conference on, pages 43–50. IEEE,
2013.
[25] Chang-Hsing Lee, Jau-Ling Shih, Cheng-Chang Lien, and Chin-Chuan Han. Adaptive
multiscale retinex for image contrast enhancement. In Signal-Image Technology &
Internet-Based Systems (SITIS), 2013 International Conference on, pages 43–50. IEEE,
2013.
[26] Chulwoo Lee, Chul Lee, and Chang-Su Kim. Contrast enhancement based on layered
difference representation of 2d histograms. IEEE transactions on image processing, 22
(12):5372–5384, 2013.
[27] Huijie Liu, Xiankun Sun, Hua Han, and Wei Cao. Low-light video image enhance-
ment based on multiscale retinex-like algorithm. In Control and Decision Conference
(CCDC), 2016 Chinese, pages 3712–3715. IEEE, 2016.
[28] Kin Gwn Lore, Adedotun Akintayo, and Soumik Sarkar. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61:650–662, 2017.
[29] Keita Nakai, Yoshikatsu Hoshi, and Akira Taguchi. Color image contrast enhacement
method based on differential intensity/saturation gray-levels histograms. In Intelligent
Signal Processing and Communications Systems (ISPACS), 2013 International Sympo-
sium on, pages 445–449. IEEE, 2013.
[30] Jianhua Pang, Sheng Zhang, and Wencang Bai. A novel framework for enhancement
of the low lighting video. In Computers and Communications (ISCC), 2017 IEEE
Symposium on, pages 1366–1371. IEEE, 2017.
[31] Tal Remez, Or Litany, Raja Giryes, and Alex M Bronstein. Deep class-aware image
denoising. In Sampling Theory and Applications (SampTA), 2017 International Con-
ference on, pages 138–142. IEEE, 2017.
[32] Hamid R Sheikh and Alan C Bovik. Image information and visual quality. IEEE
Transactions on image processing, 15(2):430–444, 2006.
[33] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-
scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[34] Li Tao, Chuang Zhu, Jiawen Song, Tao Lu, Huizhu Jia, and Xiaodong Xie. Low-light
image enhancement using cnn and bright channel prior. In Image Processing (ICIP),
2017 IEEE International Conference on, pages 3215–3219. IEEE, 2017.
[35] Li Tao, Chuang Zhu, Guoqing Xiang, Yuan Li, Huizhu Jia, and Xiaodong Xie. Llcnn:
A convolutional neural network for low-light image enhancement. In Visual Commu-
nications and Image Processing (VCIP), 2017 IEEE, pages 1–4. IEEE, 2017.
[36] Dongsheng Wang, Xin Niu, and Yong Dou. A piecewise-based contrast enhance-
ment framework for low lighting video. In Security, Pattern Analysis, and Cybernetics
(SPAC), 2014 International Conference on, pages 235–240. IEEE, 2014.
[37] Shuhang Wang, Jin Zheng, Hai-Miao Hu, and Bo Li. Naturalness preserved enhance-
ment algorithm for non-uniform illumination images. IEEE Transactions on Image
Processing, 22(9):3538–3548, 2013.
[38] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Signals, Systems and Computers, 2004. Conference Record of the Thirty-Seventh Asilomar Conference on, volume 2, pages 1398–1402. IEEE, 2003.
[39] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality
assessment: from error visibility to structural similarity. IEEE transactions on image
processing, 13(4):600–612, 2004.
[40] Hojatollah Yeganeh and Zhou Wang. Objective quality assessment of tone-mapped
images. IEEE Transactions on Image Processing, 22(2):657–667, 2013.
[41] Zhenqiang Ying, Ge Li, and Wen Gao. A bio-inspired multi-exposure fusion framework
for low-light image enhancement. arXiv preprint arXiv:1711.00591, 2017.
[42] Zhenqiang Ying, Ge Li, Yurui Ren, Ronggang Wang, and Wenmin Wang. A new image
contrast enhancement algorithm using exposure fusion framework. In International
Conference on Computer Analysis of Images and Patterns, pages 36–46. Springer,
2017.
[43] Zhenqiang Ying, Ge Li, Yurui Ren, Ronggang Wang, and Wenmin Wang. A new
low-light image enhancement algorithm using camera response model. manuscript
submitted for publication, 2017.
[44] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaus-
sian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions
on Image Processing, 26(7):3142–3155, 2017.
[45] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-
to-image translation using cycle-consistent adversarial networks. arXiv preprint
arXiv:1703.10593, 2017.