Abstract
Image restoration (IR) is a part of image processing that improves the quality of an image affected by noise and blur; hence, IR is required to attain a better-quality image. In this paper, IR is performed using a linear regression-based support vector machine (LR-SVM). The LR-SVM has two steps: training and testing. Both stages use a dedicated windowing process for extracting blocks from the images, and the LR-SVM is trained through a block-by-block training sequence. The extracted block-by-block values of the images are used to enhance the classification process of IR. In training, the imperfections in the image are easily identified by setting the target vectors to the original images. Then, a noisy image is given to LR-SVM testing, and the original image is restored from the dictionary. Finally, the image blocks from the testing stage are enhanced using the hybrid Laplacian of Gaussian (HLOG) filter, whose denoising provides enhanced results by operating on the block-by-block values. The proposed approach is named LR-SVM-HLOG. The dataset used in this LR-SVM-HLOG method is the Berkeley Segmentation Database. The performance of LR-SVM-HLOG was analyzed in terms of peak signal-to-noise ratio (PSNR) and structural similarity index. The PSNR values of the house and pepper images (color images) are 40.82 and 36.56 dB, respectively, which are higher than those of the inter- and intra-block sparse estimation method and block matching and three-dimensional filtering for color images at 20% noise.
1 Introduction
Recorded images generally degrade due to environmental effects and imperfections in the imaging system [22]. Image restoration (IR) plays a major role in digital image processing, and it aims to reconstruct the high-frequency details or eliminate noise from the image [2]. This IR process takes place in deblurring, denoising, and medical applications [3], [8], [16], [24]. There are numerous methods available for image denoising, which are divided into four categories: spatial filtering methods, transform domain filtering methods, partial differential equation-based methods, and variational methods [33]. Passive millimeter wave images (PMMWs) are affected by the high reflectivity of metal objects, and these PMMWs are used in aviation, security, and environmental monitoring [20]. IR over a hyperspectral image (HSI) requires rich spectral information, and this HSI is a three-dimensional data cube [12], [41]. Super-resolution reconstruction is also used as an IR-based image processing technique. A regularization method is used to perform the IR, and this regularization defines the quality of the image [29], [49]. Filtering techniques such as Wiener filtering and the wave atom transform are applied to images degraded by noise, blur, etc. [4], [23].
The conventional methods used in IR are support vector machine (SVM)-based blur identification, residual-based deep convolutional neural network (CNN) for image dehazing, and multi-resolution CNN. The SVM is used for classifying non-local feature vectors, and the SVM maps between these vectors and the blur parameters [26]. Based on the transmission map of the residual CNN, the haze in the image is eliminated, and this network is trained with the help of the NYU2 depth dataset [18]. A multi-resolution CNN gives the reconstructed image by removing patterns related to a specific frequency band [32]. In multi-scale vectorial total variation, spatially dependent regularization parameters are utilized to reconstruct the images [10]. The combination of discriminative learning techniques and advanced proximal optimization algorithms restores the image; in addition, the denoising and deconvolution of the corrupted image are performed in the training part of discriminative learning [40]. Furthermore, the combination of a non-local regularization technique with structured sparse representation is used for restoring images, and it depends on the sparse nature of the transform coefficients of image patches [31]. In this paper, IR over corrupted images is performed by a linear regression-based SVM (LR-SVM). Here, the hybrid Laplacian of Gaussian (LOG) filtering technique (a combination of LOG and Gaussian filtering) is used for enhancing the image blocks extracted from the testing stage.
The major contributions of this research work are as follows:
A total of three kinds of noise, namely Gaussian noise, speckle noise, and salt and pepper noise, are considered in this research work.
The images are restored by identifying the image imperfections, i.e. by separating the noise samples from the image blocks, while performing LR-SVM. The proposed LR-SVM-HLOG method-based IR is applicable to both color and grayscale images.
The performance of this research work is improved by placing the hybrid LOG filter at the testing stage. Both the restoration and denoising of the LR-SVM-HLOG method are enhanced by providing the block-by-block information of the entire image.
This research work is composed as follows: Section 2 presents a brief review of some papers based on IR. Section 3 describes the details about the problem formulation and gives the solution with the help of LR-SVM-HLOG. Section 4 briefly describes how the image is restored from the noisy images. Section 5 describes the experimental results of an LR-SVM-HLOG method and conventional methods. The conclusion of this research work is given in Section 6.
2 Survey of Related Works
Several existing techniques related to IR from noise, blur, etc., are surveyed in this section. A brief evaluation of some contributions in the existing literature is presented in Table 1, which also summarizes the limitations of the existing methodologies and the purpose of each work. Typically, images are often corrupted by several kinds of noise, haze, fog, and blur due to errors generated in noisy sensors or communication channels. In recent years, researchers have developed several methods, such as the cascaded model of Gaussian conditional random fields [28], a splitting method for structured total least squares [13], Bayesian models [19], [30], the combination of the iterative Van Cittert algorithm with noise reduction modeling [21], K-means singular value decomposition [39], the joint log-likelihood function [45], and filters [42], for restoring corrupted images. In addition, various kinds of neural networks and classifiers have been utilized in the IR process. The main processes present in all of these neural networks are noise identification (training stage), denoising of the image (testing stage), and storing the respective image [5], [7], [11], [17], [25], [27], [34], [35], [36], [37], [38], [44], [46], [47]. However, some of these techniques do not handle all three kinds of noise considered in this work. In this research, LR-SVM is used for restoring the noisy images.
Authors | Year | Algorithm | Purpose | Limitations/Remarks |
---|---|---|---|---|
Ratnakar Dash and Banshidhar Majhi [7] | 2014 | Radial basis function neural network (RBFNN), Gabor filter, and Wiener filter | RBFNN and Gabor filter estimate the length of blur and the estimated image is restored by Wiener filter. | In this RBFNN, the weight is not connected between the input and output layers. |
Haibin Duan and Xiaohua Wang [11] | 2016 | Echo state network (ESN) and orthogonal pigeon-inspired optimization (OPIO) | ESN is a recurrent neural network characterized by recurrent loops in synaptic connection pathways. This ESN is used for estimating the original images. | The ESN parameters are optimized by OPIO because the typical ESN has a more complicated fitting system due to the large number of dynamic reservoir internal processing units. |
Yunjin Chen and Thomas Pock [5] | 2016 | Trainable non-linear reaction diffusion (TNRD) | The training of TNRD is similar to the other neural networks (NNs). | Here, only the Gaussian noise is considered in the image denoising process. |
Manjun Qin et al. [27] | 2018 | Deep convolutional neural network (CNN) | The different dehazing abilities were performed by training the deep CNN based on different levels of haze samples. Then, the dehazing result is produced by fusing the outputs of CNN individuals. | The Landsat 8 Operational Land Imager (OLI) dataset was used in this research, and the method was only used for the purpose of dehazing. |
Ilke Turkmen [34] | 2011 | Neuro-fuzzy detector-based median filter (NFDMF) | The hybrid tuning algorithm was used for tuning the fuzzy parameters. This leads to decreased or minimized errors. | Only three different kinds of images are used in the training process. |
Hang Zhao et al. [47] | 2016 | Neural networks | Denoising of the images is performed by investigating the impact of different loss function layers. | The convergence properties of the NN decide the IR performances. |
Jian Zhang et al. [44] | 2014 | Group-based sparse representation (GSR) | An effective self-adaptive dictionary learning method is developed to make the GSR tractable and robust. | Here, only the noisy image blocks are given to the training process. |
3 Problem Definition and Solution
The present issues of IR are analyzed in this section, which also states how the LR-SVM-HLOG method gives a solution to these problems. The problems of IR are as follows:
Difficulty in the prediction of noise;
Works only for specific types of noise; and
Selection of appropriate training sequence.
3.1 Difficulty in the Prediction of Noise
The prediction of impulse noise introduced during the data acquisition process is difficult, and there is also no perfect noise reduction method [1], [43]. Solution: In the LR-SVM-HLOG method, noisy image blocks are identified by computing the average intensity value of the neighborhood pixels of the respective pixel. Through this process, the noise in the images can be predicted easily, and the respective image blocks can also be restored from the dictionary.
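As an illustration of this neighborhood check, the sketch below flags pixels whose intensity deviates strongly from the mean of their 3 × 3 neighborhood; the deviation threshold is a hypothetical parameter, not a value given in the paper.

```python
import numpy as np

def flag_noisy_pixels(image, threshold=40):
    """Flag pixels that deviate strongly from the mean intensity of their
    3 x 3 neighborhood (center pixel excluded)."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode='reflect')
    # Sum of the 3 x 3 neighborhood around every pixel.
    neigh_sum = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    )
    neigh_mean = (neigh_sum - img) / 8.0           # exclude the center pixel
    return np.abs(img - neigh_mean) > threshold    # boolean noise mask
```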
3.2 Works Only for Specific Types of Noise
Some methods restore images only for particular noise types, for example, trainable non-linear reaction diffusion (TNRD) [5] and group-based sparse representation (GSR) [44]. Here, the TNRD training experiments are performed at noise levels of 15%, 25%, and 50%, and GSR receives the noisy images at different noise levels such as 10%, 20%, 30%, etc. However, TNRD considers only Gaussian noise, and the GSR method considers only impulse and Gaussian noise.
Solution: In the LR-SVM-HLOG method, three kinds of noises (Gaussian noise, speckle noise, and salt and pepper noise) are considered. The corrupted images are denoised by identifying noise imperfections based on the separable plane of LR-SVM. The denoising process is improved by converting the entire input image into a block-by-block sequence.
3.3 Selection of Appropriate Training Sequence
For an effective restoration of the images, the training sequences that are given to the system need to be arranged in a proper manner. However, in some methods, the training sequences are not arranged properly. For example, IR of motion-blurred images is obtained by using edge information, related coefficients, and next-layer information in the kernel function [48].
Solution: This training problem is overcome by providing appropriate training sequences. In this SVM, the values of the noisy image array are given as input, and the target values are set to the values of the original image array. By giving the noisy image to the testing stage, the imperfections in the image are analyzed and tested against the trained input values to restore the respective image. Here, the input image is converted into a block-by-block sequence by using the windowing process, and these block values are tested against the entire database to effectively improve the restoration process.
4 Proposed Methodology for Effective IR
As the demand for high-quality and high-resolution images is growing, computational efficiency has become a main issue in IR. Several techniques have been developed for restoring corrupted images; however, some of them are not suited to all types of noise. Hence, this research develops a technique for restoring images corrupted by different kinds of noise, such as Gaussian noise, speckle noise, and salt and pepper noise. The LR-SVM-HLOG method has two stages, LR-SVM training and LR-SVM testing, which are explained in the following sections.
4.1 LR-SVM Training
The training of LR-SVM has three main processes: image acquisition, block extraction, and classifications using LR-SVM. Figure 1 shows the block diagram for LR-SVM training.
4.1.1 Image Acquisition and Block Extraction
Initially, the input image is taken from the Berkeley Segmentation Database (BSD), and then the image is transformed into a grayscale image. After adding noise to the original images, both the noisy and the original images are given to block extraction. The block extraction is performed by a dedicated windowing process, and each block extracted by this windowing process has a size of 3 × 3. Then, these blocks are arranged into an array to serve as the input to the LR-SVM, as sketched below.
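A minimal sketch of this 3 × 3 block extraction is given below; the non-overlapping stride equal to the block size is an assumption, since the paper does not state whether the windows overlap.

```python
import numpy as np

def extract_blocks(gray_image, block_size=3):
    """Slide a block_size x block_size window over a grayscale image and
    return the blocks flattened into the rows of a 2-D array."""
    h, w = gray_image.shape
    blocks = [
        gray_image[r:r + block_size, c:c + block_size].ravel()
        for r in range(0, h - block_size + 1, block_size)
        for c in range(0, w - block_size + 1, block_size)
    ]
    return np.asarray(blocks, dtype=np.float64)    # shape: (num_blocks, 9)
```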
4.1.2 Training Using LR-SVM
In this LR-SVM, the noisy image array values are given as input, and the target values are set to the original image array values. The LR-SVM is trained on the noisy image arrays (i.e. blocks) of three different noises, namely Gaussian noise, speckle noise, and salt and pepper noise, with standard deviations of 10%–90%. Based on the input and target values, the LR-SVM is trained, and its training process is explained as follows. SVMs are a set of related supervised learning methods for classification and regression; they were originally introduced for classification problems and were later extended to regression. The model is generated from a subset of the training data, because the cost function for building the model ignores the training points that lie close to the model prediction (i.e. within the threshold ϵ). Table 2 presents the specifications of the LR-SVM, and a minimal training sketch is given after the table.
Parameter | Value |
---|---|
Kernel function | Linear |
Scale | 1 |
Gap tolerance | 1.0e−3 |
Iteration | 75,259 |
Cache size | 1000 |
Caching method | Queue |
Delta gradient | 0.0570 |
Gap | 9.2522e−4 |
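The following sketch shows one way the block-wise training described above could be set up with an ϵ-insensitive linear support vector regressor; it is a minimal illustration, not the authors' MATLAB implementation, and the epsilon, tolerance, and iteration values are assumptions that only loosely mirror Table 2.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.multioutput import MultiOutputRegressor

def train_lr_svm(noisy_blocks, clean_blocks):
    """Fit one linear support vector regressor per pixel of the 3 x 3 block,
    mapping noisy block values (input) to original block values (target)."""
    model = MultiOutputRegressor(
        LinearSVR(epsilon=0.001, tol=1e-3, max_iter=100000)
    )
    model.fit(noisy_blocks, clean_blocks)   # blocks come from extract_blocks()
    return model

# Usage: restored = train_lr_svm(noisy_blocks, clean_blocks).predict(test_blocks)
```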
Here, the set of training samples (image blocks) given to the LR-SVM is denoted as $\{(x_i, y_i)\}_{i=1}^{n}$, where $x_i$ is a noisy block vector and $y_i$ is the corresponding target value taken from the original image.
The dual SVM problem is solved by Eq. (1):
$$\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i \;-\; \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i \alpha_j y_i y_j (x_i \cdot x_j) \quad \text{subject to } 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n}\alpha_i y_i = 0, \tag{1}$$
where the Lagrange multiplier associated with the ith sample is denoted as $\alpha_i$ and $C$ is the regularization constant.
Each sample given to the LR-SVM lies in the same fixed dimensionality, and the jth dimension of a sample $x_i$ is denoted as $x_{ij}$. The gradient G measures the change in the dual objective of Eq. (1) with respect to each multiplier $\alpha_i$.
a. Linear Regression for Gradient Estimation Both the gradient computation during training and the computation of the prediction function during testing hinge on the term $\sum_{i=1}^{n}\alpha_i y_i (x_i \cdot x)$, where $\alpha_i$, $y_i$, and $x_i$ are constants in this function. A linear regression model estimates the output of this function, so that the computational bottleneck of evaluating the full sum over all training samples for every query is avoided.
b. Updating the Linear Regression Coefficients There is still a need for a way to update the classifier w to complete the LR-SVM-HLOG linear regression-based approach. The weight in Eq. (2) and the gradient in Eq. (3) are updated by fixing one index i at a time, with i running from 1 to n. When $\alpha_i$ is changed by $\Delta\alpha_i$, the weight is updated as $w \leftarrow w + \Delta\alpha_i\, y_i\, x_i$, where $\Delta\alpha_i$ is the change of the ith Lagrange multiplier in the current iteration.
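The update described above, where changing a single multiplier αi shifts the weight vector by Δαi yi xi, matches the generic dual coordinate-descent step for a linear SVM; the sketch below illustrates that standard step and is not taken from the paper.

```python
import numpy as np

def dual_coordinate_sweep(w, alpha, X, y, C=1.0):
    """One sweep of the standard dual coordinate-descent update for a linear
    SVM: fix one multiplier at a time, evaluate the gradient of the dual
    objective, and propagate the change of alpha_i into the weight vector w."""
    for i in range(X.shape[0]):
        G = y[i] * np.dot(w, X[i]) - 1.0                 # gradient w.r.t. alpha_i
        Q_ii = np.dot(X[i], X[i])
        if Q_ii > 0:
            alpha_new = min(max(alpha[i] - G / Q_ii, 0.0), C)
            w += (alpha_new - alpha[i]) * y[i] * X[i]    # w <- w + (delta alpha_i) y_i x_i
            alpha[i] = alpha_new
    return w, alpha
```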
4.1.3 Loss Function
The loss function used in the LR-SVM requires the score of the correct class of each image pixel to be greater than the scores of the incorrect classes by a fixed margin value $\Delta$, as expressed in Eq. (7):
$$L_i = \sum_{j \neq y_i} \max\left(0,\ s_j - s_{y_i} + \Delta\right), \tag{7}$$
where $L_i$ is the loss function, $y_i$ is the label that specifies the index of the correct class, and $s_j$ is the jth element of the score vector defined in Eq. (8):
$$s = W x_i, \tag{8}$$
where $x_i$ is the image pixel (block) vector and $W$ is the weight matrix.
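A small sketch of the multiclass hinge loss of Eqs. (7) and (8) follows; the margin value Δ = 1 is only an assumed default.

```python
import numpy as np

def multiclass_hinge_loss(W, x_i, y_i, delta=1.0):
    """Loss of Eq. (7) for one sample: scores s = W @ x_i (Eq. (8)); every
    incorrect class whose score comes within `delta` of the correct-class
    score contributes to the loss."""
    s = W.dot(x_i)                       # class scores, Eq. (8)
    margins = np.maximum(0.0, s - s[y_i] + delta)
    margins[y_i] = 0.0                   # the correct class contributes nothing
    return margins.sum()
```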
4.2 LR-SVM Testing
The corrupted images are given as input to LR-SVM testing. LR-SVM testing runs only once when the given input image is grayscale. If the input image is a color (RGB) image, the LR-SVM processes the image three times, because the testing stage needs to handle the three color channels (R, G, and B) separately. Figure 2 presents an example of an input image for LR-SVM testing. As in LR-SVM training, block extraction is performed in the testing stage, and the block values from the windowing process are used for enhancing the restoration process. Before the image blocks are given to the input of LR-SVM testing, they are replicated into three arrays, because the LR-SVM is trained with three arrays. The noise present in the image pixels is found based on the intensity values of each pixel, and the noise calculation for a given pixel is performed by considering the intensity values of its neighborhood pixels. Initially, maximum and minimum intensity boundary values are fixed to identify the noisy pixels: if a pixel's intensity value goes above the maximum boundary value or below the minimum boundary value, the respective pixel is referred to as a noisy pixel. Finally, the noisy pixels are compared with the trained dictionary values to restore the respective pixels from the stored dictionary. The trained dictionary comprises image blocks of the three different noises with various standard deviations. Moreover, the image obtained from LR-SVM testing is enhanced by the hybrid LOG filter. Figure 3 represents the block diagram of the LR-SVM testing, and a minimal sketch of the block restoration step is given below.
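A minimal sketch of this per-block testing step is given below; the intensity boundaries and the nearest-neighbor rule for picking the stored block are assumptions, since the paper only states that out-of-boundary pixels are treated as noisy and replaced from the dictionary.

```python
import numpy as np

def restore_block(block, dictionary, low=10, high=245):
    """Return a 3 x 3 block (flattened) unchanged if all its pixels lie within
    the [low, high] intensity boundaries; otherwise replace it with the most
    similar block stored in the trained dictionary."""
    if np.all((block >= low) & (block <= high)):
        return block                                      # block judged clean
    dists = np.linalg.norm(dictionary - block, axis=1)    # compare with stored blocks
    return dictionary[np.argmin(dists)]                   # closest dictionary block
```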
4.2.1 Hybrid LOG Filtering
The hybrid LOG filter receives its input from the testing stage of LR-SVM, and it is a combination of LOG and Gaussian filters. The input is in the form of a block-by-block sequence of image values, and this input format is used for enhancing the denoising process. Laplacian filters are derivative filters and are generally used for finding edge regions in images. As this derivative filter is very sensitive to noise, the research work uses Gaussian filters for smoothing the image. In the LOG filter, Gaussian filtering is performed before Laplacian filtering; after these two processes, Gaussian filtering is applied again to smooth the images, i.e. the output of the LOG filter is given as input to a Gaussian filter. The LOG filtering is shown in Eq. (9), and the Gaussian filtering of the LOG output is shown in Eq. (10). The LOG scale space representation is
$$\mathrm{LOG}(x, y) = \nabla^{2}\left[G_{\sigma}(x, y) * I(x, y)\right], \tag{9}$$
$$O(x, y) = G_{\sigma}(x, y) * \mathrm{LOG}(x, y), \tag{10}$$
where the Laplacian operator is $\nabla^{2} = \partial^{2}/\partial x^{2} + \partial^{2}/\partial y^{2}$, $I(x, y)$ is the input image, $*$ denotes convolution, and $G_{\sigma}(x, y) = \frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$ is the Gaussian kernel with standard deviation $\sigma$.
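A sketch of this Gaussian–Laplacian–Gaussian pipeline is shown below using SciPy; the σ values and the final subtraction of the smoothed LOG response from the image (a LOG-based sharpening step) are assumptions, as the paper does not state how the filtered response is recombined with the restored image.

```python
import numpy as np
from scipy import ndimage

def hybrid_log_filter(image, sigma_log=1.0, sigma_smooth=1.0, weight=1.0):
    """Hybrid LOG filtering: Gaussian smoothing followed by the Laplacian
    (Eq. (9)), a further Gaussian pass over the LOG response (Eq. (10)),
    and an assumed sharpening-style recombination with the input image."""
    img = image.astype(np.float64)
    log_response = ndimage.gaussian_laplace(img, sigma=sigma_log)         # Eq. (9)
    smoothed = ndimage.gaussian_filter(log_response, sigma=sigma_smooth)  # Eq. (10)
    return np.clip(img - weight * smoothed, 0, 255)       # assumed recombination
```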
5 Experimental Results and Discussion
This section gives a detailed description of the experimental results of the LR-SVM-HLOG method. Correspondingly, the experimental setup, type of data set, and performance measure are explained in this section. The performance of the LR-SVM-HLOG method is analyzed by restoring the images from the noisy images.
5.1 Experimental Set-up
The LR-SVM-HLOG method is analyzed with the help of MATLAB R2018b. The entire work is done using an Intel Core i3 system with 3 GB RAM. Moreover, the better performance of the LR-SVM-HLOG method is shown by comparing it with conventional techniques. The performance of the LR-SVM-HLOG method is analyzed in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).
5.2 Dataset Description
The evaluation of the LR-SVM-HLOG method is carried out on 150 images of the Berkeley Segmentation Database (BSD), a standard image denoising benchmark. The experimental images used in this LR-SVM-HLOG method are shown in Figure 4. The LR-SVM-HLOG method evaluates the performance of IR by removing Gaussian noise, speckle noise, and salt and pepper noise. Zero-mean Gaussian noise, speckle noise, and salt and pepper noise at 10%, 20%, 30%, 40%, and 50% noise levels are added to each image to evaluate the performance of the LR-SVM-HLOG method, as sketched below.
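A sketch of the noise generation used for the evaluation is given below; the mapping from the percentage levels to the Gaussian/speckle standard deviations and the salt-and-pepper density is an assumption, as the exact parameterization is not stated.

```python
import numpy as np

def add_noise(image, kind, level, seed=0):
    """Corrupt a [0, 255] grayscale image with one of the three noise types
    used in the evaluation; `level` is the strength as a fraction (0.2 = 20%)."""
    img = image.astype(np.float64)
    rng = np.random.default_rng(seed)
    if kind == 'gaussian':                       # zero-mean additive Gaussian noise
        noisy = img + rng.normal(0.0, 255.0 * level, img.shape)
    elif kind == 'speckle':                      # multiplicative speckle noise
        noisy = img + img * rng.normal(0.0, level, img.shape)
    elif kind == 'salt_pepper':                  # salt and pepper noise
        noisy = img.copy()
        mask = rng.random(img.shape)
        noisy[mask < level / 2] = 0.0            # pepper
        noisy[mask > 1.0 - level / 2] = 255.0    # salt
    else:
        raise ValueError(f'unknown noise type: {kind}')
    return np.clip(noisy, 0, 255)
```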
5.3 Performance Measure
Typically, the success of the IR process is evaluated by comparing the PSNR of the denoised and noisy images with regard to the reference. However, it is well known that PSNR does not always correlate well with the visual quality of the results. The SSIM, in contrast, considers three reasonably independent image components: luminance, contrast, and structure. SSIM gives a better prediction of subjective image quality than PSNR and other existing quality measures. The SSIM takes values in [0, 1], where 1 indicates that the reference and target images are identical.
a. Mean Square Error (MSE) MSE is defined as the average squared difference between the original image pixels (i.e. the reference given at the testing side) and the output image pixels; the mathematical expression for MSE is given in Eq. (11) and that for RMSE in Eq. (12). For an m × n image, the MSE is
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[I(i, j) - J(i, j)\right]^{2}, \tag{11}$$
$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \tag{12}$$
where I is the noisy image and J is the noise-free image.
b. PSNR PSNR is used for computing the difference between two images: the reference image and the image restored by the LR-SVM-HLOG method. The PSNR helps estimate the quality of the reconstructed image with respect to the original image. The PSNR is calculated from the MSE of Eq. (11). The PSNR (dB) of an image is expressed in Eq. (13):
$$\mathrm{PSNR} = 10\log_{10}\left(\frac{\mathrm{Max}_I^{2}}{\mathrm{MSE}}\right), \tag{13}$$
where MaxI is the maximum possible pixel value of the image.
c. SSIM The SSIM is based on the computation of three terms, namely the luminance term, the contrast term, and the structural term; the overall index is a multiplicative combination of the three. The SSIM is expressed in Eqs. (14) and (15):
$$\mathrm{SSIM}(x, y) = \left[l(x, y)\right]^{\alpha}\left[c(x, y)\right]^{\beta}\left[s(x, y)\right]^{\gamma}, \tag{14}$$
which, with $\alpha = \beta = \gamma = 1$, reduces to
$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_x\mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^{2} + \mu_y^{2} + C_1\right)\left(\sigma_x^{2} + \sigma_y^{2} + C_2\right)}, \tag{15}$$
where $\mu_x$ and $\mu_y$ are the local means, $\sigma_x$ and $\sigma_y$ the standard deviations, $\sigma_{xy}$ the cross-covariance of images x and y, and $C_1$ and $C_2$ are small constants that stabilize the division.
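The sketch below evaluates Eqs. (11), (13), and (15) directly; for simplicity the SSIM here uses a single global window over the whole image, whereas SSIM is usually computed over local windows and averaged.

```python
import numpy as np

def mse(ref, out):
    """Eq. (11): mean squared error between the reference and restored image."""
    return np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)

def psnr(ref, out, max_i=255.0):
    """Eq. (13): peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(max_i ** 2 / mse(ref, out))

def ssim_global(ref, out, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Eq. (15) evaluated on a single global window of the two images."""
    x, y = ref.astype(np.float64), out.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```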
5.4 Performance Analysis of Noisy IR
Here, the performance of the LR-SVM-HLOG method is evaluated for three different noises: Gaussian noise, speckle noise, and salt and pepper noise. The performance is evaluated on various images such as baboon, Barbara, parrot, cameraman, house, peppers, starfish, monarch, Lena, boat, etc., taken from the BSD. The PSNR (dB) and SSIM of the restored images are given in Table 3 (each cell lists PSNR/SSIM), and one example is given in Figure 5 for different levels of noise.
Noise | Noise % | Lena | Cameraman | Parrot | Boat | Starfish | Barbara |
---|---|---|---|---|---|---|---|
Gaussian noise | 10% | 40.45/0.9532 | 41.91/0.9510 | 41.6428/0.5909 | 41.57/0.9456 | 39.123/0.9322 | 41.1726/0.6228 |
Gaussian noise | 20% | 38.12/0.9372 | 38.21/0.9354 | 39.8548/0.5633 | 39.54/0.9232 | 37.54/0.9134 | 39.2989/0.5892 |
Gaussian noise | 30% | 36.53/0.9156 | 36.76/0.9036 | 38.0601/0.5169 | 38.54/0.9025 | 35.54/0.8821 | 37.0639/0.5544 |
Gaussian noise | 40% | 36.78/0.8932 | 34.39/0.8821 | 37.005/0.4630 | 36.57/0.8915 | 33.98/0.8667 | 36.4487/0.5057 |
Gaussian noise | 50% | 32.89/0.8887 | 33.58/0.8637 | 31.4570/0.4052 | 34.98/0.8721 | 31.52/0.8475 | 30.9874/0.4348 |
Speckle noise | 10% | 39.98/0.9437 | 34.21/0.9556 | 40.7230/0.5041 | 40.25/0.9444 | 36.83/0.9408 | 41.0197/0.5469 |
Speckle noise | 20% | 38.08/0.9236 | 32.21/0.9412 | 37.9635/0.3838 | 39.47/0.9300 | 34.12/0.9237 | 38.2599/0.4157 |
Speckle noise | 30% | 36.87/0.9057 | 30.12/0.9354 | 36.3502/0.3162 | 37.65/0.8912 | 32.58/0.8902 | 36.6890/0.3401 |
Speckle noise | 40% | 34.72/0.8954 | 29.94/0.9156 | 35.2762/0.2728 | 35.48/0.8815 | 31.47/0.8876 | 35.6505/0.2912 |
Speckle noise | 50% | 32.45/0.8129 | 28.39/0.8965 | 34.6208/0.2464 | 33.54/0.8631 | 29.86/0.8689 | 35.0022/0.2607 |
Salt and pepper noise | 10% | 39.75/0.9623 | 34.91/0.9624 | 38.9940/0.3991 | 40.61/0.9437 | 37.65/0.9366 | 39.3293/0.4299 |
Salt and pepper noise | 20% | 37.45/0.9243 | 31.99/0.9465 | 35.9566/0.2638 | 38.25/0.9215 | 35.89/0.9223 | 36.3233/0.2676 |
Salt and pepper noise | 30% | 35.98/0.9043 | 30.10/0.9225 | 34.1747/0.1885 | 36.35/0.9044 | 34.00/0.8967 | 34.5260/0.1826 |
Salt and pepper noise | 40% | 33.45/0.8712 | 28.94/0.9121 | 32.9313/0.1422 | 34.89/0.8891 | 32.41/0.8709 | 32.4487/0.5057 |
Salt and pepper noise | 50% | 32.10/0.8523 | 26.96/0.8923 | 31.9813/0.1084 | 32.78/0.8644 | 30.87/0.8651 | 32.3329/0.0940 |
From the analysis, it is concluded that the LR-SVM-HLOG method gives an effective performance for the three different noises. Figure 5A shows the input image that is given to the testing stage, and Figure 5B shows the image restored from the dictionary. The average PSNR values computed over the 10%–50% levels of Gaussian, salt and pepper, and speckle noise are 36.554, 35.746, and 36.42 dB, respectively. Similarly, the average SSIM values for Gaussian, salt and pepper, and speckle noise are 0.91758, 0.90288, and 0.89626, respectively.
Figure 6 shows the input images given to the testing stage in the presence of 50% Gaussian noise. The noise present in the input images of Figure 6 is identified from the intensity values of neighborhood pixels. Then, the noisy image blocks are restored from the dictionary values of the LR-SVM; the restored images are shown in Figure 7. The LR-SVM-HLOG method is also analyzed on medical images, and some examples of IR on medical images are given in Figures 8 and 9.
Figures 8A and 9A show the noisy images given to the testing stage of LR-SVM-HLOG with Gaussian noise (σ = 50%). The images restored from the dictionary are shown in Figures 8B and 9B. The performance of the LR-SVM-HLOG method is compared with some conventional techniques, such as GSR [44], non-locally centralized sparse representation (NCSR) [9], inter- and intra-block sparse estimation (IIBSE) [14], block matching and three-dimensional filtering (BM3D) [6], and weighted nuclear norm minimization (WNNM) [15]. NCSR is a sparse representation model that reduces the noise of an image by introducing a centralized sparse constraint and an iterative shrinkage function for solving the minimization problem [9]. IIBSE is also based on norm minimization, where intra-block sparsity priors and inter-block sparse estimation are exploited [14]. BM3D-based image denoising is introduced to denoise the images, and the BM3D variant for color images is named C-BM3D [6]. WNNM denoises the images by exploiting the non-local self-similarity of the image [15].
Table 4 and Figure 10 show the comparative analysis for grayscale images of the LR-SVM-HLOG method against conventional techniques such as NCSR, BM3D, WNNM, and GSR. Figure 10 illustrates the comparison of LR-SVM-HLOG with the conventional GSR method [44]. From the analysis, it is concluded that the LR-SVM-HLOG method gives effective IR performance. The performance of GSR is high at 30% and 40% noise, because it works well only at medium noise levels, whereas the LR-SVM-HLOG method works at both low and medium noise levels. The comparative analysis for color images is given in Table 5 and Figure 11; Figure 11 shows the PSNR comparison of the color images at the 20% noise level. From the comparison, it is concluded that IR over the color images provides effective performance in terms of PSNR. The main reason is that the LR-SVM-HLOG method tests each image in a block-by-block sequence; by testing the images block by block, the entire information of the image is obtained and used for effective IR.
Ratio | Methods | Cameraman | House | Barbara | Monarch | Lena | Peppers |
---|---|---|---|---|---|---|---|
10% | NCSR [9] | 34.12 | 36.80 | 34.98 | 34.57 | 35.81 | 34.66 |
10% | BM3D [6] | 34.18 | 36.71 | 34.98 | – | 35.93 | 34.68 |
10% | WNNM [15] | 34.44 | 36.95 | 35.51 | 35.03 | 36.03 | 34.95 |
10% | LR-SVM-HLOG | 34.91 | 38.32 | 40.48 | 40.78 | 40.45 | 38.45 |
20% | NCSR [9] | 30.48 | 33.97 | 31.72 | 30.69 | 32.92 | 31.26 |
20% | GSR [44] | – | 36.78 | 34.59 | 29.55 | – | – |
20% | BM3D [6] | 30.48 | 33.77 | 31.78 | – | 33.05 | 31.29 |
20% | LR-SVM-HLOG | 34.21 | 36.82 | 39.12 | 38.45 | 38.12 | 36.56 |
30% | GSR [44] | – | 38.93 | 36.92 | 33.17 | – | – |
30% | BM3D [6] | 28.64 | 32.09 | 29.81 | – | 29.81 | 29.28 |
30% | WNNM [15] | 28.80 | 32.52 | 30.31 | 28.92 | 31.43 | 29.49 |
30% | LR-SVM-HLOG | 32.10 | 36.56 | 38.05 | 37.18 | 36.53 | 34.87 |
40% | GSR [44] | – | 40.60 | 38.99 | 36.07 | – | – |
40% | LR-SVM-HLOG | 30.39 | 31.72 | 39.12 | 36.74 | 34.78 | 32.12 |
50% | NCSR [9] | 26.16 | 29.63 | 27.10 | 25.68 | 28.89 | 26.53 |
50% | BM3D [6] | 25.84 | 29.37 | 27.17 | – | 27.17 | 26.41 |
50% | WNNM [15] | 26.42 | 30.32 | 27.79 | 26.32 | 29.24 | 26.91 |
50% | LR-SVM-HLOG | 27.58 | 32.00 | 35.01 | 33.00 | 32.89 | 37.12 |
Ratio | Methods | House | Lena | Baboon | Peppers |
---|---|---|---|---|---|
10% | C-BM3D [6] | 36.23 | 35.22 | 30.64 | 33.78 |
10% | LR-SVM-HLOG | 41.32 | 40.45 | 41.28 | 38.45 |
20% | IIBSE [14] | 33.57 | 32.88 | 27.46 | 31.85 |
20% | C-BM3D [6] | 33.84 | 33.02 | 26.97 | 31.83 |
20% | LR-SVM-HLOG | 40.82 | 38.12 | 37.4 | 36.56 |
30% | C-BM3D [6] | 32.33 | 31.59 | 25.14 | 30.62 |
30% | LR-SVM-HLOG | 39.10 | 36.53 | 35.53 | 34.87 |
40% | IIBSE [14] | 31.54 | 30.78 | 24.60 | 30.05 |
40% | LR-SVM-HLOG | 38.04 | 36.78 | 34.83 | 32.12 |
50% | C-BM3D [6] | 30.22 | 29.72 | 23.14 | 28.68 |
50% | LR-SVM-HLOG | 37.60 | 32.89 | 32.83 | 31.12 |
6 Conclusion
IR technology is one of the major technical areas in digital image processing. In this paper, the LR-SVM-HLOG method is introduced for restoring the quality of corrupted images. This LR-SVM-HLOG method mainly comprises two steps: LR-SVM training and LR-SVM testing. Here, the LR-SVM is trained on noisy image blocks with the original image blocks as targets, and the trained values are stored in the dictionary. The noisy image is given to the testing stage, where the input image is compared with the stored dictionary values; the method provides the respective image block values when the input values match the stored values. Then, the extracted image values are denoised by the HLOG filter in the testing stage, which is used for smoothing the images. The LR-SVM-HLOG method is compared with conventional techniques such as GSR, IIBSE, NCSR, BM3D, and WNNM. In noisy IR, the LR-SVM-HLOG method gives effective restoration compared to the existing methodologies. For example, the grayscale image PSNR of LR-SVM-HLOG is 39.12 dB (for the Barbara image) at 20% noise, which is higher than that of NCSR, BM3D, and GSR. The color image PSNR of LR-SVM-HLOG is 37.60 dB (for the house image) at 50% noise, which is higher than the 30.22 dB PSNR of C-BM3D.
Bibliography
[1] M. Bagheri, M. A. Riahi and H. Hashemi, Denoising and improving the quality of seismic data using combination of DBM filter and FX deconvolution, Arab J. Geosci. 10 (2017), 440. doi: 10.1007/s12517-017-3224-5.
[2] A. Bouhamidi, R. Enkhbat and K. Jbilou, Conditional gradient Tikhonov method for a convex optimization problem in image restoration, J. Comput. Appl. Math. 255 (2014), 580–592. doi: 10.1016/j.cam.2013.06.011.
[3] H. H. Chang, C. Y. Li and A. H. Gallogly, Brain MR image restoration using an automatic trilateral filter with GPU-based acceleration, IEEE Trans. Bio-Med. Eng. 65 (2018), 400–413. doi: 10.1109/TBME.2017.2772853.
[4] H. Chen, Z. Cen, C. Wang, S. Lan and X. Li, Image restoration via improved Wiener filter applied to optical sparse aperture systems, Optik-Int. J. Light Electron Optics 147 (2017), 350–359. doi: 10.1016/j.ijleo.2017.08.102.
[5] Y. Chen and T. Pock, Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration, IEEE Trans. Pattern Anal. 39 (2017), 1256–1272. doi: 10.1109/TPAMI.2016.2596743.
[6] K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process. 16 (2007), 2080–2095. doi: 10.1109/TIP.2007.901238.
[7] R. Dash and B. Majhi, Motion blur parameters estimation for image restoration, Optik-Int. J. Light Electron Optics 125 (2014), 1634–1640. doi: 10.1016/j.ijleo.2013.09.026.
[8] R. Dharmarajan and K. Kannan, A hypergraph-based algorithm for image restoration from salt and pepper noise, AEU-Int. J. Electron. Commun. 64 (2010), 1114–1122. doi: 10.1016/j.aeue.2009.12.001.
[9] W. Dong, L. Zhang, G. Shi and X. Li, Nonlocally centralized sparse representation for image restoration, IEEE Trans. Image Process. 22 (2013), 1620–1630. doi: 10.1109/TIP.2012.2235847.
[10] Y. Dong, M. Hintermüller and M. Monserrat Rincon-Camacho, A multi-scale vectorial Lτ-TV framework for color image restoration, Int. J. Comput. Vis. 92 (2011), 296–307. doi: 10.1007/s11263-010-0359-1.
[11] H. Duan and X. Wang, Echo state networks with orthogonal pigeon-inspired optimization for image restoration, IEEE Trans. Neural Netw. Learn. Syst. 27 (2016), 2413–2425. doi: 10.1109/TNNLS.2015.2479117.
[12] H. Fan, Y. Chen, Y. Guo, H. Zhang and G. Kuang, Hyperspectral image restoration using low-rank tensor recovery, IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 10 (2017), 4589–4604. doi: 10.1109/JSTARS.2017.2714338.
[13] R. Feiz and M. Rezghi, A splitting method for total least squares color image restoration problem, J. Vis. Commun. Image Rep. 46 (2017), 48–57. doi: 10.1016/j.jvcir.2017.03.001.
[14] D. Gao and X. Wu, Multispectral image restoration via inter- and intra-block sparse estimation based on physically-induced joint spatiospectral structures, IEEE Trans. Image Process. 27 (2018), 4038–4051. doi: 10.1109/TIP.2018.2828341.
[15] S. Gu, L. Zhang, W. Zuo and X. Feng, Weighted nuclear norm minimization with application to image denoising, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2862–2869, 2014.
[16] W. He, H. Zhang, L. Zhang and H. Shen, Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration, IEEE Trans. Geosci. Remote 54 (2016), 178–188. doi: 10.1109/TGRS.2015.2452812.
[17] D. Kim, H. U. Jang, S. M. Mun, S. Choi and H. K. Lee, Median filtered image restoration and anti-forensics using adversarial networks, IEEE Signal Proc. Lett. 25 (2018), 278–282. doi: 10.1109/LSP.2017.2782363.
[18] J. Li, G. Li and H. Fan, Image dehazing using residual-based deep CNN, IEEE Access 6 (2018), 26831–26842. doi: 10.1109/ACCESS.2018.2833888.
[19] X. Li, Image recovery via hybrid sparse representations: a deterministic annealing approach, IEEE J. Select. Top. Signal Process. 5 (2011), 953. doi: 10.1109/JSTSP.2011.2138676.
[20] T. Liu, Z. Chen, S. Liu, Z. Zhang and J. Shu, Blind image restoration with sparse priori regularization for passive millimeter-wave images, J. Vis. Commun. Image Rep. 40 (2016), 58–66. doi: 10.1016/j.jvcir.2016.06.007.
[21] Y. Liu and W. Lu, A robust iterative algorithm for image restoration, EURASIP J. Image Video Process. 1 (2017), 53. doi: 10.1186/s13640-017-0201-6.
[22] X. G. Lv, Y. Z. Song and F. Li, An efficient nonconvex regularization for wavelet frame and total variation based image restoration, J. Comput. Appl. Math. 290 (2015), 553–566. doi: 10.1016/j.cam.2015.06.006.
[23] Z. Mbarki, H. Seddik and E. B. Braiek, A rapid hybrid algorithm for image restoration combining parametric Wiener filtering and wave atom transform, J. Vis. Commun. Image Rep. 40 (2016), 694–707. doi: 10.1016/j.jvcir.2016.08.009.
[24] G. Paul, J. Cardinale and I. F. Sbalzarini, Coupling image restoration and segmentation: a generalized linear model/Bregman perspective, Int. J. Comput. Vis. 104 (2013), 69–93. doi: 10.1007/s11263-013-0615-2.
[25] V. N. V. Satya Prakash, K. Satya Prasad and T. Jaya Chandra Prasad, Color image demosaicing using sparse based radial basis function network, Alexandria Eng. J. 56 (2017), 477–483. doi: 10.1016/j.aej.2016.08.032.
[26] J. Qiao and J. Liu, A SVM-based blur identification algorithm for image restoration and resolution enhancement, in: International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Springer, Berlin, Heidelberg, pp. 28–35, 2006. doi: 10.1007/11893004_4.
[27] M. Qin, F. Xie, W. Li, Z. Shi and H. Zhang, Dehazing for multispectral remote sensing images based on a convolutional neural network with the residual architecture, IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 11 (2018), 1645–1655. doi: 10.1109/JSTARS.2018.2812726.
[28] U. Schmidt, J. Jancsary, S. Nowozin, S. Roth and C. Rother, Cascades of regression tree fields for image restoration, IEEE Trans. Pattern Anal. 38 (2016), 677–689. doi: 10.1109/TPAMI.2015.2441053.
[29] J. Sevcik, V. Smidl and F. Sroubek, An adaptive correlated image prior for image restoration problems, IEEE Signal Proc. Lett. 25 (2018), 1024–1028. doi: 10.1109/LSP.2018.2836964.
[30] X. Shi, R. Guo, Y. Zhu and Z. Wang, Astronomical image restoration using variational Bayesian blind deconvolution, J. Syst. Eng. Electron. 28 (2017), 1236–1247. doi: 10.21629/JSEE.2017.06.21.
[31] Z. Su, S. Zhu, X. Lv and Y. Wan, Image restoration using structured sparse representation with a novel parametric data-adaptive transformation matrix, Signal Process-Image 52 (2017), 151–172. doi: 10.1016/j.image.2017.01.003.
[32] Y. Sun, Y. Yu and W. Wang, Moiré photo restoration using multiresolution convolutional neural networks, IEEE Trans. Image Process. 27 (2018), 4160–4172. doi: 10.1109/TIP.2018.2834737.
[33] L. Tang, Z. Fang, C. Xiang and S. Chen, Image selective restoration using multi-scale variational decomposition, J. Vis. Commun. Image Rep. 40 (2016), 638–655. doi: 10.1016/j.jvcir.2016.08.004.
[34] I. Turkmen, Efficient impulse noise detection method with ANFIS for accurate image restoration, AEU-Int. J. Electron. Commun. 65 (2011), 132–139. doi: 10.1016/j.aeue.2010.02.006.
[35] I. Turkmen, The ANN based detector to remove random-valued impulse noise in images, J. Vis. Commun. Image Rep. 34 (2016), 28–36. doi: 10.1016/j.jvcir.2015.10.011.
[36] R. Wang and D. Tao, Non-local auto-encoder with collaborative stabilization for image restoration, IEEE Trans. Image Process. 25 (2016), 2117–2129. doi: 10.1109/TIP.2016.2541318.
[37] H. Wu and J. Lan, A novel fog-degraded image restoration model of golden scale extraction in color space, Arab J. Sci. Eng. 43 (2017), 1–21. doi: 10.1007/s13369-017-2869-4.
[38] Y. Xia, C. Sun and W. X. Zheng, Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration, IEEE Trans. Neural Netw. Learn. Syst. 23 (2012), 812–820. doi: 10.1109/TNNLS.2012.2184800.
[39] F. Xiang and Z. Wang, Split Bregman iteration solution for sparse optimization in image restoration, Optik-Int. J. Light Electron Optics 125 (2014), 5635–5640. doi: 10.1016/j.ijleo.2014.06.070.
[40] L. Xiao, F. Heide, W. Heidrich, B. Schölkopf and M. Hirsch, Discriminative transfer learning for general image restoration, IEEE Trans. Image Process. 27 (2018), 4091–4104. doi: 10.1109/TIP.2018.2831925.
[41] Y. Xie, Y. Qu, D. Tao, W. Wu, Q. Yuan and W. Zhang, Hyperspectral image restoration via iteratively regularized weighted Schatten p-norm minimization, IEEE Trans. Geosci. Remote 54 (2016), 4642–4659. doi: 10.1109/TGRS.2016.2547879.
[42] J. Xiong, Q. Liu, Y. Wang and X. Xu, A two-stage convolutional sparse prior model for image restoration, J. Vis. Commun. Image Rep. 48 (2017), 268–280. doi: 10.1016/j.jvcir.2017.07.002.
[43] S. Xu, X. Yang and S. Jiang, A fast nonlocally centralized sparse representation algorithm for image denoising, Signal Process. 131 (2017), 99–112. doi: 10.1016/j.sigpro.2016.08.006.
[44] J. Zhang, D. Zhao and W. Gao, Group-based sparse representation for image restoration, IEEE Trans. Image Process. 23 (2014), 3336–3351. doi: 10.1109/TIP.2014.2323127.
[45] L. Zhang, Y. Li, J. Wang and Y. Liu, Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method, Photon. Sensors 8 (2018), 22–28. doi: 10.1007/s13320-017-0445-x.
[46] Y. Zhang, L. Sun, C. Yan, X. Ji and Q. Dai, Adaptive residual networks for high-quality image restoration, IEEE Trans. Image Process. 27 (2018), 3150–3163. doi: 10.1109/TIP.2018.2812081.
[47] H. Zhao, O. Gallo, I. Frosio and J. Kautz, Loss functions for image restoration with neural networks, IEEE Trans. Comput. Imaging 3 (2017), 47–57. doi: 10.1109/TCI.2016.2644865.
[48] M. Zhao, X. Zhang, Z. Shi, P. Li and B. Li, Restoration of motion blurred images based on rich edge region extraction using a gray-level co-occurrence matrix, IEEE Access 6 (2018), 15532–15540. doi: 10.1109/ACCESS.2018.2815608.
[49] X. Zhi, S. Jiang, W. Zhang, D. Wang and Y. Li, Image degradation characteristics and restoration based on regularization for diffractive imaging, Infrared Phys. Techn. 86 (2017), 226–238. doi: 10.1016/j.infrared.2017.09.014.
©2020 Walter de Gruyter GmbH, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 Public License.