J. Imaging, Volume 3, Issue 3 (September 2017) – 17 articles

Cover Story: Previously we developed two methods to apply daytime colors to fused nighttime (e.g., intensified and LWIR) imagery: a ‘statistical’ mapping (equating the statistical properties of a target and reference image) and a ‘sample-based’ mapping (derived from matching samples). Both methods give fused nighttime imagery a natural daylight color appearance, enhance contrast and improve object visibility. Here, we propose new methods that combine the advantages of both previous methods, resulting in a smooth transformation (with good generalization properties) and a close match with daytime colors.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
4683 KiB  
Article
Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping
by Suxing Liu, Lucia M. Acosta-Gamboa, Xiuzhen Huang and Argelia Lorence
J. Imaging 2017, 3(3), 39; https://doi.org/10.3390/jimaging3030039 - 18 Sep 2017
Cited by 25 | Viewed by 10379
Abstract
Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that can be achieved using a single camera and a rotation stand. Our method is based on the structure from motion method, with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.
(This article belongs to the Special Issue 2D, 3D and 4D Imaging for Plant Phenotyping)
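The pipeline described above hinges on matching SIFT features between views before structure from motion recovers camera poses and 3D points. The sketch below illustrates only that correspondence step with OpenCV; it is not the authors' code, and the `match_views` helper, its ratio threshold and its image-path interface are illustrative assumptions.

```python
# Minimal sketch of the SIFT feature-matching step that a structure-from-motion
# pipeline builds on (assumes opencv-python >= 4.4, where SIFT is available).
import cv2

def match_views(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```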
Show Figures

Graphical abstract
Figure 1: Hardware setup: (a) Capturing camera and lens; (b) Rotation stand; and (c) Background plate.
Figure 2: 3D plant model reconstruction pipeline: (a) Sample of the captured image from different view angles of an Arabidopsis plant; (b) Segmented plant objects based on the PlantCV (Plant Computer Vision) platform; (c) Reconstructed 3D model using structure from motion method; and (d) Snapshots of reconstructed 3D model.
Figure 3: Pseudo code for the object segmentation based on the PlantCV (Plant Computer Vision) suite.
Figure 4: Camera geometrical model.
Figure 5: 3D scene capturing device: (a) Capturing camera and rotation stand; (b) Sample of the captured images with multiple view angles.
Figure 6: Initial experiment without plant objects segmentation: (a) Capturing camera and rotation stand; (b) 3D model reconstruction based on images without segmentation; and (c) 3D model result.
Figure 7: Plant object segmentation method based on PlantCV.
Figure 8: Sample of the plant object segmentation results: (a,b) Sample of the captured image from different view angles of Arabidopsis plant; (c,d) Segmented plant objects based on PlantCV.
Figure 9: High quality 3D plant model with detailed overlapping leaves.
9074 KiB  
Article
Histogram-Based Color Transfer for Image Stitching
by Qi-Chong Tian and Laurent D. Cohen
J. Imaging 2017, 3(3), 38; https://doi.org/10.3390/jimaging3030038 - 9 Sep 2017
Cited by 10 | Viewed by 7210
Abstract
Color inconsistency often exists between the images to be stitched and will reduce the visual quality of the stitching results. Color transfer plays an important role in image stitching. This kind of technique can produce corrected images which are color consistent. This paper presents a color transfer approach via histogram specification and global mapping. The proposed algorithm can make images share the same color style and obtain color consistency. There are four main steps in this algorithm. Firstly, overlapping regions between a reference image and a test image are obtained. Secondly, an exact histogram specification is conducted for the overlapping region in the test image using the histogram of the overlapping region in the reference image. Thirdly, a global mapping function is obtained by minimizing color differences with an iterative method. Lastly, the global mapping function is applied to the whole test image for producing a color-corrected image. Both the synthetic dataset and real dataset are tested. The experiments demonstrate that the proposed algorithm outperforms the compared methods both quantitatively and qualitatively.
(This article belongs to the Special Issue Color Image Processing)
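The second step of the algorithm, histogram specification of the test overlap against the reference overlap, can be illustrated with a per-channel CDF-matching sketch. This is a simplified stand-in, not the authors' exact method; the `specify_histogram` name and its array interface are our own.

```python
# Sketch of per-channel histogram specification: remap the test overlap so its
# cumulative histogram follows that of the reference overlap.
import numpy as np

def specify_histogram(test, reference):
    out = np.zeros(test.shape, dtype=np.float64)
    for c in range(test.shape[2]):
        t = test[..., c].ravel()
        r = reference[..., c].ravel()
        t_vals, t_idx, t_counts = np.unique(t, return_inverse=True, return_counts=True)
        r_vals, r_counts = np.unique(r, return_counts=True)
        t_cdf = np.cumsum(t_counts) / t.size          # quantile of each test value
        r_cdf = np.cumsum(r_counts) / r.size          # quantile of each reference value
        mapped = np.interp(t_cdf, r_cdf, r_vals)      # test quantile -> reference value
        out[..., c] = mapped[t_idx].reshape(test.shape[:2])
    return out
```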
Show Figures

Figure 1: An example of color transfer in image stitching. (a) reference image; (b) test image; (c) color transfer for the test image using the reference color style; (d) stitching without color transfer; (e) stitching with color transfer. Image Source: courtesy of the authors and databases referred on [2,14].
Figure 2: The framework of the proposed algorithm. Image Source: courtesy of the authors and databases referred on [29].
Figure 3: Comparison for the synthetic image pair. Image Source: courtesy of the authors and databases referred on [2,14].
Figure 4: Histogram comparisons for overlapping regions in Figure 3. The first column shows the histograms (three color channels, respectively) of overlapping regions in the reference image, the second column shows the corresponding histograms in the test image, the third column shows the corresponding histograms of overlapping regions after the proposed method, the fourth column shows Pitie’s result, the fifth column shows Fecker’s result, and the last column shows Nikolova’s result.
Figure 5: Comparison for the synthetic image pair. Image Source: courtesy of the authors and databases referred on [2,14].
Figure 6: Comparison for the synthetic image pair. Image Source: courtesy of the authors and databases referred on [2,14].
Figure 7: Comparison for the real image pair. Image Source: courtesy of the authors and databases referred on [29].
Figure 8: Histogram comparisons for overlapping regions in Figure 7. The first column shows the histograms of overlapping regions in the reference image, the second column shows the corresponding histograms in the test image, the third column shows the corresponding histograms of overlapping regions after the proposed method, the fourth column shows Pitie’s result, the fifth column shows Fecker’s result, and the last column shows Nikolova’s result.
Figure 9: Comparison for the real image pair. Image Source: courtesy of the authors and databases referred on [29].
Figure 10: Comparison for the real image pair. Image Source: courtesy of the authors and databases referred on [3].
534 KiB  
Article
Enhancing Face Identification Using Local Binary Patterns and K-Nearest Neighbors
by Idelette Laure Kambi Beli and Chunsheng Guo
J. Imaging 2017, 3(3), 37; https://doi.org/10.3390/jimaging3030037 - 5 Sep 2017
Cited by 56 | Viewed by 9121
Abstract
The human face plays an important role in our social interaction, conveying people’s identity. Using the human face as a key to security, biometric password technology has received significant attention in the past several years due to its potential for a wide variety of applications. Faces can have many variations in appearance (aging, facial expression, illumination, inaccurate alignment and pose) which continue to limit the ability to recognize identity. The purpose of our research work is to provide an approach that contributes to resolving face identification issues with large variations of parameters such as pose, illumination, and expression. For provable outcomes, we combined two algorithms: (a) the robust local binary pattern (LBP), used for facial feature extraction; (b) k-nearest neighbors (K-NN) for image classification. Our experiment was conducted on the CMU PIE (Carnegie Mellon University Pose, Illumination, and Expression) face database and the LFW (Labeled Faces in the Wild) dataset. The proposed identification system shows high performance and also provides successful face similarity measures focused on feature extraction.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
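As a rough illustration of the LBP-plus-K-NN pipeline the abstract describes, the sketch below computes uniform-LBP histograms with scikit-image and classifies them with scikit-learn. The helper names, the parameter choices (P = 8, R = 1, k = 1) and the single-histogram-per-face simplification are assumptions, not the authors' setup.

```python
# Sketch of an LBP-histogram + k-NN face identification pipeline (library-based
# stand-in); assumes grayscale face crops of equal size as NumPy arrays.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_knn(train_images, train_labels, k=1):
    features = np.array([lbp_histogram(img) for img in train_images])
    return KNeighborsClassifier(n_neighbors=k).fit(features, train_labels)

# Usage: clf = train_knn(train_faces, labels); clf.predict([lbp_histogram(probe)])
```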
Show Figures

Figure 1: (a) The original local binary pattern (LBP) operator; (b) Circular neighbor-set for three different values of P, R.
Figure 2: Different texture primitives detected by LBP_{P,R}^{u2}.
Figure 3: Diagram of the process. K-NN: k-nearest neighbors.
Figure 4: LBP histograms comparison.
Figure 5: Correct identification. (a,b): CMU PIE; (c,d): LFW.
Figure 6: Incorrect vs. correct matching. (a,b): CMU PIE; (c,d): LFW.
12426 KiB  
Article
Improved Color Mapping Methods for Multiband Nighttime Image Fusion
by Maarten A. Hogervorst and Alexander Toet
J. Imaging 2017, 3(3), 36; https://doi.org/10.3390/jimaging3030036 - 28 Aug 2017
Cited by 16 | Viewed by 7689
Abstract
Previously, we presented two color mapping methods for the application of daytime colors to fused nighttime (e.g., intensified and longwave infrared or thermal (LWIR)) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. As a result, it has been shown that these colorizing methods lead to an increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image. In the current study, we investigated new color fusion schemes that combine the advantages of both methods (i.e., the efficiency and color constancy of the sample-based method with the ability of the statistical method to use the image of a different but somewhat similar scene as a reference image), using the correspondence between multiband sensor values and daytime colors (sample-based method) in a smooth transformation (statistical method). We designed and evaluated three new fusion schemes that focus on (i) a closer match with the daytime luminances; (ii) an improved saliency of hot targets; and (iii) an improved discriminability of materials. We performed both qualitative and quantitative analyses to assess the weak and strong points of all methods.
(This article belongs to the Special Issue Color Image Processing)
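The 'statistical' mapping referred to above rests on matching first-order statistics (channel means and standard deviations) between a multiband sensor image and a daytime reference. A minimal sketch of that idea, applied naively in RGB rather than the color space the papers actually use, might look as follows; it is illustrative only, not the authors' implementation.

```python
# Sketch of a first-order statistical color mapping: shift and scale each channel
# of the fused nighttime image toward the mean/std of a daytime reference.
import numpy as np

def statistical_color_transfer(source, reference, eps=1e-6):
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) * (r_std / (s_std + eps)) + r_mean
    return np.clip(out, 0, 255)  # assumes 8-bit channel range
```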
Show Figures

Figure 1: Example from the total training set of six images. (a) Visual sensor band; (b) near infra-red (NIR) band; (c) longwave infrared or thermal (LWIR) band; and (d) RGB-representation of the multiband sensor image (in which the ‘hot = dark’ mode is used).
Figure 2: (a) Daytime reference image; (b) Intermediate result (at step 5); and (c) final result (after step 6) of the color-the-night (CTN) fusion method, in which the luminance is determined by the input sensor values (rather than by the corresponding daytime reference).
Figure 3: Examples of (a) Toet [41]; and (b) Pitié et al. [76].
Figure 4: Processing scheme of the CTN sample-based color fusion method.
Figure 5: Processing scheme of the luminance-from-fit (LFF) and R3DF sample-based color fusion methods.
Figure 6: (a) Standard training set of daytime reference images; (b) Result of the CTN algorithm using the images in (a) for reference; (c) Result from the LFF method; and (d) from the SHT method (the training and test sets were the same in these cases).
Figure 7: Results from (a) the CTN scheme (trained on the standard reference image set from Figure 6a); (b) a two-band color transformation in which the colors depend on the visible and NIR sensor values (using the color table depicted in the inset); and (c) the salient-hot-target (SHT) method in which hot elements are assigned their corresponding color from (b).
Figure 8: Processing scheme of the SHT sample-based color fusion method.
Figure 9: Results of (a) an affine fit-transform; (b) a 2nd order polynomial fit; and (c) a R3DF transformation fit.
Figure 10: Results from (a) the SCF method and (b) the rigid-3D-fit (R3DF) method. The training and test sets were the same in these cases.
Figure 11: Processing scheme of the CTN2 sample-based color fusion method.
Figure 12: Results from color transformations derived from the standard training set (see Figure 6 and Figure 7) and applied to a different scene in the same environment: (a) CTN; (b) LFF method; (c) SHT method; (d) daytime reference (not used for training); (e) CTN2; (f) statistical color fusion (SCF) method; (g) R3DF.
Figure 13: Results from color transformations derived from the standard training set applied to a different scene with different sensor settings, registered in the same environment: (a) CTN method; (b) LFF method; (c) SHT method; (d) CTN2 method; (e) SCF method; (f) R3DF method.
Figure 14: Results from color transformations derived from the standard training set applied to a different scene with different sensor settings in the same environment: (a) CTN method; (b) LFF method; (c) SHT method; (d) CTN2 method; (e) SCF method; (f) R3DF method.
Figure 15: Results from color transformations derived from the standard training set applied to a different environment with different sensor settings: (a) CTN method; (b) LFF method; (c) SHT method; (d) daytime reference image; (e) CTN2 method; (f) SCF method; (g) R3DF method.
Figure 16: Results from color transformations derived from the scene shown on the right (d) with different sensor settings: (a) CTN method; (b) LFF method; (c) SHT method; (d) scene used for training the color transformations; (e) CTN2 method; (f) SCF method; (g) R3DF method.
Figure 17: Mean ranking scores for (a) Naturalness; (b) Discriminability; and (c) Saliency of hot targets, for each of the six color fusion methods (CTN, SCF, CTN2, LFF, SHT, R3DF). Filled (open) bars represent the scores when the methods were applied to images that were (not) included in their training set. Error bars represent the standard error of the mean.
50319 KiB  
Article
Color Consistency and Local Contrast Enhancement for a Mobile Image-Based Change Detection System
by Marco Tektonidis and David Monnin
J. Imaging 2017, 3(3), 35; https://doi.org/10.3390/jimaging3030035 - 23 Aug 2017
Cited by 13 | Viewed by 5954
Abstract
Mobile change detection systems allow for acquiring image sequences on a route of interest at different time points and display changes on a monitor. For the display of color images, a processing approach is required to enhance details, to reduce lightness/color inconsistencies along each image sequence as well as between corresponding image sequences due to the different illumination conditions, and to determine colors with natural appearance. We have developed a real-time local/global color processing approach for local contrast enhancement and lightness/color consistency, which processes images of the different sequences independently. Our approach combines the center/surround Retinex model and the Gray World hypothesis using a nonlinear color processing function. We propose an extended gain/offset scheme for Retinex to reduce the halo effect on shadow boundaries, and we employ stacked integral images (SII) for efficient Gaussian convolution. By applying the gain/offset function before the color processing function, we avoid color inversion issues, compared to the original scheme. Our combined Retinex/Gray World approach has been successfully applied to pairs of image sequences acquired on outdoor routes for change detection, and an experimental comparison with previous Retinex-based approaches has been carried out.
(This article belongs to the Special Issue Color Image Processing)
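The core of the approach is a center/surround Retinex step followed by a gain/offset mapping. A single-scale, single-gain sketch is shown below, assuming SciPy for the Gaussian surround; the paper's two-gain extension, stacked integral images and Gray World combination are omitted, and the parameter values are illustrative.

```python
# Sketch of a center/surround (single-scale) Retinex step with a gain/offset
# mapping applied to one channel of an 8-bit image.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0, gain=120.0, offset=127.5, eps=1.0):
    ch = channel.astype(np.float64) + eps
    surround = gaussian_filter(ch, sigma=sigma)      # Gaussian surround estimate
    retinex = np.log(ch) - np.log(surround + eps)    # center/surround log ratio
    return np.clip(gain * retinex + offset, 0, 255)  # gain/offset back to display range
```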
Show Figures

Figure 1: Two images of the same scene acquired at different time points under different illumination conditions.
Figure 2: Intensity transfer function for the Retinex model using a gain/offset function with A = 127.5 and two contrast gains (C1 and C2). For C1 = C2, the transfer function corresponds to the gain/offset function with a single gain value.
Figure 3: Processed images using (a) the original Retinex [2] with a single gain and (b) Retinex [2] extended with the proposed two gains. Enlarged sections for the marked regions are shown on the bottom. (a) C = 120; (b) C1 = 80, C2 = 120.
Figure 4: Example of preventing color inversion issues for (a) an image containing saturated colors, (b) using our approach, compared to (c) using the original scheme [7] with the gain/offset function applied after the color processing function. (Image credit: J.L. Lisani, CC BY.)
Figure 5: Vehicle-mounted color camera system.
Figure 6: (a) Original and processed images applying (b–d) previous Retinex-based approaches and (e) our approach. Enlarged sections for the marked regions are shown on the right. (a) unprocessed image; (b) Retinex [2]; (c) Gray World [15] extension; (d) hue-preserving Retinex [24]; (e) new combined Retinex/Gray World.
Figure 7: (a) Original and processed images applying (b) the Gray World extension and (c) our approach for a scene acquired at two different time points. The second image for each example has been registered w.r.t. the first image (top) and the two images have been combined using a checkerboard (bottom). (a) unprocessed images; (b) Gray World extension; (c) new combined Retinex/Gray World.
Figure 8: Mean RGB angular error over time for corresponding images of two image sequences for our approach and three previous Retinex-based approaches.
Figure 9: (a) Original and (b) processed images applying our approach for a scene with a change. The second image has been registered w.r.t. the first image. On the right, the magnitudes of the CIELAB color differences between the two images are visualized with the contour of the segmented change. (a) original images; (b) new combined Retinex/Gray World.
1966 KiB  
Article
Image Fragile Watermarking through Quaternion Linear Transform in Secret Space
by Marco Botta, Davide Cavagnino and Victor Pomponiu
J. Imaging 2017, 3(3), 34; https://doi.org/10.3390/jimaging3030034 - 11 Aug 2017
Cited by 3 | Viewed by 5879
Abstract
In this paper, we apply the quaternion framework for color images to a fragile watermarking algorithm with the objective of multimedia integrity protection (Quaternion Karhunen-Loève Transform Fragile Watermarking (QKLT-FW)). The use of quaternions to represent pixels allows the color information to be considered in a holistic and integrated fashion. We stress that, by taking advantage of the host image quaternion representation, we extract complex features that are able to improve the embedding and verification of fragile watermarks. The algorithm, based on the Quaternion Karhunen-Loève Transform (QKLT), embeds a binary watermark into some QKLT coefficients representing a host image in a secret frequency space: the QKLT basis images are computed from a secret color image used as a symmetric key. A computational intelligence technique (i.e., a genetic algorithm) is employed to modify the host image pixels in such a way that the watermark is contained in the protected image. The sensitivity to image modifications is then tested, showing very good performance.
(This article belongs to the Special Issue Color Image Processing)
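The quaternion framework mentioned above represents each RGB pixel as a pure quaternion, so the whole color triplet is handled as a single algebraic object. A minimal sketch of that representation step only (not the QKLT or the embedding itself) could be:

```python
# Sketch of the quaternion pixel representation: each RGB pixel becomes a pure
# quaternion 0 + R*i + G*j + B*k, so an HxWx3 image maps to an HxWx4 array.
import numpy as np

def image_to_quaternions(rgb):
    h, w, _ = rgb.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = rgb  # real part stays zero for a pure quaternion
    return q
```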
Show Figures

Figure 1: Quaternion representation of a sub-image and Quaternion Karhunen-Loève Transform (QKLT) coefficients derivation.
Figure 2: Watermarked color image, publicly available from the McGill Calibrated Color Image Database [30] (http://tabby.vision.mcgill.ca/html/welcome.html), PSNR = 67.02 dB (with zoom on detail).
Figure 3: Tampered image (with zoom on tampered detail).
Figure 4: Verified image, with (nineteen) tampered blocks evidenced as crossed areas.
2486 KiB  
Article
Improving CNN-Based Texture Classification by Color Balancing
by Simone Bianco, Claudio Cusano, Paolo Napoletano and Raimondo Schettini
J. Imaging 2017, 3(3), 33; https://doi.org/10.3390/jimaging3030033 - 27 Jul 2017
Cited by 30 | Viewed by 8941
Abstract
Texture classification has a long history in computer vision. In the last decade, the strong affirmation of deep learning techniques in general, and of convolutional neural networks (CNN) in particular, has allowed for a drastic improvement in the accuracy of texture recognition systems. However, their performance may be dampened by the fact that texture images are often characterized by color distributions that are unusual with respect to those seen by the networks during their training. In this paper we will show how suitable color balancing models allow for a significant improvement in the accuracy in recognizing textures for many CNN architectures. The feasibility of our approach is demonstrated by the experimental results obtained on the RawFooT dataset, which includes texture images acquired under several different lighting conditions.
(This article belongs to the Special Issue Color Image Processing)
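The color-balancing models evaluated in the paper are calibrated from the acquisition setup. As a generic stand-in, the sketch below applies a diagonal (von Kries-style) correction with gains taken from the Gray World assumption before a patch would be passed to a CNN; it is illustrative only and is not one of the paper's models.

```python
# Sketch of a diagonal color balance: scale each channel so the channel means
# become equal, then clip back to the 8-bit range.
import numpy as np

def gray_world_balance(rgb):
    img = rgb.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```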
Show Figures

Figure 1: Example of correctly predicted image and mis-predicted image after a color cast is applied.
Figure 2: A sample for each of the 68 classes of textures composing the RawFooT database.
Figure 3: Scheme of the acquisition setup used to take the images in the RawFooT database.
Figure 4: Example of the 46 acquisitions included in the RawFooT database for each class (here the images show the acquisitions of the “rice” class).
Figure 5: The Macbeth color target, acquired under the 18 lighting conditions considered in this work.
Figure 6: Example of the effect of the different color-balancing models on the “rice” texture class: device-raw (a); light-raw (b); dcraw-srgb (c); linear-srgb (d); and rooted-srgb (e).
Figure 7: Classification accuracy obtained by each visual descriptor combined with each model.
Figure 8: Accuracy behavior with respect to the difference (ΔT) of daylight temperature between the training and the test sets: (a) VGG-M-128; (b) AlexNet; (c) VGG-VD-16; (d) ResNet-50.
20023 KiB  
Article
Improved Parameter Estimation of the Line-Based Transformation Model for Remote Sensing Image Registration
by Ahmed Shaker, Said M. Easa and Wai Yeung Yan
J. Imaging 2017, 3(3), 32; https://doi.org/10.3390/jimaging3030032 - 22 Jul 2017
Cited by 2 | Viewed by 5379
Abstract
The line-based transformation model (LBTM), built upon the use of affine transformation, was previously proposed for image registration and image rectification. The original LBTM first utilizes the control line features to estimate six rotation and scale parameters and subsequently uses the control point(s) to retrieve the remaining two translation parameters. Such a mechanism may accumulate the error of the six rotation and scale parameters toward the two translation parameters. In this study, we propose the incorporation of a direct method to estimate all eight transformation parameters of the LBTM simultaneously using least-squares adjustment. The improved LBTM was compared with the original LBTM using one synthetic dataset and three experimental datasets for satellite image 2D registration and 3D rectification. The experimental results demonstrated that the improved LBTM converges to a steady solution with two to three ground control points (GCPs) and five ground control lines (GCLs), whereas the original LBTM requires at least 10 GCLs to yield a stable solution.
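Estimating all eight parameters simultaneously by least-squares adjustment can be sketched as an ordinary linear least-squares fit of a 3D-to-2D affine model. The snippet below is a simplified illustration that uses only point correspondences, whereas the improved LBTM also builds observation equations from control lines; the function name and parameter grouping are our own.

```python
# Sketch of fitting the eight 3D-to-2D affine parameters with linear least squares:
# x' = a1*X + a2*Y + a3*Z + a4,  y' = a5*X + a6*Y + a7*Z + a8.
import numpy as np

def fit_affine_3d_to_2d(object_xyz, image_xy):
    # Design matrix rows: [X, Y, Z, 1]; one independent solve per image coordinate.
    A = np.hstack([object_xyz, np.ones((object_xyz.shape[0], 1))])
    params_x, *_ = np.linalg.lstsq(A, image_xy[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(A, image_xy[:, 1], rcond=None)
    return params_x, params_y  # (a1..a4) and (a5..a8)
```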
Show Figures

Graphical abstract
Figure 1: Overall workflow of the line-based transformation model (LBTM).
Figure 2: The synthetic CPs, GCLs and GCPs used in the first experiment.
Figure 3: The GCLs and GCPs used in the second experiment: (a) Year 2006 IKONOS image; (b) Year 2012 IKONOS image.
Figure 4: The GCLs and GCPs used in the third experiment: (a) WorldView-2 satellite image; (b) Landsat-8 satellite image.
Figure 5: The GCLs and GCPs used in the fourth experiment.
Figure 6: Analysis of the root-mean-squared (RMS) error with different levels of random noise in (a) GCLs, (b) GCPs and (c) both GCLs and GCPs.
Figure 7: Analysis of the RMS error with different combinations of GCLs and GCPs being used in the original LBTM (blue) and the improved LBTM (red) in Experiment 2. Figure 7j shows the use of GCPs only for image registration as a comparison. (a) 1 GCP; (b) 2 GCPs; (c) 3 GCPs; (d) 4 GCPs; (e) 5 GCPs; (f) 10 GCPs; (g) 15 GCPs; (h) 20 GCPs; (i) 30 GCPs; (j) Only GCPs.
Figure 8: Analysis of the RMS error with different combinations of GCLs and GCPs being used in the original LBTM (blue) and the improved LBTM (red) in Experiment 3. Figure 8j shows the use of GCPs only for image registration as a comparison. (a) 1 GCP; (b) 2 GCPs; (c) 3 GCPs; (d) 4 GCPs; (e) 5 GCPs; (f) 10 GCPs; (g) 15 GCPs; (h) 20 GCPs; (i) 30 GCPs; (j) Only GCPs.
Figure 9: Analysis of the RMS error with different combinations of GCLs and GCPs being used in the original LBTM (blue) and the improved LBTM (red) in Experiment 4. (a) 1 GCP; (b) 2 GCPs; (c) 3 GCPs; (d) 4 GCPs; (e) 5 GCPs.
Figure 10: Results of image registration performed in the two experiments. (a) Experiment 1: IKONOS to IKONOS; (b) Experiment 2: WorldView-2 to Landsat 8.
1774 KiB  
Article
Robust Parameter Design of Derivative Optimization Methods for Image Acquisition Using a Color Mixer
by HyungTae Kim, KyeongYong Cho, Jongseok Kim, KyungChan Jin and SeungTaek Kim
J. Imaging 2017, 3(3), 31; https://doi.org/10.3390/jimaging3030031 - 21 Jul 2017
Cited by 5 | Viewed by 4674
Abstract
A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality of industrial machine vision by adjusting multiple-color light emitting diodes (LEDs)—usually called color mixers. Searching for the driving condition for achieving maximum sharpness influences image quality. In most inspection systems, a single-color light source is used, and an equal step search (ESS) is employed to determine the maximum image quality. However, in the case of multiple color LEDs, the number of iterations becomes large, which is time-consuming. Hence, the steepest descent (STD) and conjugate gradient methods (CJG) were applied to reduce the searching time for achieving maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations. Hence, the Taguchi method is actually the only method that can determine the parameters of auto-lighting algorithms. The algorithm parameters were determined using orthogonal arrays, and the candidate parameters were selected by increasing the sharpness and decreasing the iterations of the algorithm, which were dependent on the searching time. The contribution of parameters was investigated using ANOVA. After conducting retests using the selected parameters, the image quality was almost the same as that in the best-case parameters with a smaller number of iterations.
(This article belongs to the Special Issue Color Image Processing)
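The steepest-descent auto-lighting idea can be sketched as gradient ascent on sharpness over the LED intensity vector, with the gradient estimated by finite differences. In the sketch below, `measure_sharpness` is a hypothetical callback that sets the color mixer, grabs a frame and returns a sharpness score; the step size, probe delta and iteration limit stand in for the parameters the paper actually tunes with the Taguchi method.

```python
# Sketch of a steepest-ascent search over LED intensity levels (0..255 assumed).
import numpy as np

def auto_light(measure_sharpness, levels, step=5.0, delta=1.0, iters=50):
    x = np.asarray(levels, dtype=np.float64)
    for _ in range(iters):
        base = measure_sharpness(x)
        grad = np.zeros_like(x)
        for i in range(x.size):
            probe = x.copy()
            probe[i] += delta                          # finite-difference probe
            grad[i] = (measure_sharpness(probe) - base) / delta
        norm = np.linalg.norm(grad)
        if norm < 1e-3:                                # sharpness plateau reached
            break
        x = np.clip(x + step * grad / norm, 0, 255)    # step toward higher sharpness
    return x
```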
Show Figures

Figure 1: System diagram for color mixing and automatic lighting.
Figure 2: Target patterns acquired by maximum sharpness: (a) Pattern A; (b) Pattern B.
Figure 3: Signal-to-noise (SN) ratios of control factors for Pattern A in the case of the steepest descent method: (a) Sharpness; (b) Iterations.
Figure 4: SN ratios of control factors for Pattern B in the case of the steepest descent method: (a) Sharpness; (b) Iterations.
Figure 5: SN ratios of control factors for Pattern A in the case of the conjugate gradient method: (a) Sharpness; (b) Iterations.
Figure 6: SN ratios of control factors for Pattern B in the case of the conjugate gradient method: (a) Sharpness; (b) Iterations.
Figure 7: Search path formed by the steepest descent method using Patterns (a) A and (b) B.
Figure 8: Search path formed by the conjugate gradient method using Patterns (a) A and (b) B.
4246 KiB  
Article
Using SEBAL to Investigate How Variations in Climate Impact on Crop Evapotranspiration
by Giorgos Papadavid, Damianos Neocleous, Giorgos Kountios, Marinos Markou, Anastasios Michailidis, Athanasios Ragkos and Diofantos Hadjimitsis
J. Imaging 2017, 3(3), 30; https://doi.org/10.3390/jimaging3030030 - 20 Jul 2017
Cited by 7 | Viewed by 6357
Abstract
Water allocation to crops, and especially to the most water-intensive ones, has always been of great importance in agricultural processes. Deficit or excessive irrigation could create either crop health-related problems or water over-consumption, respectively. The latter could lead to groundwater depletion and deterioration of its quality through deep percolation of agrichemical residuals. In this context, and under the current conditions where Cyprus is facing the effects of possible climate changes, this study seeks to estimate the crop water requirements of the past (1995–2004) and the corresponding ones of the present (2005–2015) in order to test whether there were any significant changes in the crop water requirements of the most water-intensive trees in Cyprus. The Mediterranean region has been identified as the region that will suffer the most from variations of climate. Thus the paper refers to the effects of these variations on crop evapotranspiration (ETc) using remotely-sensed data from Landsat TM/ETM+/OLI employing a sound methodology used worldwide, the Surface Energy Balance Algorithm for Land (SEBAL). Though the general expectation is that changes in climate will consequently affect ETc, our results indicate that there is no significant effect of climate variation on crop evapotranspiration, despite the fact that some climatic factors have changed. Applying Student’s t-test, the mean values for the most water-intensive trees in Cyprus of the 1994–2004 decade have shown no statistical difference from the mean values of 2005–2015 for all the cases, concluding that the climate change taking place in the past decades in Cyprus has either not affected crop evapotranspiration or the crops have managed to adapt to the new environmental conditions through time.
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
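The statistical comparison described above is a two-sample Student's t-test on the ETc values of the two periods. A minimal sketch with SciPy, using made-up numbers purely to show the call (not the study's data):

```python
# Sketch of a two-sample t-test comparing ETc between two decades.
from scipy import stats

etc_1995_2004 = [5.1, 4.8, 5.3, 5.0, 4.9]   # hypothetical mm/day values
etc_2005_2015 = [5.2, 4.7, 5.4, 5.1, 5.0]   # hypothetical mm/day values

t_stat, p_value = stats.ttest_ind(etc_1995_2004, etc_2005_2015)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant change
```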
Show Figures

Figure 1: Mean annual precipitation (mm) (source: AGWATER project).
Figure 2: Mean annual temperature (°C) (source: AGWATER project).
Figure 3: Areas of interest and national meteorological stations used (source: Meteorological Service of Cyprus).
Figure 4: Soil slope line for WDVI of the area of interest.
Figure 5: Generation of LAI (B) (in pseudo color) using Landsat images (A) (Landsat 7 ETM+ images).
Figure 6: Results of the SEBAL application for the five crops.
29502 KiB  
Article
Pattern Reconstructability in Fully Parallel Thinning
by Yung-Sheng Chen and Ming-Te Chao
J. Imaging 2017, 3(3), 29; https://doi.org/10.3390/jimaging3030029 - 19 Jul 2017
Cited by 4 | Viewed by 6632
Abstract
It is a challenging topic to perform pattern reconstruction from a unit-width skeleton, which is obtained by a parallel thinning algorithm. The biased skeleton yielded by a fully-parallel thinning algorithm, which usually results from the so-called hidden deletable points, makes pattern reconstruction difficult. In order to make a fully-parallel thinning algorithm pattern reconstructable, a newly-defined reconstructable skeletal pixel (RSP) including a thinning flag, iteration count, as well as a reconstructable structure is proposed and applied in the thinning iteration to obtain a skeleton table representing the resultant thin line. Based on the iteration count and reconstructable structure associated with each skeletal pixel in the skeleton table, the pattern can be reconstructed by means of dilating and uniting operations. Embedding a conventional fully-parallel thinning algorithm into the proposed approach, the pattern may be over-reconstructed due to the influence of a biased skeleton. A simple process of removing hidden deletable points (RHDP) in the thinning iteration is thus presented to reduce the effect of the biased skeleton. Three well-known fully-parallel thinning algorithms are used for experiments. The performances investigated by the measurement of reconstructability (MR), the number of iterations (NI), as well as the measurement of skeleton deviation (MSD) confirm the feasibility of the proposed pattern reconstruction approach with the assistance of the RHDP process.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
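The dilate-and-unite reconstruction can be sketched as follows: each skeletal pixel is expanded by a neighborhood whose size grows with the iteration count at which it was reached, and the union of these neighborhoods approximates the original pattern. The snippet below is a simplified stand-in (square neighborhoods, no RSP structure), not the paper's algorithm.

```python
# Sketch of reconstruction by dilating skeletal pixels according to their
# iteration count and taking the union of the dilated neighborhoods.
import numpy as np

def reconstruct(skeleton_iters):
    # skeleton_iters[y, x] = iteration count for skeletal pixels, 0 elsewhere.
    h, w = skeleton_iters.shape
    out = np.zeros((h, w), dtype=bool)
    ys, xs = np.nonzero(skeleton_iters)
    for y, x, k in zip(ys, xs, skeleton_iters[ys, xs]):
        r = int(k)                                   # dilation radius from iteration count
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        out[y0:y1, x0:x1] = True                     # square structuring element for simplicity
    return out
```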
Show Figures

Figure 1

Figure 1
<p>Illustration of Jang and Chin’s MST-based reconstructable parallel thinning. (<b>a</b>) Feature pixels obtained from the MST, (<b>b</b>) the final skeleton containing reduced feature pixels, feature pixels and the unit-width pixels to ensure pattern reconstruction, as well as (<b>c</b>) the reconstructed pattern.</p>
Full article ">Figure 2
<p>Flowchart of the proposed approach for digital pattern (<b>a</b>) thinning and (<b>b</b>) reconstruction.</p>
Full article ">Figure 3
<p>Definition of eight neighbors for a pixel <span class="html-italic">P</span>.</p>
Full article ">Figure 4
<p>Definition of 16-bit data format for a reconstructable skeletal pixel (RSP)-pixel.</p>
Full article ">Figure 5
<p>(<b>a</b>) Input digital pattern. The thinning results at the 1st, 2nd and 3rd iteration are shown in (<b>b</b>–<b>d</b>), respectively. The numbers in (<b>e</b>) illustrate the points removed at the related thinning iteration.</p>
Full article ">Figure 6
<p>Illustrations of a 1-pixel <math display="inline"> <semantics> <mrow> <mi>P</mi> <mo>∈</mo> <msub> <mi>T</mi> <mn>3</mn> </msub> </mrow> </semantics> </math> in (<b>b</b>) and its eight neighbors <math display="inline"> <semantics> <mrow> <mo>∈</mo> <msub> <mi>T</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> confined by a dotted square. The current iteration count for thinning is 3. According to the Equation (<a href="#FD4-jimaging-03-00029" class="html-disp-formula">4</a>), (<b>c</b>) presents the <span class="html-italic">C</span>-value calculated for the three 1-neighbor <math display="inline"> <semantics> <mrow> <mover accent="true"> <mi>N</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mover accent="true"> <mi>N</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> </mrow> </semantics> </math>, and <math display="inline"> <semantics> <mrow> <mover accent="true"> <mi>N</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> <mo>∈</mo> <msub> <mi>T</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> in (<b>a</b>) and that calculated for the 1-pixel <math display="inline"> <semantics> <mrow> <mi>P</mi> <mo>∈</mo> <msub> <mi>T</mi> <mn>3</mn> </msub> </mrow> </semantics> </math> in (<b>b</b>).</p>
Full article ">Figure 7
<p>Illustrations of RSP updating from <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math> to <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>. (<b>a</b>) Null <b>RSP</b> matrix initially. That is, the RSP information is set to be (0 0 0)<math display="inline"> <semantics> <msub> <mrow/> <mn>10</mn> </msub> </semantics> </math> in decimal or (0 000000 00000000)<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math> in binary. (<b>b</b>) Thinning result <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math>, where the original pattern is regarded as <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics> </math> and confined by a dotted line for reference. (<b>b</b>–<b>h</b>) show the updated RSP information for each 1-pixel <span class="html-italic">P</span> satisfying <math display="inline"> <semantics> <mrow> <mi>C</mi> <mo>(</mo> <mi>P</mi> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics> </math> in <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>, where the reconstructable information <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mi>P</mi> <mo>)</mo> </mrow> </semantics> </math> is numbered. (<b>i</b>) The updated RSP pixel is denoted with a gray square.</p>
Full article ">Figure 7 Cont.
<p>Illustrations of RSP updating from <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math> to <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>. (<b>a</b>) Null <b>RSP</b> matrix initially. That is, the RSP information is set to be (0 0 0)<math display="inline"> <semantics> <msub> <mrow/> <mn>10</mn> </msub> </semantics> </math> in decimal or (0 000000 00000000)<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math> in binary. (<b>b</b>) Thinning result <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math>, where the original pattern is regarded as <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics> </math> and confined by a dotted line for reference. (<b>b</b>–<b>h</b>) show the updated RSP information for each 1-pixel <span class="html-italic">P</span> satisfying <math display="inline"> <semantics> <mrow> <mi>C</mi> <mo>(</mo> <mi>P</mi> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics> </math> in <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>, where the reconstructable information <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mi>P</mi> <mo>)</mo> </mrow> </semantics> </math> is numbered. (<b>i</b>) The updated RSP pixel is denoted with a gray square.</p>
Full article ">Figure 8
<p>Illustrations of <b>RSP</b> updating from <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math> to <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>3</mn> </msub> </semantics> </math>. (<b>a</b>) The <b>RSP</b> matrix at <math display="inline"> <semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics> </math>, where six skeletal pixels have been identified. (<b>b</b>) Thinning result <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>. (<b>c</b>,<b>d</b>) show respectively the updated RSP information for the 1-pixel (2, 3) and (3, 2) having zero <span class="html-italic">C</span> value in <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>3</mn> </msub> </semantics> </math>. (<b>e</b>) The updated RSP pixel is denoted with a gray square. The <b>RSP</b> matrix can be decomposed into <b>F</b>, <b>I</b> and <b>R</b> matrices, as shown in (<b>f</b>,<b>g</b>,<b>h</b>), respectively.</p>
Full article ">Figure 8 Cont.
<p>Illustrations of <b>RSP</b> updating from <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math> to <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>3</mn> </msub> </semantics> </math>. (<b>a</b>) The <b>RSP</b> matrix at <math display="inline"> <semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics> </math>, where six skeletal pixels have been identified. (<b>b</b>) Thinning result <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics> </math>. (<b>c</b>,<b>d</b>) show respectively the updated RSP information for the 1-pixel (2, 3) and (3, 2) having zero <span class="html-italic">C</span> value in <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>3</mn> </msub> </semantics> </math>. (<b>e</b>) The updated RSP pixel is denoted with a gray square. The <b>RSP</b> matrix can be decomposed into <b>F</b>, <b>I</b> and <b>R</b> matrices, as shown in (<b>f</b>,<b>g</b>,<b>h</b>), respectively.</p>
Full article ">Figure 9
<p>Illustrations of pattern reconstruction from the skeleton table. Reconstructed partial pattern from (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>4</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>5</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>6</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>e</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>f</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>2</mn> <mo>,</mo> <mn>4</mn> <mo>)</mo> </mrow> </semantics> </math>, (<b>g</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>2</mn> <mo>,</mo> <mn>5</mn> <mo>)</mo> </mrow> </semantics> </math> and (<b>h</b>) <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <mi>P</mi> <mo>(</mo> <mn>2</mn> <mo>,</mo> <mn>6</mn> <mo>)</mo> </mrow> </semantics> </math>. Here, the reconstructable information 1-neighbor <math display="inline"> <semantics> <mrow> <mi>r</mi> <mo>∈</mo> <mi>R</mi> <mo>(</mo> <mi>P</mi> <mo>)</mo> </mrow> </semantics> </math> is displayed by a gray square. (<b>i</b>) shows the final reconstructed pattern <span class="html-italic">A</span> with the union of these partial patterns (<math display="inline"> <semantics> <mover accent="true"> <mi>A</mi> <mo>˙</mo> </mover> </semantics> </math>)s.</p>
Full article ">Figure 10
<p>(<b>a</b>–<b>d</b>) The original thinning results at the 1st, 2nd, 3rd and 4th iteration by means of the fully-parallel thinning without the assistance of RHDP process. (<b>e</b>) Gray squares show the reconstructed pattern with our approach. It is obviously over-reconstructed due to the HDP effect.</p>
Full article ">Figure 11
<p>Illustrations of transforming <math display="inline"> <semantics> <msubsup> <mi>T</mi> <mn>1</mn> <mo>*</mo> </msubsup> </semantics> </math> into <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math>. With <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics> </math> in <a href="#jimaging-03-00029-f005" class="html-fig">Figure 5</a>a, (<b>a</b>) shows the next thinning result <math display="inline"> <semantics> <msubsup> <mi>T</mi> <mn>1</mn> <mo>*</mo> </msubsup> </semantics> </math> purely obtained by Chen and Hsu’s algorithm [<a href="#B9-jimaging-03-00029" class="html-bibr">9</a>]; (<b>b</b>,<b>c</b>) shows the corresponding <math display="inline"> <semantics> <msub> <mi>δ</mi> <mn>1</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>β</mi> <mn>1</mn> </msub> </semantics> </math>. To clarify, let the <math display="inline"> <semantics> <mi>δ</mi> </semantics> </math>- and <math display="inline"> <semantics> <mi>τ</mi> </semantics> </math>-points be marked on the <math display="inline"> <semantics> <msub> <mi>β</mi> <mn>1</mn> </msub> </semantics> </math>-image, as shown in (<b>d</b>). According to the Step 4 of RHDP, the three <math display="inline"> <semantics> <mi>τ</mi> </semantics> </math> points, (1, 1), (5, 4) and (4, 5), are HDPs. They can be further removed from <math display="inline"> <semantics> <msubsup> <mi>T</mi> <mn>1</mn> <mo>*</mo> </msubsup> </semantics> </math>, and the new thinning result <math display="inline"> <semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics> </math> is thus obtained as shown in (<b>e</b>).</p>
Full article ">Figure 12
<p>The thinning result obtained (<b>a</b>) without and (<b>b</b>) with the assistance of RHDP. (<b>c</b>,<b>d</b>) show the corresponding reconstruction result, where the original pattern pixels and the reconstructed pixels are marked by “black region squares” and “gray frame squares” respectively.</p>
Full article ">Figure 13
<p>(<b>a</b>) Seven unit-width letters used as ground truths for MSD measurements. (<b>b</b>) Thickened digital patterns from (<b>a</b>) used for the investigations of MR, NI and MSD.</p>
Full article ">Figure 14
<p>Thinning results (<b>a</b>) without and (<b>b</b>) with the RHDP process are obtained by the three fully-parallel thinning algorithms involved in the proposed approach for comparisons. The biased skeletons of (<b>b</b>) are much less than those of (<b>a</b>).</p>
Full article ">Figure 15
<p>Reconstruction results obtained (<b>a</b>) without and (<b>b</b>) with the RHDP process by the three fully-parallel thinning algorithms involved in the proposed approach, for comparison. The reconstruction results in (<b>a</b>) are over-reconstructed, whereas those in (<b>b</b>) are closer to the originals.</p>
Full article ">Figure 16
<p>(<b>a</b>) Thinning results and (<b>b</b>) reconstruction results for some patterns from the MPEG7 CE-Shape-1 dataset obtained by Jang and Chin’s MST-based reconstructable parallel thinning algorithm. To reconstruct the whole pattern completely, the feature points obtained by the MST are retained during thinning, and thus many unwanted branches appear in the resulting skeletons.</p>
Full article ">Figure 17
<p>Results for some patterns from the MPEG7 CE-Shape-1 dataset obtained by the three fully-parallel thinning algorithms involved in the proposed approach without the RHDP process, for comparison. Here, the thinning and reconstruction results are placed in the first and second rows, respectively. The over-reconstruction is clearly visible. (<b>a</b>) CH [<a href="#B9-jimaging-03-00029" class="html-bibr">9</a>]; (<b>b</b>) AW [<a href="#B16-jimaging-03-00029" class="html-bibr">16</a>]; (<b>c</b>) Rockett [<a href="#B20-jimaging-03-00029" class="html-bibr">20</a>].</p>
Full article ">Figure 18
<p>Results for some patterns from the MPEG7 CE-Shape-1 dataset obtained by the three fully-parallel thinning algorithms involved in the proposed approach with the RHDP process, for comparison. Here, the thinning and reconstruction results are placed in the first and second rows, respectively. The reconstruction results are close to the originals. (<b>a</b>) CH [<a href="#B9-jimaging-03-00029" class="html-bibr">9</a>]; (<b>b</b>) AW [<a href="#B16-jimaging-03-00029" class="html-bibr">16</a>]; (<b>c</b>) Rockett [<a href="#B20-jimaging-03-00029" class="html-bibr">20</a>].</p>
Full article ">
2370 KiB  
Article
Assessment of Geometric Distortion in Six Clinical Scanners Using a 3D-Printed Grid Phantom
by Maysam Jafar, Yassir M. Jafar, Christopher Dean and Marc E. Miquel
J. Imaging 2017, 3(3), 28; https://doi.org/10.3390/jimaging3030028 - 18 Jul 2017
Cited by 20 | Viewed by 7031
Abstract
A cost-effective regularly structured three-dimensional (3D) printed grid phantom was developed to enable the quantification of machine-related magnetic resonance (MR) distortion. This phantom contains reference features, “point-like” objects, or vertices, which resulted from the intersection of mesh edges in 3D space. 3D distortions [...] Read more.
A cost-effective, regularly structured three-dimensional (3D) printed grid phantom was developed to enable the quantification of machine-related magnetic resonance (MR) distortion. This phantom contains reference features, “point-like” objects, or vertices, formed by the intersection of mesh edges in 3D space. 3D distortion maps were computed by comparing the locations of corresponding features in the MR and computed tomography (CT) data sets using normalized cross correlation. Results are reported for six MRI scanners at 1.5 T and 3.0 T field strengths within our institution. The mean Euclidean distance error for all MR volumes in this study was less than 2 mm. The maximum detected error for the six scanners ranged from 2.4 mm to 6.9 mm. The conclusions of this study agree well with previous studies, which indicated that MRI is quite accurate near the centre of the field but becomes more spatially inaccurate toward the edges of the magnetic field. Full article
(This article belongs to the Special Issue Magnetic Resonance Imaging)
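As a rough illustration of the error metric described in the abstract, the following Python sketch computes per-vertex Euclidean distance errors from already-matched MR and CT grid-vertex locations, together with a helper for the normalized cross-correlation score used to localize the “point-like” features. The function names, the voxel-size argument, and the assumption that matching has already been done are placeholders for illustration, not the authors’ code.

```python
import numpy as np

def ncc_score(patch, template):
    """Normalized cross-correlation between a candidate patch and the
    'point-like' vertex template (two small arrays of equal shape)."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum()) + 1e-12
    return float((p * t).sum() / denom)

def euclidean_distortion_errors(mr_points, ct_points, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Per-vertex Euclidean distance error between matched MR and CT vertices.

    mr_points, ct_points: (N, 3) arrays of corresponding vertex locations in
    voxel indices; voxel_size_mm converts the differences to millimetres.
    """
    diff_mm = (np.asarray(mr_points, float) - np.asarray(ct_points, float)) \
              * np.asarray(voxel_size_mm, float)
    errors = np.linalg.norm(diff_mm, axis=1)   # one distance per grid vertex
    return errors, errors.mean(), errors.max()
```

With vertex coordinates in voxel units and the scanner’s voxel size supplied, the mean and maximum of `errors` correspond to the kind of summary statistics quoted in the abstract.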
Show Figures

Figure 1
<p>3D-printed grid in white and strong plastic (<b>a</b>) axial view, (<b>b</b>) sagittal view, (<b>c</b>) perspective view (azimuth 45°, elevation 20°), and (<b>d</b>) photograph of the grid fixated inside a leak-tight container.</p>
Full article ">Figure 2
<p>A three-dimensional representation of the reference feature used in normalized cross correlation.</p>
Full article ">Figure 3
<p>(<b>a</b>) 3D Surface rendering of the grid phantom for the data acquired using system D (azimuth 45°, elevation 20°); (<b>b</b>) associated Euclidean distance error volume (azimuth 45°, elevation 20°); (<b>c</b>) same as (a) but showing opposing faces (azimuth 225°, elevation 20°); and (<b>d</b>) associated error volume (azimuth 225°, elevation 20°). Deformation is evident particularly at the edges of the magnetic field.</p>
Full article ">Figure 4
<p>Euclidean distance error volumes for systems A, B, C, E and F (<b>a</b>) (azimuth 45°, elevation 20°) and (<b>b</b>) (azimuth 225°, elevation 20°). For system D see <a href="#jimaging-03-00028-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 5
<p>Mean Euclidean distance in mm for the axial (<b>a</b>) and sagittal (<b>b</b>) control point planes of the phantom. The vertical line in each figure represents the central control point plane in both orientations.</p>
Full article ">
1519 KiB  
Article
Terahertz Application for Non-Destructive Inspection of Coated Al Electrical Conductive Wires
by Kenta Kuroo, Ryo Hasegawa, Tadao Tanabe and Yutaka Oyama
J. Imaging 2017, 3(3), 27; https://doi.org/10.3390/jimaging3030027 - 14 Jul 2017
Cited by 7 | Viewed by 5253
Abstract
At present, one of the main inspection methods of electric wires is visual inspection. The development of a novel non-destructive inspection technology is required because of various problems, such as water invasion by the removal of insulators. Since terahertz (THz) waves have high [...] Read more.
At present, one of the main inspection methods of electric wires is visual inspection. The development of a novel non-destructive inspection technology is required because of various problems, such as water invasion by the removal of insulators. Since terahertz (THz) waves have high transparency to nonpolar substances such as the insulating coatings of conductive wires, electric wires are well suited to THz non-destructive inspection. In this research, to investigate whether defects in aluminum electric wires can be detected quantitatively, THz reflection imaging measurements were performed on artificially disconnected wires. The results show that the disconnection status of an aluminum electric wire can be detected quantitatively using THz waves. Full article
(This article belongs to the Special Issue THz and MMW Imaging)
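To make the idea of quantitative defect detection concrete, here is a minimal sketch that reads a defect width off a one-dimensional reflected-intensity profile by a simple full-width threshold on the intensity dip. This is an illustrative analysis under stated assumptions, not the paper’s exact procedure, and the drop fraction is a placeholder.

```python
import numpy as np

def estimate_defect_width(positions_mm, intensity, drop_fraction=0.5):
    """Estimate defect width from a reflected-intensity line profile.

    The defect appears as a dip in reflected THz intensity; its width is read
    off where the signal falls below the baseline by a chosen fraction of the
    maximum drop (a simple full-width criterion).
    """
    positions = np.asarray(positions_mm, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    baseline = np.median(intensity)           # intact-wire reflection level
    depth = baseline - intensity.min()        # maximum drop at the defect
    threshold = baseline - drop_fraction * depth
    below = np.where(intensity < threshold)[0]
    if below.size == 0:
        return 0.0
    return float(positions[below[-1]] - positions[below[0]])
```

Plotting the estimated width against the actual machined width is the kind of comparison shown in Figure 4a of the paper.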
Show Figures

Figure 1
<p>The optical photo of aluminum wire disconnection process reproduced sample (tilt is only for illustration).</p>
Full article ">Figure 2
<p>The optical schematic system of THz reflection imaging measurement. SBD: Schottky barrier diode.</p>
Full article ">Figure 3
<p>Quantitative analysis of imaging results: (<b>a</b>) THz imaging measurement result (the dotted line shows the defect part); (<b>b</b>) Reflection intensity in horizontal movement at the defect part; (<b>c</b>) Reflection intensity change in horizontal movement at the defect part.</p>
Full article ">Figure 4
<p>(<b>a</b>) Relationship between actual defect width and estimated defect width; (<b>b</b>) Relationship between actual defect width and integrated intensity.</p>
Full article ">
3010 KiB  
Article
A PDE-Free Variational Method for Multi-Phase Image Segmentation Based on Multiscale Sparse Representations
by Julia Dobrosotskaya and Weihong Guo
J. Imaging 2017, 3(3), 26; https://doi.org/10.3390/jimaging3030026 - 13 Jul 2017
Cited by 4 | Viewed by 5232
Abstract
We introduce a variational model for multi-phase image segmentation that uses a multiscale sparse representation frame (wavelets or other) in a modified diffuse interface context. The segmentation model we present differs from other state-of-the-art models in several ways. The diffusive nature of the [...] Read more.
We introduce a variational model for multi-phase image segmentation that uses a multiscale sparse representation frame (wavelets or other) in a modified diffuse interface context. The segmentation model we present differs from other state-of-the-art models in several ways. The diffusive nature of the method originates from the sparse representations and thus propagates information in a different manner compared to existing PDE models, allowing one to combine the advantages of non-local information processing with sharp edges in the output. The regularizing part of the model is based on the wavelet Ginzburg–Landau (WGL) functional, and the fidelity part consists of two terms: one ensures the mean square proximity of the output to the original image; the other takes care of preserving the main edge set. Multiple numerical experiments show that the model is robust to noise yet preserves edge information. This method outperforms algorithms from other classes on images with a significant presence of noise or highly uneven illumination. Full article
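As a loose illustration of how such a wavelet-driven diffuse-interface energy can be minimized, the sketch below performs one explicit gradient-descent step combining a scale-weighted wavelet penalty (standing in for the WGL regularizer), a double-well potential, and the two fidelity terms. The dyadic weights, the parameter names and the use of PyWavelets are assumptions for illustration only, not the authors’ implementation.

```python
import numpy as np
import pywt

def wgl_gradient_step(u, f, edge_mask, eps=0.05, mu=10.0, gamma=5.0,
                      dt=0.05, wavelet="db2", level=3):
    """One explicit descent step for a WGL-type energy (illustrative sketch)."""
    # Scale-weighted wavelet penalty: heavier weights on finer-scale details
    coeffs = pywt.wavedec2(u, wavelet, level=level)
    weighted = [np.zeros_like(coeffs[0])]        # leave the approximation unpenalized
    for j, details in enumerate(coeffs[1:], start=1):
        w = 4.0 ** j                             # dyadic weight growing toward fine scales (assumption)
        weighted.append(tuple(w * d for d in details))
    grad_wavelet = pywt.waverec2(weighted, wavelet)[:u.shape[0], :u.shape[1]]
    # Double-well potential W(u) = u^2 (1 - u)^2 pushes u toward the pure phases 0 and 1
    grad_well = 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u) / eps
    # Fidelity: mean-square proximity to f, with extra weight on the main edge set
    grad_fidelity = mu * (u - f) + gamma * edge_mask * (u - f)
    return u - dt * (eps * grad_wavelet + grad_well + grad_fidelity)
```

In the multi-phase setting, a step of this form would be applied in turn to each phase function (the φ₁ and φ₂ shown in the figures below), and the converged functions are thresholded to produce the labeling.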
Show Figures

Figure 1
<p>(<b>a</b>) Original (‘clean’) MRI image, (<b>b</b>) noisy MRI image (noise: 15 dB, Gaussian).</p>
Full article ">Figure 2
<p>More details about the above MRI image segmentation and the gradient descent setup: (<b>a</b>) the edges to be preserved, (<b>b</b>) initial guess for φ₁, (<b>c</b>) initial guess for φ₂ (percentiles used for computing the initial guess: 15%, 38%, 61%, 85%).</p>
Full article ">Figure 3
<p>(<b>a</b>) the overall segmentation output of the proposed method, (<b>b</b>) output φ₁, (<b>c</b>) output φ₂.</p>
Full article ">Figure 4
<p>(<b>a</b>) Input image, (<b>b</b>) output of our segmentation method, (<b>c</b>) fuzzy segmentation [<a href="#B28-jimaging-03-00026" class="html-bibr">28</a>], (<b>d</b>) graph cuts [<a href="#B14-jimaging-03-00026" class="html-bibr">14</a>], (<b>e</b>) Vese–Chan [<a href="#B19-jimaging-03-00026" class="html-bibr">19</a>].</p>
Full article ">Figure 5
<p>Segmentation of the ‘Leaf’ image: (<b>a</b>) the original image; the segmented outputs of (<b>b</b>) the proposed method, (<b>c</b>) graph cut method, (<b>d</b>) Vese–Chan and (<b>e</b>) fuzzy segmentation methods.</p>
Full article ">Figure 6
<p>Comparison of segmentation results for (<b>a</b>) the peppers image with added Gaussian noise (SNR = 15 dB) using (<b>b</b>) the proposed method, (<b>c</b>) graph cut, (<b>d</b>) Vese–Chan and (<b>e</b>) fuzzy segmentation methods.</p>
Full article ">Figure 7
<p>(<b>a</b>) The original four-valued image (<b>b</b>) with added noise.</p>
Full article ">Figure 8
<p>Each row ((<b>a</b>) to (<b>c</b>)) shows the input image, the minimization output and the thresholded (rounded) output (from left to right) for different levels of additive noise: (<b>a</b>) noise = 15 dB, &gt;99% classified correctly, (<b>b</b>) noise = 10 dB, &gt;98% classified correctly, (<b>c</b>) noise = 5 dB, &gt;95% classified correctly.</p>
Full article ">Figure 9
<p>Comparison to other methods: (<b>a</b>) the input image (noisy ‘Sectors’ images); segmentation output of (<b>b</b>) the proposed method, (<b>c</b>) graph cut method, (<b>d</b>) Vese–Chan method, (<b>e</b>) fuzzy segmentation method, (<b>f</b>) k-means.</p>
Full article ">Figure 10
<p>(<b>a</b>) Original image (from BSDS500), (<b>b</b>) ground truth boundaries obtained by manual segmentation (from BSDS500), (<b>c</b>) ground truth boundaries shown on the original, (<b>d</b>) the result of the proposed four-class segmentation method, (<b>e</b>) boundaries of the regions shown in (<b>d</b>), (<b>f</b>) boundaries of the segmented regions shown on the original image.</p>
Full article ">Figure 11
<p>(<b>a</b>) Original image, (<b>b</b>) image after background removal (<b>c</b>) segmented output, (<b>d</b>) segmented output with post-processing.</p>
Full article ">Figure 12
<p>(<b>a</b>) Original image, (<b>b</b>) k-means segmentation that is used to initialize φ₁ and φ₂, (<b>c</b>) segmentation output.</p>
Full article ">Figure 13
<p>The histogram of the four-valued image after Gaussian noise had been added.</p>
Full article ">Figure 14
<p>(<b>a</b>) Original ‘Horse’ image, (<b>b</b>) ground truth boundary information (from [<a href="#B59-jimaging-03-00026" class="html-bibr">59</a>]), (<b>c</b>) images (<b>a</b>,<b>b</b>) overlaid, (<b>d</b>) our algorithm output, (<b>e</b>) boundaries of the segmented regions, (<b>f</b>) boundaries for the segmented regions overlaid on the original image.</p>
Full article ">Figure 15
<p>(<b>a</b>) Original ‘Plane’ image (from the Berkeley Segmentation Data Set 300 [<a href="#B33-jimaging-03-00026" class="html-bibr">33</a>]), (<b>b</b>) ground truth boundary information (also from BSDS300), (<b>c</b>) images (<b>a</b>,<b>b</b>) overlaid, (<b>d</b>) our algorithm output, (<b>e</b>) boundaries of the segmented regions, (<b>f</b>) boundaries for the segmented regions overlaid on the original image.</p>
Full article ">Figure 16
<p>Edge information used in the fidelity term for (<b>a</b>) the ‘Peppers’ image with added noise, SNR = 15 dB (<b>b</b>) ‘Leaf’, (<b>c</b>) the ‘Sectors’ image with added noise, SNR = 10 dB, (<b>d</b>) retinal image, (<b>e</b>) ‘Elephants’, (<b>f</b>) ‘Plane’, (<b>g</b>) ‘Horse’.</p>
Full article ">
11088 KiB  
Article
Automatic Recognition of Speed Limits on Speed-Limit Signs by Using Machine Learning
by Shigeharu Miyata
J. Imaging 2017, 3(3), 25; https://doi.org/10.3390/jimaging3030025 - 5 Jul 2017
Cited by 8 | Viewed by 9130
Abstract
This study describes a method for using a camera to automatically recognize the speed limits on speed-limit signs. This method consists of the following three processes: first (1) a method of detecting the speed-limit signs with a machine learning method utilizing the local [...] Read more.
This study describes a method for using a camera to automatically recognize the speed limits on speed-limit signs. This method consists of the following three processes: first, (1) a method of detecting the speed-limit signs with a machine learning method utilizing local binary pattern (LBP) feature quantities as information helpful for identification; then (2) an image processing method using the Hue, Saturation and Value (HSV) color space to extract the speed limit numbers on the identified signs; and finally (3) a method for recognizing the extracted numbers using a neural network. The traffic sign recognition method previously proposed by the author extracted geometric shapes from the sign and recognized them based on their aspect ratios. That method cannot be used for the numbers on speed-limit signs because the numbers all have the same aspect ratio. In an earlier study that recognized speed limit numbers with an eigenspace method, only color information was used to detect speed-limit signs in scenery images. Because detection relied on color alone, precise color settings and additional processing to exclude everything other than the signs are necessary in environments containing many colors similar to those of the signs, so the detection method requires further study. The present study focuses on three points: (1) detecting only the speed-limit sign in a scenery image with a single process that focuses on the local patterns of speed-limit signs; (2) separating and extracting the two digits on a speed-limit sign even when the lighting environment causes them to be incorrectly extracted as a single area; and (3) identifying the numbers with a neural network based on three feature quantities. The proposed method was validated on still images. Full article
(This article belongs to the Special Issue Color Image Processing)
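For orientation, the following sketch strings together the first two stages with OpenCV: an LBP cascade detector for the signs and HSV thresholding to isolate the blue speed-limit digits within each detected region. The cascade file name and the HSV threshold values are hypothetical placeholders (the paper trains its own cascade and tunes its own thresholds), and the neural-network digit classifier of stage (3) is omitted.

```python
import cv2

# Hypothetical cascade file; the paper trains its own LBP cascade from
# positive/negative sign images, which is not distributed here.
sign_detector = cv2.CascadeClassifier("lbp_speed_limit_signs.xml")

def detect_signs_and_digit_masks(bgr_image):
    """Return (sign crop, binary digit mask) pairs for each detected sign."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    signs = sign_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    results = []
    for (x, y, w, h) in signs:
        roi = bgr_image[y:y + h, x:x + w]
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        # Blue digit pixels on Japanese speed-limit signs (illustrative thresholds)
        digit_mask = cv2.inRange(hsv, (90, 60, 40), (140, 255, 255))
        results.append((roi, digit_mask))
    return results
```

Each `digit_mask` would then be passed to the number-extraction and neural-network recognition stages described in the figures below.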
Show Figures

Figure 1
<p>Typical landscape image that includes a speed-limit sign as seen during driving.</p>
Full article ">Figure 2
<p>(<b>a</b>) Hue, Saturation and Value (HSV) image and (<b>b</b>) image of the extracted hue regions −(1/3)π ≤ H ≤ (1/6)π when a threshold value is set for Hue (H) in order to extract the red ring at the outer edge of the sign.</p>
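Because the hue range in this caption spans negative angles, a practical implementation has to handle the wrap-around of the hue circle. A minimal sketch using OpenCV’s 0–179 hue scale is shown below; the saturation and value lower bounds are assumptions, not values from the paper.

```python
import cv2

def red_ring_mask(bgr_image):
    """Mask of pixels whose hue lies in roughly -(1/3)pi <= H <= (1/6)pi."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # H in [0, pi/6] rad  -> [0, 30] deg  -> OpenCV hue [0, 15]
    lower_part = cv2.inRange(hsv, (0, 50, 50), (15, 255, 255))
    # H in [-pi/3, 0) rad -> [300, 360) deg -> OpenCV hue [150, 179]
    upper_part = cv2.inRange(hsv, (150, 50, 50), (179, 255, 255))
    return cv2.bitwise_or(lower_part, upper_part)
```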
Full article ">Figure 3
<p>Round traffic sign extracted based on the labeling process.</p>
Full article ">Figure 4
<p>Extracted speed limit. (<b>a</b>) RGB image of the speed-limit sign extracted from the landscape image; (<b>b</b>) HSV image converted from the RGB image in (<b>a</b>); (<b>c</b>) Binarized image of (<b>b</b>) obtained by setting a threshold for blue.</p>
Full article ">Figure 5
<p>Examples of images containing speed-limit signs (positive images) in a variety of conditions.</p>
Full article ">Figure 6
<p>Examples of images not containing speed-limit signs (negative images).</p>
Full article ">Figure 7
<p>Change in detection rate when 1000 negative images were used and the positive images were increased from 50 to 200.</p>
Full article ">Figure 8
<p>Changes in false detection rate when 200 positive images are used and the number of negative images is increased from 100 to 7000.</p>
Full article ">Figure 9
<p>Examples of detected and undetected speed-limit signs. Images (<b>a</b>–<b>g</b>) show cases when the speed-limit sign was detected. Images (<b>h</b>) and (<b>i</b>) show cases when the speed-limit sign was not detected.</p>
Full article ">Figure 10
<p>Training images for the numbers 0, 3, 4 and 5, cut out from the images acquired by the camera.</p>
Full article ">Figure 11
<p>Examples of three feature quantities for each number 0, 3, 4, and 5.</p>
Full article ">Figure 12
<p>When two numbers are combined and detected as a single area.</p>
Full article ">Figure 13
<p>Detailed description of the processing used to extract the numbers on speed-limit signs. (<b>a</b>) Image after variable average binarization of the grayscale image of the detected speed-limit sign; (<b>b</b>) Image with the central area of image (<b>a</b>) extracted and black/white reversed; (<b>c</b>) Contour image of the boundaries between white and black areas in image (<b>b</b>); (<b>d</b>) Number candidate areas obtained by masking image (<b>b</b>) with a mask derived from the contour image (<b>c</b>); (<b>e</b>) Image after contour processing following dilation-erosion (closing) of the number candidate areas to eliminate small black areas inside the white objects; (<b>f</b>) Number image passed to recognition processing when the contour area satisfies the aspect ratio of numbers.</p>
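A minimal OpenCV sketch of steps (a)–(f) follows, with illustrative block sizes, crop margins and aspect-ratio bounds rather than the author’s tuned values (OpenCV 4 return convention for findContours).

```python
import cv2
import numpy as np

def extract_digit_candidates(sign_gray):
    """Rough re-creation of caption steps (a)-(f) on a grayscale sign crop."""
    # (a) variable-average (adaptive mean) binarization
    binary = cv2.adaptiveThreshold(sign_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    # (b) keep the central area and invert so digits become white
    h, w = binary.shape
    center = cv2.bitwise_not(binary[h // 4: 3 * h // 4, w // 6: 5 * w // 6])
    # (c)-(d) contours of the white areas act as a mask for digit candidates
    contours, _ = cv2.findContours(center, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(center)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    candidates = cv2.bitwise_and(center, mask)
    # (e) closing removes small black holes inside the digit strokes
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(candidates, cv2.MORPH_CLOSE, kernel)
    # (f) keep contour boxes whose aspect ratio looks like a digit
    digit_contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digits = []
    for c in digit_contours:
        x, y, cw, ch = cv2.boundingRect(c)
        if ch > 0 and 0.3 < cw / float(ch) < 0.9:
            digits.append(closed[y:y + ch, x:x + cw])
    return digits
```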
Full article ">Figure 14
<p>Three feature quantities at each resolution for the extracted number 30.</p>
Full article ">Figure 15
<p>Three feature quantities at each resolution for the extracted number 40.</p>
Full article ">Figure 16
<p>Three feature quantities at each resolution for the extracted number 50.</p>
Full article ">Figure 17
<p>Example of recognition results for 30 km speed-limit sign.</p>
Full article ">Figure 18
<p>Example of recognition results for 40 km speed-limit sign.</p>
Full article ">Figure 19
<p>Example of recognition results for 50 km speed-limit sign.</p>
Full article ">Figure 20
<p>Effects of different lower limit values for Saturation and Value on the recognition results.</p>
Full article ">
7229 KiB  
Article
RGB Color Cube-Based Histogram Specification for Hue-Preserving Color Image Enhancement
by Kohei Inoue, Kenji Hara and Kiichi Urahama
J. Imaging 2017, 3(3), 24; https://doi.org/10.3390/jimaging3030024 - 1 Jul 2017
Cited by 9 | Viewed by 6945
Abstract
A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors have three attributes (hue, saturation and intensity) rather than the single attribute [...] Read more.
A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors have three attributes (hue, saturation and intensity) rather than the single attribute of grayscale value, the naive application of grayscale methods to color images often produces unsatisfactory results. Conventional hue-preserving color image enhancement methods utilize histogram equalization (HE) to enhance contrast. However, they cannot always enhance the saturation simultaneously. In this paper, we propose a histogram specification (HS) method for enhancing the saturation in hue-preserving color image enhancement. The proposed method computes the target histogram for HS on the basis of the geometry of the RGB (red, green and blue) color space, whose shape is a cube with a unit side length. Therefore, the proposed method includes no parameters to be set by users. Experimental results show that the proposed method achieves higher color saturation than recent parameter-free methods for hue-preserving color image enhancement. As a result, the proposed method can be used as an alternative to HE in hue-preserving color image enhancement. Full article
(This article belongs to the Special Issue Color Image Processing)
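To illustrate the parameter-free target histogram, the sketch below derives it from the cross-sectional area of the unit RGB cube cut by equiintensity planes (the geometry shown in Figures 1 and 2 below) and applies standard histogram specification to the total intensity l = R + G + B. This shows only the target-histogram construction and the intensity mapping, under the stated assumptions, not the paper’s full hue-preserving color transformation.

```python
import numpy as np

def cube_section_area(l):
    """Cross-sectional area of the unit RGB cube cut by the plane r + g + b = l
    (up to the common factor sqrt(3)/2, which cancels after normalization)."""
    l = np.asarray(l, dtype=float)
    a = np.where(l <= 1.0, l ** 2,
        np.where(l <= 2.0, -2.0 * l ** 2 + 6.0 * l - 3.0, (3.0 - l) ** 2))
    return np.clip(a, 0.0, None)

def specify_intensity(intensity, bins=256):
    """Histogram specification of l = r + g + b (channels scaled to [0, 1])."""
    hist, edges = np.histogram(intensity, bins=bins, range=(0.0, 3.0))
    src_cdf = np.cumsum(hist).astype(float); src_cdf /= src_cdf[-1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    tgt_cdf = np.cumsum(cube_section_area(centers)); tgt_cdf /= tgt_cdf[-1]
    # Invert the target CDF by interpolation, then compose with the source CDF
    lut = np.interp(src_cdf, tgt_cdf, centers)
    idx = np.clip(np.digitize(intensity, edges) - 1, 0, bins - 1)
    return lut[idx]
```

The piecewise area formula reproduces the triangular cross sections near the black and white corners and the hexagonal cross sections in between, which is the geometry sketched in Figure 1.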
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Cross sections of the RGB color cube with equiintensity planes, 1ᵀp = ‖1‖ l, where p denotes a point on the plane: (<b>a</b>) 0 ≤ l ≤ 1; (<b>b</b>) 1 ≤ l ≤ 2; (<b>c</b>) 2 ≤ l ≤ 3.</p>
Full article ">Figure 2
<p>Area of the cross section of the RGB color cube and an equiintensity plane, a(l), and its integral, A(l).</p>
Full article ">Figure 3
<p>Results of hue-preserving color image enhancement.</p>
Full article ">Figure 4
<p>Saturation images.</p>
Full article ">Figure 5
<p>Difference maps of saturation images.</p>
Full article ">Figure 6
<p>Mean saturation.</p>
Full article ">Figure 7
<p>Results on the INRIA (Institut National de Recherche en Informatique et en Automatique) dataset with the mean saturation values.</p>
Full article ">
154 KiB  
Erratum
Erratum: Ville V. Lehtola, et al. Radial Distortion from Epipolar Constraint for Rectilinear Cameras. J. Imaging 2017, 3, 8
by Ville V. Lehtola, Matti Kurkela and Petri Rönnholm
J. Imaging 2017, 3(3), 23; https://doi.org/10.3390/jimaging3030023 - 23 Jun 2017
Viewed by 3525
Abstract
Due to a mistake during the production process, the J. Imaging Editorial Office and the authors wish to make this correction to the paper written by Lehtola et al. [1]. [...]
Full article