
Next Issue: Volume 4, February
Previous Issue: Volume 3, December

J. Imaging, Volume 4, Issue 1 (January 2018) – 25 articles

Cover Story: This paper presents the first results of bacterial and cellular load estimation in the human distal lung, where pathologies such as pneumonia occur. In pulmonary fibre confocal fluorescence microscopy (FCFM), lung tissue is autofluorescent, while bacteria appear as fluorescent dots when exposed to a targeted substance. Bacterial dots are visible in panel (d) (post-substance) but not in panel (b) (pre-substance). We learn spatio-temporal templates of a bacterium and a cell from the FCFM videos and use them as centers of a radial basis function network to count the numbers of cells and bacteria. View this paper.
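As an illustration of the counting idea described in the cover story (learned templates used as centres of a radial basis function network), the sketch below fits an RBF regressor to per-frame descriptors. The feature dimensions, kernel width and toy data are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def rbf_count_model(features, counts, centers, sigma=1.0, ridge=1e-3):
    """Fit a linear readout over RBF activations to predict object counts.

    features : (n_samples, d) descriptors extracted from FCFM frames (assumed)
    counts   : (n_samples,) ground-truth cell/bacteria counts
    centers  : (k, d) learned template descriptors used as RBF centres
    """
    # Gaussian RBF activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    # Ridge-regularised least squares for the output weights
    A = phi.T @ phi + ridge * np.eye(phi.shape[1])
    w = np.linalg.solve(A, phi.T @ counts)
    return lambda x: np.exp(
        -((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2)
    ) @ w

# Toy usage with random stand-in data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
c = rng.integers(0, 20, size=50).astype(float)
predict = rbf_count_model(X, c, centers=X[:5])
print(predict(X[:3]))
```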
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
5 pages, 646 KiB  
Editorial
Acknowledgement to Reviewers of Journal of Imaging in 2017
by J. Imaging Editorial Office
J. Imaging 2018, 4(1), 26; https://doi.org/10.3390/jimaging4010026 - 19 Jan 2018
Viewed by 3489
Abstract
Peer review is an essential part of the publication process, ensuring that Journal of Imaging maintains high quality standards for its published papers [...] Full article
13 pages, 29827 KiB  
Article
Stable Image Registration for In-Vivo Fetoscopic Panorama Reconstruction
by Floris Gaisser, Suzanne H. P. Peeters, Boris A. J. Lenseigne, Pieter P. Jonker and Dick Oepkes
J. Imaging 2018, 4(1), 24; https://doi.org/10.3390/jimaging4010024 - 19 Jan 2018
Cited by 19 | Viewed by 4960
Abstract
Twin-to-Twin Transfusion Syndrome (TTTS) is a condition that occurs in about 10% of pregnancies involving monochorionic twins. This complication can be treated with fetoscopic laser coagulation. The procedure could greatly benefit from panorama reconstruction to gain an overview of the placenta. In previous work, we investigated which steps could improve the reconstruction performance in an in-vivo setting. In this work, we improved the registration by proposing a stable region detection method and by extracting matchable features based on a deep-learning approach. Finally, we extracted a measure of the image registration quality and of the visibility condition. Experiments show that the image registration performance is increased and more consistent. Using these methods, a system can be developed that supports the surgeon during surgery by giving feedback and providing a more complete overview of the placenta. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1: (a) Ex-vivo view; (b) uneven distribution of light; (c) too much light saturating the sensor; (d) not enough light creating sensor noise.
Figure 2: Ex-vivo: (a) sufficient structure; In-vivo: (b) nominal; (c) close and bright; (d) far and dark.
Figure 3: Constraints on (a) edge; (b) circular; (c) curve; (d) veins.
Figure 4: (a) Definition of Bounding Box (BBox); (b) Definition of Rotated Box (RBox).
Figure 5: (a) Sample image; (b) annotated center line; (c) selection of annotated RBoxes.
Figure 6: (a) Left: Detection network architecture; (b) Right: Architecture of a single detection scale.
Figure A1: Variations in viewing conditions. Top row, left to right: ex-vivo-far, ex-vivo-close, ex-vivo with water-far, ex-vivo with water-nominal, ex-vivo with water-close. Middle row: yellow liquid; bottom row: green turbid liquid; left to right: far-dark, far-nominal, nominal for both, close-nominal, close-bright.
8 pages, 1997 KiB  
Review
Imaging with Polarized Neutrons
by Nikolay Kardjilov, André Hilger, Ingo Manke, Markus Strobl and John Banhart
J. Imaging 2018, 4(1), 23; https://doi.org/10.3390/jimaging4010023 - 16 Jan 2018
Cited by 7 | Viewed by 6323
Abstract
Owing to their zero charge, neutrons are able to pass through thick layers of matter (typically several centimeters) while being sensitive to magnetic fields due to their intrinsic magnetic moment. Therefore, in addition to the conventional attenuation contrast image, the magnetic field inside and around a sample can be visualized by detecting changes of polarization in a transmitted beam. The method is based on the spatially resolved measurement of the cumulative precession angles of a collimated, polarized, monochromatic neutron beam that traverses a magnetic field or sample. Full article
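As background to the measurement principle summarized above, the cumulative precession angle of a collimated, monochromatic polarized beam follows the standard Larmor relation (general textbook form, not a formula quoted from this review):

```latex
\varphi \;=\; \gamma_L \int B \,\mathrm{d}t
        \;=\; \frac{\gamma_L\, m_n\, \lambda}{h} \int_{\text{path}} B \,\mathrm{d}s ,
```

where γ_L ≈ 1.83 × 10⁸ rad s⁻¹ T⁻¹ is the neutron gyromagnetic ratio, m_n the neutron mass, λ the wavelength, h Planck's constant, and B the magnetic field along the flight path; the spatially resolved precession angle therefore encodes a line integral of the field.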
(This article belongs to the Special Issue Neutron Imaging)
Show Figures

Figure 1: (a) Schematic diagram of a set-up used for imaging magnetic materials and fields. The neutron beam is polarized, then precesses in a magnetic field, and is finally analyzed and detected. Note that the intensity (dark blue arrow) behind the analyzer is smaller than the intensity behind the polarizer; (b) field lines surrounding a simple dipole magnet; (c) radiograph showing field lines around a bar magnet levitating over an yttrium-barium-copper-oxide (YBCO) superconductor: in its superconducting state the YBCO expels all magnetic fields and thus repels the permanent magnet (Meissner effect); (d) magnetic field trapped in a polycrystalline lead cylinder at different temperatures around Tc = 7.2 K. The trapped flux (yellow regions left and right) inside a polycrystalline cylinder of lead is reconstructed in 3D. When the sample is cooled below its critical temperature in the presence of a weak magnetic field, some flux is preserved inside it due to defects and grain boundaries, and this remains trapped even after the field is switched off [2].
Figure 2: (a) Observation of the skin effect in a bulk aluminum sample of 4 cm diameter without applied current and with 25 A alternating current at 10 Hz, 100 Hz, and 1000 Hz; (b) analysis of the spin-polarized image data shows an inhomogeneous magnetic field distribution that can be related to a current distribution similar to that observed in the experiment. The current flows close to the surface of the conductor at high AC frequencies (1 kHz), which is proof that the skin effect is observed (reprinted from [10] with the permission of AIP Publishing); (c) spin-polarized neutron image of the magnetic field produced by a 15-loop coil running at 3 kHz with 10 A rms amplitude, measured in a polychromatic neutron beam. The time slice interval of the shown images is 10 µs; (d) simulations performed based on experimental parameters [11].
Figure 3: (a) Comparison between transmission images of a solid-state bender (left) and a polarized ³He spin filter (right) [12]; (b) mapping of the phase transition from the ferromagnetic to the paramagnetic state of a PdNi crystal (3.24% Ni) using a polarized ³He spin filter setup [13].
Figure 4: (a) Example of the space discretization used in simulations of imaging with polarized neutrons; (b) measurements and simulations of the spin-precession image of a rectangular double coil measured at different current values [3].
Figure 5: (a) Polarimetric arrangement for quantitative imaging with polarized neutrons; (b) measured and simulated spin precession images for different rotation angles of a solenoid, for initial spin direction x and analysis of the three orthogonal components of the spin precession angle.
19 pages, 16305 KiB  
Article
Glomerulus Classification and Detection Based on Convolutional Neural Networks
by Jaime Gallego, Anibal Pedraza, Samuel Lopez, Georg Steiner, Lucia Gonzalez, Arvydas Laurinavicius and Gloria Bueno
J. Imaging 2018, 4(1), 20; https://doi.org/10.3390/jimaging4010020 - 16 Jan 2018
Cited by 72 | Viewed by 12543
Abstract
Glomerulus classification and detection in kidney tissue segments are key processes in nephropathology for the correct diagnosis of disease. In this paper, we deal with the challenge of automating Glomerulus classification and detection from digitized kidney slide segments using a deep learning framework. The proposed method applies Convolutional Neural Networks (CNNs) to distinguish between two classes, Glomerulus and Non-Glomerulus, in order to detect the image segments belonging to Glomerulus regions. We configure the CNN with the public pre-trained AlexNet model and adapt it to our system by learning from Glomerulus and Non-Glomerulus regions extracted from training slides. Once the model is trained, labeling is performed by applying the CNN classification to the image blocks under analysis. The results indicate that this technique is suitable for correct Glomerulus detection in Whole Slide Images (WSI), showing robustness while reducing false positive and false negative detections. Full article
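The transfer-learning step described in the abstract, adapting a pre-trained AlexNet to a two-class Glomerulus / Non-Glomerulus problem, can be sketched as follows. This is a generic torchvision-based sketch with illustrative hyperparameters, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained AlexNet (torchvision >= 0.13 weights API;
# older versions use models.alexnet(pretrained=True)).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# AlexNet's final layer is classifier[6]: Linear(4096, 1000).
# Replace it with a 2-way head: Glomerulus vs. Non-Glomerulus.
model.classifier[6] = nn.Linear(4096, 2)

# Optionally freeze the convolutional features and train only the classifier.
for p in model.features.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)

# One illustrative training step on a dummy batch of 227 x 227 RGB blocks
# (the block size mentioned in the figure captions).
x = torch.randn(4, 3, 227, 227)
y = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```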
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1: Glomerulus example labeled using the Aperio ImageScope tool.
Figure 2: Examples of image patches from the dataset.
Figure 3: Graphical representation of block generation for the Glomerulus and non-Glomerulus classes with 227 × 227 pixel size. (a) Representation of Glomerulus block extraction; (b) representation of non-Glomerulus blocks.
Figure 4: Example of 227 × 227 pixel training blocks for the Glomerulus and non-Glomerulus classes.
Figure 5: Color normalization by means of Reinhard's method.
Figure 6: Example of non-Glomerulus regions belonging to tubuli, interstitium and blood vessel structures.
Figure 7: Redundancy map obtained in pixel classification, displayed from overlapping 5 to overlapping ≥9.
Figure 8: Glomeruli that have been classified as non-Glomerulus by the pre-trained AlexNet and GoogleNet models.
Figure 9: Results obtained on two WSI (Whole Slide Images) created in different laboratories. Regions detected as Glomerulus appear in blue.
Figure 10: Example of correct and false positive detections. Regions detected as Glomerulus appear in blue.
Figure 11: Filters learned by the pre-trained models in the first convolutional layer.
Figure 12: Filters learned by the from-scratch models in the first convolutional layer.
Figure 13: DeepDream concept class visualization for the pre-trained AlexNet.
11 pages, 5070 KiB  
Review
Neutron Imaging in Cultural Heritage Research at the FRM II Reactor of the Heinz Maier-Leibnitz Center
by Burkhard Schillinger, Amélie Beaudet, Anna Fedrigo, Francesco Grazzi, Ottmar Kullmer, Michael Laaß, Malgorzata Makowska, Ingmar Werneburg and Clément Zanolli
J. Imaging 2018, 4(1), 22; https://doi.org/10.3390/jimaging4010022 - 14 Jan 2018
Cited by 27 | Viewed by 8780
Abstract
Neutron imaging is ideally suited for applications in cultural heritage, even at small reactors with moderate image resolution. Recently, however, high-resolution imaging has been increasingly used for advanced studies, especially in paleontology. The special contrast for hydrogen and between neighboring elements in the periodic system allows for new applications that are not accessible to X-rays, such as organic material in enclosed containers made of ceramics or metals, fossilized bones in chalk rock or in ferrous "red" beds, and even animal and hominid teeth. Fission neutrons permit the examination of large samples that otherwise show high attenuation for thermal neutrons. Full article
(This article belongs to the Special Issue Neutron Imaging)
Show Figures

Figure 1: The ANTARES facility (modified from [3] with permission).
Figure 2: The NECTAR facility: (a) sketch of the beamline; (b) position of the enriched uranium converter plate (from [6,7] with permission).
Figure 3: (a) Photo of the breccia rock (left); (b) neutron computed tomography of the rock (right) (photo by FRM II and original image).
Figure 4: (a,b) X-ray computed tomography (CT) and neutron computed tomography (N-CT) of the rock (Beaudet et al. 2016 [8], with permission).
Figure 5: When it is even slightly worn or damaged, the outer surface of hominid teeth gives no real clue for distinguishing between orangutans and humans. The photo of the H. erectus molar is from [9] (open access), whereas all other illustrations are original pictures.
Figure 6: For some tooth specimens (a), X-CT slices sometimes fail to show contrast between the mineralized tissues (b), whereas N-CT slices give a clear distinction between enamel and dentine (c), enabling reconstruction of the 3D internal structure (d) [11] (with permission).
Figure 7: In mandibular specimens (a), due to fossilization, we sometimes obtained noisy datasets (b), but it was still possible to retrieve reliable morphological information on the internal tooth structure (c) [11] (with permission).
Figure 8: N-CT of the brain and ear region of Diictodon feliceps (collection of the Museum of Natural History in Berlin, number MB.R. 1000). (a) Tomographic slice through the ear region; (b) photograph of the specimen showing the position of the tomographic slice; (c) 3D model of the brain endocast and the inner ears; (d) virtual reconstruction of the inner ear labyrinth (original figures).
Figure 9: Tomography studies of the lower jaw of Stahleckeria potens, with a length of about 50 cm (Paleontological Collection of Universität Tübingen, number GPIT/RE/7106-2, examined at the NECTAR facility). (a) Photograph of the sample positioned on the sample stage; (b) a single projection (two stitched normalized radiographs); (c) 3D visualization of the reconstructed object (original pictures).
Figure 10: Tomography studies of the skull of Stahleckeria potens (GPIT/RE/7107) at the NECTAR facility. (a) Photograph of the sample positioned on the sample stage; (b) a single projection (five stitched normalized radiographs); (c) a reconstructed tomography slice; (d) 3D visualization of the reconstructed object (original pictures).
Figure 11: Transmission image of a section of two 'pattern-welded' Viking swords (see the diagram above for the position along the blade). Transmission images were measured before (λ1) and after (λ2) the 110 ferrite Bragg edge; by subtracting the two it was possible to obtain a transmission image enhancing the signal from ferrite [15] (original pictures).
11 pages, 3971 KiB  
Article
Hot Shoes in the Room: Authentication of Thermal Imaging for Quantitative Forensic Analysis
by Justin H. J. Chua, Adrian G. Dyer and Jair E. Garcia
J. Imaging 2018, 4(1), 21; https://doi.org/10.3390/jimaging4010021 - 12 Jan 2018
Cited by 3 | Viewed by 5971
Abstract
Thermal imaging has been a mainstay of military applications and diagnostic engineering. However, there is currently no formalised procedure for the use of thermal imaging capable of standing up to judicial scrutiny. Using a scientifically sound characterisation method, we describe the cooling function of three common shoe types at an ambient room temperature of 22 °C (295 K) based on the digital output of a consumer-grade FLIR i50 thermal imager. Our method allows the reliable estimation of cooling time from pixel intensity values within a time interval of 3 to 25 min after shoes have been removed. We found a significant linear relationship between pixel intensity level and temperature. The calibration method allows the replicable determination of independent thermal cooling profiles for objects without the need for emissivity values associated with non-ideal black-body thermal radiation or system noise functions. The method has potential applications for law enforcement and forensic research, such as cross-validating statements about time spent by a person in a room. The use of thermal images can thus provide forensic scientists, law enforcement officials, and legislative bodies with an efficient and cost-effective tool for obtaining and interpreting time-based evidence. Full article
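A generic sketch of the two fits described in the abstract: a linear pixel-intensity-to-temperature calibration and a bi-exponential cooling curve that is inverted numerically to estimate elapsed time from an observed pixel value. The functional forms, toy data and coefficients are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# --- 1. Linear calibration: camera pixel intensity -> temperature (deg C) ---
rho = np.array([60, 90, 120, 150, 180, 210], dtype=float)   # pixel intensities (toy)
temp = np.array([23.1, 25.0, 27.2, 29.0, 31.1, 33.0])        # probe readings (toy)
slope, offset = np.polyfit(rho, temp, 1)

def intensity_to_temp(p):
    return slope * p + offset

# --- 2. Bi-exponential cooling towards ambient temperature ---
T_amb = 22.0
def cooling(t, a, b, c, d):
    return T_amb + a * np.exp(-b * t) + c * np.exp(-d * t)

t_min = np.linspace(0, 30, 31)                     # minutes after shoe removal (toy)
T_obs = cooling(t_min, 8.0, 0.25, 3.0, 0.04) \
        + np.random.default_rng(1).normal(0, 0.05, t_min.size)
(a, b, c, d), _ = curve_fit(cooling, t_min, T_obs, p0=(5, 0.1, 2, 0.01))

# --- 3. Invert the cooling curve: observed pixel value -> elapsed time ---
def time_since_removal(pixel_value, t_max=30.0):
    T = intensity_to_temp(pixel_value)
    return brentq(lambda t: cooling(t, a, b, c, d) - T, 0.0, t_max)

print(round(time_since_removal(150), 1), "min")
```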
Show Figures

Figure 1: Spectral emission from ideal Planckian radiators heated to different temperatures (solid black lines), and spectral bandwidths commonly used for forensic imaging (colour rectangles). Square markers indicate the maximum amplitude (λmax) for each spectrum: daylight (correlated colour temperature of 6500 K), halogen–tungsten filament (4000 K), 75 W house bulb (2800 K), wax candle (1900 K), hot plate for technical purposes heated to about 370 °C (644 K), human body at 37 °C (310 K), and a black body heated to 20 °C (293 K) and 0 °C (273 K). Note how some sources emit radiation across several spectral bands: ultraviolet (UV), visible (VIS), near infrared (NIR), and intermediate and far infrared. Spectral bands cover the sensitivity range of the most common imaging devices used for forensic and technical applications in the ultraviolet, visible, and infrared regions of the electromagnetic spectrum [2,10,11,19,21].
Figure 2: (a) Temperature sampling points on three adult male shoes and (b) representation of shoe cooling in pilot studies. See text and Figure 3 for quantitative analysis. Points A–E represent temperature sampling points with Therm-Micro probes: A (External Toe); B (External Heel); C (Internal Toe); D (Internal Heel); E (Internal Point). Circular boxes represent points sampled on the surface of the shoe; square boxes represent points sampled internally.
Figure 3: Experimental data and modelling results for a method allowing the prediction of cooling time of three different shoes (Cumulus, first column; leather, middle column; Fuji shoe, third column) from pixel values of a thermal imaging device in typical room conditions after wearing three different types of shoes (refer to the Materials and Methods section for details). Panels (a–c) show the relationship between camera response, expressed as pixel intensity values (ρ), and shoe temperature reading of the FLIR i50 30 min after shoe removal. Points represent experimental data, the solid black line represents the predicted function, and the shaded area represents the 95% confidence intervals for the function. The curves at the bottom of each panel represent the shape of the probability density distribution (pdf) of the camera responses at 11 different intensity levels, modelled assuming a beta distribution (see the Supplementary Materials for the coefficients defining each distribution). Panels (d–f) show temperature as a function of time for the tested shoe types, modelled assuming a bi-exponential function. Points represent the experimental data from the Temperature Sensor Meter, the solid black line represents the nonlinear regression function, and the shaded area represents the 95% confidence intervals. Panels (g–i) represent the cooling function for each shoe type: (g) Cumulus, (h) leather, and (i) Fuji shoe. Error bars on the x-axis represent 95% confidence intervals for camera responses at different pixel intensity values, whilst error bars on the y-axis represent the 95% confidence intervals of the predicted cooling time. In panels (g–h), the green shaded area represents the temperature range for which time can be reliably predicted from camera responses. Blue and red areas represent areas of high uncertainty where data should be interpreted with caution.
15 pages, 8174 KiB  
Article
Application of High-Dynamic Range Imaging Techniques in Architecture: A Step toward High-Quality Daylit Interiors?
by Coralie Cauwerts and María Beatriz Piderit
J. Imaging 2018, 4(1), 19; https://doi.org/10.3390/jimaging4010019 - 12 Jan 2018
Cited by 16 | Viewed by 6398
Abstract
High dynamic range (HDR) imaging techniques are nowadays widely used in building research to capture luminances in the occupant's field of view and investigate visual discomfort. This photographic technique also makes it possible to map sky luminances. Such images can be used for illuminating virtual scenes; the technique is called image-based lighting (IBL). This paper presents work investigating IBL in a lighting quality research context to accelerate the development of appearance-driven performance indicators. Simulations were carried out using the Radiance software. The ability of IBL to accurately predict indoor luminances is discussed by comparison with luminances from HDR photographs and luminances predicted by simulation when modeling the sky in several other, more traditional ways. The present study confirms previous observations that IBL leads to luminance values similar to those of far less laborious simulations in which the sky is modeled from outdoor illuminance measurements. IBL and these latter methods minimize differences from HDR photographs in comparison with sky models that are not based on outdoor measurements. Full article
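The comparison metric reported in the figures, the relative mean bias error between simulated and photographed luminances, is straightforward to compute. Below is a minimal sketch with an assumed sign convention (simulated minus reference), not code from the paper.

```python
import numpy as np

def relative_mean_bias_error(simulated, reference):
    """RMBE = mean(simulated - reference) / mean(reference), in percent.

    `simulated` and `reference` are luminance arrays (e.g., per-surface means
    from an IBL rendering and from the HDR photograph of the same scene).
    """
    simulated = np.asarray(simulated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * (simulated - reference).mean() / reference.mean()

# Toy usage: the rendering slightly underestimates the photographed luminances.
print(relative_mean_bias_error([105, 220, 380], [110, 230, 400]))
```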
(This article belongs to the Special Issue Theory and Practice of High-Dynamic Range Imaging)
Show Figures

Figure 1: The lighting quality definition in light of the Vitruvian triad.
Figure 2: The four studied rooms (Room#1, Room#2, Room#3, and Room#4).
Figure 3: Camera response curves for the two CANON 40D cameras used in the present study, as determined with hdrgen.
Figure 4: Creation of the light probe image. For overcast skies, only the f/4 aperture series is used.
Figure 5: The virtualization of a real scene with Radiance requires describing the geometry, the materials, and the light source.
Figure 6: Zones for the surface-to-surface comparison of Room#3.
Figure 7: Mean luminance by surface, in real (REAL) and virtual spaces (gensky_def, gensky_sky, gensky_br, gendaylit, IBL).
Figure 8: Relative mean bias error, by room. The reference is the real world.
Figure 9: Relative mean bias error, by room. The reference is IBL.
7 pages, 613 KiB  
Review
Image-Guided Cancer Nanomedicine
by Dong-Hyun Kim
J. Imaging 2018, 4(1), 18; https://doi.org/10.3390/jimaging4010018 - 11 Jan 2018
Cited by 31 | Viewed by 7374
Abstract
Multifunctional nanoparticles with superior imaging properties and therapeutic effects have been extensively developed for nanomedicine. However, tumor-intrinsic barriers and tumor heterogeneity have resulted in low in vivo therapeutic efficacy. The poor in vivo targeting efficiency of passive and active targeting of nano-therapeutics, along with the toxicity of nanoparticles, has been a major problem in nanomedicine. Recently, image-guided nanomedicine, which can deliver nanoparticles locally using non-invasive imaging and interventional oncology techniques, has attracted attention as a new opportunity in nanomedicine. This short review discusses the existing challenges in nanomedicine and describes the prospects for future image-guided nanomedicine. Full article
(This article belongs to the Special Issue Nanoparticles and Medical Imaging for Image Guided Medicine)
Show Figures

Graphical abstract
Figure 1: Image-guided Cancer Nanomedicine. Image-guided infusion of nanomedicine using interventional procedures allows personalized therapeutics with highly localized nano-therapeutics.
22 pages, 1223 KiB  
Article
Transcription of Spanish Historical Handwritten Documents with Deep Neural Networks
by Emilio Granell, Edgard Chammas, Laurence Likforman-Sulem, Carlos-D. Martínez-Hinarejos, Chafic Mokbel and Bogdan-Ionuţ Cîrstea
J. Imaging 2018, 4(1), 15; https://doi.org/10.3390/jimaging4010015 - 11 Jan 2018
Cited by 28 | Viewed by 9692
Abstract
The digitization of historical handwritten document images is important for the preservation of cultural heritage. Moreover, the transcription of text images obtained from digitization is necessary to provide efficient information access to the content of these documents. Handwritten Text Recognition (HTR) has become an important research topic in the areas of image and computational language processing that allows us to obtain transcriptions from text images. State-of-the-art HTR systems are, however, far from perfect. One difficulty is that they have to cope with image noise and handwriting variability. Another difficulty is the presence of a large amount of Out-Of-Vocabulary (OOV) words in ancient historical texts. A solution to this problem is to use external lexical resources, but such resources might be scarce or unavailable given the nature and the age of such documents. This work proposes a solution to avoid this limitation. It consists of associating a powerful optical recognition system that will cope with image noise and variability, with a language model based on sub-lexical units that will model OOV words. Such a language modeling approach reduces the size of the lexicon while increasing the lexicon coverage. Experiments are first conducted on the publicly available Rodrigo dataset, which contains the digitization of an ancient Spanish manuscript, with a recognizer based on Hidden Markov Models (HMMs). They show that sub-lexical units outperform word units in terms of Word Error Rate (WER), Character Error Rate (CER) and OOV word accuracy rate. This approach is then applied to deep net classifiers, namely Bi-directional Long-Short Term Memory (BLSTMs) and Convolutional Recurrent Neural Nets (CRNNs). Results show that CRNNs outperform HMMs and BLSTMs, reaching the lowest WER and CER for this image dataset and significantly improving OOV recognition. Full article
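Two of the reported metrics, Character Error Rate (CER) and Word Error Rate (WER), are edit-distance ratios; the small self-contained sketch below uses their standard definitions (not code from the paper). The example strings are illustrative.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (lists or strings)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(reference, hypothesis):
    """Character Error Rate: character edits / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

def wer(reference, hypothesis):
    """Word Error Rate: word edits / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

ref = "recognoscio el astragamiento"
hyp = "reconoscio el astragamiento"
print(f"CER = {cer(ref, hyp):.3f}, WER = {wer(ref, hyp):.3f}")
```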
(This article belongs to the Special Issue Document Image Processing)
Show Figures

Figure 1: Sample image of a Spanish document from the 16th century.
Figure 2: Scheme of a handwritten text recognition system.
Figure 3: Page 515 of the Rodrigo dataset.
Figure 4: Text line sample. "Recognoscio" and "Astragamiento" are rare words; recognoscio is an archaic form of reconoció and Astragamiento an ancient form of Estragamiento.
Figure 5: Bi-directional Long-Short Term Memory (BLSTM) system architecture. The BLSTM RNN outputs posterior distributions o at each time step. The decoding is performed with Weighted Finite State Transducers (WFST) using a lexicon and a language model at word level.
Figure 6: CRNN system architecture.
Figure 7: Results obtained by the HMM word-based system using n-gram language models with size n = {1, …, 6}.
Figure 8: Results obtained by decoding at the HMM sub-word level using n-gram language models with size n = {1, …, 6}.
Figure 9: Results obtained by decoding at the HMM character level using n-gram language models with size n = {1, …, 15}.
Figure 10: Distribution of the perplexity presented by the 10-gram character Language Model (LM) per recognized and unrecognized OOV words (decomposed into character sequences) by the HMM system.
Figure 11: CER results obtained by the best word-based HMM system and the best character-based HMM system with open and closed vocabulary, with and without using the validation samples for training the LM.
Figure 12: Recognition accuracy rate for OOV words by the best word-based HMM system and the best character-based HMM system with open and closed vocabulary, with and without using the validation samples for training the LM.
Figure 13: WER results obtained by the best word-based HMM system and the best character-based HMM system with open and closed vocabulary, with and without using the validation samples for training the LM.
Figure 14: Results obtained by the RNN word-based system using n-gram language models.
Figure 15: Results obtained by the RNN sub-word-based system using n-gram language models.
Figure 16: Results obtained by the RNN character-based system using n-gram language models.
Figure 17: Results obtained by the CRNN word-based system using n-gram language models with size n = {1, …, 6}.
Figure 18: Results obtained by the CRNN sub-word-based system using n-gram language models with size n = {1, …, 6}.
Figure 19: Results obtained by the CRNN character-based system using n-gram language models with size n = {1, …, 15}.
Figure A1: Example of the best hypotheses obtained for the 12th line of page 500 of Rodrigo.
Figure A2: Example of the best hypotheses obtained for the 9th line of page 619 of Rodrigo.
Figure A3: Example of the best hypotheses obtained for the 4th line of page 514 of Rodrigo.
17 pages, 3598 KiB  
Article
Secure Image Transmission Using Fractal and 2D-Chaotic Map
by Shafali Agarwal
J. Imaging 2018, 4(1), 17; https://doi.org/10.3390/jimaging4010017 - 10 Jan 2018
Cited by 37 | Viewed by 7334
Abstract
Chaos-based cryptosystems have been proposed and investigated over the last decade because of their sensitivity to initial conditions, unpredictability and ergodicity. This paper introduces a new chaotic map that helps to enhance the security of image transmission by blending a superior fractal function with a new 2D-Sine Tent composite map (2D-STCM) to generate a key stream. The trajectory map of the proposed 2D-STCM shows a wider chaotic range, implying better unpredictability and ergodicity and making it suitable for designing a cryptosystem. Fractal-based image encryption increases the key space of the security key to hundreds of bits, thus securing the proposed cryptosystem against brute-force attacks. The requirements of confusion and diffusion are fulfilled by applying chaotic circular pixel shuffling (CCPS) to repeatedly change pixel positions and by executing an improved XOR operation, complex XOR, designed to increase the encryption quality. The proposed cryptosystem has been analyzed using statistical analysis, key sensitivity, differential analysis and key space analysis. The experimental results prove that the new scheme has a high security level to protect image transmission over the network. Full article
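The confusion–diffusion structure described in the abstract (chaotic key stream, pixel shuffling, then XOR) can be sketched generically. The logistic map below is only a stand-in for the paper's superior fractal and 2D-STCM key streams, and the toy scheme is illustrative only, not the proposed cryptosystem and not suitable for real security.

```python
import numpy as np

def logistic_keystream(n, x0=0.37, r=3.99):
    """Generate n chaotic values with the classic logistic map (stand-in for 2D-STCM)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.37):
    flat = img.flatten()
    ks = logistic_keystream(flat.size, x0)
    perm = np.argsort(ks)                            # confusion: chaotic pixel shuffling
    shuffled = flat[perm]
    key_bytes = np.floor(ks * 256).astype(np.uint8)
    cipher = shuffled.astype(np.uint8) ^ key_bytes   # diffusion: XOR with the keystream
    return cipher.reshape(img.shape), perm, key_bytes

def decrypt(cipher, perm, key_bytes):
    shuffled = cipher.flatten() ^ key_bytes
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                            # undo the chaotic permutation
    return flat.reshape(cipher.shape)

# Toy round trip on a tiny 4x4 "image"
img = (np.arange(16, dtype=np.uint8).reshape(4, 4) * 13) % 256
c, perm, kb = encrypt(img)
assert np.array_equal(decrypt(c, perm, kb), img)
```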
Show Figures

Graphical abstract
Figure 1: Superior fractal images for (a) β = 0.3; (b) β = 0.5; and (c) β = 0.7.
Figure 2: Trajectories of (a) the proposed 2D-STCM map; (b) the 2D-SLMM map and (c) the 2D-Logistic map.
Figure 3: A sample of plain image shuffling using CCPS. (a) Chaotic key sequence (CS); (b) index matrix (IM); (c) plain image (P) and (d) plain shuffled image (SM).
Figure 4: The proposed encryption processes.
Figure 5: Sample chaotic key sequences generated using 2D-STCM.
Figure 6: An example of the proposed image encryption algorithm.
Figure 7: (a) Plain image, cipher image and decrypted image and their histograms; (b) absolute matrices difference between the plain image and the decrypted image.
Figure 8: Original image; Cipher Image1 = encrypt(P, K); Cipher Image2 = encrypt(P, K1); decrypted image using the correct key; Decrypted Image2 = decrypt(C, K1); Decrypted Image3 = decrypt(C, K2) (from top to bottom and left to right).
Figure 9: Adjacent pixel pair distribution of a plain image and its cipher image in the horizontal, vertical and diagonal directions.
21 pages, 51694 KiB  
Article
Surface Mesh Reconstruction from Cardiac MRI Contours
by Benjamin Villard, Vicente Grau and Ernesto Zacur
J. Imaging 2018, 4(1), 16; https://doi.org/10.3390/jimaging4010016 - 10 Jan 2018
Cited by 20 | Viewed by 9392
Abstract
We introduce a tool to build a surface mesh able to deal with sparse, heterogeneous, non-parallel, cross-sectional, non-coincidental contours, and show its application to reconstructing surfaces of the heart. In recent years, much research has looked at creating personalised 3D anatomical models of the heart. These models usually incorporate a geometrical reconstruction of the anatomy in order to better understand cardiovascular functions as well as predict different cardiac processes. As MRI is becoming the standard for cardiac medical imaging, we tested our methodology on cardiac MRI data from standard acquisitions. However, the ability to accurately reconstruct heart anatomy in three dimensions commonly comes with fundamental challenges, notably the trade-off between data fitting and expected visual appearance. Most current techniques either require contours from parallel slices or, if multiple slice orientations are used, require an exact match between these contours. In addition, some methods introduce a bias through the use of prior shape models or through trade-offs between the data matching terms and the smoothing terms. Our approach uses a composition of smooth approximations towards the maximization of the data fitting, ensuring a good match to the input data as well as pleasing interpolation characteristics. To assess our method in the task of cardiac mesh generation, we evaluated its performance on synthetic data obtained from a cardiac statistical shape model as well as on real data. Using a statistical shape model, we simulated standard cardiac MRI acquisition planes and contour data. We performed a multi-parameter evaluation study using plausible cardiac shapes generated from the model. We also show that long-axis contours as well as the most extreme slices (basal and apical) contain the greatest amount of structural information, and thus should be taken into account when generating anatomically relevant geometrical cardiovascular surfaces. Our method is used on both epicardial and endocardial left ventricle surfaces as well as on the right ventricle. Full article
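One building block of the pipeline (see Figure 2) is Laplacian smoothing of the evolving mesh between data-attraction steps. The sketch below shows the usual uniform-weight vertex-averaging form, which is an assumption here rather than the authors' exact smoothing operator.

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing: move each vertex toward the mean of its neighbours.

    vertices : (n, 3) float array
    faces    : (m, 3) int array of triangle vertex indices
    """
    n = len(vertices)
    # Build the vertex adjacency from the triangle edges.
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                              for i, nb in enumerate(neighbours)])
        v += lam * (centroids - v)        # uniform-weight Laplacian step
    return v

# Toy usage: a single noisy quad split into two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0.2], [1, 1, -0.1], [0, 1, 0.05]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(laplacian_smooth(verts, tris, iterations=3))
```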
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1: An example of the SSM of the left ventricle epicardium and right ventricle endocardium fitted to real contours. The dark blue contours represent the real contours. The green contours represent the synthetic contours generated by slicing the SSM at the same spatial pose as the CMR acquired contour planes.
Figure 2: Overall pipeline for generating a surface mesh from contours, illustrated on left epicardial contours. (a) The input contours; (b) tubular initial mesh with its vertices equally spaced along the contours; (c) the initial mesh after the Laplacian smoothing, showing the attractor points on the contours with a visual illustration of their connections pulling the mesh; (d) the resulting mesh after several small deformations towards the attractor points; (e) the mesh after having undergone the process of subdivision, smoothing and decimation; (f) the resulting mesh, after several iterations of steps (d,e).
Figure 3: Parallel coordinate graph showing the different combinations of parameters giving the minimum average, median, and min mesh-to-mesh distance, as well as the number of triangles generated. The red lines represent the selected bands for the y-axis. The highlighted blue lines represent the parameters that have generated meshes satisfying the selected criteria. The faded lines represent every parameter permutation and their respective evaluation metrics. There are 3440 permutations. The fifth to seventh y-axes represent the mean, median and min mesh-to-mesh error distances in mm between the ground truth meshes and the generated meshes given the respective parameters.
Figure 4: (a) The ground truth surface (left epicardium of the mean shape of the SSM) and the synthesized contours in red; (b) our resulting mesh given the red contours; (c) the surface-to-mesh distance error (in mm). The resulting mesh has been clipped at the level of the most basal contour.
Figure 5: Impact of increasing the number of SAX contours on the error distance between the generated mesh and the ground truth surface.
Figure 6: The effect that increasing the number of left epicardial LAX contours has on the error distance between the generated mesh and the ground truth mesh. Top to bottom: 1–8 LAX slices added, successively. Each LAX added is a rotated version of the first LAX shown in the first row, first column. Rotations were generated as follows: 90, 45, 135, 22.5, 157.5, 112.5, 67.5 degrees, respectively.
Figure 7: Distance map from the resulting geometrical mesh to the simulated cardiac input data. The contours were transformed by small out-of-plane rotations and translations. The left panel shows the resulting left ventricular epicardial mesh, and the right panel shows the right ventricular endocardial mesh. The distance errors represent the contour-to-mesh distances.
Figure 8: Different segmented CMR acquired contours on a severely abnormal anatomy.
Figure 9: Example of resulting surface meshes generated from (a) contours for a normal case, and (b) contours belonging to a severely abnormal anatomy. The left panel shows surface meshes for the left endocardial (in red) and epicardial (in blue) ventricle as well as the right epicardial ventricle (in orange). The right panel shows surface meshes for the left endocardial (in red) and epicardial (in blue) ventricle.
Figure 10: Endocardial contour-to-mesh distance having built the mesh with the entire set of contours (labelled OM, which stands for "Original Mesh", in blue) and excluded contour-to-mesh distance having built the mesh without the excluded contour (labelled CM, which stands for "Computed Mesh", in red).
Figure 11: Contour-to-mesh distances for all 24 cases, having built the meshes without the respectively excluded contour. Each error bar represents the distribution of distances for the respective statistical measurement. Each square represents the mean of each statistical measure, with the upper and lower notches representing the minimum and maximum values, respectively. Top: epicardial contour-to-mesh distances; bottom: endocardial contour-to-mesh distances. The y-axis represents the contour index: 1–2 represent the LAX, 3–11 represent the apex to the base, respectively.
Figure A1: Illustration of resulting meshes for different extreme sets of parameters in the parameter exploration and tuning experiment. The color along the contours represents the distance to the resulting mesh. See the main text in Appendix A for comments on the different panels.
Figure A2: Resulting output mesh using our methodology on simulated CMR acquired contours (see Section 3.1). The figure shows the various shapes of the 40 generated meshes.
23 pages, 2276 KiB  
Article
Breast Density Classification Using Local Quinary Patterns with Various Neighbourhood Topologies
by Andrik Rampun, Bryan William Scotney, Philip John Morrow, Hui Wang and John Winder
J. Imaging 2018, 4(1), 14; https://doi.org/10.3390/jimaging4010014 - 8 Jan 2018
Cited by 50 | Viewed by 6804
Abstract
This paper presents an extension of our previous study, investigating the use of Local Quinary Patterns (LQP) for breast density classification in mammograms over various neighbourhood topologies. The LQP operators are used to capture the texture characteristics of the fibro-glandular disk region (FGDroi) instead of the whole breast area, as the majority of current studies have done. We take a multiresolution and multi-orientation approach, investigate the effects of various neighbourhood topologies and select dominant patterns to maximise texture information. Subsequently, a Support Vector Machine classifier is used to perform the classification, and a stratified ten-fold cross-validation scheme is employed to evaluate the performance of the method. The proposed method produced competitive results of up to 86.13% and 82.02% accuracy based on 322 and 206 mammograms taken from the Mammographic Image Analysis Society (MIAS) and InBreast datasets, respectively, which is comparable with the state of the art in the literature. Full article
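Local Quinary Patterns quantise each neighbour–centre difference into five levels using two thresholds and then split the quinary code into binary patterns. The sketch below follows one common formulation, approximating the circular P = 8, R = 1 neighbourhood with the 8-connected pixels and using arbitrary thresholds; the paper's exact operator, topologies and dominant-pattern selection may differ.

```python
import numpy as np

def quinary_level(diff, t1, t2):
    """Map a neighbour-centre difference to one of the five quinary levels."""
    if diff >= t2:
        return 2
    if diff >= t1:
        return 1
    if diff > -t1:
        return 0
    if diff > -t2:
        return -1
    return -2

def lqp_histograms(img, t1=2, t2=5):
    """Return one 256-bin histogram per quinary level {2, 1, -1, -2}.

    Uses the 8-connected neighbourhood as an approximation of the circular
    (P = 8, R = 1) topology; other topologies (ellipse, hyperbola, parabola)
    would change only the sampling offsets.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    hists = {lvl: np.zeros(256, dtype=int) for lvl in (2, 1, -1, -2)}
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            levels = [quinary_level(int(img[i + di, j + dj]) - int(img[i, j]), t1, t2)
                      for di, dj in offsets]
            for lvl in hists:                      # split into four binary patterns
                code = sum((levels[k] == lvl) << k for k in range(8))
                hists[lvl][code] += 1
    return hists

# Toy usage: concatenate the four histograms into one texture feature vector.
img = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
H = lqp_histograms(img)
feats = np.concatenate([H[lvl] for lvl in (2, 1, -1, -2)])
print(feats.shape)   # (1024,) before any dominant-pattern selection
```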
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1: Examples of breast density according to Breast Imaging-Reporting and Data System (BI-RADS) classes.
Figure 2: An overview of the proposed breast density methodology.
Figure 3: Example of segmentation results using our method in [30].
Figure 4: An illustration of computing the Local Quinary Pattern (LQP) code using P = 8 and R = 1, resulting in four binary patterns.
Figure 5: An overview of the feature extraction using multiresolution LQP operators. Black dots in the multiresolution LQP operators denote neighbours with a smaller value than the central pixel (red dot). Note that the outer circles with bigger dots represent a larger R value.
Figure 6: Four images of binary patterns generated from the LQP code image.
Figure 7: Five different neighbourhood topologies employed in our study.
Figure 8: Ellipse topology at different orientations and its combination.
Figure 9: Spatial rotation in an ellipse topology resulting in different decimal values. The decimal value is calculated in a clockwise direction.
Figure 10: Quantitative results using different multiresolutions based on different neighbourhood topologies. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 11: Quantitative results using different values of τ1 and τ2 on the circle neighbourhood topology. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 12: Quantitative results based on features extracted from the whole breast versus the fibro-glandular disk region, covering four different topologies. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 13: Histograms extracted from wb versus the region of interest (ROI) for BI-RADS classes I and IV.
Figure 14: Quantitative results (MIAS dataset [18]) of LQP^medium_{ci,el,hy,par} based on features extracted from the fibro-glandular disk region for different neighbourhood topologies. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 15: Quantitative results (InBreast dataset [15]) of LQP^medium_{ci,el,hy,par} based on features extracted from the fibro-glandular disk region for different neighbourhood topologies. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 16: Quantitative results using different orientations evaluated on the MIAS dataset [18]. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
Figure 17: Quantitative results using different orientations evaluated on the InBreast dataset [15]. Note that the x-axis and y-axis represent the accuracy (percentage of correctly classified cases) and the percentage of dominant patterns, respectively.
16 pages, 5057 KiB  
Article
Range Imaging for Motion Compensation in C-Arm Cone-Beam CT of Knees under Weight-Bearing Conditions
by Bastian Bier, Nishant Ravikumar, Mathias Unberath, Marc Levenston, Garry Gold, Rebecca Fahrig and Andreas Maier
J. Imaging 2018, 4(1), 13; https://doi.org/10.3390/jimaging4010013 - 6 Jan 2018
Cited by 14 | Viewed by 7560
Abstract
C-arm cone-beam computed tomography (CBCT) has recently been used to acquire images of the human knee joint under weight-bearing conditions to assess knee joint health under load. However, involuntary patient motion during image acquisition leads to severe motion artifacts in the subsequent reconstructions. The state of the art uses fiducial markers placed on the patient's knee to compensate for the induced motion artifacts. The placement of markers is time consuming, tedious, and requires user experience to guarantee reliable motion estimates. To overcome these drawbacks, we recently investigated whether range imaging would allow us to track, estimate, and compensate for patient motion using a range camera. We argue that the dense surface information observed by the camera could reveal more information than the few surface points of the marker-based method. However, integrating range imaging with CBCT involves design choices, such as where to position the camera and which algorithm to use to align the data. In this work, three-dimensional rigid body motion is estimated for synthetic data acquired with two different range camera trajectories: a static position on the ground and a dynamic position on the C-arm. Motion estimation is evaluated using two different types of point cloud registration algorithms: a pairwise Iterative Closest Point algorithm as well as a probabilistic group-wise method. We compare the reconstruction results and the estimated motion signals with the ground truth and the current reference standard, a marker-based approach. To this end, we qualitatively and quantitatively assess image quality. The latter is evaluated using the Structural Similarity (SSIM) index. We achieved results comparable to the marker-based approach, which highlights the potential of both point set registration methods for accurately recovering patient motion. The SSIM improved from 0.94 to 0.99 and 0.97 using the static and the dynamic camera trajectory, respectively. Accurate recovery of patient motion resulted in a remarkable reduction of motion artifacts in the CBCT reconstructions, which is promising for future work with real data. Full article
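The pairwise point-set alignment used as one of the two registration back-ends can be sketched as a plain Iterative Closest Point loop (nearest-neighbour correspondences plus an SVD/Kabsch rigid fit). This is a generic textbook version, not the authors' implementation or the probabilistic group-wise method.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    """Align `source` (n, 3) to `target` (m, 3); returns transformed source, R, t."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                  # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    R_total, t_total = best_rigid_transform(source, src)
    return src, R_total, t_total

# Toy usage: recover a known small rotation/translation of a random point cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = pts @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned, R, t = icp(pts, moved)
print(np.abs(aligned - moved).max())   # should be close to zero
```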
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
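As a rough illustration of the pairwise registration idea compared in this paper, the following sketch estimates a rigid transform between two point clouds with a basic Iterative Closest Point loop. The nearest-neighbour matching, SVD-based update and iteration count are illustrative assumptions, not the authors' implementation (which also evaluates a probabilistic groupwise method).

```python
# Minimal sketch of pairwise rigid point-cloud alignment (ICP-style).
# All parameter choices here are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Estimate the rigid motion that aligns the observed surface src to dst."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)           # closest-point correspondences
        R, t = rigid_fit(cur, dst[idx])    # best rigid update for this pairing
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```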
Show Figures

Figure 1
Schematic scene of the acquisition scenario. During scanning, the C-arm rotates around the object and acquires projections. At the same time, a range camera observes the scene. Possible camera positions investigated in this work are either dynamic (a) or static (b).
Figure 2
Simulated projection image (a) with its corresponding point cloud (b).
Figure 3
Method work flow: point clouds were used to estimate patient motion, which is then used for reconstruction.
Figure 4
Reconstruction results of dataset 1: axial slices of the motion free reference volume (a); the motion corrupted (b); the marker-based corrected (c); and the proposed methods (d–g).
Figure 5
Reconstruction results of dataset 1 achieved with noisy point clouds.
Figure 6
Estimated motion signals. The rotation parameters are Euler angles calculated from the estimated rotation matrices in order to simplify the presentation.
Figure 7
Reconstruction results of dataset 2: axial slices of the motion free reference volume (a); the motion corrupted (b); the marker-based corrected (c); and the proposed methods (d–g).
34 pages, 5489 KiB  
Article
Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method
by Andreas Langer
J. Imaging 2018, 4(1), 12; https://doi.org/10.3390/jimaging4010012 - 6 Jan 2018
Cited by 6 | Viewed by 5339
Abstract
In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution [...] Read more.
In this paper, we investigate the usefulness of adding a box-constraint to the minimization of functionals consisting of a data-fidelity term and a total variation regularization term. In particular, we show that in certain applications an additional box-constraint does not affect the solution at all, i.e., the solution is the same whether a box-constraint is used or not. On the contrary, for applications where a box-constraint may influence the solution, we investigate how much it affects the quality of the restoration, especially when the regularization parameter, which weights the importance of the data term and the regularizer, is chosen suitably. In particular, for such applications, we consider the case of a squared L2 data-fidelity term. For computing a minimizer of the respective box-constrained optimization problems, a primal-dual semi-smooth Newton method is presented, which guarantees superlinear convergence. Full article
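For readers who want a concrete picture of what a box-constraint adds to L2-TV restoration, the sketch below denoises an image by plain projected gradient descent on a smoothed total-variation energy, clipping the iterate to the box [0, 1]. This is a deliberately simple stand-in, not the primal-dual semi-smooth Newton method analysed in the paper; the step size, smoothing parameter and box bounds are assumptions.

```python
# Illustrative box-constrained L2-TV denoising via projected gradient descent.
# Not the paper's semi-smooth Newton method; parameters are assumptions.
import numpy as np

def grad(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences, Neumann boundary
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def div(px, py):
    dx = np.diff(px, axis=0, prepend=np.zeros((1, px.shape[1])))
    dy = np.diff(py, axis=1, prepend=np.zeros((py.shape[0], 1)))
    return dx + dy

def box_tv_denoise(f, alpha=0.1, eps=1e-3, tau=0.1, iters=300, lo=0.0, hi=1.0):
    """Approximately minimise 0.5*||u-f||^2 + alpha*TV_eps(u) subject to lo <= u <= hi."""
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        norm = np.sqrt(gx**2 + gy**2 + eps**2)
        tv_grad = -div(gx / norm, gy / norm)       # gradient of the smoothed TV term
        u = u - tau * ((u - f) + alpha * tv_grad)  # gradient step on the energy
        u = np.clip(u, lo, hi)                     # projection onto the box-constraint
    return u
```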
Show Figures

Figure 1
Original images of size 256 × 256. (a) Phantom; (b) Cameraman; (c) Barbara; (d) House; (e) Lena; (f) Bones; (g) Cookies; (h) Numbers.
Figure 2
Original images (a) Shepp-Logan phantom of size 128 × 128 pixels (b) knee of size 200 × 200 pixels (c) slice of a human brain of size 128 × 128 pixels.
Figure 3
Reconstruction of the cameraman image corrupted by Gaussian white noise with σ = 0.1 (left), corrupted by blurring and Gaussian white noise with σ = 0.1 (right) via the semi-smooth Newton method with α = 0.01 for different values η.
Figure 4
Regularization parameter versus noise-level for the box-constrained pAPS in image denoising.
Figure 5
Reconstruction from blurry and noisy data. (a) Noisy observation; (b) pAPS with pdN with η = 0 (PSNR: 24.241; MSSIM: 0.58555); (c) pAPS with box-constrained pdN (PSNR: 24.241; MSSIM: 0.58564); (d) ADMM (PSNR: 24.224; MSSIM: 0.5883); (e) Box-constrained ADMM (PSNR: 24.215; MSSIM: 0.58921); (f) pLATV with pdN with η = 0 (PSNR: 24.503; MSSIM: 0.58885); (g) pLATV with box-constrained pdN (PSNR: 24.486; MSSIM: 0.58693).
Figure 6
Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0; (b) pLATV with box-constrained pdN.
Figure 7
Regularization parameter versus noise-level for the box-constrained pAPS in image deblurring.
Figure 8
Reconstruction from blurry and noisy data. (a) Blurry and noisy observation; (b) pAPS with pdN with η = 0 (PSNR: 24.281; MSSIM: 0.34554); (c) pAPS with box-constrained pdN (PSNR: 24.293; MSSIM: 0.34618); (d) ADMM (PSNR: 24.236; MSSIM: 0.34073); (e) Box-constrained ADMM (PSNR: 24.237; MSSIM: 0.34082); (f) pLATV with pdN with η = 0 (PSNR: 24.599; MSSIM: 0.34708); (g) pLATV with box-constrained pdN (PSNR: 24.622; MSSIM: 0.34769).
Figure 9
Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0; (b) pLATV-algorithm with box-constrained pdN.
Figure 10
Simultaneous image inpainting and denoising with σ = 0.1. (a) Observation; (b) pAPS with pdN with η = 0 (PSNR: 24.922; MSSIM: 0.44992); (c) pAPS with box-constrained pdN (PSNR: 24.922; MSSIM: 0.44992); (d) pLATV with pdN with η = 0 (PSNR: 24.893; MSSIM: 0.4498); (e) pLATV with box-constrained pdN (PSNR: 24.868; MSSIM: 0.45004).
Figure 11
Spatially varying regularization parameter generated by the respective pLATV-algorithm. (a) pLATV with pdN with η = 0; (b) pLATV-algorithm with box-constrained pdN.
Figure 12
Sampling domain in the frequency plane, i.e., sampling operator S.
Figure 13
Reconstruction from sampled Fourier data. (a) pAPS with pdN with η = 0 (PSNR: 25.017; MSSIM: 0.37061); (b) pAPS with box-constrained pdN (PSNR: 25.017; MSSIM: 0.37056); (c) pLATV with box-constrained pdN (PSNR: 23.64; MSSIM: 0.34945); (d) pLATV with pdN with η = 0 (PSNR: 23.525; MSSIM: 0.34652).
Figure 14
The Shepp-Logan phantom image of size 64 × 64 pixels and its measured sinogram. (a) Original image; (b) Sinogram.
Figure 15
Slice of a human head and its measured sinogram. (a) Original image; (b) Sinogram.
Figure 16
Reconstruction from noisy data. (a) Inverse Radon-transform (PSNR: 29.08; MSSIM: 0.3906); (b) L²-TV (PSNR: 29.14; MSSIM: 0.4051); (c) Box-constrained L²-TV (PSNR: 33.31; MSSIM: 0.6128); (d) Inverse Radon-transform (PSNR: 31.75; MSSIM: 0.3699); (e) L²-TV (PSNR: 32.16; MSSIM: 0.3682); (f) Box-constrained L²-TV (PSNR: 36.08; MSSIM: 0.5856).
17 pages, 7607 KiB  
Article
Estimating Bacterial and Cellular Load in FCFM Imaging
by Sohan Seth, Ahsan R. Akram, Kevin Dhaliwal and Christopher K. I. Williams
J. Imaging 2018, 4(1), 11; https://doi.org/10.3390/jimaging4010011 - 5 Jan 2018
Cited by 9 | Viewed by 6758
Abstract
We address the task of estimating bacterial and cellular load in the human distal lung with fibered confocal fluorescence microscopy (FCFM). In pulmonary FCFM some cells can display autofluorescence, and they appear as disc-like objects in the FCFM images, whereas bacteria, although [...] Read more.
We address the task of estimating bacterial and cellular load in the human distal lung with fibered confocal fluorescence microscopy (FCFM). In pulmonary FCFM some cells can display autofluorescence, and they appear as disc-like objects in the FCFM images, whereas bacteria, although not autofluorescent, appear as bright blinking dots when exposed to a targeted smartprobe. Estimating bacterial and cellular load becomes a challenging task due to the presence of background from autofluorescent human lung tissues, i.e., elastin, and imaging artifacts from motion, etc. We create a database of annotated images for each of these tasks, in which bacteria and cells were marked, and use these databases for supervised learning. We extract image patches around each pixel as features, and train a classifier to predict whether a bacterium or cell is present at that pixel. We apply our approach to two datasets for detecting bacteria and cells, respectively. For the bacteria dataset, we show that the estimated bacterial load increases after introducing the targeted smartprobe in the presence of bacteria. For the cell dataset, we show that the estimated cellular load agrees with a clinician’s assessment. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
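The following sketch illustrates the patch-based supervised detection idea from the abstract: patches around annotated pixels are used as features and a classifier scores every pixel. The patch size, the plain logistic-regression classifier (one of the learners compared in the paper) and the detection threshold are illustrative assumptions, not the exact spatio-temporal, multi-resolution features or RBF network used by the authors.

```python
# Minimal sketch of pixel-wise bacteria/cell detection from image patches.
# Patch size, classifier and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_patches(frame, coords, half=4):
    """Flattened (2*half+1)^2 patches centred on the given (row, col) coordinates."""
    pad = np.pad(frame, half, mode="reflect")
    return np.array([pad[r:r + 2 * half + 1, c:c + 2 * half + 1].ravel()
                     for r, c in coords])

def train_detector(frame, pos_coords, neg_coords):
    """Fit a classifier on patches at annotated (positive) and background (negative) pixels."""
    X = np.vstack([extract_patches(frame, pos_coords),
                   extract_patches(frame, neg_coords)])
    y = np.r_[np.ones(len(pos_coords)), np.zeros(len(neg_coords))]
    return LogisticRegression(max_iter=1000).fit(X, y)

def detect(model, frame, threshold=0.8):
    """Per-pixel probability map; detections are probabilities above the threshold."""
    h, w = frame.shape
    coords = [(r, c) for r in range(h) for c in range(w)]
    prob = model.predict_proba(extract_patches(frame, coords))[:, 1].reshape(h, w)
    return prob > threshold
```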
Show Figures

Figure 1
FCFM image frames with or without cells. The images are of 600 by 600 microns.
Figure 2
FCFM image frames with or without smartprobe in control or case group. The images are of 600 by 600 microns. The bacteria are usually of 1–2 microns but appear larger due to image smoothing.
Figure 3
FCFM image frame with annotated bacteria shown as circles at time t.
Figure 4
FCFM image frame with annotated cells shown as dots at time t.
Figure 5
Illustration of spatio-temporal and multi-resolution feature extraction. (a) Image patches around positive (red) and negative (green) annotations: the three boxes are of sizes 9 × 9, 27 × 27 and 45 × 45 pixels respectively; (b) Features (neg. sample); (c) Features (pos. sample).
Figure 6
Illustration of spatio-temporal and multi-resolution feature extraction. (a) Image patches around positive (red) and negative (green) annotations: the three boxes are of sizes 9 × 9, 27 × 27, 45 × 45 and 63 × 63 pixels respectively; (b) Features (neg. sample); (c) Features (pos. sample).
Figure 7
Examples of centers of RBF network for bacteria dataset.
Figure 8
Examples of centers of RBF network for cell dataset.
Figure 9
Illustration of true positives, false negatives and false positives. + are ground truth annotations, • are detections and ∘ are disks of radius r around ground truth annotations.
Figure 10
Precision-recall curves for different learning methods (logistic regression (LR) or radial basis function (RBF) network) and different spatio-temporal feature extraction strategies, i.e., different spatial resolutions (res) and different temporal resolutions (frames) in detecting bacteria and cell.
Figure 11
Ground truth annotations and detected bacteria at threshold 0.8.
Figure 12
Ground truth annotations and detected cells at threshold 0.7.
Figure 13
Estimated bacterial load in each image frame of 22 FCFM videos for 6 controls and 5 cases, pre- and post-substance.
Figure 14
Comparison of median cell count against visual assessment of cellularity.
2610 KiB  
Article
An Ecological Visual Exploration Tool to Support the Analysis of Visual Processing Pathways in Children with Autism Spectrum Disorders
by Dario Cazzato, Marco Leo, Cosimo Distante, Giulia Crifaci, Giuseppe Massimo Bernava, Liliana Ruta, Giovanni Pioggia and Silvia M. Castro
J. Imaging 2018, 4(1), 9; https://doi.org/10.3390/jimaging4010009 - 29 Dec 2017
Cited by 4 | Viewed by 5586
Abstract
Recent improvements in the field of assistive technologies have led to innovative solutions aiming at increasing the capabilities of people with disability, helping them in daily activities with applications that span from cognitive impairments to developmental disabilities. In particular, in the case of [...] Read more.
Recent improvements in the field of assistive technologies have led to innovative solutions aimed at increasing the capabilities of people with disabilities, helping them in daily activities with applications that span from cognitive impairments to developmental disabilities. In particular, in the case of Autism Spectrum Disorder (ASD), the need to obtain active feedback from which meaningful data can subsequently be extracted becomes of fundamental importance. In this work, a study about the possibility of understanding visual exploration in children with ASD is presented. In order to obtain an automatic evaluation, an algorithm for free gaze estimation (i.e., without constraints, additional hardware, infrared (IR) light sources or other intrusive methods) is employed. Furthermore, no initial calibration is required. It allows the user to freely rotate the head in the field of view of the sensor, and it is insensitive to the presence of eyeglasses, hats or particular hairstyles. These relaxations of the constraints make this technique particularly suitable for the critical context of autism, where the child is certainly not inclined to employ invasive devices, nor to collaborate during calibration procedures. The evaluation of children’s gaze trajectories through the proposed solution is presented in the context of an Early Start Denver Model (ESDM) program built on the child’s spontaneous interests and game choices, delivered in a natural setting. Full article
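As a simple illustration of how estimated gaze points can be turned into the hit maps shown in the figures below, the following sketch accumulates gaze intersections with the observed scene plane into a smoothed 2D histogram. The grid size, scene extent and Gaussian smoothing are assumptions for illustration only, not the authors' processing.

```python
# Minimal sketch: accumulate estimated gaze points into a "hit map" of the scene.
# Grid resolution, scene size and smoothing are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_hit_map(gaze_points, scene_w=1.0, scene_h=1.0, bins=(64, 64), sigma=1.5):
    """gaze_points: array of (x, y) gaze intersections with the scene plane, in metres."""
    pts = np.asarray(gaze_points)
    hist, _, _ = np.histogram2d(pts[:, 1], pts[:, 0],
                                bins=bins, range=[[0, scene_h], [0, scene_w]])
    return gaussian_filter(hist, sigma=sigma)   # smoothed visual-attention (hit) map
```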
Show Figures

Figure 1
A block diagram of the gaze estimation method.
Figure 2
A schematic representation of the head pose estimation module.
Figure 3
Two outputs of the head pose estimation modules.
Figure 4
The rough localization of the eye regions. Labels are explained in the text.
Figure 5
Scheme of the pupil detection module.
Figure 6
Gaze estimation by head pose.
Figure 7
A schematic representation of the gaze correction.
Figure 8
The technical setup to evaluate the proposed gaze estimator. A person is standing in front of the panel where nine circular markers were stuck. The person is asked to look at each of the markers on the panel, in a predefined order, and then to confirm that the markers are in their requisite positions. In the example, the marker on the upper left of the panel is being observed.
Figure 9
An example of toys disposition in the closet.
Figure 10
The computed hit-map for Children #1 (left) and #2 (right).
Figure 11
The computed hit map for Child #3, Sessions A (left) and B (right).
5979 KiB  
Article
Neutron Imaging with Timepix Coupled Lithium Indium Diselenide
by Elan Herrera, Daniel Hamm, Ashley Stowe, Jeffrey Preston, Brenden Wiggins, Arnold Burger and Eric Lukosi
J. Imaging 2018, 4(1), 10; https://doi.org/10.3390/jimaging4010010 - 29 Dec 2017
Cited by 6 | Viewed by 8274
Abstract
The material lithium indium diselenide, a single-crystal neutron-sensitive semiconductor, has demonstrated its capabilities as a high-resolution imaging device. The sensor was prepared with a 55 μm pitch array of gold contacts, designed to couple with the Timepix imaging ASIC. [...] Read more.
The material lithium indium diselenide, a single-crystal neutron-sensitive semiconductor, has demonstrated its capabilities as a high-resolution imaging device. The sensor was prepared with a 55 μm pitch array of gold contacts, designed to couple with the Timepix imaging ASIC. The resulting device was tested at the High Flux Isotope Reactor, demonstrating a response to cold neutrons when enriched to 95% in ⁶Li. The imaging system performed a series of experiments resulting in a <200 μm resolution limit with the Paul Scherrer Institute (PSI) Siemens star mask and a feature resolution of 34 μm with a knife-edge test. Furthermore, the system was able to resolve the University of Tennessee logo inscribed into a 3D printed 1 cm³ plastic block. This technology marks the application of high-resolution neutron imaging using a direct readout semiconductor. Full article
(This article belongs to the Special Issue Neutron Imaging)
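The knife-edge analysis referenced in Figure 10 below follows a standard chain from edge spread function (ESF) to line spread function (LSF) to modulation transfer function (MTF). The sketch shows that chain in minimal form; the pixel pitch and normalisation are illustrative assumptions, not the authors' exact processing.

```python
# Minimal sketch of the knife-edge ESF -> LSF -> MTF resolution analysis.
# Pixel pitch and normalisation are illustrative assumptions.
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_um=55.0):
    """edge_profile: 1D intensity profile sampled perpendicular to the knife edge."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                       # LSF is the derivative of the ESF
    total = lsf.sum()
    lsf = lsf / total if total != 0 else lsf
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalise to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um / 1000.0)  # cycles per mm
    return freqs, mtf
```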
Show Figures

Figure 1
Neutron detection in lithium indium diselenide (LISe) semiconducting sensor. (a) incident neutron is absorbed by ⁶Li in semiconductor bulk; (b) emitted ³H and α generate charged particle pairs; (c) charge carriers driven to electrodes by applied internal electric field.
Figure 2
Photolithography process using negative resist for pixelated LISe semiconductor sensor fabrication (16-channel pixel detector with square pixels and guard ring [16] shown for demonstration). (a) clean planchet; (b) apply wax coating; (c) bond sensor; (d) spincoat and bake; (e) align photomask; (f) UV exposure; (g) post bake; (h) develop resist; (i) deposit metal; (j) lift-off resist; (k) clean final sensor.
Figure 3
LISe Timepix layering diagram.
Figure 4
LISe semiconductor coupled to Timepix readout chip. (a) CAD model of LISe Timepix detection module; (b) bonded LISe Timepix imager PCB.
Figure 5
PSI Siemens star neutron imaging resolution test target photographs [24]. (a) Siemens star photograph; (b) mask positioned in front of LISe Timepix for size comparison (image captured at an angle showing reflection off broad face contact).
Figure 6
Photograph of 3D printed “Power T” target.
Figure 7
Neutron field response image to 2 mCi PuBe source. (a) thermalized PuBe source; (b) bare PuBe source; (c) thermalized PuBe source region of interest (ROI).
Figure 8
HFIR CG-1D open beam neutron response image. (a) open beam response; (b) open beam intensity ROI.
Figure 9
Neutron image of knife edge slit in thin cadmium attenuating sheet. (a) slit: 500 μm; (b) slit: 750 μm.
Figure 10
Modulation transfer function method to calculate edge resolution. (a) 750 μm slit with ROI overlay; (b) slit edge spread function; (c) slit line spread function; (d) slit modulation transfer function.
Figure 11
Neutron image of Siemens star on PSI resolution test mask.
Figure 12
Neutron image of 3D printed “Power T”. (a) threshold: +4 mV; (b) threshold: +24 mV.
1646 KiB  
Article
Reference Tracts and Generative Models for Brain White Matter Tractography
by Susana Muñoz Maniega, Mark E. Bastin, Ian J. Deary, Joanna M. Wardlaw and Jonathan D. Clayden
J. Imaging 2018, 4(1), 8; https://doi.org/10.3390/jimaging4010008 - 28 Dec 2017
Cited by 1 | Viewed by 5869
Abstract
Background: Probabilistic neighborhood tractography aims to automatically segment brain white matter tracts from diffusion magnetic resonance imaging (dMRI) data in different individuals. It uses reference tracts as priors for the shape and length of the tract, and matching models that describe typical deviations [...] Read more.
Background: Probabilistic neighborhood tractography aims to automatically segment brain white matter tracts from diffusion magnetic resonance imaging (dMRI) data in different individuals. It uses reference tracts as priors for the shape and length of the tract, and matching models that describe typical deviations from these. We evaluated new reference tracts and matching models derived from dMRI data acquired from 80 healthy volunteers, aged 25–64 years. Methods: The new reference tracts and models were tested in 50 healthy older people, aged 71.8 ± 0.4 years. The matching models were further assessed by sampling and visualizing synthetic tracts derived from them. Results: We found that data-generated reference tracts improved the success rate of automatic white matter tract segmentations. We observed an increased rate of visually acceptable tracts, and decreased variation in quantitative parameters when using this approach. Sampling from the matching models demonstrated their quality, independently of the testing data. Conclusions: We have improved the automatic segmentation of brain white matter tracts, and demonstrated that matching models can be successfully transferred to novel data. In many cases, this will bypass the need for training data and make the use of probabilistic neighborhood tractography in small testing datasets newly practicable. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
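The generative sampling of synthetic tracts evaluated here (and illustrated in Figure 3 below) can be pictured with the following minimal sketch: each step direction is drawn on a cone around the corresponding reference-tract direction, with the azimuth sampled uniformly in [0, 2π). The Gaussian angle model, fixed step length and seed handling are illustrative assumptions, not the fitted PNT matching model.

```python
# Minimal sketch of drawing a synthetic streamline from a reference tract and
# per-step angular deviations.  Angle model and step length are assumptions.
import numpy as np

def sample_direction(v_ref, phi, rng):
    """Unit vector deviating from the unit reference direction v_ref by angle phi."""
    helper = np.array([1.0, 0.0, 0.0]) if abs(v_ref[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v_ref, helper); e1 /= np.linalg.norm(e1)   # orthonormal basis around v_ref
    e2 = np.cross(v_ref, e1)
    theta = rng.uniform(0.0, 2.0 * np.pi)                    # azimuth on the cone
    return np.cos(phi) * v_ref + np.sin(phi) * (np.cos(theta) * e1 + np.sin(theta) * e2)

def sample_streamline(ref_dirs, phi_std, seed_point, step=2.0, rng=None):
    """Draw one synthetic streamline, one knot per reference-tract segment."""
    rng = rng or np.random.default_rng()
    points = [np.asarray(seed_point, dtype=float)]
    for v_ref in ref_dirs:
        phi = abs(rng.normal(0.0, phi_std))                  # sampled angular deviation
        points.append(points[-1] + step * sample_direction(v_ref, phi, rng))
    return np.array(points)
```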
Show Figures

Figure 1
Flow chart of the processes followed in this manuscript. Black paths show the creation of the data-based reference tracts and training data-based supervised models (which represent the deviations of the training data), using the training data. Color paths show the three cases of tract segmentation performed in the LBC1936 data: red paths use the data-based reference tracts and the training data-based models to segment white matter tracts; blue paths use the data-based reference tracts in the LBC1936 data to create models (which represent the deviations of the tracts corresponding to LBC1936 data), and segment the tracts simultaneously using expectation–maximization (EM); and, yellow paths use the atlas-based reference tracts to create models (which represent the deviations of the tracts corresponding to LBC1936 data), and segment the tracts simultaneously using EM.
Figure 2
Graphical representation of a candidate tract (a), the median line is fitted to a B-spline with knot points separated by a distance d (straight-line distance). A B-spline representation is also used for the reference tract. The vector between two consecutive knot points in the candidate and the equivalent knot points in the reference can be compared and the angular deviations obtained. (b) illustrates the shape model used by probabilistic neighborhood tractography (PNT), based on angular deviations φ_u, between equivalent tract segments in the reference and candidate tracts, v_u* and v_u, respectively. The putative direction of each segment is always away from the anchor point. Adapted from [5,6].
Figure 3
Graphical representation of the sampling process for step vectors, v_u. (a) From the voxel corresponding to the anchor point, the “left” and “right” tract lengths are sampled from the model length distributions, obtaining the total length of the streamline. (b) From the first step on one side, the vector v_u is sampled, leading to the next knot in the streamline. This vector is obtained from the angle φ_u sampled from the model angle distribution at that knot. This is replicated for every step until the distance L_2 is reached. The process is then repeated for the “left” tract lengths. (c,d) Geometric representation of the sub-steps for the sampling of v_u: given a reference tract direction, v_u*, and an angular deviation from it, φ_u (c). These jointly specify a circular locus of possible directions (d), from which a final vector is chosen by additionally sampling θ ∈ [0, 2π].
Figure 4
Group maps projections for the 16 tracts of interest segmented using the data-based (left panel) and atlas-based (right panel) reference tracts. Top panels used a matching model trained in the LBC1936 data, and the bottom panel used a model trained in the training data. The tracts represented are: (a) genu and (b) splenium of the corpus callosum, left and right arcuate fasciculus (c,d), left and right anterior thalamic radiation (e,f), left and right inferior longitudinal fasciculus (g,h), left and right dorsal (i,j) and ventral (k,l) cingulum, left and right corticospinal tracts (m,n) and left and right uncinate fasciculus (o,p). Color scale represents the voxel visitation frequency, from 1 (light yellow) to 50 (dark blue). Maps are projected into the plane of the voxel with maximum visitation value. Red arrows point at the main differences obtained between the resulting tracts derived from atlas-based and data-based reference tracts. Figure adapted from [1].
Figure 5
Overlays of the uncinate (a) and arcuate (b) fasciculi. Atlas tracts represented in red (from [17]) and tracts segmented in the LBC1936 data using atlas-based reference tracts and unsupervised models in green (left) and blue (right), in radiological convention.
Figure 6
Streamline representations of the synthetic tracts obtained by sampling from the PNT models generated from the training and LBC1936 data. First column: PNT model from the training dataset using the data-based reference tract; second column: PNT model from the LBC1936 dataset using the data-based reference tract; third column: PNT model from the LBC1936 dataset and the atlas-based reference tract. (a) genu (b) splenium, (c) Arc, (d) ATR, (e) Cing, (f) Cing, ventral, (g) ILF, (h) Unc, and (i) CST.
10393 KiB  
Review
Deriving Quantitative Crystallographic Information from the Wavelength-Resolved Neutron Transmission Analysis Performed in Imaging Mode
by Hirotaka Sato
J. Imaging 2018, 4(1), 7; https://doi.org/10.3390/jimaging4010007 - 28 Dec 2017
Cited by 31 | Viewed by 7586
Abstract
The current status of Bragg-edge/dip neutron transmission analysis/imaging methods is presented. The method can visualize real-space distributions of bulk crystallographic information in a crystalline material over a large area (~10 cm) with high spatial resolution (~100 μm). Furthermore, by using suitable spectrum analysis methods [...] Read more.
The current status of Bragg-edge/dip neutron transmission analysis/imaging methods is presented. The method can visualize real-space distributions of bulk crystallographic information in a crystalline material over a large area (~10 cm) with high spatial resolution (~100 μm). Furthermore, by using suitable spectrum analysis methods for wavelength-dependent neutron transmission data, quantitative visualization of the crystallographic information can be achieved. For example, crystallographic texture imaging, crystallite size imaging and crystalline phase imaging with texture/extinction corrections are carried out by the Rietveld-type (wide wavelength bandwidth) profile fitting analysis code, RITS (Rietveld Imaging of Transmission Spectra). By using the single Bragg-edge analysis mode of RITS, evaluations of the crystal lattice plane spacing (d-spacing) relating to macro-strain and of the d-spacing distribution’s FWHM (full width at half maximum) relating to micro-strain can be achieved. Macro-strain tomography is performed by a new conceptual CT (computed tomography) image reconstruction algorithm, the tensor CT method. Crystalline grains and their orientations are visualized by a fast determination method of grain orientation for the Bragg-dip neutron transmission spectrum. In this paper, these imaging examples, together with the spectrum analysis methods and their reliability as evaluated by optical/electron microscopy and X-ray/neutron diffraction, are presented. In addition, the status at compact accelerator-driven pulsed neutron sources is also presented. Full article
(This article belongs to the Special Issue Neutron Imaging)
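For orientation, the single Bragg-edge analysis mentioned above ultimately maps an edge position to a lattice spacing and a strain. A minimal numeric sketch of that last step is given below: the relation λ_edge = 2·d_hkl for a {hkl} Bragg edge is standard Bragg-edge physics, while the reference spacing in the example is an assumed value, and the RITS code described in the paper fits the full edge profile rather than a single wavelength.

```python
# Minimal sketch: Bragg-edge wavelength -> d-spacing -> macro-strain.
# The reference spacing d0 below is an assumed illustrative value.
def d_spacing_from_edge(lambda_edge_angstrom):
    """Lattice plane spacing (Angstrom) from the measured Bragg-edge wavelength."""
    return lambda_edge_angstrom / 2.0            # edge occurs at lambda = 2*d_hkl

def macro_strain(lambda_edge_angstrom, d0_angstrom):
    """Relative lattice strain with respect to an unstrained reference spacing d0."""
    return (d_spacing_from_edge(lambda_edge_angstrom) - d0_angstrom) / d0_angstrom

# Example: alpha-Fe {110}, assumed unstrained d0 of about 2.027 Angstrom
print(macro_strain(4.06, 2.027))
```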
Show Figures

Figure 1
(a) Bragg-edge transmission spectrum and the included crystallographic information. The specimen is a polycrystalline α-Fe of 5 mm thickness. (b) Bragg-dip transmission spectrum and the included crystallographic information. The specimen is a single-crystal α-Fe of 5 mm thickness. (Note that all the data were measured at J-PARC MLF.) Such transmission spectra are measured at each pixel of a neutron TOF-imaging detector. Therefore, through the transmission spectrum analyses at each pixel, various crystallographic information can be quantitatively mapped over the whole body of a measured specimen.
Figure 2
Scheme of the database matching method. (Note that exampled neutron transmission spectra were measured by the experiment presented in Reference [28].) Even if a few grains are stacked along the neutron transmission path, the unique crystal-lattice direction [UVW] of each grain, parallel to the neutron transmission direction, are simultaneously identified by this method. This figure schematically indicates that [UVW] of one of stacked grains (Grain 1) is [100 70 53] and [UVW] of the other of stacked grains (Grain 2) is [100 75 27].
Figure 3
Bragg-dip transmission spectra of (a) Grain Orientation 1 and (b) Grain Orientation 2 with the pattern matching results (cross × marks) and the indexing results. It is identified by the database matching method that [UVW] of Grain Orientation 1 is [100 30 8] and [UVW] of Grain Orientation 2 is [100 38 0]. These figures are reproduced with permission of the International Union of Crystallography from Reference [28].
Figure 4
Photograph of rolled/welded α-iron specimens for demonstration of quantitative texture and crystallite-size imaging [1]. Weld exists at the center of welded plates (Sample C and Sample D).
Figure 5
Three types of Bragg-edge transmission spectrum with the fitting curves [1]. Due to the texture effect and the extinction effect, spectrum shape and spectrum intensity are changed. From these profile changes, degree of crystallographic anisotropy, preferred orientation and crystallite size are quantitatively deduced.
Figure 6
Quantitative imaging results of (a) preferred orientation parallel to neutron transmission direction, (b) degree of texture evolution and (c) crystallite size averaged along neutron transmission direction [1,7].
Figure 7
Comparison between Bragg-edge neutron transmission method with the RITS code and neutron diffraction method with the Rietveld analysis: (a) March-Dollase coefficient (degree of texture evolution) and (b) crystallite size [2,7,36].
Figure 8
Photograph of the knife specimen composed of α-Fe and γ-Fe [44]. The left-hand side is the cutting edge.
Figure 9
Bragg-edge transmission spectrum of the knife specimen and the RITS fitting curves assuming single-phase and double-phase [44]. The double phase assumption is suitable for reconstruction of the experimental data. This means this specimen consists of double phase (α-Fe and γ-Fe).
Figure 10
Quantitative imaging of (a) α-Fe phase and (b) γ-Fe phase and the relating images: texture evolution of (c) α-Fe and (d) γ-Fe and crystallite size of (e) α-Fe and (f) γ-Fe [44].
Figure 11
Photograph of three types of quenched rod [6].
Figure 12
{110} Bragg-edge of the unquenched center (ferrite) zone and the quenched rim (martensite) zone, with the profile fitting curves given by the single-edge analysis mode of RITS [6].
Figure 13
(a–c) Images of crystal lattice plane spacing (d-spacing) of {110} [6]. (d–f) Images of FWHM of d-spacing distribution of {110} [6]. The pictures are visualized about each quenched depth (3 mm, 5 mm and 7 mm). Note that two rods of the same quenched depth are simultaneously visualized.
Figure 14
Relation between the Vickers hardness and FWHM of d-spacing distribution, discovered by Bragg-edge neutron transmission imaging [6].
Figure 15
Comparison between Bragg-edge neutron transmission and neutron diffraction: (a) macro-strain [3] and (b) FWHM of d-spacing distribution relating to micro-strain [8].
Figure 16
Macro-strain scalar components (hoop component ε_θθ and radial component ε_rr) in the axial-symmetric VAMAS sample [54].
Figure 17
Radial dependence of the projection data evaluated by RITS, its moving averaged (smoothed) data which were actually used for the CT image reconstruction and the theoretical values of projection data from the VAMAS cylinder [54].
Figure 18
Macro-strain tomography obtained by the ML-EM based tensor CT algorithm: (a) hoop component, (b) radial component and (c) x-direction component, with (d–f) their theoretical values [54].
Figure 19
Photograph of the Si-steel plate sample for demonstration of grain orientation imaging using Bragg-dip neutron transmission method [28].
Figure 20
Grain orientation images obtained by Bragg-dip neutron transmission analyses, expressed by inverse pole figure (IPF). (a) and (b) are images of all grains. (c)–(e) are partial images of the images (a) and (b). The image (d) indicates IPF map of a non-stacked grain along the neutron transmission path. The image (c) indicates IPF map of one of stacked grains along the neutron transmission path and the image (e) indicates IPF map of the other of stacked grains along the neutron transmission path. The image (a) is a combined image of the image (d) and the image (c) and the image (b) is a combined image of the image (d) and the image (e). These figures are reproduced with permission of the International Union of Crystallography from Reference [28].
1550 KiB  
Article
A Holistic Technique for an Arabic OCR System
by Farhan M. A. Nashwan, Mohsen A. A. Rashwan, Hassanin M. Al-Barhamtoshy, Sherif M. Abdou and Abdullah M. Moussa
J. Imaging 2018, 4(1), 6; https://doi.org/10.3390/jimaging4010006 - 27 Dec 2017
Cited by 21 | Viewed by 7351
Abstract
Analytical-based approaches in Optical Character Recognition (OCR) systems can suffer from a significant number of segmentation errors, especially when dealing with cursive languages such as Arabic, which has frequent overlapping between characters. Holistic-based approaches that consider whole words as single units [...] Read more.
Analytical-based approaches in Optical Character Recognition (OCR) systems can suffer from a significant number of segmentation errors, especially when dealing with cursive languages such as Arabic, which has frequent overlapping between characters. Holistic-based approaches that consider whole words as single units were introduced as an effective way to avoid such segmentation errors. Still, the main challenge for these approaches is their computational complexity, especially when dealing with large-vocabulary applications. In this paper, we introduce a computationally efficient, holistic Arabic OCR system. A lexicon reduction approach based on clustering similarly shaped words is used to reduce recognition time. Using global word-level Discrete Cosine Transform (DCT) based features in combination with local block-based features, our proposed approach manages to generalize to new font sizes that were not included in the training data. Evaluation results for the approach, using different test sets from modern and historical Arabic books, are promising compared with state-of-the-art Arabic OCR systems. Full article
(This article belongs to the Special Issue Document Image Processing)
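The global word-level DCT features can be pictured with the minimal sketch below: the word image is resized to a fixed grid, transformed with a 2D DCT, and only the low-frequency coefficients are kept as the feature vector. The grid size and the number of retained coefficients are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of global word-level DCT feature extraction for a word image.
# Grid size and number of retained coefficients are illustrative assumptions.
import numpy as np
from scipy.fft import dctn
from PIL import Image

def word_dct_features(word_image, size=(32, 64), keep=(8, 16)):
    """Return a low-frequency 2D DCT feature vector for a grayscale word image."""
    img = Image.fromarray(word_image).convert("L").resize(size[::-1])  # (width, height)
    arr = np.asarray(img, dtype=float) / 255.0
    coeffs = dctn(arr, norm="ortho")             # 2D DCT-II of the whole word image
    return coeffs[:keep[0], :keep[1]].ravel()    # keep only the low-frequency block
```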
Show Figures

Figure 1
Some examples of Arabic words that contain ligatures with manually segmented characters.
Figure 2
Block Diagram of the Holistic OCR System.
Figure 3
DCT based Feature Extraction.
Figure 4
Clustering accuracy rate of Simplified Arabic font vs. codebook size number using DCT + DCT_4B feature for different top clusters.
Figure 5
An example of a rescoring lattice.
Figure 6
Some samples of the scanned images.
26990 KiB  
Article
In-Situ Imaging of Liquid Phase Separation in Molten Alloys Using Cold Neutrons
by Nicholas Alexander Derimow, Louis Joseph Santodonato, Rebecca Mills and Reza Abbaschian
J. Imaging 2018, 4(1), 5; https://doi.org/10.3390/jimaging4010005 - 25 Dec 2017
Cited by 9 | Viewed by 6745
Abstract
Understanding the liquid phases and solidification behaviors of multicomponent alloy systems becomes difficult as modern engineering alloys grow more complex, especially with the discovery of high-entropy alloys (HEAs) in 2004. Information about their liquid state behavior is scarce, and potentially quite complex due [...] Read more.
Understanding the liquid phases and solidification behaviors of multicomponent alloy systems becomes difficult as modern engineering alloys grow more complex, especially with the discovery of high-entropy alloys (HEAs) in 2004. Information about their liquid-state behavior is scarce, and potentially quite complex due to the presence of perhaps five or more elements in equimolar ratios. These alloys are showing promise as high-strength materials, many composed of solid-solution phases containing equiatomic CoCrCu, which itself does not form a ternary solid solution. Instead, this compound solidifies into highly phase-separated regions, and the liquid phase separation that occurs in the alloy also leads to phase separation in systems in which Co, Cr, and Cu are present. The present study demonstrates that liquid phase separation in CoCrCu can be observed with in-situ neutron imaging. Neutron imaging of the solidification process may resolve questions about the phase separation that occurs in these alloys and those that contain Cu. These results show that neutron imaging can be utilized as a characterization technique for solidification research, with the potential for imaging the liquid phases of more complex alloys such as the HEAs, for which very little data about the liquid phases have been published. This imaging technique could potentially allow for observation of immiscible liquid phases becoming miscible at specific temperatures, which cannot be observed with ex-situ analysis of solidified structures. Full article
(This article belongs to the Special Issue Neutron Imaging)
Show Figures

Figure 1
Images of the experimental setup at the CG-1D beamline at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory. (a) Sample of CoCrCu, Al₂O₃ crucible, lid, and Nb mounting adaptor placed near a ruler for scale. (b) The crucible mounted to the sample stick. (c) The high-vacuum Institut Laue–Langevin (ILL) furnace placed between the detector and incident neutron beam slits.
Figure 2
ILL Niobium Foil Vacuum Furnace. Temperature range of 30–1500 °C (A) Interface connection M8 × 1.25 (male) (B) Bore size diameter = 50 mm (C) Distance interface to beam center = 31.75 mm (D) Beam center to sample space bottom = 11.862 cm (E) Distance stick flange to beam center = 41.275 cm. Image of ILL furnace “HOT-A” courtesy of Oak Ridge National Laboratory Sample Environment Group.
Figure 3
Temperature vs. time of the stacked CoCrCu system heating from 900 to 1500 °C and back to 900 °C in 25 °C increments.
Figure 4
(a) Backscattered electron image of CoCrCu displaying 2 distinct phases: Cu-rich (top), CoCr-rich (bottom). Note, tiny black spots are pores generated from the initial grinding/polishing process. (b) Optical micrograph of the bottom-half cross-section of an arc-melted CoCrCu button.
Figure 5
Energy dispersive X-ray spectroscopy (EDS) maps of the phase separated regions of the electromagnetically levitated and cast CoCrCu alloy. The colored regions correspond to the atomic composition present in the material: (a) Cobalt only (b) Chromium only (c) Copper only (d) Map of all elements in the system.
Figure 6
Room temperature radiograph of two heterogeneous arc-melted CoCrCu samples stacked inside a small crucible. The lighter regions are the Cu-rich phase (>95%) and are segregated to the surface of the buttons as well as randomly distributed globules inside the bulk. The darker regions are Co-Cr-rich and make up the rest of the arc-melted button.
Figure 7
Melting and liquid phase separation of stacked CoCrCu samples. (a) During initial heating, the two as-cast buttons are intact. (b) The Cu-rich phase melts first between 1075 and 1100 °C, and (c) pools at the bottom of the crucible. (d) The Cu-lean phase fully melts upon heating to 1500 °C. Full video available in the supplemental.
Figure 8
Cooling, macroscopic void formation, and solidification.
Figure 9
(a) Room temperature radiograph of CoCrCu after the melt cycle. The darkest region atop is the Co-Cr-rich phase, while the lighter region to the bottom right was the formation of a void. The lighter gray region toward the bottom right is the Cu-rich phase. (b) Photograph of the sample after removal from the crucible, displaying the void that formed during solidification.
Figure 10
Reconstructed computed tomography of the CoCrCu system with void present in the bottom left, and CoCr-rich (red) globules dispersed throughout Cu-rich (green) phase.
3084 KiB  
Article
Automatic Detection and Distinction of Retinal Vessel Bifurcations and Crossings in Colour Fundus Photography
by Harry Pratt, Bryan M. Williams, Jae Yee Ku, Charles Vas, Emma McCann, Baidaa Al-Bander, Yitian Zhao, Frans Coenen and Yalin Zheng
J. Imaging 2018, 4(1), 4; https://doi.org/10.3390/jimaging4010004 - 22 Dec 2017
Cited by 18 | Viewed by 7167
Abstract
The analysis of retinal blood vessels present in fundus images, and the addressing of problems such as blood clot location, are important for undertaking accurate and appropriate treatment of the vessels. Such tasks are hampered by the challenge of accurately tracing back problems [...] Read more.
The analysis of retinal blood vessels present in fundus images, and the addressing of problems such as blood clot location, are important for undertaking accurate and appropriate treatment of the vessels. Such tasks are hampered by the challenge of accurately tracing problems back along vessels to their source. This is due to the unresolved issue of automatically distinguishing between vessel bifurcations and vessel crossings in colour fundus photographs. In this paper, we present a new technique for addressing this problem using a convolutional neural network approach, firstly to locate vessel bifurcations and crossings and then to classify them as either bifurcations or crossings. Our method achieves high accuracies for junction detection and classification on the DRIVE dataset, and we show further validation on an unseen dataset from which no data were used for training. Combined with work in automated segmentation, this method has the potential to facilitate reconstruction of vessel topography, classification of veins and arteries, and automated localisation of blood clots and other disease symptoms, leading to improved management of eye disease. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
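A compact sketch of the second-stage idea, classifying detected junction patches as bifurcations or crossings with a small convolutional network, is given below. The patch size, layer sizes and training settings are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a patch-based CNN for bifurcation-vs-crossing classification.
# Architecture and training settings are illustrative assumptions.
import tensorflow as tf

def build_junction_classifier(patch_size=32):
    """Binary classifier: RGB patch -> P(crossing); 1 - P gives P(bifurcation)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same",
                               input_shape=(patch_size, patch_size, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def train(patches, labels, epochs=20):
    """patches: (N, 32, 32, 3) arrays around junction points; labels: 1 = crossing, 0 = bifurcation."""
    model = build_junction_classifier(patches.shape[1])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(patches, labels, epochs=epochs, batch_size=32, validation_split=0.1)
    return model
```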
Show Figures

Figure 1
Kernel functions for skeletonisation [19]. Reproduced with permission.
Figure 2
Example outcomes of first part of algorithm: locating bifurcations and crossings. (a) Fundus Image z(x); (b) Vessel Map ϕ(x); (c) Skeletonisation φ(x); (d) Patch Boundaries; (e) Patch Classification; (f) Junction Location. Reproduced with permission from [19].
Figure 3
Example outcomes of second part of algorithm: classifying detected junctions as bifurcations or crossings. (a) Identified bifurcations and crossings; (b) Bifurcation Points; (c) Vessel Crossings. Reproduced with permission from [19].
Figure 4
Example of C₂ input. Rows 1 and 2 (resp. 3 and 4): training patches with crossings (resp. bifurcations) and their enhanced counterparts for presentation. The neural networks were able to achieve good results using the patches without enhancement. Reproduced with permission from [19].
Figure 5
Example of identifying bifurcations and crossings in fundus images. (a), (b), (d) are left eye fundus images and (c), (e) and (f) are right eye fundus images. Reproduced with permission from [19].
Figure 6
Both (a) and (b) are examples from the test set that have been run through the algorithm. Here we demonstrate how the patch classification leads to the building up of a vessel map from identifying and classifying the vessel junctions and reconstructing the classified patches. The detected junctions are shown on the fundus image, showing that the algorithm clearly identifies junction points. Reproduced with permission from [19].
Figure 7
(a) and (c) show the identified bifurcations (a) and crossings (c) for an example image from the DRIVE dataset. The results are shown along with the annotations provided by each grader. The annotation of grader 1 is shown by a red x, grader 2 by a blue x and green o, grader 3 by a cyan x and black o; (b) and (d) are zoomed in to demonstrate the negligible difference in the classification of the vessel bifurcation (b) and crossing (d) from the grader’s annotations and the consistency in the annotations provided. Reproduced with permission from [19].
Figure 8
Example of distinguishing between crossings and bifurcations in fundus images. Row one shows the classified bifurcations in (a–c) and row two the respective crossings in (d–f). Likewise, row three shows the classified bifurcations in (g–i) and row four the respective crossings in (j–l). Reproduced with permission from [19].
4890 KiB  
Article
Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns
by Prerna Singh, Ramakrishnan Mukundan and Rex De Ryke
J. Imaging 2018, 4(1), 3; https://doi.org/10.3390/jimaging4010003 - 21 Dec 2017
Cited by 12 | Viewed by 5952
Abstract
Speckle noise reduction is an important area of research in the field of ultrasound image processing. Several algorithms for speckle noise characterization and analysis have been recently proposed in the area. Synthetic ultrasound images can play a key role in noise evaluation methods [...] Read more.
Speckle noise reduction is an important area of research in the field of ultrasound image processing. Several algorithms for speckle noise characterization and analysis have been recently proposed in the area. Synthetic ultrasound images can play a key role in noise evaluation methods as they can be used to generate a variety of speckle noise models under different interpolation and sampling schemes, and can also provide valuable ground truth data for estimating the accuracy of the chosen methods. However, not much work has been done in the area of modeling synthetic ultrasound images, and in simulating speckle noise generation to get images that are as close as possible to real ultrasound images. An important aspect of simulated synthetic ultrasound images is the requirement for extensive quality assessment for ensuring that they have the texture characteristics and gray-tone features of real images. This paper presents texture feature analysis of synthetic ultrasound images using local binary patterns (LBP) and demonstrates the usefulness of a set of LBP features for image quality assessment. Experimental results presented in the paper clearly show how these features could provide an accurate quality metric that correlates very well with subjective evaluations performed by clinical experts. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1
Reference ultrasound images [7] used in our work.
Figure 2
Artist-rendered synthetic image and its cropped version using a sector region.
Figure 3
The simulation and evaluation stages of the processing pipeline.
Figure 4
Sampling models that can be used in simulating speckle noise ([13], reproduced with permission).
Figure 5
Effect of changing axial resolution (m) in radial-polar sampling ([13], reproduced with permission).
Figure 6
Effect of changing axial resolution (m) in radial-uniform sampling ([13], reproduced with permission).
Figure 7
Image artifacts produced by large values of sampling and noise parameters ([13], reproduced with permission).
Figure 8
Application of the proposed local binary patterns (LBP) features in the evaluation of filtering algorithms.
Figure 9
The intermediate steps in the computation of the LBP histogram of an image.
Figure 10
(a) A synthetic ultrasound image; (b) the LBP image; (c) the LBP histogram.
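As a point of reference for the steps shown in Figures 9 and 10, the basic 8-neighbour LBP code and its normalised histogram can be sketched as below. This is a minimal illustration only, not the authors' implementation: the plain (non-uniform, non-rotation-invariant) LBP variant, the function names and the border handling are assumptions.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 (8-neighbour) LBP code for each interior pixel of a grey-level image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h, w), dtype=int)
    # Neighbour offsets in a fixed clockwise order; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes[1:-1, 1:-1] += (neigh >= centre).astype(int) * (1 << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, usable as a texture feature vector."""
    codes = lbp_image(img)[1:-1, 1:-1]   # drop the untouched border pixels
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```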
Figure 11
Synthetic images generated using radial-polar sampling with a coarse-to-fine variation of the lateral resolution parameter n.
Figure 12
Variations of LBP feature vector components with lateral resolution in radial-polar sampling. The x-axis gives the values of n. The y-axis gives the range of values of the LBP feature shown in the chart title.
Figure 13
Synthetic images generated using radial-uniform sampling with a coarse-to-fine variation of the lateral resolution parameter n_u.
Figure 14
Variations of LBP feature vector components with lateral resolution in radial-uniform sampling. The x-axis gives the values of n_u. The y-axis gives the range of values of the LBP feature shown in the chart title.
Figure 15
Synthetic images generated using the uniform-grid sampling scheme with increasing values of the grid spacing parameter δ.
Figure 16
Variations of LBP feature vector components with grid spacing in uniform-grid sampling. The x-axis gives the values of δ. The y-axis gives the range of values of the LBP feature shown in the chart title.
Figure 17
Plots showing the closest matching positions of the LBP feature vector with the reference vector for images generated using (a) radial-polar sampling; (b) radial-uniform sampling; (c) uniform-grid sampling.
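As a rough illustration of how a "closest match" between the LBP features of simulated images and a reference feature vector could be identified, the sketch below simply picks the parameter setting whose feature vector minimises the Euclidean distance to the reference. The distance measure, the dictionary layout and the numbers are assumptions made purely for illustration and are not taken from the paper.

```python
import numpy as np

def closest_match(reference, candidates):
    """Return (parameter, distance) of the candidate feature vector nearest the reference.

    reference:  1-D array of LBP features from a reference image.
    candidates: dict mapping a parameter value to its 1-D LBP feature vector.
    Euclidean distance is an illustrative choice only.
    """
    params = list(candidates.keys())
    dists = [np.linalg.norm(np.asarray(candidates[p]) - np.asarray(reference))
             for p in params]
    best = int(np.argmin(dists))
    return params[best], dists[best]

# Hypothetical example: which lateral resolution n gives the closest texture match?
ref = np.array([0.20, 0.15, 0.40, 0.25])
sims = {8:  np.array([0.30, 0.10, 0.35, 0.25]),
        16: np.array([0.21, 0.14, 0.41, 0.24]),
        32: np.array([0.10, 0.25, 0.45, 0.20])}
print(closest_match(ref, sims))   # -> (16, smallest distance)
```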
23329 KiB  
Article
Segmentation and Shape Analysis of Macrophages Using Anglegram Analysis
by José Alonso Solís-Lemus, Brian Stramer, Greg Slabaugh and Constantino Carlos Reyes-Aldasoro
J. Imaging 2018, 4(1), 2; https://doi.org/10.3390/jimaging4010002 - 21 Dec 2017
Cited by 8 | Viewed by 7339
Abstract
Cell migration is crucial in many processes of development and maintenance of multicellular organisms, and it can also be related to disease, e.g., cancer metastasis, when cells migrate to organs different to where they originate. A precise analysis of cell shapes in biological studies could lead to insights about migration. However, in some cases, the interaction and overlap of cells can complicate the detection and interpretation of their shapes. This paper describes an algorithm to segment and analyse the shape of macrophages in fluorescent microscopy image sequences, and compares the segmentation of overlapping cells through different algorithms. A novel 2D matrix with multiscale angle variation, called the anglegram, based on the angles between points of the boundary of an object, is used for this purpose. The anglegram is used to find junctions of cells and is applied to two different tasks: (i) segmentation of overlapping and non-overlapping cells; (ii) detection of the "corners" or pointy edges in the shapes. The functionalities of the anglegram were tested and validated with synthetic data and on fluorescently labelled macrophages observed on embryos of Drosophila melanogaster. The information that can be extracted from the anglegram shows good promise for shape determination and analysis, whether this involves overlapping or non-overlapping objects. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
Show Figures

Figure 1
Two representative time frames displaying examples of cell shapes and overlapping. (a) Full frame with (red) squares highlighting the cells that display the aforementioned shapes; (b) detail of each cell; (c) full frame with (red) squares highlighting all regions where instances of overlapping cells (clumps) appear, labelled for easy reference; (d) detail of CLUMP 2, present in (c). Bars: 10 μm. (c,d) are reproduced with permission from [17].
Figure 2
Example of the ground truth at a representative time frame. The ground truth for both the red (nuclei) and green (microtubules) channels is shown in coloured lines. The frame is shown in grey scale to allow a better visualisation of the lines in the ground truth.
Figure 3
Overview of the range of paired ellipses investigated. The pairs presented in this image represent a small sample of the ellipses that were tested by the method presented. The overlapped region can be seen in white and the areas that are not overlapping are shown in grey. The boundary of the central ellipse E_0 is highlighted in cyan, while the second ellipse's boundary is presented in red.
Figure 4
Synthetic generation of random basic shapes. Per shape, 200 cases were generated. The control points are shown in blue (·), the mean shapes are presented in magenta (−), and the mean control points are represented in black (⋄).
Figure 5
Graphical representation of the calculation of the inner point angle of point p_i at separation j. (a) Representation of the inner angle of point p_i at separation j; notice that the points on the boundary are taken in clockwise order. (b) Representation of the translation vectors v_+ and v_−. Full explanation in the text.
Figure 6
Explanation of the inner angle of a point in the construction of the anglegram. The diagram shows a representation of nine arbitrary entries θ_{i,j} of the anglegram matrix. Each entry corresponds to an inner point angle at a specific separation. In the diagram, as in the matrix, the rows (i) correspond to single points along the boundary (red ⋄), starting at a specific point (marked ○); the columns (j) correspond to the separation from point i at which the angle is taken. Each corresponding entry θ_{i,j} of the anglegram matrix Θ is marked with a green arrow; furthermore, each angle is shaded to match the colour map used in the anglegram in Figure 7.
Figure 7
Junction detection on overlapping objects through the maximum intensity projection of the anglegram matrix. (a–c) Representation of the inner point angle calculation and the generation of the anglegram matrix. (a) A synthetic clump with its boundary outlined (blue - -), where a point (blue ○) on the boundary has one inner point angle per separation j; all the inner point angles for the highlighted point are displayed in (b). (c) The anglegram matrix; the resulting maximum θ̂_max is plotted in (d) along the boundary points. Detected junctions are shown with ⋄ markers (magenta). Notice the two horizontal lines representing mean(θ̂_max) and mean(θ̂_max) + 0.75·std(θ̂_max). Reproduced with permission from [17].
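The construction and use of the anglegram described in these captions can be sketched roughly as follows: for every boundary point i and separation j, measure the angle at that point between the vectors towards the points j steps behind and ahead, take the maximum over all separations (the maximum intensity projection), and flag points above mean + 0.75·std. The angle convention used here (a signed angle folded into [0, 360), assuming a clockwise-ordered boundary) and the helper names are assumptions, so this is an approximation of the idea rather than the authors' exact algorithm.

```python
import numpy as np

def inner_angle(p, q_minus, q_plus):
    """Angle at p between the vectors towards q_minus and q_plus, folded into [0, 360).

    Assumes a clockwise-ordered boundary; the sign convention may need
    flipping for counter-clockwise contours."""
    v_minus = q_minus - p
    v_plus = q_plus - p
    cross = v_minus[0] * v_plus[1] - v_minus[1] * v_plus[0]
    ang = np.degrees(np.arctan2(cross, float(np.dot(v_minus, v_plus))))
    return ang % 360.0

def anglegram(boundary, max_sep=None):
    """Anglegram matrix Theta: rows are boundary points i, columns are separations j."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    max_sep = max_sep or n // 2
    theta = np.zeros((n, max_sep))
    for i in range(n):
        for j in range(1, max_sep + 1):
            theta[i, j - 1] = inner_angle(pts[i], pts[(i - j) % n], pts[(i + j) % n])
    return theta

def detect_junctions(boundary, k=0.75):
    """Boundary indices whose maximum angle over separations exceeds mean + k*std."""
    theta_max = anglegram(boundary).max(axis=1)   # maximum intensity projection
    threshold = theta_max.mean() + k * theta_max.std()
    return np.where(theta_max > threshold)[0]
```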
Figure 8
Illustration of all the methods developed and the workflow to obtain results. (a) shows the detail of CLUMP 2 in the original frame. Clumps are detected and the boundary is extracted. With the boundary information, the anglegram is calculated and the junctions are detected (b). The second row presents a diagram of the methods: from left to right, the (c) Voronoi partition, (d) Junction Slicing (JS), (e) Edge Following (EF) and (f) SOM fitting. In (g), the outputs from each method for both cells within the detected clump are shown. Reproduced with permission from [17].
Figure 9
Junctions (bends) detected for varying angles and separation distances. Five rows show angles ranging from 10 to 90 degrees and eight columns show different separation distances from 0 to 160 pixels. Images in grey are cases where there is no overlap. The boundary of the clumps is shown in cyan (- -), with the first point of the boundary marked (⋄). The junctions detected by the method are marked in magenta (∗).
Figure 10
Pointiness assessment of the drop, bidrop and tridrop shapes; see the full explanation in the text. For each shape presented, three instances of the shape are shown with varying pointiness values, and the relationship between the anglegram and the pointiness is shown. The columns are arranged in triads corresponding to each of the basic shapes. Notice that, in all cases, the difference between the maximum and minimum values of the mIP (solid lines in the third row) grows proportionally to the pointiness level.
Figure 11
Qualitative comparison of junction detection via the anglegram (magenta ⋄) versus the Harris corner detector (green +). The strongest corners from the Harris detector per clump are displayed. (a) Only CLUMP 1 has a missing junction (cyan ○); note how difficult the detection of that junction would be. Reproduced with permission from [17]. (b) Each basic shape in the data is represented from a segmented frame. Refer to Section 2.3 for a full explanation of the corner detection algorithm.
Figure 12
Qualitative comparison of the segmentation technique against the ground truth on three different time frames in the dataset. The columns, from left to right, present the original image, the manual segmentation (GT), the result of the segmentation described in Section 2.4 and, finally, the comparison between both binary images. Regarding the colours in the final column: black is the background, yellow represents the false negatives, blue the false positives and red the true positives.
Figure 13
(a) Comparison of the Jaccard index for each object detected, whether it is a clump or not, at each of the ten frames with available ground truth. Two frames are highlighted (45 and 54) and arrows point at values in the ribbon. (b) Depiction of a cell in frame 45 that achieved a high Jaccard index in the top row (red arrow). (c) Depiction of a clump detected in frame 54, and its comparison with the ground truth. The black dotted arrow in the top row shows the Jaccard index value of the clump shown. Regarding the colours in the segmentation comparison: black is the background, yellow represents the false negatives, blue the false positives and red the true positives.
Figure 14
Qualitative comparison of different segmentation methods in one frame. The segmentation results for (a) the Voronoi method, (b) Junction Slicing (JS), (c) Edge Following (EF) and (d) SOM fitting are shown. The top and bottom rows present the results for CLUMP 2 and CLUMP 3, respectively. Reproduced with permission from [17].
Figure 15
Comparison of Precision, Recall and Jaccard Index for all methods of segmentation of overlapping cells in clumps 2 and 3. The horizontal axis corresponds to the box plots from the different methods and their summarised performance in the computed metrics. Three groups corresponding to Precision, Recall and Jaccard Index contain four box plots each, which, from left to right, correspond to the Voronoi, JS, EF and SOM methods. Table 1 summarises the information in this figure. Reproduced with permission from [17]. (a) CLUMP 2; the y-axis ranges from (0.82, 1). (b) CLUMP 3; the y-axis ranges from (0.7, 1).
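For reference, the three metrics compared here can be computed from a predicted binary mask and its ground truth as in the short sketch below; this is a generic implementation, not the authors' evaluation code.

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Precision, recall and Jaccard index for two binary masks of equal shape."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives (red in Figure 12)
    fp = np.logical_and(pred, ~truth).sum()   # false positives (blue)
    fn = np.logical_and(~pred, truth).sum()   # false negatives (yellow)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, jaccard
```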
2430 KiB  
Article
Integrated Model of Image Protection Techniques
by Anu Aryal, Shoko Imaizumi, Takahiko Horiuchi and Hitoshi Kiya
J. Imaging 2018, 4(1), 1; https://doi.org/10.3390/jimaging4010001 - 21 Dec 2017
Cited by 5 | Viewed by 4759
Abstract
We propose an integrated model of Block-Permutation-Based Encryption (BPBE) and Reversible Data Hiding (RDH). The BPBE scheme involves four encryption processes, namely block scrambling, block rotation/inversion, negative-positive transformation and color component shuffling. A Histogram Shifting (HS) method is adopted for RDH in our model. The proposed scheme is well suited to hierarchical access control systems, where the data can be accessed with different access rights. The scheme encrypts the R, G and B components independently; therefore, we can generate similar output images from different input images. Additionally, the key derivation scheme provides security according to the different access rights. Our scheme is also resilient against brute-force attacks and Jigsaw Puzzle Solvers (JPSs). Furthermore, the compression performance is not severely degraded when a standard lossless compression method is used. Full article
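To make the four BPBE steps concrete, the toy sketch below applies block scrambling, block rotation/inversion, negative-positive transformation and color component shuffling to an RGB image, with each step driven by its own key used as a random seed. The block size, key handling and per-step details are assumptions for illustration; the paper's actual encryption and key derivation scheme is more involved, and the HS-based RDH embedding step is omitted.

```python
import numpy as np

def bpbe_encrypt(img, block=16, keys=(1, 2, 3, 4)):
    """Toy Block-Permutation-Based Encryption of an RGB image (H and W divisible by block)."""
    h, w, _ = img.shape
    by, bx = h // block, w // block
    # Split the image into square blocks: shape (n_blocks, block, block, 3).
    blocks = (img.reshape(by, block, bx, block, 3)
                 .swapaxes(1, 2)
                 .reshape(by * bx, block, block, 3)).copy()
    # 1. Block scrambling: permute the block positions.
    rng = np.random.default_rng(keys[0])
    blocks = blocks[rng.permutation(len(blocks))]
    # 2. Block rotation/inversion: rotate each block by a random multiple of 90 degrees
    #    and flip some of them vertically.
    rng = np.random.default_rng(keys[1])
    for i in range(len(blocks)):
        blocks[i] = np.rot90(blocks[i], k=int(rng.integers(4)))
        if rng.integers(2):
            blocks[i] = blocks[i][::-1]
    # 3. Negative-positive transformation: invert intensities of randomly chosen blocks.
    rng = np.random.default_rng(keys[2])
    for i in range(len(blocks)):
        if rng.integers(2):
            blocks[i] = 255 - blocks[i]
    # 4. Color component shuffling: permute the R, G and B channels of each block.
    rng = np.random.default_rng(keys[3])
    for i in range(len(blocks)):
        blocks[i] = blocks[i][:, :, rng.permutation(3)]
    # Reassemble the encrypted image.
    return (blocks.reshape(by, bx, block, block, 3)
                  .swapaxes(1, 2)
                  .reshape(h, w, 3))
```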
Show Figures

Figure 1
Block-Permutation-Based Encryption (BPBE) scheme.
Figure 2
Encryption and embedding process. RDH, Reversible Data Hiding.
Figure 3
Key derivation. (a) Key derivation scheme; (b) decryption/extraction process.
Figure 4
Simulation results of Japan Image32 obtained with different permissions (single embedding). (a) Original image (Japan Image32); (b) final encrypted image; (c) half-encrypted image (decryption: negative-positive transformation and color component shuffling; extraction: Data 3, 4, and 5); (d) decryption-only image.
Figure 5
Simulation results of Japan Image22 obtained with different permissions (single embedding). (a) Original image (Japan Image22); (b) final encrypted image; (c) half-encrypted image (decryption: negative-positive transformation and color component shuffling; extraction: Data 3, 4, and 5); (d) decryption-only image.