Image Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (1 December 2018) | Viewed by 83543

Special Issue Editors


Prof. Lucio Pancheri
Guest Editor
DII, University of Trento, Via Sommarive 9, 38123 Trento, Italy
Interests: modeling and characterization of electron devices; CMOS integrated photodetectors and image sensors; single-photon avalanche diodes; 3D imaging; radiation detectors

Dr. Matteo Perenzoni
Guest Editor
Sony Europe Technology Development Centre, Via Sommarive 18, 38123 Trento, Italy
Interests: image sensors; analog integrated circuits; terahertz and infrared detectors; microelectronics; single-photon imaging

Dr. Nicola Massari
Guest Editor
Fondazione Bruno Kessler, Via Sommarive 14, 38123 Trento, Italy
Interests: image sensors

Special Issue Information

Dear Colleagues,

Although the quality of mainstream CMOS image sensors has reached outstanding levels in the last few years, new challenges continuously push the image sensor research community. An increasing number of applications calls for dedicated image sensors with custom specifications in terms of spatial and temporal resolution, efficiency, power consumption and on-chip processing capabilities. These new requirements can often be met only through a combined effort in process, circuit, and system design. An interdisciplinary approach, involving research in materials science, electronics and optics, is thus needed to push image sensors beyond the current state of the art.

This Special Issue aims at providing an overview of current leading-edge research in image sensor technology, focusing on the following topics:

  • Image sensor process technology and packaging

  • Analog and digital circuits for image sensors

  • Image sensor characterization and modelling

  • Photon-counting image sensors

  • Ultra-high frame rate imaging

  • Multispectral and hyperspectral imaging

  • Vision sensors: on-chip processing and computational imaging

  • Ultra-low power imaging

  • CMOS hybridization with organic and inorganic materials

  • Infrared and THz focal plane arrays

  • X-ray and charged particle image sensors

Prof. Lucio Pancheri
Dr. Matteo Perenzoni
Dr. Nicola Massari
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • Pixel design

  • CMOS Image Sensors

  • CIS process technology

  • 3D stacking

  • Image sensor circuits and architectures

  • Quanta Image Sensors

  • 3D imaging

  • SPAD

  • Large-area image sensors

  • Hybrid image sensors

  • Vision sensors

  • Computational image sensors

  • Low-power image sensors

  • Frame-free vision sensors

  • Image sensor characterization

  • Infrared focal plane arrays

  • THz imaging

  • Above-CMOS detectors

  • Radiation imaging detectors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research

8 pages, 2169 KiB  
Article
GaN-Based Ultraviolet Passive Pixel Sensor on Silicon (111) Substrate
by Chang-Ju Lee, Chul-Ho Won, Jung-Hee Lee, Sung-Ho Hahm and Hongsik Park
Sensors 2019, 19(5), 1051; https://doi.org/10.3390/s19051051 - 1 Mar 2019
Cited by 15 | Viewed by 4466
Abstract
The fabrication of a single pixel sensor, which is a fundamental element device for the fabrication of an array-type pixel sensor, requires an integration technique of a photodetector and transistor on a wafer. In conventional GaN-based ultraviolet (UV) imaging devices, a hybrid-type integration process is typically utilized, which involves a backside substrate etching and a wafer-to-wafer bonding process. In this work, we developed a GaN-based UV passive pixel sensor (PPS) by integrating a GaN metal-semiconductor-metal (MSM) UV photodetector and a Schottky-barrier (SB) metal-oxide-semiconductor field-effect transistor (MOSFET) on an epitaxially grown GaN layer on silicon substrate. An MSM-type UV sensor had a low dark current density of 3.3 × 10−7 A/cm2 and a high UV/visible rejection ratio of 103. The GaN SB-MOSFET showed a normally-off operation and exhibited a maximum drain current of 0.5 mA/mm and a maximum transconductance of 30 μS/mm with a threshold voltage of 4.5 V. The UV PPS showed good UV response and a high dark-to-photo contrast ratio of 103 under irradiation of 365-nm UV. This integration technique will provide one possible way for a monolithic integration of the GaN-based optoelectronic devices. Full article
(This article belongs to the Special Issue Image Sensors)
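As a quick plausibility check of the figures of merit quoted above, the following sketch (purely illustrative) converts the reported dark current density into a dark current for an assumed detector area and applies the reported 10^3 dark-to-photo contrast ratio. The 50 μm × 50 μm active area is a hypothetical assumption, not a value taken from the paper.

```python
# Plausibility check of the reported figures of merit.
# Only the dark current density (3.3e-7 A/cm^2) and the 10^3 contrast
# ratio come from the abstract; the active area is an assumption.

dark_current_density = 3.3e-7        # A/cm^2 (from the abstract)
active_area_cm2 = (50e-4) ** 2       # assumed 50 um x 50 um MSM detector area (hypothetical)

dark_current = dark_current_density * active_area_cm2
photo_current = dark_current * 1e3   # dark-to-photo contrast ratio of 10^3 under 365-nm UV

print(f"dark current ~ {dark_current:.2e} A")
print(f"photocurrent ~ {photo_current:.2e} A under 365-nm UV")
```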
Figures
Figure 1. (a) Mask layout with pad names (inset: micro-photograph of the fabricated device); (b) schematic circuit diagram; (c) cross-sectional view of the proposed GaN ultraviolet (UV) passive pixel sensor (PPS) structure.
Figure 2. (a) Dark and photoresponsive I–V characteristics of the fabricated GaN metal-semiconductor-metal (MSM) photodetector under varying bias from −10 V to 10 V (forward direction) and from 10 V to −10 V (reverse direction); (b) Poole–Frenkel emission plot, (c) Schottky emission plot, and (d) Fowler–Nordheim tunneling plot of the I–V characteristics under dark and 365-nm UV irradiation conditions.
Figure 3. Spectral photo-responsivity of the fabricated GaN MSM UV photodetector under varying (a) forward and (b) reverse bias conditions.
Figure 4. (a) Output I_DS–V_DS characteristic in the dark; (b) output I_DS–V_DS characteristic under 365-nm UV irradiation; (c) linear- and log-scale transfer I_DS–V_GS characteristics of the fabricated GaN Schottky-barrier (SB) MOSFET.
Figure 5. Transfer I_DS–V_GS characteristics of the fabricated GaN SB-MOSFET (a) in the dark and (b) under 365-nm UV irradiation.
Figure 6. Output I–V characteristics of the fabricated GaN UV PPS with/without UV irradiation under 0–10 V bias (inset: linear-scale output I–V characteristics).
22 pages, 9686 KiB  
Article
Noise Estimation for Image Sensor Based on Local Entropy and Median Absolute Deviation
by Yongsong Li, Zhengzhou Li, Kai Wei, Weiqi Xiong, Jiangpeng Yu and Bo Qi
Sensors 2019, 19(2), 339; https://doi.org/10.3390/s19020339 - 16 Jan 2019
Cited by 13 | Viewed by 6452
Abstract
Noise estimation for image sensors is a key technique in many image pre-processing applications such as blind de-noising. Existing noise estimation methods for additive white Gaussian noise (AWGN) and Poisson-Gaussian noise (PGN) may underestimate or overestimate the noise level in heavily textured scene images. To cope with this problem, this paper proposes a novel homogeneous-block-based method to estimate these noises. First, the noisy image is transformed into a map of local gray statistic entropy (LGSE), and the weakly textured image blocks are selected as those with the largest LGSE values. Then, the Haar wavelet-based local median absolute deviation (HLMAD) is presented to compute the local variance of the selected homogeneous blocks. Finally, the noise parameters are estimated by applying maximum likelihood estimation (MLE) to the local means and variances of the selected blocks. Extensive experiments on synthesized noisy images show that the proposed method not only estimates the noise of various scene images at different noise levels more accurately than the compared state-of-the-art methods, but also improves the performance of blind de-noising algorithms. Full article
(This article belongs to the Special Issue Image Sensors)
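The core idea — pick weakly textured blocks with an entropy criterion, then estimate the noise level from the median absolute deviation of wavelet detail coefficients — can be sketched as follows. This is a simplified stand-in rather than the authors' implementation: a plain gray-level histogram entropy stands in for the paper's LGSE, the Haar HH subband is computed by direct 2 × 2 differencing, and the block size and number of retained blocks are illustrative.

```python
import numpy as np

def haar_hh(blk):
    """Single-level Haar diagonal-detail (HH) coefficients via 2x2 differencing."""
    blk = blk[:blk.shape[0] // 2 * 2, :blk.shape[1] // 2 * 2].astype(float)
    a, b = blk[0::2, 0::2], blk[0::2, 1::2]
    c, d = blk[1::2, 0::2], blk[1::2, 1::2]
    return (a - b - c + d) / 2.0

def block_entropy(blk, bins=256):
    """Gray-level histogram entropy of a block (simple stand-in for the paper's LGSE)."""
    hist, _ = np.histogram(blk, bins=bins, range=(0.0, 255.0))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def estimate_awgn_sigma(img, block=15, n_blocks=30):
    """Estimate the AWGN std dev from weakly textured blocks via the MAD of Haar HH coefficients."""
    h, w = img.shape
    scored = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block]
            scored.append((block_entropy(patch), patch))
    # keep the blocks with the largest entropy score (weakly textured, per the paper's observation)
    scored.sort(key=lambda t: t[0], reverse=True)
    sigmas = [np.median(np.abs(haar_hh(p))) / 0.6745 for _, p in scored[:n_blocks]]
    return float(np.median(sigmas))

# quick self-test on a synthetic noisy ramp image (true sigma = 20)
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))
noisy = clean + rng.normal(0, 20, clean.shape)
print(estimate_awgn_sigma(noisy))
```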
Figures
Figure 1. Real noise analysis for a charge-coupled device (CCD) image sensor. (a) Experimental platform for noise analysis of the CCD image sensor; (b) Photon Transfer Curve (PTC) computed according to the measurement procedure of [5,6].
Figure 2. Local gray statistic entropy H_le of various noise-free image blocks; weakly textured blocks have relatively larger LGSE values. (a) H_le = 7.8135; (b) 7.7972; (c) 7.7761; (d) 7.7618; (e) 7.7158; (f) 7.6994.
Figure 3. Local gray statistic entropy H_le of noisy image blocks corrupted by additive white Gaussian noise (AWGN) at noise level σ_n = 20; weakly textured blocks have larger LGSE values. (a) H_le = 7.8094; (b) 7.7155; (c) 7.6537; (d) 7.4976; (e) 7.3127; (f) 7.1362.
Figure 4. Example of the histogram-based constraint rule for eliminating outliers. (a) Original "Birds" image; (b) gray-level histogram of (a); (c) gray levels selected after applying the histogram constraint rule; (d) blocks selected according to (c), labeled in green.
Figure 5. Example of homogeneous block selection; the 15 × 15 red boxes in the "Birds" image outline the finally selected homogeneous blocks.
Figure 6. Two examples of homogeneous block selection in "Barbara" and "House". (a) Original "Barbara" image; (b) blocks selected after the histogram constraint in "Barbara", labeled in green; (c) 15 × 15 red boxes outlining the finally selected homogeneous blocks in "Barbara"; (d) original "House" image; (e) blocks selected after the histogram constraint in "House", labeled in green; (f) 15 × 15 red boxes outlining the finally selected homogeneous blocks in "House".
Figure 7. Classic standard test images used for the synthesized noisy images: "Barbara", "Lena", "Pirate", "Cameraman", "Warcraft", "Couple", "Peppers", "Bridge", "Hill" and "Einstein".
Figure 8. Mean error bars for different block sizes; at block size N = 15 the proposed algorithm offers the best accuracy and stability.
Figure 9. Different AWGN level estimation methods on three images; the proposed method outperforms several existing noise estimation methods. (a–c) Original images ("Barbara", "House", "Birds"); (d–f) the corresponding noise level estimation results.
Figure 10. Different Poisson-Gaussian noise (PGN) level estimation methods on three images; the proposed method outperforms several existing PGN estimation methods. (a–c) Noisy images ("Barbara", "House", "Birds") with ρ = 0.5 and σ_n² = 10; (d–f) results of the proposed parameter estimation algorithm; (g–i) results of the other PGN estimation methods.
Figure 11. Structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) comparison of different AWGN estimators combined with BM3D for blind de-noising on the 134 selected images at different noise levels. (a) SSIM; (b) PSNR.
Figure 12. SSIM and PSNR comparison of different PGN estimators combined with VST-BM3D for blind de-noising on the 134 selected images at different noise parameters. (a) SSIM; (b) PSNR.
12 pages, 3101 KiB  
Article
Segmentation-Based Color Channel Registration for Disparity Estimation of Dual Color-Filtered Aperture Camera
by Shuxiang Song, Sangwoo Park and Joonki Paik
Sensors 2018, 18(10), 3174; https://doi.org/10.3390/s18103174 - 20 Sep 2018
Cited by 1 | Viewed by 2829
Abstract
Single-lens-based optical range finding systems were developed as an efficient, compact alternative for conventional stereo camera systems. Among various single-lens-based approaches, a multiple color-filtered aperture (MCA) system can generate disparity information among color channels, as well as normal color information. In this paper, we consider a dual color-filtered aperture (DCA) system as the most minimal version of the MCA system and present a novel inter-color image registration algorithm for disparity estimation. This proposed registration algorithm consists of three steps: (i) color channel independent feature extraction; (ii) feature-based adaptive weight disparity estimation; and (iii) color mapping matrix (CMM)-based cross-channel image registration. Experimental results show that the proposed method can not only generate an accurate disparity map, but also realize high quality cross-channel registration with a disparity prior for DCA-based range finding and color image enhancement. Full article
(This article belongs to the Special Issue Image Sensors)
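For reference, the baseline cross-channel matching that the paper improves upon (and compares against in its SAD/NCC experiments) can be sketched as plain horizontal block matching between two color channels. The window size and disparity range below are illustrative; the paper's actual method replaces raw intensities with gradient-magnitude and LBP features, adaptive weights, and CMM-based registration.

```python
import numpy as np

def sad_disparity(ref, tgt, max_disp=16, win=7):
    """Baseline cross-channel disparity via horizontal SAD block matching.

    ref, tgt: 2-D arrays (e.g., the red and green channels of a DCA image).
    Returns an integer disparity map (leftward shift of tgt relative to ref).
    """
    half = win // 2
    h, w = ref.shape
    pad_ref = np.pad(ref.astype(float), half, mode="edge")
    pad_tgt = np.pad(tgt.astype(float), half, mode="edge")
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch = pad_ref[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                if x - d < 0:
                    break
                cand = pad_tgt[y:y + win, x - d:x - d + win]
                cost = float(np.abs(patch - cand).sum())
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

As the paper's own comparison suggests, such intensity-based matching is unreliable across color channels because the red and green images have different brightness statistics, which is exactly what the feature-based adaptive-weight scheme is meant to address.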
Figures
Figure 1. Imaging system with a single aperture: (a) light-ray paths in a conventional optical system; (b) light-ray paths in a single off-axis aperture system.
Figure 2. The dual color-filtered aperture (DCA) system: (a) DCA configuration with an object at the in-focus position; (b) DCA configuration with an object at the out-of-focus position.
Figure 3. Left: image acquired with the DCA camera; upper right: the red channel image; lower right: the green channel image.
Figure 4. Block diagram of the proposed system. CMM, color mapping matrix.
Figure 5. Poor disparity estimation results for a DCA image using traditional methods: (a,b) matching results using sum of absolute differences (SAD) and normalized cross-correlation (NCC), respectively.
Figure 6. Gradient magnitude and LBP images of the channels shown in Figure 3: gradient magnitude (top) and LBP (bottom) of the (a) red and (b) green channels.
Figure 7. Comparison of the gradient magnitude and LBP features: (a) two blocks selected in the gradient magnitude image and (b) in the LBP feature image.
Figure 8. Process of the distance transform (DT): (a) binary image and (b) DT result.
Figure 9. Disparity map comparison: (a) disparity generated by cross-channel normalized gradient (CCNG); (b) disparity generated by the proposed method with a constant weight matrix (all weights 0.5); (c) disparity generated by the proposed method with adaptive weights.
Figure 10. Segmentation process of the DCA image and CMM: (a) initially aligned reference channel R using the estimated disparity; (b) a target channel segmented using superpixels; (c) segmentation results of all three channels; (d) the segment-wise CMM.
Figure 11. Comparison of color registration results: (a–c) registration using the initially aligned reference image, refinement with CMM, and refinement with enhanced CMM.
Figure 12. Color mapping matrices: (a) CMM and (b) local points without corresponding pixels in the initially aligned reference channel.
Figure 13. Proposed cross-channel disparity estimation and registration results: (a) input images, (b) disparity maps, (c) registered images and (d,e) magnified regions of (a,c), respectively.
Figure 14. Disparity extraction of various matching methods on the Middlebury stereo images "Rocks", "Cloth", "Aloe" and "Wood" [17]: (a–f) input DCA image, disparity with SAD, NCC, Holloway's CCNG, the proposed method, and the ground-truth disparity.
Figure 15. Color registration with the corresponding disparity priors shown in Figure 14, using the proposed cross-channel registration and refinement strategy: (a) input DCA image, (b) registration by SAD, (c) NCC, (d) Holloway's CCNG, (e) the proposed method and (f) the ground-truth color image.
13 pages, 8543 KiB  
Article
Structured-Light Based 3D Reconstruction System for Cultural Relic Packaging
by Limei Song, Xinyao Li, Yan-gang Yang, Xinjun Zhu, Qinghua Guo and Hui Liu
Sensors 2018, 18(9), 2981; https://doi.org/10.3390/s18092981 - 6 Sep 2018
Cited by 44 | Viewed by 5299
Abstract
Non-contact three-dimensional measurement and reconstruction techniques play a significant role in the packaging and transportation of precious cultural relics. This paper develops a low-cost, structured-light-based three-dimensional measurement system for cultural relic packaging. The system performs rapid measurements and generates 3D point cloud data, which are then denoised, registered and merged to achieve accurate 3D reconstruction of cultural relics. The proposed method is compared with the multi-frequency heterodyne method, and it is shown that the relative accuracy of the proposed low-cost system can reach a level of 1/1000. The high efficiency of the system is demonstrated through experimental results. Full article
(This article belongs to the Special Issue Image Sensors)
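The registration step mentioned in the abstract is commonly performed with the iterative closest point (ICP) algorithm (the paper's Figure 3 below is devoted to it). A minimal, numpy-only sketch of the textbook ICP loop — closest-point correspondences followed by an SVD-based rigid fit — is given here; it omits the denoising, octree organization and merging stages of the full pipeline, and the brute-force nearest-neighbour search is only practical for small clouds.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: closest-point matches + best-fit rigid transform (SVD/Kabsch).

    src, dst: (N, 3) and (M, 3) point clouds. Returns (R, t) aligning src toward dst.
    """
    # brute-force nearest neighbours (a k-d tree would be used in practice)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    # best-fit rotation/translation between src and its matched points
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iteratively apply icp_step to register src onto dst; returns the moved cloud."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```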
Figures
Figure 1. Neighborhood searching.
Figure 2. Octree: (a) space decomposition; (b) octree hierarchical structure.
Figure 3. ICP algorithm.
Figure 4. The binocular 3D reconstruction system developed in our lab.
Figure 5. Comparison of the reconstruction results, traditional method versus proposed method: (a,b) grey anime dolls; (c,d) colored puppets; (e,f) colorful kittens; (g,h) air valves; (i,j) flat object.
Figure 6. Field map: (a) Terra Cotta Warrior; (b) 3D reconstruction.
Figure 7. Results of the 3D reconstruction: (a) body; (b) head.
Figure 8. Geomagic Studio sphere-fitting comparison: (a) fitting result of the traditional method; (b) result of the proposed method.
Figure 9. A sample of the packing for transportation: (a) body; (b) head.
18 pages, 3801 KiB  
Article
A Low-Noise Direct Incremental A/D Converter for FET-Based THz Imaging Detectors
by Moustafa Khatib and Matteo Perenzoni
Sensors 2018, 18(6), 1867; https://doi.org/10.3390/s18061867 - 7 Jun 2018
Cited by 8 | Viewed by 4724
Abstract
This paper presents the design, implementation and characterization results of a pixel-level readout chain integrated with a FET-based terahertz (THz) detector for imaging applications. The readout chain is fabricated in a standard 150-nm CMOS technology and contains a cascade of a preamplification and noise reduction stage based on a parametric chopper amplifier and a direct analog-to-digital conversion by means of an incremental ΣΔ converter, performing a lock-in operation with modulated sources. The FET detector is integrated with an on-chip antenna operating in the frequency range of 325–375 GHz and compliant with all process design rules. The cascade of the FET THz detector and readout chain is evaluated in terms of responsivity and Noise Equivalent Power (NEP) measurements. The measured readout input-referred noise of 1.6 μVrms allows preserving the FET detector sensitivity by achieving a minimum NEP of 376 pW/√Hz in the optimum bias condition, while directly providing a digital output. The integrated readout chain features 65-dB peak SNR and 80-μW power consumption from a 1.8-V supply. The area of the antenna-coupled FET detector and the readout chain fits a pixel pitch of 455 μm, which is suitable for pixel array implementation. The proposed THz pixel has been successfully applied for imaging of concealed objects in a paper envelope under continuous-wave illumination. Full article
(This article belongs to the Special Issue Image Sensors)
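A minimal behavioural model of the direct incremental ΣΔ conversion described above is sketched below: the integrator is reset, clocked for a fixed number of cycles against a 1-bit feedback DAC, and a simple counting decimator produces the digital result. The cycle count and reference level are illustrative, and the chopper preamplifier, noise, and circuit non-idealities are not modelled.

```python
def incremental_sigma_delta(x, n_cycles=1024, vref=1.0):
    """First-order incremental sigma-delta ADC model (input held constant during conversion)."""
    integrator = 0.0
    count = 0
    for _ in range(n_cycles):
        bit = 1 if integrator >= 0.0 else 0    # 1-bit quantizer
        count += bit
        integrator += x - bit * vref           # integrate input minus 1-bit DAC feedback
    return count / n_cycles * vref             # counting decimator -> digital estimate of x

# the estimate converges to the input with an error on the order of vref / n_cycles
for x in (0.12, 0.37, 0.80):
    print(x, round(incremental_sigma_delta(x), 4))
```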
Figures
Figure 1. FET-based THz detector model.
Figure 2. Simulated FET input impedance versus gate bias voltage at 325 GHz.
Figure 3. Design of the differential bow-tie antenna in the adopted 150-nm CMOS technology.
Figure 4. Simulation results of the bow-tie antenna: (a) antenna impedance; (b) antenna radiation efficiency and directivity.
Figure 5. Block diagram of the proposed THz detector and readout structure.
Figure 6. 1/f noise and offset cancellation using the parametric chopper amplifier.
Figure 7. Gain and noise simulation results of the parametric amplifier at a chopping frequency of 100 kHz.
Figure 8. Timing diagram of the THz readout chain.
Figure 9. Schematic of the implemented decimator.
Figure 10. Schematic of the pseudo-differential G_m-C loop filter: (a) transconductor; (b) amplifier used in the Miller integrator.
Figure 11. Schematic of the implemented single-bit quantizer.
Figure 12. Micrograph of the fabricated THz pixel structure.
Figure 13. (a) Measured input noise power: without noise reduction (black), with the conventional chopper technique (red) and with the proposed parametric chopper amplification (blue), chopping f = 100 kHz; (b) simulated input noise power of the readout chain: without noise reduction (black) and with the chopper parametric amplifier (red).
Figure 14. (a) Simulated and measured output signal PSD of the incremental sigma-delta converter tested with a 500 Hz input sinusoidal tone at a 1 MHz sampling rate; (b) noise PSD measured with the input shorted to ground.
Figure 15. Block diagram of the THz characterization setup.
Figure 16. (a) Measured FET voltage responsivity and Noise Equivalent Power (NEP) versus gate bias voltage; (b) measured FET voltage responsivity versus signal frequency.
Figure 17. Simulated and measured FET detector noise voltage spectral density versus frequency.
Figure 18. Readout responsivity as a function of (a) FET gate bias voltage and (b) signal frequency.
Figure 19. NEP as a function of FET gate bias voltage (measured at 365 GHz).
Figure 20. Block diagram of the THz imaging setup.
Figure 21. THz images of different metallic/plastic objects hidden inside a paper envelope, acquired at 365 GHz (source modulation f = 130 Hz), along with photographs of the objects.
16 pages, 8198 KiB  
Article
Virtual Deformable Image Sensors: Towards to a General Framework for Image Sensors with Flexible Grids and Forms
by Wei Wen and Siamak Khatibi
Sensors 2018, 18(6), 1856; https://doi.org/10.3390/s18061856 - 6 Jun 2018
Cited by 4 | Viewed by 3587
Abstract
The human vision system combines different sensor arrangements, from hexagonal to elliptical ones. Inspired by this variation in arrangement type, we propose a general framework that makes it feasible to create virtual deformable sensor arrangements. In the framework, a certain sensor arrangement is described by a configuration of three optional variables: the arrangement structure, the pixel form and the gap factor. We show that the histogram of gradient orientations of a certain sensor arrangement has a specific distribution (called ANCHOR), which is obtained from at least two images generated with that configuration. The results show that ANCHORs change their patterns when the arrangement structure changes; in this respect, pixel-size changes have a 10-fold greater impact on ANCHORs than gap-factor changes. A set of 23 images, randomly chosen from a database of 1805 images, is used in the evaluation, where each image generates twenty-five different images based on the sensor configuration. The robustness of the ANCHOR properties is verified by computing ANCHORs for 575 images in total with different sensor configurations. We believe that, by using the framework and ANCHOR, it becomes feasible to plan a sensor arrangement for a specific application and its requirements, where the arrangement can even be planned as a combination of different ANCHORs. Full article
(This article belongs to the Special Issue Image Sensors)
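The quantity underlying the paper's ANCHOR is a histogram of gradient orientations; a minimal sketch for a single image on a regular square grid is shown below (36 bins, matching the figures). In the paper the ANCHOR is obtained by averaging such histograms over at least two images generated for a given sensor configuration (arrangement structure, pixel form, gap factor), which is not reproduced here.

```python
import numpy as np

def gradient_orientation_histogram(img, bins=36):
    """Magnitude-weighted histogram of gradient orientations over 0-360 degrees.

    img: 2-D array. Returns (bin centres in degrees, normalised histogram).
    """
    gy, gx = np.gradient(img.astype(float))          # gradients along rows and columns
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    hist = hist / (hist.sum() + 1e-12)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist
```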
Figures
Figure 1. Distribution of photoreceptors in the retina of the eye [9].
Figure 2. Enhanced and zoomed images of four segments of a human foveal photoreceptor mosaic from the original image printed in [8]. From left to right, the segments are taken from (a) the center of the fovea, (b) the slope of the fovea, and (c,d) peripheral areas 1.35 mm and 5 mm away from the fovea center, respectively.
Figure 3. A simulated sensor structure following the distribution of photoreceptors in the retina; each area (a–d) corresponds to the respective segment (a–d) in Figure 2.
Figure 4. Procedure from square pixels to hexagonal pixels using the half-pixel shifting method.
Figure 5. Illustration of the square-to-hexagonal lattice conversion by the hyperpel method: (a) the sub-pixels in each red-bounded area are clustered together for the corresponding hexagonal pixel and (b) the value of each hexagonal pixel is the average intensity of the sub-pixels within each cluster [35].
Figure 6. An example of the rhombus Penrose tiling.
Figure 7. Overview of the feature extraction and object detection chain.
Figure 8. One of the original images and its set of generated images.
Figure 9. Angular characteristic of the sensor grid structure. The five columns are the histograms of gradient orientation from five images in the database; row by row, the five image types are SQ, SQ_E, HS_E, Hex_E and Pen_E.
Figure 10. Histograms of gradient orientation with 36 bins (left); from top to bottom, results for SQ, SQ_E, HS_E, Hex_E and Pen_E. The ANCHORs show the average of 23 gradient-orientation histograms. The right panel compares the ANCHORs of Pen_E (black) and Hex_E (green); together, representing a combination of two sensor arrangements, they compensate each other's weak sensitivity areas and become more sensitive to directional intensity changes.
Figure 11. MSE between each pair of candidates of a given ANCHOR; each candidate is the average gradient-orientation histogram over a certain number (n) of images.
Figure 12. Variance of the ANCHORs (36-bin histograms of gradient orientation); from top to bottom, results for SQ, SQ_E, HS_E, Hex_E and Pen_E. The ANCHOR of Pen_E (black) has the lowest variance and SQ (deep blue) the highest, indicating that the Penrose arrangement has the most robust ANCHOR.
Figure 13. Example of the hexagonal sensor with gap factors of 0%, 20% and 60% (left to right), with the pixel size kept the same.
Figure 14. ANCHORs from different image types with different gap-factor configurations.
Figure 15. ANCHORs of different image types when the pixel size is increased by 20% and the gap factor is 0%.
18 pages, 21272 KiB  
Article
Sensitivity and Resolution Improvement in RGBW Color Filter Array Sensor
by Seunghoon Jee, Ki Sun Song and Moon Gi Kang
Sensors 2018, 18(5), 1647; https://doi.org/10.3390/s18051647 - 21 May 2018
Cited by 11 | Viewed by 8392
Abstract
Recently, several red-green-blue-white (RGBW) color filter arrays (CFAs), which include highly sensitive W pixels, have been proposed. However, RGBW CFA patterns suffer from spatial resolution degradation because the sensor contains more color components than the Bayer CFA pattern. RGBW CFA demosaicing methods reconstruct resolution using the correlation between white (W) pixels and pixels of other colors, but this does not raise the red-green-blue (RGB) channel sensitivity to the W channel level. In this paper, we therefore propose a demosaiced-image post-processing method to improve the RGBW CFA sensitivity and resolution. The proposed method decomposes texture components containing image noise and resolution information, and the RGB channel sensitivity and resolution are improved by updating the RGB channel texture components with that of the W channel. For this process, a cross multilateral filter (CMF) is proposed. It separates the smoothness component from the texture component using color-difference information and distinguishes color components through that information. Moreover, it decomposes texture components, luminance noise, color noise, and color-aliasing artifacts from the demosaiced images. Finally, by updating the texture of the RGB channels with the W channel texture components, the proposed algorithm improves the sensitivity and resolution. Results show that the proposed method is effective, maintaining the W-pixel resolution characteristics while improving sensitivity, with a signal-to-noise ratio gain of approximately 4.5 dB. Full article
(This article belongs to the Special Issue Image Sensors)
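The proposed CMF extends the cross (joint) bilateral filter, in which a noisy channel is smoothed using range weights computed from a cleaner guide channel; the texture component is then the difference between the channel and its filtered version. The sketch below shows only this underlying CBF step with illustrative parameters, not the authors' full CMF or the W-channel texture substitution.

```python
import numpy as np

def cross_bilateral_filter(target, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Cross (joint) bilateral filter: smooth `target` with range weights taken from `guide`.

    In the RGBW setting, `guide` would be the high-SNR W channel and `target` an R/G/B channel;
    target - filtered(target) then gives the texture component discussed in the abstract.
    """
    target = target.astype(float)
    guide = guide.astype(float)
    h, w = target.shape
    pad_t = np.pad(target, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    out = np.zeros_like(target)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))   # spatial Gaussian kernel
    for y in range(h):
        for x in range(w):
            t_win = pad_t[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-((g_win - guide[y, x]) ** 2) / (2 * sigma_r ** 2))  # range weights from the guide
            wgt = spatial * rng_w
            out[y, x] = (wgt * t_win).sum() / wgt.sum()
    return out
```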
Figures
Figure 1. Color filter array (CFA) patterns: (a) Bayer [1]; (b) Sony red-green-blue-white (RGBW) [2].
Figure 2. Demosaicing result [24] of the RGBW CFA: (a) red (R) channel; (b) green (G) channel; (c) blue (B) channel; (d) white (W) channel.
Figure 3. Comparison of Bayer and RGBW demosaicing results: (a) Bayer CFA demosaicing result [19]; (b) RGB channels of the Sony RGBW CFA demosaicing result [24]; (c) W channel of the Sony RGBW CFA demosaicing result [24].
Figure 4. Results of Bayer CFA and RGBW CFA demosaicing in low-light conditions: (a–d) Bayer CFA demosaicing results [19]; (e–h) RGB channels of the Sony RGBW CFA demosaicing results [24]; (i,j) W channel of the Sony RGBW CFA demosaicing results [24].
Figure 5. Examples of texture and smoothness component decomposition: (a) original images; (b) smoothness components; (c) texture components.
Figure 6. Framework of the proposed post-processing method.
Figure 7. Examples of kernel estimation error: (a,d) RGB channels of the Sony RGBW CFA demosaicing result [24]; (b,e) U channel of (a,d); (c,f) V channel of (a,d); (g,j) RGB channels of the post-processing result using CBF; (h,k) U channel of (g,j); (i,l) V channel of (g,j); (m,n) W channel of the Sony RGBW CFA demosaicing result [24].
Figure 8. Differences between the color-separation channels and W, and their discrimination ability: (a) RGB image; (b) W channel image; (c) I_(R−G); (d) I_(G−B); (e) I_(B−R).
Figure 9. Comparison of texture components decomposed by the cross bilateral filter (CBF) and the cross multilateral filter (CMF): (a–d) original images; (e–h) texture components of the CBF results; (i–l) texture components of the CMF results.
Figure 10. Results of post-processing using CBF and CMF: (a,d) Sony RGBW CFA demosaicing results [24]; (b,e) post-processing using CBF; (c,f) post-processing using CMF.
Figure 11. Comparison of color-aliasing artifacts and spatial resolution: (a) Bayer CFA demosaicing results [19]; (b) RGB channels of the Sony RGBW CFA demosaicing results [24]; (c) W channel of the Sony RGBW CFA demosaicing results; (d) proposed post-processing results.
Figure 12. Comparison of color reproduction: (a) Bayer CFA demosaicing results [19]; (b) RGB channels of the Sony RGBW CFA demosaicing results [24]; (c) proposed post-processing results.
Figure 13. Change in sensitivity with processing stage under a low-light condition: (a) RGB channels of the Sony RGBW CFA demosaicing results before applying gain [24]; (b) after applying gain [24]; (c) W channel of the Sony RGBW CFA demosaicing results [24]; (d) proposed post-processing results.
Figure 14. Comparison of luminance/color noise: (a,e,i,n) Bayer CFA demosaicing results [19]; (b,f,j,m) RGB channels of the Sony RGBW CFA demosaicing results [24]; (c,g,k,o) proposed post-processing results; (d,h,l,p) W channel of the Sony RGBW CFA demosaicing results [24].
Figure 15. Comparison of luminance/color noise in a flat region: (a,e,h,k) Bayer CFA demosaicing results [19]; (b,f,i,l) RGB channels of the Sony RGBW CFA demosaicing results [24]; (c,g,j,m) proposed post-processing results; (d) W channel of the Sony RGBW CFA demosaicing results [24].
Figure 16. Qualitative and quantitative comparison of various RGBW demosaicing algorithms: (a,e,i) RGB channels of frequency-based demosaicing results [25]; (b,f,j) RGB channels of pan-sharpening-based demosaicing results [26]; (c,g,k) RGB channels of multiscale-gradient (MSG)-based demosaicing results [24]; (d,h,l) proposed post-processing results.
17 pages, 1963 KiB  
Article
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor
by Takuya Yoda, Hajime Nagahara, Rin-ichiro Taniguchi, Keiichiro Kagawa, Keita Yasutomi and Shoji Kawahito
Sensors 2018, 18(3), 786; https://doi.org/10.3390/s18030786 - 5 Mar 2018
Cited by 9 | Viewed by 6639
Abstract
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. Full article
(This article belongs to the Special Issue Image Sensors)
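The normal estimation itself is the classical least-squares photometric stereo step: given the three (or more) per-tap images and the known lighting directions, the surface normal and albedo per pixel follow from solving I = L·(ρn). A minimal sketch is given below, assuming distant directional lights and a Lambertian surface; the point-light-source attenuation handling discussed in the paper is not modelled. With the prototype described above, `images` would hold the three per-tap frames and `lights` the directions of the three LED sources.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo.

    images: (K, H, W) intensities captured under K known lighting directions.
    lights: (K, 3) unit lighting direction vectors.
    Returns (normals (H, W, 3), albedo (H, W)) from the least-squares solution of I = L (rho * n).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                       # K x (H*W) intensity matrix
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # 3 x (H*W), G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-12)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```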
Figures
Figure 1. Comparison of time-of-flight (ToF) and photometric stereo methods: (a) target scene; (b) depth map from the ToF method and (c) normal map from the photometric stereo method, both obtained with the same image sensor. In (b) the absolute depth is directly affected by image noise and small structures are contaminated; (c) shows a smoother, more detailed object shape than (b).
Figure 2. Results of the classical photometric stereo method on a dynamic scene of a falling ball: (a–c) the three images captured with different lighting directions (the ball positions do not correspond across the images); (d) the resulting, incorrect normal map, since classical photometric stereo assumes intensity changes come only from changes in the lighting conditions. The colors of the estimated normal map follow the colored sphere shown at the bottom right.
Figure 3. Timing-diagram comparison of exposure and readout for a standard camera, a high-speed camera, and the multi-tap CMOS image sensor. In the time a standard camera captures one image, a high-speed camera captures several, but its exposure time must be short and the SNR of the captured images is low. Multi-tap image sensors acquire almost identical images with a high SNR by iterating short exposures; here a three-tap sensor is shown.
Figure 4. Single-pixel structure of a multi-tap CMOS image sensor. The green, yellow and orange arrows represent the electron flows generated via the aperture, with colors corresponding to the exposures at the bottom of Figure 3; the floating diffusion (FD) in which the electrons are stored is selected by changing the gate signals.
Figure 5. Timing chart for synchronization between the light sources and the exposure times of the different FDs (colors as in Figure 4). The light sources are fully synchronized with the gate signals of each FD; after readout, image1 from FD1 contains only light emitted by light1, and image2 and image3 contain only light from light2 and light3, respectively.
Figure 6. Relationship between object speed and exposure time (Equation (7)): the difference in target position between captured images can be ignored as long as the target is projected within a single pixel at the imaging plane; the applicable object speed depends on the focal length, the distance between image sensor and target, and the pixel size.
Figure 7. Applicable object speed versus exposure duration for various camera-to-target distances (Equation (7)), calculated for a sensor pixel size of 16.8 μm and a lens focal length of 12.5 mm; short exposures and larger distances are needed to capture multiple images of a dynamic scene.
Figure 8. Prototype camera lighting system consisting of the multi-tap CMOS image sensor and three light sources, each composed of 16 LEDs, arranged at equal distances from the target to reduce differences in light attenuation; an Arduino Uno synchronizes the light sources with the sensor's gate signals.
Figure 9. Accuracy of the normal map versus distance from the image sensor, measured with a planar object moving horizontally at 1.3 m/s. Surface normals were estimated with Equation (6) under a point-light-source assumption, and the RMSE (in radians) between the estimated normals and the orthogonal normal of the plane was computed with Equation (9); accuracy decreases with distance because of lighting attenuation.
Figure 10. Surface normal estimation for a dynamic scene of a falling ball: (a–c) images captured with standard camera settings and (d) the resulting normal map; (e–g) high-speed camera settings and (h) the resulting normal map; (i–k) multi-tap CMOS image sensor and (l) the resulting normal map. For visibility, the captured images were brightened by 300% (a–c), 250% (e–g) and 335% (i–k).
Figure 11. Photometric stereo results for a dynamic facial-expression scene: (a) images captured with the multi-tap CMOS image sensor; (b) normal maps estimated from them. The captured images were brightened by 800% for visibility.
Figure 12. Photometric stereo results for a dynamic hand-grasping scene: (a) images captured with the multi-tap CMOS image sensor; (b) normal maps estimated from them. The captured images were brightened by 800% for visibility.
10 pages, 5972 KiB  
Article
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications
by Keunyeol Park, Minkyu Song and Soo Youn Kim
Sensors 2018, 18(2), 669; https://doi.org/10.3390/s18020669 - 24 Feb 2018
Cited by 12 | Viewed by 5844
Abstract
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. To recognize the iris image, a conventional image sensor captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. In this case, however, the frame rate is reduced by the time required for conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. To reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive-OR (XOR) logic gate that obtains single-bit and edge-detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² in a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. Full article
(This article belongs to the Special Issue Image Sensors)
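The abstract describes obtaining single-bit, edge-detected data by XOR-ing two 1-bit images. Purely as a software sketch of that logical idea (the actual design does this on-chip with a clocked comparator, dual SRAM and an XOR gate; the threshold parameters below are hypothetical):

```python
import numpy as np

def xor_edge_map(image, v_low, v_high):
    """Illustrative XOR-based boundary detection on single-bit data.

    Each pixel is compared against two levels; pixels whose value lies between
    the two levels differ in the two 1-bit images, so the XOR marks them.
    """
    bit_low = image > v_low       # 1-bit image at the lower comparator level
    bit_high = image > v_high     # 1-bit image at the higher comparator level
    return np.logical_xor(bit_low, bit_high)   # True where v_low < pixel <= v_high
```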
Figures

Figure 1: A conventional iris recognition process, adopted from [4].
Figure 2: The proposed CMOS image sensor (CIS) with an iris recognition algorithm.
Figure 3: A block diagram of the proposed iris recognition sensor.
Figure 4: A single-column schematic of the proposed CIS.
Figure 5: Simulation results of the 1-bit clocked comparator with VPP = 1 V and a 50 MHz sampling frequency.
Figure 6: Example of static random access memory (SRAM) word-line signals.
Figure 7: XOR output of two different images from the dual SRAM.
Figure 8: Edge (boundary) detection with the XOR gate.
Figure 9: Pixel data distribution of an 8-bit image.
Figure 10: Iris region on the ramp signal (VREF).
Figure 11: SRAM data region when the pulse-signal gap decreases exponentially.
Figure 12: Layout of the proposed CIS.
Figure 13: Fabricated chip on a PCB.
Figure 14: Iris recognition processing using the proposed CIS.
Figure 15: Images based on the conventional iris recognition algorithms and the proposed CIS.
15 pages, 8657 KiB  
Article
Sub-THz Imaging Using Non-Resonant HEMT Detectors
by Juan A. Delgado-Notario, Jesus E. Velazquez-Perez, Yahya M. Meziani and Kristel Fobelets
Sensors 2018, 18(2), 543; https://doi.org/10.3390/s18020543 - 10 Feb 2018
Cited by 11 | Viewed by 5153
Abstract
Plasma waves in gated 2-D systems can be used to efficiently detect THz electromagnetic radiation, and solid-state plasma wave-based sensors can serve as detectors in THz imaging systems. An experimental study of the sub-THz response of Π-gate strained-Si Schottky-gated MODFETs (Modulation-Doped Field-Effect Transistors) was performed. The response of the strained-Si MODFET was characterized at two frequencies, 150 and 300 GHz. The DC drain-to-source voltage transducing the THz radiation (photovoltaic mode) in 250-nm gate length transistors exhibited a non-resonant response that agrees with theoretical models and physics-based simulations of the electrical response of the transistor. When a weak source-to-drain current of 5 μA was imposed, a substantial increase of the photoresponse was found. This increase translates into an enhancement of the responsivity by one order of magnitude compared to the photovoltaic mode, while the NEP (Noise-Equivalent Power) is reduced in the subthreshold region. Strained-Si MODFETs demonstrated excellent performance as detectors in THz imaging. Full article
(This article belongs to the Special Issue Image Sensors)
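For context, the responsivity and NEP quoted in the abstract are commonly evaluated as the rectified DC voltage divided by the power coupled to the detector, and the noise voltage density divided by the responsivity. The sketch below uses one common normalisation from the plasma-wave detector literature and a Johnson-noise-only noise floor; the paper's exact definitions and the additional noise terms present under drain-current bias are not reproduced, and all symbols are illustrative.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def responsivity(delta_u, p_beam, s_detector, s_beam):
    """Voltage responsivity (V/W): rectified voltage divided by the power coupled
    to the detector, here taken as the beam power scaled by the ratio of detector
    area to beam-spot area (a common normalisation convention)."""
    p_detector = p_beam * s_detector / s_beam
    return delta_u / p_detector

def nep_johnson(resp_v_per_w, r_channel, temperature=300.0):
    """Noise-equivalent power (W/sqrt(Hz)) assuming the noise floor is dominated
    by the Johnson-Nyquist noise of the channel resistance."""
    v_noise = np.sqrt(4.0 * K_B * temperature * r_channel)  # V/sqrt(Hz)
    return v_noise / resp_v_per_w
```

Under drain-current bias, shot and flicker noise also contribute, so the Johnson-only estimate above is only a lower bound on the NEP.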
Figures

Figure 1: (Left) SEM image of a Π-gate Si/SiGe MODFET under study; (Right) zoom detailing the top view of the lateral structure of the metal gate, drain and source contacts. The Schottky gate is placed in a slightly asymmetric position between the source and the drain on the device channel.
Figure 2: (Left) Cross-section of the Si/SiGe MODFET showing the vertical layout of the transistor with a schematic of the contacts; the strained-Si layer of thickness w is highlighted. (Right) Vertical profiles of both band edges and the Fermi level under the gate in equilibrium; the double-deck supply-layer structure leads to a double electron channel in the quantum well [32].
Figure 3: (a) Schematic description of the detection and imaging experimental setup: the THz source generates two output frequencies, 150 and 300 GHz; (b) photograph of the experimental setup. In the upper left corner the RPG source can be seen; in the center, the automated x-y stage is shown (the sample is placed inside the envelope when the system is used for imaging).
Figure 4: (a) Experimental and simulated transfer characteristics of the strained-Si MODFET for two values of the drain voltage, 20 mV and 200 mV; (b) the same transfer characteristics with the current plotted on a log scale.
Figure 5: Transconductance efficiency vs. the gate overdrive voltage, calculated from measurements and numerical TCAD simulations at Vds = 20 mV.
Figure 6: (a) Calculated THz photoresponse of the strained-Si MODFET in the photovoltaic mode (Ids off) and for two values of the source-to-drain bias current (Ids = 0.1 and 1 µA/µm); (b) variation of the gate-to-source and gate-to-drain capacitances vs. Vds for three values of the gate bias (−0.3, −0.6 and −1 V).
Figure 7: (a) Variation of the absolute difference of the gate-to-source and gate-to-drain capacitances of the transistor as a function of the asymmetry factor Lgs/Lgd; Lgs/Lgd = 1 means that the gate is symmetrically placed between the source and drain contacts (Lgs = Lgd); (b) photoresponse as a function of the asymmetry factor. Simulations were performed for three gate biases (−0.4, −0.7 and −1 V).
Figure 8: (a) Experimental THz photoresponse of the strained-Si MODFET in the photovoltaic mode (Ids off) and for two values of the source-to-drain bias current (Ids = 2.5 μA and 5 μA) under 150 GHz excitation; (b) same as (a) under 300 GHz excitation.
Figure 9: (a) Responsivity of the strained-Si MODFET from measurements under 150 GHz excitation in the photovoltaic mode (Ids off) and for a source-to-drain bias current Ids = 5 µA; (b) same as (a) under 300 GHz excitation.
Figure 10: (a) NEP of the strained-Si MODFET from measurements under 150 GHz excitation in the photovoltaic mode (Ids off) and for a source-to-drain bias current Ids = 5 μA; (b) same as (a) under 300 GHz excitation.
Figure 11: (a) 300 GHz image of two leaves inside an envelope and, below, a photograph of the envelope showing the position of the leaves; (b) 300 GHz image of two metallic objects inside an envelope and, below, a photograph of the objects.
9 pages, 2745 KiB  
Article
Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates
by Congzheng Wang, Song Hu, Chunming Gao and Chang Feng
Sensors 2018, 18(2), 514; https://doi.org/10.3390/s18020514 - 8 Feb 2018
Cited by 20 | Viewed by 6629
Abstract
In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the biased HD camera were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. We found that the tested camera showed a remarkable degradation after irradiation and that the degradation depended on the dose rate. As the dose rate increases, images of the same target become brighter. At a given dose rate, the radiation effect in bright areas is weaker than in dark areas; the higher the dose rate, the stronger the radiation effect in both bright and dark areas, and the larger their standard deviations. Furthermore, the progressive degradation analysis of the captured images shows that the decrease of the signal-to-noise ratio (SNR) with radiation time is small at a fixed dose rate, but becomes more severe as the dose rate increases. The rate of SNR decrease at 20.0, 50.0 and 100.0 Gy/h is far greater than at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h over the 8 h of measurements, with a moderate decrease of the SNR (5 dB). This work can provide guidance for camera users in radiation environments. Full article
(This article belongs to the Special Issue Image Sensors)
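The degradation analysis tracks the SNR of a bright and a dark rectangular area of the captured image versus irradiation time. The abstract does not spell out the SNR definition; a common choice, used in the sketch below (with hypothetical names), is the mean-to-standard-deviation ratio of the pixel values in the region, expressed in dB.

```python
import numpy as np

def region_snr_db(frame, roi):
    """SNR of a rectangular region of interest, in dB, taken as the ratio of the
    mean signal to the standard deviation of the pixel values in that region.

    frame: 2-D array of pixel values
    roi:   (row0, row1, col0, col1) bounds of the bright or dark rectangle
    """
    r0, r1, c0, c1 = roi
    patch = np.asarray(frame[r0:r1, c0:c1], dtype=float)
    return 20.0 * np.log10(patch.mean() / patch.std())
```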
Figures

Figure 1: Lens used for the test.
Figure 2: HD camera module under test.
Figure 3: Images acquired without a light source at the five dose rates: (a) 1.0 Gy/h; (b) 10.0 Gy/h; (c) 20.0 Gy/h; (d) 50.0 Gy/h; (e) 100.0 Gy/h.
Figure 4: Images acquired with a light source at the five dose rates: (a) 1.0 Gy/h; (b) 10.0 Gy/h; (c) 20.0 Gy/h; (d) 50.0 Gy/h; (e) 100.0 Gy/h.
Figure 5: Comparison of the horizontal cross-section of Figure 4a (red solid line) with that of (a) Figure 4b (green solid line), (b) Figure 4c (blue solid line), (c) Figure 4d (pink solid line) and (d) Figure 4e (black solid line).
Figure 6: Histogram comparison of the dark and bright rectangular areas in (a) Figure 4a, (b) Figure 4b, (c) Figure 4c, (d) Figure 4d and (e) Figure 4e.
Figure 7: (a) SNR of the bright area versus radiation time at various dose rates; (b) SNR of the dark area versus radiation time at various dose rates.
4260 KiB  
Article
Proton Radiation Effects on Dark Signal Distribution of PPD CMOS Image Sensors: Both TID and DDD Effects
by Yuanyuan Xue, Zujun Wang, Wei Chen, Minbo Liu, Baoping He, Zhibin Yao, Jiangkun Sheng, Wuying Ma, Guantao Dong and Junshan Jin
Sensors 2017, 17(12), 2781; https://doi.org/10.3390/s17122781 - 30 Nov 2017
Cited by 7 | Viewed by 5117
Abstract
Four-transistor (4T) pinned photodiode (PPD) CMOS image sensors (CISs) with four-megapixel resolution, using an 11 µm pitch high dynamic range pixel, were irradiated with 3 MeV and 10 MeV protons. The dark signal was measured before and after irradiation, with the post-irradiation dark signal showing a remarkable increase. A theoretical model of the dark signal distribution before and after irradiation is used to analyze the degradation mechanisms, and the theoretical results are in good agreement with the experimental ones. This research provides a good understanding of proton radiation effects on the CIS and makes it possible to predict the dark signal distribution of the CIS in complex proton radiation environments. Full article
(This article belongs to the Special Issue Image Sensors)
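The abstract refers to a theoretical model of the dark-signal distribution combining total ionizing dose (TID) and displacement damage dose (DDD) contributions. The authors' analytical method is not reproduced here; as a loose illustration of the picture often used in the radiation-effects literature (TID adding a roughly Gaussian component, displacement damage adding an exponential hot-pixel tail from a Poisson number of interactions per pixel), a Monte-Carlo sketch with hypothetical parameters:

```python
import numpy as np

def simulate_dark_signal(n_pixels, mean_pre, sigma_tid, mu_interactions, exp_scale, rng=None):
    """Monte-Carlo sketch of a post-irradiation dark-signal distribution.

    Each pixel receives a TID-related component (Gaussian, surface damage) plus a
    DDD-related component built from a Poisson number of displacement-damage
    interactions, each contributing an exponentially distributed amount.
    Units are arbitrary (e.g. DN or e-/s).
    """
    rng = np.random.default_rng() if rng is None else rng
    tid = rng.normal(mean_pre, sigma_tid, n_pixels)
    n_hits = rng.poisson(mu_interactions, n_pixels)
    ddd = np.array([rng.exponential(exp_scale, k).sum() for k in n_hits])
    return tid + ddd
```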
Figures

Figure 1: Experimental setup for the CIS proton radiation test: (a) schematic diagram of the experiment; (b) photograph of the irradiation chamber.
Figure 2: 3-D surface plots of dark images from CIS #1 after 3 MeV proton irradiation (integration time: 61.56 ms): (a) before irradiation; (b) proton fluence 1 × 10¹⁰ p/cm²; (c) 5 × 10¹⁰ p/cm²; (d) 1 × 10¹¹ p/cm².
Figure 3: 3-D surface plots of dark images from CIS #1 after 10 MeV proton irradiation (integration time: 61.56 ms): (a) before irradiation; (b) proton fluence 1 × 10¹⁰ p/cm²; (c) 5 × 10¹⁰ p/cm²; (d) 1 × 10¹¹ p/cm².
Figure 4: Mean dark signal versus proton fluence at different integration times: (a) 3 MeV protons; (b) 10 MeV protons.
Figure 5: DSNU versus proton fluence at different integration times: (a) 3 MeV protons; (b) 10 MeV protons.
Figure 6: Dark signal distributions of the CISs after proton irradiation: (a) 3 MeV; (b) 10 MeV.
Figure 7: Main defects leading to the dark current increase after proton irradiation.
Figure 8: Dark signal distribution of the CIS after gamma irradiation.
Figure 9: Mean dark signal and DSNU of the CIS versus TID: (a) mean dark signal; (b) DSNU.
Figure 10: Experimental (points) and calculated (lines) distributions for the CISs after proton irradiation: (a) 3 MeV; (b) 10 MeV.
1787 KiB  
Article
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass
by Guoliang Han, Xiaoping Hu, Junxiang Lian, Xiaofeng He, Lilian Zhang, Yujie Wang and Fengliang Dong
Sensors 2017, 17(11), 2623; https://doi.org/10.3390/s17112623 - 14 Nov 2017
Cited by 48 | Viewed by 7003
Abstract
Animals such as Savannah sparrows and North American monarch butterflies are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by this navigation ability, this paper proposes a novel image-based polarized light compass with the advantages of small size and light weight. Firstly, the polarized light compass, composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method for the skylight polarization pattern and the orientation method based on a single-scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly comprising the response error of the CCD pixels and the installation error of the pixelated polarizer array, is established, and a calibration method based on iterative least-squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by the sensor. The orientation accuracy of the sensor increases as the solar elevation angle decreases, and the standard deviation of the orientation error is 0.15° at sunset. Outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. Full article
(This article belongs to the Special Issue Image Sensors)
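Each polarization measurement unit of a pixelated polarizer array provides four intensities behind 0°, 45°, 90° and 135° polarizers, from which the angle and degree of linear polarization are obtained via the linear Stokes parameters. A minimal sketch of that standard step follows (the paper's calibration of pixel response and polarizer installation errors would be applied beforehand and is not shown; array names are illustrative):

```python
import numpy as np

def polarization_pattern(i0, i45, i90, i135):
    """Per-pixel angle and degree of linear polarization from the four
    intensities of a 0/45/90/135-degree pixelated polarizer unit."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                     # total intensity
    s1 = i0 - i90                                           # 0/90 linear component
    s2 = i45 - i135                                         # 45/135 linear component
    aop = 0.5 * np.arctan2(s2, s1)                          # angle of polarization, rad
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)    # degree of polarization
    return aop, dop
```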
Figures

Figure 1: (a) The pixelated polarized light compass; (b) the mounting structure of the sensor; (c) layout design of the pixelated polarizers.
Figure 2: Intensity response of the Charge Coupled Device (CCD) pixels under linearly polarized light.
Figure 3: Description of the single-scattering Rayleigh model.
Figure 4: Schematic representation of the response of the CCD pixels.
Figure 5: Schematic representation of the installation error of the pixelated polarizer array.
Figure 6: Values of the four pixels in a polarization measurement unit as they fluctuate with the rotation of the turntable: (a) response curves before calibration; (b) response curves after calibration.
Figure 7: (a) Error of the AOP as the light intensity varies; (b) error of the DOP as the light intensity varies.
Figure 8: Orientation errors in the calibration process.
Figure 9: Skylight polarization patterns at four adjacent positions: (a) angle of polarization; (b) degree of polarization.
Figure 10: Comparison of three orientation methods.
3534 KiB  
Article
A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems
by Daehyeok Kim, Minkyu Song, Byeongseong Choe and Soo Youn Kim
Sensors 2017, 17(7), 1497; https://doi.org/10.3390/s17071497 - 25 Jun 2017
Cited by 16 | Viewed by 8151
Abstract
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of a CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images allow the CIS to reduce its total power consumption while the scene remains static, i.e., when no events occur. A prototype sensor of 176 × 144 pixels was fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) at a frame rate of 14 frames/s. Full article
(This article belongs to the Special Issue Image Sensors)
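The scaled-resolution modes read out only a subset of the pixel array while the scene is static. As a rough software analogue of such pixel sub-sampling (the exact mapping of the 1/2–1/64 modes onto row/column skipping is not given in the abstract, so the step factors below are assumptions):

```python
import numpy as np

def subsample(frame, row_step=1, col_step=1):
    """Keep every row_step-th row and every col_step-th column of the frame,
    mimicking a reduced-resolution readout mode."""
    return frame[::row_step, ::col_step]

# Example: a 176 x 144 frame (as in the prototype) read out with every other
# row and column skipped -- one plausible interpretation of a 1/4 mode.
frame = np.zeros((144, 176), dtype=np.uint8)
quarter = subsample(frame, row_step=2, col_step=2)   # 72 x 88 pixels
```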
Figures

Figure 1: A brief explanation of an intelligent surveillance system (ISS).
Figure 2: Pixel sub-sampling technique: (a) high-resolution mode; (b) low-resolution mode.
Figure 3: A block diagram of the proposed CIS.
Figure 4: Simplified operating principle of multi-mode pixel resolution with a 16 × 16 pixel array and ADC array for (a) high-resolution mode and (b) 1/16-resolution mode.
Figure 5: Operating principle of (a) the SS ADC and (b) the TSSS ADC.
Figure 6: Circuit diagram of (a) the conventional TSSS ADC with an analog CDS block and (b) the proposed TSSS ADC with a DDA.
Figure 7: Timing diagram of the proposed TSSS ADC.
Figure 8: Simulation results showing (a) row control signals and (b) the pixel output voltage (VIN) at the input of the column ADC array for different resolutions.
Figure 9: (a) Chip layout of the proposed CIS and (b) microphotograph of the fabricated CIS.
Figure 10: (a) Measured images for the multi-mode pixel resolutions and (b) measured images of a standard chart used to obtain the SNR.