Search Results (164)

Search Parameters:
Keywords = low-cost scanners

26 pages, 21893 KiB  
Article
An Example of Using Low-Cost LiDAR Technology for 3D Modeling and Assessment of Degradation of Heritage Structures and Buildings
by Piotr Kędziorski, Marcin Jagoda, Paweł Tysiąc and Jacek Katzer
Materials 2024, 17(22), 5445; https://doi.org/10.3390/ma17225445 - 7 Nov 2024
Viewed by 450
Abstract
This article examines the potential of low-cost LiDAR technology for 3D modeling and assessment of the degradation of historic buildings, using a section of the Koszalin city walls in Poland as a case study. Traditional terrestrial laser scanning (TLS) offers high accuracy but is expensive. The study assessed whether more accessible LiDAR options, such as those integrated into mobile devices like the Apple iPad Pro, can serve as viable alternatives. The study was conducted in two phases, first assessing measurement accuracy and then degradation detection, using tools such as the FreeScan Combo scanner and the Z+F 5016 IMAGER TLS. The results show that, while low-cost LiDAR is suitable for small-scale documentation, its accuracy decreases for larger, complex structures compared to TLS. Despite these limitations, the study suggests that low-cost LiDAR can reduce costs and improve access to heritage conservation, although further development of mobile applications is recommended.
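The accuracy assessment described in this abstract typically reduces to cloud-to-cloud (C2C) distances between the low-cost scan and a TLS reference cloud. The sketch below is not the authors' code; the function name and the synthetic data are illustrative, but the nearest-neighbor formulation is the standard one:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(test_cloud, reference_cloud):
    """Nearest-neighbor (C2C) distance from each test point to the reference cloud."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(test_cloud, k=1)
    return distances

# Synthetic example: a flat reference surface and a noisy "low-cost" scan of it
rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, size=(5000, 3))
reference[:, 2] = 0.0                                       # reference lies in the z=0 plane
test = reference[:2000] + rng.normal(0, 0.005, size=(2000, 3))  # ~5 mm simulated noise

d = cloud_to_cloud_distances(test, reference)
print(f"mean C2C: {d.mean()*1000:.1f} mm, RMS: {np.sqrt((d**2).mean())*1000:.1f} mm")
```

In practice the two clouds would first be co-registered; summary statistics like these are what tools such as CloudCompare report for C2C comparisons.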
Figures:
Figure 1. Location of the object under study.
Figure 2. City plan with the existing wall sections plotted on a current orthophotomap [17].
Figure 3. Six fragments of walls that survive today, numbered from 1 to 6.
Figure 4. Workflow of the research program.
Figure 5. Dimensions and weights of the equipment used.
Figure 6. Locations of scanner positions.
Figure 7. Point clouds obtained using TLS.
Figure 8. Measurement results from 3DScannerApp for fragments D and M.
Figure 9. Location of selected measurement markers. (a) View of fragment D. (b) View of fragment M.
Figure 10. Cross-section through the acquired point clouds in relation to the reference cloud (green): (a) 3DScannerApp; (b) Pix4DCatch Captured; (c) Pix4DCatch Depth; (d) Pix4DCatch Fused.
Figure 11. Measurement results from the SiteScape application.
Figure 12. Differences between Stages 1 and 2 for city wall fragment D.
Figure 13. Differences between Stages 1 and 2 for city wall fragment M.
Figure 14. Location of selected defects where degradation has occurred.
Figure 15. Defect W1 projected onto the plane.
Figure 16. Cross-sections through defect W1.
Figure 17. Defect W2 projected onto the plane.
Figure 18. Cross-sections through defect W2.
Figure 19. Defect W3 projected onto the plane.
Figure 20. Cross-sections through defect W3.
Figure 21. Defect W4 projected onto the plane.
Figure 22. Cross-sections through defect W4.
Figure 23. Differences between Stages 1 and 2 for measurements taken with a handheld scanner.
Figure 24. Defect W2 projected onto the plane (handheld scanner).
Figure 25. Cross-sections through defect W2 (handheld scanner).
Figure 26. Defect W3 projected onto the plane (handheld scanner).
Figure 27. Cross-sections through defect W3 (handheld scanner).
Figure 28. Defect W4 projected onto the plane (handheld scanner).
Figure 29. Cross-sections through defect W4 (handheld scanner).
Figure 30. Example path of a single measurement with marked sample positions of the device.
Figure 31. Examples of errors created at corners with the device's trajectory marked: (a) SiteScape; (b) 3DScannerApp.
14 pages, 7140 KiB  
Article
Hybrid Reconstruction Approach for Polychromatic Computed Tomography in Highly Limited-Data Scenarios
by Alessandro Piol, Daniel Sanderson, Carlos F. del Cerro, Antonio Lorente-Mur, Manuel Desco and Mónica Abella
Sensors 2024, 24(21), 6782; https://doi.org/10.3390/s24216782 - 22 Oct 2024
Viewed by 520
Abstract
Conventional strategies for mitigating beam-hardening artifacts in computed tomography (CT) fall into two main approaches: (1) postprocessing following conventional reconstruction and (2) iterative reconstruction incorporating a beam-hardening model. While the former fails in low-dose and/or limited-data cases, the latter substantially increases computational cost. Although deep learning-based methods have been proposed for several cases of limited-data CT, few works in the literature have dealt with beam-hardening artifacts, and none have addressed the problems caused by randomly selected projections and a highly limited span. We propose the deep learning-based prior image constrained (PICDL) framework, a hybrid method that yields CT images free from beam-hardening artifacts in different limited-data scenarios. It combines a modified version of the Prior Image Constrained Compressed Sensing (PICCS) algorithm that incorporates the L2 norm (L2-PICCS) with a prior image generated by applying a deep learning (DL) model to a preliminary FDK reconstruction. The model is a modification of the U-Net architecture in which ResNet-34 replaces the original encoder. Evaluation with rodent head studies in a small-animal CT scanner showed that the proposed method corrected beam-hardening artifacts, recovered patient contours, and compensated for streak and deformation artifacts in scenarios with a limited span and a limited number of randomly selected projections. Hallucinations present in the prior image caused by the deep learning model were eliminated, while the target information was effectively recovered by the L2-PICCS algorithm.
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
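The prior-constrained idea behind L2-PICCS can be illustrated on a toy linear inverse problem. This is a sketch under strong simplifying assumptions, not the authors' implementation: a dense random matrix stands in for the CT projection operator, and an all-L2 objective admits a closed-form solve instead of the paper's iterative reconstruction; all names are hypothetical.

```python
import numpy as np

def l2_prior_solve(A, b, x_prior, lam=1.0, alpha=0.5):
    """Closed-form minimizer of
       ||A x - b||^2 + lam * ((1 - alpha) * ||x||^2 + alpha * ||x - x_prior||^2).
    Setting the gradient to zero gives (A^T A + lam I) x = A^T b + lam*alpha*x_prior."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n),
                           A.T @ b + lam * alpha * x_prior)

# Underdetermined toy problem: 30 "projections" of a 60-sample signal
rng = np.random.default_rng(1)
x_true = np.zeros(60)
x_true[20:40] = 1.0
A = rng.normal(size=(30, 60))
b = A @ x_true
x_prior = x_true + rng.normal(0, 0.1, 60)   # imperfect, DL-style prior image

x_no_prior = l2_prior_solve(A, b, np.zeros(60), lam=1.0, alpha=0.0)
x_with_prior = l2_prior_solve(A, b, x_prior, lam=1.0, alpha=1.0)
err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error without prior: {err(x_no_prior):.3f}, with prior: {err(x_with_prior):.3f}")
```

Even with only half as many measurements as unknowns, the prior term pulls the solution toward the (imperfect) prior image, which is the mechanism the abstract describes for recovering target information while suppressing limited-data artifacts.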
Figures:
Figure 1. Flowchart of the proposed method.
Figure 2. Proposed U-Net network architecture.
Figure 3. Central axial slice of the reconstructions of one of the test cases for the target (A) and the SD (B), LD (C), LSA (D), and LNP (E) scenarios.
Figure 4. Top: central axial slices of the FDK reconstructions and DeepBH results for the SD and LD scenarios for the two test studies. Bottom: zoomed-in images. Arrows in the zoomed-in images point to beam hardening (first column) and streaks (third column).
Figure 5. Mean and standard deviation of PSNR, SSIM, and CC values calculated for the SD and LD scenarios in each slice.
Figure 6. LNP scenario of 42 random projections with random seed = 42 (top) and random seed = 33 (bottom). Axial slices of DeepBH (A,E), prior images (B,F), SART-PICCS (C,G), and PICDL (D,H). Arrows indicate hallucinations.
Figure 7. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in the LSA scenario of 120 and 130 projections, respectively. Arrows indicate the LSA artifacts.
Figure 8. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in the LNP scenario of 49 and 42 projections, respectively. Arrows indicate hallucinations.
24 pages, 3934 KiB  
Article
A Multi-Scale Covariance Matrix Descriptor and an Accurate Transformation Estimation for Robust Point Cloud Registration
by Fengguang Xiong, Yu Kong, Xinhe Kuang, Mingyue Hu, Zhiqiang Zhang, Chaofan Shen and Xie Han
Appl. Sci. 2024, 14(20), 9375; https://doi.org/10.3390/app14209375 - 14 Oct 2024
Viewed by 686
Abstract
This paper presents a robust point cloud registration method based on a multi-scale covariance matrix descriptor and an accurate transformation estimation. Compared with state-of-the-art feature descriptors such as FPH, 3DSC, and spin image, our multi-scale covariance matrix descriptor is better suited to registration problems in noisier environments, since the mean operation used in generating the covariance matrix filters out most noise-damaged samples and outliers, making the descriptor itself robust to noise. Compared with transformation estimation methods such as feature matching, clustering, ICP, and RANSAC, our approach finds a better optimal transformation between a pair of point clouds because it is a multi-level estimator combining feature matching, coarse transformation estimation based on clustering, and fine transformation estimation based on ICP. Experimental findings reveal that our feature descriptor and transformation estimation outperform state-of-the-art alternatives, and that registration based on our framework is highly effective on the Stanford 3D Scanning Repository, the SpaceTime dataset, and the Kinect dataset. The Stanford 3D Scanning Repository is known for its comprehensive collection of high-quality 3D scans, while the SpaceTime and Kinect datasets were captured by a SpaceTime Stereo scanner and a low-cost Microsoft Kinect scanner, respectively.
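A covariance matrix descriptor in the spirit of this abstract can be sketched as follows. The feature-vector choice here (relative position plus distance to the keypoint) and all names are illustrative assumptions, not the paper's exact construction; the mean subtraction is the averaging step the authors credit with noise robustness.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_descriptor(points, keypoint_idx, radius):
    """Covariance matrix of per-point feature vectors in a keypoint's neighborhood.
    Feature per neighbor (illustrative): relative position + distance to keypoint."""
    tree = cKDTree(points)
    idx = tree.query_ball_point(points[keypoint_idx], r=radius)
    rel = points[idx] - points[keypoint_idx]
    dist = np.linalg.norm(rel, axis=1, keepdims=True)
    F = np.hstack([rel, dist])                 # one 4-D feature vector per neighbor
    F = F - F.mean(axis=0)                     # mean subtraction averages out noise
    return (F.T @ F) / max(len(idx) - 1, 1)   # 4x4 covariance descriptor

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, size=(2000, 3))
D = covariance_descriptor(cloud, keypoint_idx=0, radius=0.2)
print(D.shape)
```

A multi-scale variant would simply stack descriptors computed at several radii; descriptor matching then compares these symmetric positive semi-definite matrices, e.g. with a log-Euclidean metric.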
Figures:
Figure 1. The framework of our point cloud registration.
Figure 2. Distribution of a boundary point and a non-boundary point with their neighboring points.
Figure 3. Geometric relations α, β, and γ between a keypoint p and one of its neighboring points.
Figure 4. Samples of point clouds from our dataset.
Figure 5. Boundary points under various differences between adjacent included angles.
Figure 6. Keypoints on different point clouds. (a) Keypoints on illustration 1 with boundary points retained. (b) Keypoints on illustration 1 with boundary points removed. (c) Keypoints on illustration 2 with boundary points retained. (d) Keypoints on illustration 2 with boundary points removed.
Figure 7. Performance of the covariance matrix descriptor formed by different feature vectors under different noise conditions.
Figure 8. Performance comparison between our covariance matrix descriptor and state-of-the-art feature descriptors under different noise conditions.
Figure 9. The datasets used in the experiments.
15 pages, 6871 KiB  
Article
A Trianalyte µPAD for Simultaneous Determination of Iron, Zinc, and Manganese Ions
by Barbara Rozbicka, Robert Koncki and Marta Fiedoruk-Pogrebniak
Molecules 2024, 29(20), 4805; https://doi.org/10.3390/molecules29204805 - 11 Oct 2024
Viewed by 454
Abstract
In this work, a microfluidic paper-based analytical device (µPAD) for simultaneous detection of Fe, Zn, and Mn ions is presented, using the immobilized chromogenic reagents Ferene S, xylenol orange, and 1-(2-pyridylazo)-2-naphthol, respectively. Because the effective recognition of the analytes by their respective chromogens takes place under extremely different pH conditions, the experiments reported in this publication focus on optimizing the µPAD architecture to eliminate potential cross effects. The paper-based microfluidic device was fabricated using low-cost, highly reproducible wax-printing technology. For optical detection of color changes, an ordinary office scanner and a self-made RGB data-processing program were applied. The optimized µPADs are stable over time and allow fast, selective, and reproducible multianalyte determination at submillimolar levels of the respective heavy metal ions, which was confirmed by analysis of solutions mimicking real wastewater samples. The presented concept of simultaneously determining analytes that require extremely different detection conditions can be useful for the development of other multianalyte microfluidic paper-based devices in the µPAD format.
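Scanner-based colorimetric readout of the kind the self-made RGB program performs can be approximated by averaging channel values inside each detection zone; a calibration curve then relates the color change to concentration. A minimal sketch with synthetic data (the zone geometry, colors, and function name are made up for illustration):

```python
import numpy as np

def zone_mean_rgb(image, center, radius):
    """Mean R, G, B values inside a circular detection zone of a scanned µPAD image."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].mean(axis=0)

# Synthetic scan: white paper with one colored detection zone
img = np.full((200, 200, 3), 255.0)
yy, xx = np.mgrid[0:200, 0:200]
img[(yy - 100) ** 2 + (xx - 100) ** 2 <= 30 ** 2] = [230, 120, 40]

r, g, b = zone_mean_rgb(img, center=(100, 100), radius=25)
print(f"mean RGB in zone: ({r:.0f}, {g:.0f}, {b:.0f})")
```

Sampling a radius slightly smaller than the printed zone, as here, avoids averaging in the wax boundary; repeating the readout over a concentration series yields the calibration graphs shown in the figures.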
Figures:
Figure 1. Detection zones after the colorimetric detection reactions for (a) iron(II), (b) zinc(II), and (c) manganese(II) ions (top) and the corresponding calibration graphs (bottom).
Figure 2. (a) µPAD shape with two detection zones; (b) calibration curves after adding standard solutions of Mn²⁺ to the detection zone with PAR immobilized; (c) photo of the tested µPADs (concentrations of Mn²⁺ given in mmol/L).
Figure 3. (a) Scanned detection zones with PAN immobilized after deposition of Mn²⁺ standards; (b) the corresponding calibration curve with two linear ranges.
Figure 4. (a) Comparison of µPADs with PAR (top) and PAN (bottom) at the Mn detection zones; each standard was tested three times (3 consecutive repetitions of the same standard in rows); Ferene S was deposited on the other side of the detection zone. (b) The second configuration of a bianalyte µPAD with Mn and Fe detection zones and a sample division zone. Standard concentrations used for these experiments are given in Table 1.
Figure 5. (a) µPAD shape with three detection zones and a sample division zone; (b) layer-by-layer view of the designed µPAD; (c) photo of the designed µPAD (after the reaction with all ions). Numbers refer to the following: 1, cold laminating foil; 2, connector between the sample zone and the sample division zone; 3, Mn²⁺ detection zone/sample zone; 4, sample division zone; 5, Fe²⁺ detection zone.
Figure 6. Calibration curves obtained using a trianalyte µPAD for (a) Fe²⁺, (b) Zn²⁺, and (c) Mn²⁺ ions, using standards containing a mix of the three ions.
Figure 7. Calibration curves obtained using systems stored for two weeks at room temperature (a–c) and at 4 °C (d–f) for (a,d) Fe²⁺, (b,e) Zn²⁺, and (c,f) Mn²⁺ ions. Below are the obtained equations and R² for each calibration curve.
13 pages, 3708 KiB  
Article
Nonlinear Modeling of a Piezoelectric Actuator-Driven High-Speed Atomic Force Microscope Scanner Using a Variant DenseNet-Type Neural Network
by Thi Thu Nguyen, Luke Oduor Otieno, Oyoo Michael Juma, Thi Ngoc Nguyen and Yong Joong Lee
Actuators 2024, 13(10), 391; https://doi.org/10.3390/act13100391 - 2 Oct 2024
Viewed by 532
Abstract
Piezoelectric actuators (PEAs) are extensively used for scanning and positioning in scanning probe microscopy (SPM) due to their high precision, simple construction, and fast response. However, their nonlinear properties pose significant challenges for instrument designers, making precise and accurate control difficult in cases where position feedback sensors cannot be employed. The performance of PEA-driven scanners can nevertheless be significantly improved without position feedback sensors if an accurate mathematical model with low computational cost is applied to reduce hysteresis and other nonlinear effects. Various methods have been proposed for modeling PEAs, but most are limited in accuracy or computational efficiency. In this research, we propose a variant DenseNet-type neural network (NN) model for modeling PEAs in an AFM scanner where position feedback sensors are not available. To improve the performance of this model, the forward and backward directions are mapped separately. The experimental results demonstrate the efficacy of the proposed model, reducing the relative root-mean-square (RMS) error to less than 0.1%.
(This article belongs to the Section Actuator Materials)
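The headline figure of merit, relative RMS error, is commonly computed as the RMS residual normalized by the full measurement range. The abstract does not spell out its normalization, so this sketch assumes that common definition; the trace and units are synthetic.

```python
import numpy as np

def relative_rms_error(measured, predicted):
    """RMS error between measured and predicted signals, normalized by the
    measured signal's full range, expressed in percent."""
    rms = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rms / (measured.max() - measured.min())

# Synthetic displacement sweep and a near-perfect model output (hypothetical units)
t = np.linspace(0, 2 * np.pi, 500)
displacement = 10 * (1 - np.cos(t)) / 2            # 0..10 um sweep
prediction = displacement + 0.005 * np.sin(5 * t)  # small residual model error

print(f"relative RMS error: {relative_rms_error(displacement, prediction):.3f}%")
```

With a residual amplitude of 5 nm over a 10 µm range, the metric lands well below the 0.1% threshold the abstract reports.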
Figures:
Figure 1. Experimental setup used to collect data for analyzing the hysteresis of the piezoelectric actuator.
Figure 2. Hysteresis curve between the input voltage and the displacement for the X-axis of the homemade scanner.
Figure 3. Structure of a fully connected layer model.
Figure 4. Mathematical formulation behind an ANN node.
Figure 5. Structure of the variant DenseNet-type fully connected model.
Figure 6. Identification process.
Figure 7. Hysteresis curves fitted with a DenseNet-type neural network for (a) the X-axis and (b) the Y-axis.
Figure 8. Comparison of uncompensated/compensated trajectories with desired trajectories (left) and hysteresis error (right) for different driving voltages: (a) 100 V, (b) 75 V, (c) 50 V, (d) 25 V for the X-axis and (e) 100 V, (f) 75 V, (g) 50 V, (h) 25 V for the Y-axis.
Figure 9. Tapping-mode images of DVD data tracks obtained at 1 Hz with (a) an uncompensated scanner (trace) and (b) an uncompensated scanner (retrace). (c) Line profile for the red and blue lines in (a,b). Tapping-mode images of DVD data tracks obtained at 1 Hz with (d) a compensated scanner (trace) and (e) a compensated scanner (retrace). (f) Line profile for the red and blue lines in (d,e).
Figure 10. Tapping-mode images of DVD data tracks obtained at 5 Hz with (a) an uncompensated scanner (trace), (b) an uncompensated scanner (retrace), (c) a compensated scanner (trace), and (d) a compensated scanner (retrace), and at 30 Hz with (e) an uncompensated scanner (trace), (f) an uncompensated scanner (retrace), (g) a compensated scanner (trace), and (h) a compensated scanner (retrace).
19 pages, 33004 KiB  
Article
Laboratory Tests of Metrological Characteristics of a Non-Repetitive Low-Cost Mobile Handheld Laser Scanner
by Bartosz Mitka, Przemysław Klapa and Pelagia Gawronek
Sensors 2024, 24(18), 6010; https://doi.org/10.3390/s24186010 - 17 Sep 2024
Viewed by 3081
Abstract
The popularity of mobile laser scanning systems as a surveying tool is growing among construction contractors, architects, land surveyors, and urban planners. Their user-friendliness and rapid capture of precise, complete data on places and objects make them serious competitors to traditional surveying approaches. Considering the low cost and constantly improving availability of Mobile Laser Scanning (MLS), mainly handheld surveying tools, the measurement possibilities seem unlimited. We conducted a comprehensive investigation into the quality and accuracy of the point cloud generated by a recently marketed low-cost mobile surveying system, the MandEye MLS. The purpose of the study was to conduct exhaustive laboratory tests to determine the actual metrological characteristics of the device. The test facility was the surveying laboratory of the University of Agriculture in Kraków. The results of the MLS measurements (dynamic and static) were compared with a reference base, a geometric system of reference points in the laboratory, and with a reference point cloud from a higher-class laser scanner, the Leica ScanStation P40 TLS. The authors verified the geometry of the point cloud, its technical parameters, and its data structure, and assessed whether it can be used for surveying and mapping objects by evaluating point cloud density, noise and measurement errors, and the detectability of objects in the cloud.
(This article belongs to the Section Sensing and Imaging)
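One of the metrological characteristics tested here, point cloud noise, is often quantified as the RMS residual of points scanned on a flat surface relative to their least-squares plane. A sketch of that computation on synthetic data (this is a generic procedure, not the authors' exact protocol):

```python
import numpy as np

def plane_fit_noise(points):
    """RMS distance of points to their least-squares plane, a common way to
    quantify scanner noise on a flat test surface."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return np.sqrt(np.mean(residuals ** 2))

# Simulated scan of a 2 m x 2 m wall patch with 3 mm Gaussian range noise
rng = np.random.default_rng(3)
flat = rng.uniform(0, 2, size=(5000, 2))
noise = rng.normal(0, 0.003, 5000)
scan = np.column_stack([flat, noise])

print(f"estimated noise: {plane_fit_noise(scan)*1000:.2f} mm")
```

The SVD-based plane fit is orientation-independent, so the same routine works for walls, floors, and ceilings without choosing an axis in advance.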
Figures:
Figure 1. The laboratory: (a) general view, (b) survey points.
Figure 2. Test laboratory.
Figure 3. The Livox Mid-360 sensor: (a) the sensor; (b) scanning process; (c) point cloud patterns of the Livox Mid-360 accumulated over different integration times; source: Livox Mid-360 User Manual v1.2, 2024.
Figure 4. Measurement range of the scanner; source: https://www.livoxtech.com/mid-360 (accessed on 15 May 2024) [36].
Figure 5. The measuring suite: (a) MandEye MLS, source: www.datcap.eu (accessed on 1 July 2024) [39]; (b) Leica P40 TLS, source: www.leica-geosystems.com (accessed on 5 July 2024) [38].
Figure 6. Basic measurement series, data sampling: (a) the smallest part of the measurement, four measurement lines, the first measured points; (b) a single measurement series.
Figure 7. Sampling resolution: (a) vertical; (b) horizontal.
Figure 8. Noise analysis for an object at (a) 5 m; (b) 15 m.
Figure 9. Noise distribution (blue) in relation to the point cloud (red) for (a) the isometric view, (b) a long-section, and (c) a cross-section of the object.
Figure 10. MandEye scans: (a) static mode; well-defined points in the space of the object: (b) black-and-white targets on the walls; (c) black-and-white targets on the ceiling; (d) reference spheres.
Figure 11. Identification of the geometric center of black-and-white target T002: (a) Leica ScanStation P40, (b) MandEye, static mode, (c) MandEye, dynamic mode.
Figure 12. Identification of the geometric center of a black-and-white target: (a) MandEye, static mode, (b) MandEye, dynamic mode.
Figure 13. Identification of the geometric center of a reference sphere: (a) Leica ScanStation P40, (b) MandEye, static mode, (c) MandEye, dynamic mode.
10 pages, 2120 KiB  
Article
Development of a Scanning Protocol for Anthropological Remains: A Preliminary Study
by Matteo Orsi, Roberta Fusco, Alessandra Mazzucchi, Roberto Taglioretti, Maurizio Marinato and Marta Licata
Heritage 2024, 7(9), 4997-5006; https://doi.org/10.3390/heritage7090236 - 10 Sep 2024
Viewed by 623
Abstract
Structured-light scanning is a fast and efficient technique for the acquisition of 3D point clouds. However, the extensive, daily application of this class of scanners can be challenging because of the technical know-how needed to validate low-cost instrumentation. The challenge is worth accepting because of the large amount of data that can be collected accurately with the aid of specific technical protocols. This work is a preliminary study for the development of an acquisition protocol for anthropological remains, with tests performed in two opposite, extreme contexts: one characterised by a dark environment and one located in an open area with a very bright environment. The second context showed the influence of sunlight on the acquisition process, resulting in a colourless point cloud. This is a first step towards a technical protocol for the acquisition of anthropological remains, based on identifying the limits and problems associated with the instrument.
(This article belongs to the Section Archaeological Heritage)
Figures:
Figure 1. Structured-light Einscan-Portable Handheld 3D Scanner.
Figure 2. Funerary Unit 1 from the Church of Santa Maria Maggiore in Vercelli.
Figure 3. Tb10 of “Rocca di Monselice” immediately before the digital acquisition.
Figure 4. The point cloud obtained from the acquisition of Funerary Unit 1 from the Church of Santa Maria Maggiore in Vercelli.
Figure 5. The point cloud obtained from the acquisition of Tb10 from “Rocca di Monselice” at an intermediate level during the recovery of the skeletons.
Scheme 1. Scheme of context 1 and the surrounding area with the identified planes (numbered 1 to 5). The blue rectangle indicates the joint area between plane 3 and plane 1 and between plane 3 and plane 2.
13 pages, 1907 KiB  
Review
A Review of 3D Modalities Used for the Diagnosis of Scoliosis
by Sampath Kumar, Bhaskar Awadhiya, Rahul Ratnakumar, Ananthakrishna Thalengala, Anu Shaju Areeckal and Yashwanth Nanjappa
Tomography 2024, 10(8), 1192-1204; https://doi.org/10.3390/tomography10080090 - 2 Aug 2024
Viewed by 1169
Abstract
Spine radiographs in the standing position are the recommended standard for diagnosing idiopathic scoliosis. Though the deformity exists in 3D, it is currently diagnosed with 2D radiographs due to the unavailability of an efficient, low-cost 3D alternative. Computed tomography (CT) and magnetic resonance imaging (MRI) are not suitable in this case, as they are obtained in the supine position. Research on 3D modelling of the scoliotic spine began with multiplanar radiographs and later moved on to biplanar radiographs and, finally, a single radiograph. Modern advances in diagnostic imaging have the potential to preserve image quality while decreasing radiation exposure; they include the DIERS formetric scanner system, the EOS imaging system, and ultrasonography. This review article briefly explains the technology behind each of these methods and compares them with standard imaging techniques. The DIERS system and ultrasonography are radiation free but limited in the quality of the 3D model obtained. There is a need for a 3D imaging technology with little or no radiation exposure that can produce a quality 3D model for diseases like adolescent idiopathic scoliosis. Accurate 3D models are crucial in clinical practice for diagnosis, surgical planning, patient follow-up examinations, biomechanical applications, and computer-assisted surgery.
Figures:
Figure 1. A 3D reconstruction from multiplanar radiographs. (a) Radiographs in three views, (b) 3D reconstruction, (c) 3D reconstruction of vertebrae.
Figure 2. (a) Stereo-corresponding points on the vertebra used for SCP reconstruction, (b) sample biplanar radiographs, (c) SCP reconstruction [28]. SCP: stereo-corresponding point.
Figure 3. (a) Non-stereo-corresponding points on the vertebra used for NSCP reconstruction, (b) sample NSCP reconstruction [35]. NSCP: non-stereo-corresponding point.
Figure 4. 3D reconstruction of the spine from a single radiograph. (a) Frontal view, (b) lateral view, (c) test setup.
Figure 5. DIERS formetric 4D surface topography scan. (a) The left and right sacral dimples, (b) images obtained by the DIERS system, (c) verification that the dimples were precisely localized on the 3D model constructed by the DIERS system [43].
Figure 6. EOS imaging system. (a) Patient positioning during the exam, (b) basic operation, (c) 3D modelling of the spine and lower limb, (d) example of a typical EOS lower-limb pair [52].
29 pages, 19723 KiB  
Article
A Novel Approach for As-Built BIM Updating Using Inertial Measurement Unit and Mobile Laser Scanner
by Yuchen Yang, Yung-Tsang Chen, Craig Hancock, Nicholas A. S. Hamm and Zhiang Zhang
Remote Sens. 2024, 16(15), 2743; https://doi.org/10.3390/rs16152743 - 26 Jul 2024
Viewed by 633
Abstract
Building Information Modeling (BIM) has recently been widely applied in the Architecture, Engineering, and Construction (AEC) industry. BIM graphical information can provide a more intuitive display of the building and its contents. However, during the Operation and Maintenance (O&M) stage of the building lifecycle, changes may occur in the building’s contents and cause inaccuracies in the BIM model, which could lead to inappropriate decisions. This study aims to address this issue by proposing a novel approach to creating 3D point clouds for updating as-built BIM models. The proposed approach is based on Pedestrian Dead Reckoning (PDR) with an Inertial Measurement Unit (IMU) integrated with a Mobile Laser Scanner (MLS) to create room-based 3D point clouds. Unlike conventional methods, in which a Terrestrial Laser Scanner (TLS) is used, the proposed approach utilizes a low-cost MLS in combination with an IMU to replace the TLS for indoor scanning. The approach eliminates the process of selecting scanning points and leveling the TLS, enabling a more efficient and cost-effective creation of the point clouds. Scanning of three buildings with varying sizes and shapes was conducted. The results indicated that the proposed approach created room-based 3D point clouds with centimeter-level accuracy; it also proved to be more efficient than the TLS in updating the BIM models. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
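The PDR component in this abstract advances the scanner's position one detected step at a time along the IMU-estimated heading. A minimal sketch of that update, assuming a fixed step length and headings in degrees (both hypothetical values, not the paper's calibrated model):

```python
import math

def pdr_step(x, y, step_length, heading_deg):
    """One Pedestrian Dead Reckoning update: move the current (x, y)
    position forward by one step along the IMU-estimated heading
    (0 deg = north/+y, 90 deg = east/+x)."""
    h = math.radians(heading_deg)
    return x + step_length * math.sin(h), y + step_length * math.cos(h)

# Walk four 0.7 m steps east, then four steps north.
x = y = 0.0
for heading in [90.0] * 4 + [0.0] * 4:
    x, y = pdr_step(x, y, step_length=0.7, heading_deg=heading)
print(round(x, 2), round(y, 2))  # → 2.8 2.8
```

In practice, the step length itself is estimated from accelerometer peaks and the heading from the fused IMU orientation; errors accumulate with distance, which is why the paper registers the resulting trajectory against a reference BIM model.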
15 pages, 6715 KiB  
Article
Real-Time Elemental Analysis Using a Handheld XRF Spectrometer in Scanning Mode in the Field of Cultural Heritage
by Anastasios Asvestas, Demosthenis Chatzipanteliadis, Theofanis Gerodimos, Georgios P. Mastrotheodoros, Anastasia Tzima and Dimitrios F. Anagnostopoulos
Sustainability 2024, 16(14), 6135; https://doi.org/10.3390/su16146135 - 18 Jul 2024
Viewed by 992
Abstract
An X-ray fluorescence handheld spectrometer (hh-XRF) is adapted for real-time qualitative and quantitative elemental analysis in scanning mode for applications in cultural heritage. Specifically, the Tracer-5i (Bruker) is coupled with a low-cost, computer-controlled x–y target stage that enables remote control of the target’s movement under the ionizing X-ray beam. Open-source software synchronizes the spectrometer’s measuring functions and handles data acquisition and analysis. The spectrometer’s analytical capabilities, such as sensitivity, energy resolution, beam spot size, and characteristic transition intensity as a function of the distance between the spectrometer and the target, are evaluated. The XRF scanner’s potential in real-time imaging, object classification, and quantitative analysis in cultural heritage-related applications is explored, and the imaging capabilities are tested by scanning a 19th-century religious icon. The elemental maps provide information on the pigments used and reveal an underlying icon. The scanner’s capability to classify metallic objects was verified by analyzing the measured raw spectra of a coin collection using Principal Component Analysis. Finally, the handheld spectrometer’s capability to perform quantitative analysis in scanning mode is demonstrated in the case of precious metals, applying a pre-installed quantification routine. Full article
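The real-time elemental maps described above rest on a region-of-interest (ROI) method: for each scanned pixel, the counts in the spectral channels covering a characteristic transition are summed. A minimal sketch, assuming a hypothetical 0.02 keV/channel calibration and synthetic counts (the Tracer-5i's actual binning may differ):

```python
def roi_intensity(spectrum, e_lo_kev, e_hi_kev, kev_per_channel=0.02):
    """Sum the counts in the channels spanning one characteristic
    transition, e.g. Cu K-alpha at ~8.05 keV."""
    lo = round(e_lo_kev / kev_per_channel)
    hi = round(e_hi_kev / kev_per_channel)
    return sum(spectrum[lo:hi + 1])

def roi_map(spectra_grid, e_lo_kev, e_hi_kev):
    """Elemental distribution map: apply the ROI sum to the spectrum
    measured at every pixel of the scan."""
    return [[roi_intensity(s, e_lo_kev, e_hi_kev) for s in row]
            for row in spectra_grid]

# Toy 1x2 scan: only the first pixel contains copper counts near 8.05 keV.
cu_pixel, empty_pixel = [0] * 1024, [0] * 1024
cu_pixel[402] = 120                                  # channel 402 ≈ 8.04 keV
print(roi_map([[cu_pixel, empty_pixel]], 7.9, 8.2))  # → [[120, 0]]
```

The ROI sum is fast enough for live display during scanning; full spectral fitting (as done offline with PyMca in the paper) separates overlapping transitions that a simple window cannot.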
14 pages, 26108 KiB  
Article
A One-Dimensional Light Detection and Ranging Array Scanner for Mapping Turfgrass Quality
by Arthur Rosenfield, Alexandra Ficht, Eric M. Lyons and Bahram Gharabaghi
Remote Sens. 2024, 16(12), 2215; https://doi.org/10.3390/rs16122215 - 19 Jun 2024
Viewed by 960
Abstract
The turfgrass industry supports golf courses, sports fields, and the landscaping and lawn care industries worldwide. Identifying the problem spots in turfgrass is crucial for targeted remediation for turfgrass treatment. There have been attempts to create vehicle- or drone-based scanners to predict turfgrass quality; however, these methods often have issues associated with high costs and/or a lack of accuracy due to using colour rather than grass height (R2 = 0.30 to 0.90). The new vehicle-mounted turfgrass scanner system developed in this study allows for faster data collection and a more accurate representation of turfgrass quality compared to currently available methods while being affordable and reliable. The Gryphon Turf Canopy Scanner (GTCS), a low-cost one-dimensional LiDAR array, was used to scan turfgrass and provide information about grass height, density, and homogeneity. Tests were carried out over three months in 2021, with ground-truthing taken during the same period. When utilizing non-linear regression, the system could predict the percent bare of a field (R2 = 0.47, root mean square error < 0.5 mm) with an increase in accuracy of 8% compared to the random forest metric. The potential environmental impact of this technology is vast, as a more targeted approach to remediation would reduce water, fertilizer, and herbicide usage. Full article
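The regression model in this abstract predicts percent bare from two per-plot features of the LiDAR heights: the normalized mean (M) and normalized standard deviation (S). A sketch of the feature step; the polynomial surface and its coefficients here are hypothetical placeholders, not the paper's fitted model:

```python
import statistics

def height_features(heights_mm):
    """Normalized mean (M) and standard deviation (S) of the
    LiDAR-measured grass heights within one 2 x 2 m grid square."""
    h_max = max(heights_mm)
    m = statistics.mean(heights_mm) / h_max
    s = statistics.pstdev(heights_mm) / h_max
    return m, s

def percent_bare(m, s, coef):
    """Second-order polynomial surface in (M, S); in practice the
    coefficients would be fitted against point-quadrat ground truth."""
    c0, c1, c2, c3, c4, c5 = coef
    return c0 + c1 * m + c2 * s + c3 * m * s + c4 * m**2 + c5 * s**2

# Mixed plot: mostly healthy grass (~22 mm) with two bare spots (~5 mm).
m, s = height_features([22, 24, 4, 23, 21, 5])
print(round(m, 3), round(s, 3))
```

Normalizing by the maximum height makes the features comparable across mowing dates, which is one plausible reason height-based features generalize better than colour-based ones.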
25 pages, 69125 KiB  
Article
Quality Analysis of 3D Point Cloud Using Low-Cost Spherical Camera for Underpass Mapping
by Sina Rezaei, Angelina Maier and Hossein Arefi
Sensors 2024, 24(11), 3534; https://doi.org/10.3390/s24113534 - 30 May 2024
Viewed by 849
Abstract
Three-dimensional point cloud evaluation is used in photogrammetry to validate and assess the accuracy of data acquisition in order to generate various three-dimensional products. This paper determines the optimal accuracy and correctness of a 3D point cloud produced by a low-cost spherical camera in comparison to the 3D point cloud produced by a laser scanner. The fisheye images were captured from a chessboard using a spherical camera, which was calibrated using the commercial Agisoft Metashape software (version 2.1). For this purpose, the results of different calibration methods are compared. For data acquisition, multiple images were captured inside the case study structure (an underpass in Wiesbaden, Germany) in different configurations, with the aim of an optimal network design for camera location and orientation. The relative orientation was computed from the multiple images, and noise was then removed from the resulting point cloud. For assessment purposes, the same scene was captured with a laser scanner to enable a metric comparison between the laser scanner point cloud and the spherical one. The geometric features of both point clouds were analyzed for a complete geometric quality assessment. In conclusion, this study highlights the promising capabilities of low-cost spherical cameras for capturing and generating high-quality 3D point clouds by conducting a thorough analysis of the geometric features and accuracy assessments of the absolute and relative orientations of the generated clouds. This research demonstrated the applicability of spherical camera-based photogrammetry to challenging structures, such as underpasses with limited space for data acquisition, and achieved a 0.34 RMS re-projection error in the relative orientation step and a ground control point accuracy of nearly 1 mm. Compared to the laser scanner point cloud, the spherical point cloud reached an average distance of 0.05 m and acceptable geometric consistency. Full article
(This article belongs to the Section Sensing and Imaging)
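The cloud-to-cloud (C2C) metric behind the reported 0.05 m average assigns each point in one cloud the distance to its nearest neighbour in the reference cloud. A brute-force sketch with synthetic toy clouds (production tools such as CloudCompare use spatial indexing instead of an exhaustive search):

```python
import math

def c2c_distances(source, reference):
    """Cloud-to-cloud (C2C) distances: for every point in `source`,
    the Euclidean distance to its nearest neighbour in `reference`."""
    return [min(math.dist(p, q) for q in reference) for p in source]

# Toy clouds: a planar 11x11 grid and a copy shifted by 5 cm along x.
reference = [(x / 10, y / 10, 0.0) for x in range(11) for y in range(11)]
shifted = [(x + 0.05, y, z) for (x, y, z) in reference]

d = c2c_distances(shifted, reference)
print(round(sum(d) / len(d), 3))  # → 0.05
```

C2C is sensitive to sampling density and outliers, which is why the paper also reports the normal-based M3C2 distances as a complementary measure.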
16 pages, 2600 KiB  
Article
Assessment of Calcaneal Spongy Bone Magnetic Resonance Characteristics in Women: A Comparison between Measures Obtained at 0.3 T, 1.5 T, and 3.0 T
by Silvia Capuani, Alessandra Maiuro, Emiliano Giampà, Marco Montuori, Viviana Varrucciu, Gisela E. Hagberg, Vincenzo Vinicola and Sergio Colonna
Diagnostics 2024, 14(10), 1050; https://doi.org/10.3390/diagnostics14101050 - 18 May 2024
Viewed by 1057
Abstract
Background: There is a growing interest in bone tissue MRI and an even greater interest in using low-cost MR scanners. However, the characteristics of bone MRI remain to be fully defined, especially at low field strength. This study aimed to characterize the signal-to-noise ratio (SNR), T2, and T2* in spongy bone at 0.3 T, 1.5 T, and 3.0 T. Furthermore, relaxation times were characterized as a function of bone-marrow lipid/water ratio and trabecular bone density. Methods: Thirty-two women underwent an MR imaging investigation of the calcaneus at 0.3 T, 1.5 T, and 3.0 T. MR spectroscopy was performed at 3.0 T to assess the fat/water ratio. SNR, T2, and T2* were quantified in distinct calcaneal regions (ST, TC, and CC). ANOVA and Pearson correlation statistics were used. Results: The SNR increase depends on the magnetic field strength, acquisition sequence, and calcaneal location. T2* differed between 3.0 T and 1.5 T in ST, TC, and CC. Relaxation times decrease as the magnetic field strength increases. The significant linear correlation between relaxation times and fat/water ratio found in healthy young subjects is lost in osteoporotic subjects. Conclusion: The results have implications for the possible use of relaxation times versus lipid/water marrow content for bone quality assessment and for the development of quantitative MRI diagnostics at low field strength. Full article
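T2* values like those characterized above are commonly estimated from a mono-exponential decay S(TE) = S0·exp(−TE/T2*) across echo times. A minimal sketch using a log-linear least-squares fit on synthetic, noise-free signals (the study's actual fitting routine is not specified in the abstract):

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Estimate T2* (ms) by fitting a line to ln(S) versus TE:
    ln S = ln S0 - TE / T2*, hence T2* = -1 / slope."""
    ys = [math.log(s) for s in signals]
    n = len(echo_times_ms)
    mx = sum(echo_times_ms) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(echo_times_ms, ys))
             / sum((x - mx) ** 2 for x in echo_times_ms))
    return -1.0 / slope

# Synthetic decay with T2* = 20 ms, sampled at TE = 5, 10, and 20 ms.
tes = [5.0, 10.0, 20.0]
signals = [100.0 * math.exp(-te / 20.0) for te in tes]
print(round(fit_t2star(tes, signals), 1))  # → 20.0
```

With real data, the log-linear fit is biased at low SNR (the logarithm amplifies noise on small signals), so non-linear least squares on the raw magnitudes is often preferred.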
19 pages, 607 KiB  
Article
Simplified Indoor Localization Using Bluetooth Beacons and Received Signal Strength Fingerprinting with Smartwatch
by Leana Bouse, Scott A. King and Tianxing Chu
Sensors 2024, 24(7), 2088; https://doi.org/10.3390/s24072088 - 25 Mar 2024
Cited by 5 | Viewed by 2362
Abstract
Variations in Global Positioning Systems (GPSs) have been used for tracking users’ locations. However, when location tracking is needed for an indoor space, such as a house or building, an alternative means of precise position tracking may be required because GPS signals can be severely attenuated or completely blocked. In our approach to indoor positioning, we developed an indoor localization system that minimizes the effort and cost needed by the end user to put the system to use. This indoor localization system detects the user’s room-level location within a house or indoor space in which the system has been installed. We combine Bluetooth Low Energy beacons and a smartwatch Bluetooth scanner to determine which room the user is located in. The system was designed specifically for low complexity, using the Nearest Neighbor algorithm and a moving average filter to improve results. We evaluated the system in a household under two different operating conditions: first using three rooms in the house, and then using five rooms. The system achieved an overall accuracy of 85.9% when tested in three rooms and 92.106% across five rooms. Accuracy also varied by region, with most regions performing above 96% accuracy and most false-positive incidents occurring within transitory areas between regions. By reducing the amount of processing used by our approach, the end user is able to use other applications and services on the smartwatch concurrently. Full article
(This article belongs to the Collection Sensors and Systems for Indoor Positioning)
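The pipeline in this abstract reduces to two small pieces: a moving-average filter over raw RSSI samples and a nearest-fingerprint lookup. A sketch under assumed calibration values (the room fingerprints and readings below are hypothetical, not the paper's measurements):

```python
from collections import deque

def moving_average(readings, window=3):
    """Smooth raw RSSI samples with a sliding moving-average filter."""
    buf, out = deque(maxlen=window), []
    for r in readings:
        buf.append(r)
        out.append(sum(buf) / len(buf))
    return out

def nearest_room(fingerprints, observed):
    """Nearest Neighbor over per-room calibration fingerprints: pick the
    room whose mean-RSSI vector is closest (squared Euclidean distance)."""
    def dist(room):
        return sum((a - b) ** 2 for a, b in zip(fingerprints[room], observed))
    return min(fingerprints, key=dist)

fingerprints = {            # mean RSSI (dBm) per beacon, from calibration
    "kitchen": [-55, -80, -90],
    "bedroom": [-85, -60, -75],
    "office":  [-90, -78, -58],
}
smoothed = moving_average([-57, -54, -56, -53, -55])
observed = [smoothed[-1], -79, -88]   # latest smoothed reading per beacon
print(nearest_room(fingerprints, observed))  # → kitchen
```

Keeping the classifier this small is what lets the watch run it alongside other apps; the moving average mainly suppresses the rapid RSSI fluctuations that would otherwise cause spurious room switches.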
9 pages, 452 KiB  
Article
Improving Generalizability of PET DL Algorithms: List-Mode Reconstructions Improve DOTATATE PET Hepatic Lesion Detection Performance
by Xinyi Yang, Michael Silosky, Jonathan Wehrend, Daniel V. Litwiller, Muthiah Nachiappan, Scott D. Metzler, Debashis Ghosh, Fuyong Xing and Bennett B. Chin
Bioengineering 2024, 11(3), 226; https://doi.org/10.3390/bioengineering11030226 - 27 Feb 2024
Viewed by 1408
Abstract
Deep learning (DL) algorithms used for DOTATATE PET lesion detection typically require large, well-annotated training datasets. These are difficult to obtain due to the low incidence of gastroenteropancreatic neuroendocrine tumors (GEP-NETs) and the high cost of manual annotation. Furthermore, networks trained and tested with data acquired from site-specific PET/CT instrumentation, acquisition, and processing protocols show reduced performance when tested with offsite data. This lack of generalizability requires even larger, more diverse training datasets. The objective of this study was to investigate the feasibility of improving DL algorithm performance by better matching the background noise in training datasets to higher-noise, out-of-domain testing datasets. 68Ga-DOTATATE PET/CT datasets were obtained from two scanners: Scanner1, a state-of-the-art digital PET/CT (GE DMI PET/CT; n = 83 subjects), and Scanner2, an older-generation analog PET/CT (GE STE; n = 123 subjects). Set1, the dataset from Scanner1, was reconstructed with standard clinical parameters (5 min; Q.Clear) and with list-mode reconstructions (VPFXS; 2-, 3-, 4-, and 5-min). Set2, the data from Scanner2 representing out-of-domain clinical scans, used standard iterative reconstruction (5 min; OSEM). A deep neural network was trained with each dataset: Network1 for Scanner1 and Network2 for Scanner2. DL performance (Network1) was tested with out-of-domain test data (Set2). To evaluate the effect of training sample size, we also tested DL model performance using fractions (25%, 50%, and 75%) of Set1 for training. The Scanner1 list-mode 2-min reconstructed data demonstrated the noise level most similar to that of Set2, resulting in the best performance (F1 = 0.713). This was not significantly different from the upper-bound performance obtained with in-domain training for Network2 (F1 = 0.755; p-value = 0.103). Regarding sample size, the F1 score increased significantly from 25% training data (F1 = 0.478) to 100% training data (F1 = 0.713; p < 0.001). List-mode data from modern PET scanners can thus be reconstructed to better match the noise properties of older scanners. Reusing existing data and their associated annotations dramatically reduces the cost and effort of generating such datasets and significantly improves the performance of existing DL algorithms. List-mode reconstructions provide an efficient, low-cost method to improve DL algorithm generalizability. Full article
(This article belongs to the Section Biosignal Processing)
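The study’s noise-matching step amounts to measuring background noise, via the coefficient of variation of SUV in a background region, for each candidate list-mode reconstruction and picking the one closest to the out-of-domain target. The sketch below illustrates that selection logic only; the function names and ROI values are hypothetical, not the study’s measurements.

```python
# Illustrative sketch: select the list-mode reconstruction duration whose
# background noise (COV of SUV in a background ROI) best matches an
# out-of-domain target scanner. ROI arrays and names are placeholders.
import numpy as np


def cov_suv(roi_voxels):
    """Coefficient of variation (std / mean) of SUVs in a background ROI."""
    roi = np.asarray(roi_voxels, dtype=float)
    return roi.std() / roi.mean()


def best_matching_recon(recons, target_cov):
    # recons: {duration_label: background-ROI voxel array for that recon}
    # Returns the label whose COV(SUV) is closest to target_cov.
    return min(recons, key=lambda label: abs(cov_suv(recons[label]) - target_cov))
```

Under this scheme, shorter list-mode durations yield noisier (higher-COV) images, which is why the 2-min reconstruction best matched the older analog scanner in the paper.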
Figure 1: The <span class="html-italic">F</span><sub>1</sub> score of the lesion detection model with different values of (<b>a</b>) COV(SUV) and (<b>b</b>) sample size.
Figure 2: Examples of DL lesion detection in transaxial <sup>68</sup>Ga DOTATATE PET. Lesion predictions and gold-standard annotations are marked in red. (Top row) Original images; (middle row) DL predictions; (bottom row) gold standard. (<b>A</b>–<b>C</b>) Three patient examples: (<b>A</b>) true positive and false negative, (<b>B</b>) true positive and false positive, and (<b>C</b>) false positive.