Search Results (172)

Search Parameters: Keywords = low-cost scanners

19 pages, 2383 KiB  
Article
Masonry and Pictorial Surfaces Study by Laser Diagnostics: The Case of the Diana’s House in Ostia Antica
by Valeria Spizzichino, Luisa Caneve, Antonella Docci, Massimo Francucci, Massimiliano Guarneri, Daniela Tarica and Claudia Tempesta
Appl. Sci. 2025, 15(4), 2172; https://doi.org/10.3390/app15042172 - 18 Feb 2025
Abstract
The aim of the present research is to validate the combined use, through data fusion, of a Laser Induced Fluorescence (LIF) scanning system and a radar scanner (RGB-ITR, Red Green Blue Imaging Topological Radar system) as a single tool that meets the need for non-invasive, rapid, and low-cost techniques for both diagnostic and operational purposes. The integrated system has been applied to the House of Diana complex in Ostia Antica. The main diagnostic objective was to trace the materials used in different phases of restoration, from antiquity to modernity, on both masonry and pictorial surfaces, in order to reconstruct the history of the building. Owing to the significant interest in this insula, other studies have recently been carried out on the House of Diana, but they once again highlighted the need for multiple, non-invasive methods capable of providing quasi-real-time answers and point-by-point information on very large surfaces, overcoming the limits related to the representativeness of sampling. The data acquired by the RGB-ITR system are quantitative, allowing morphological and 3-colour analysis of the investigated artwork. In this work, the sensor has been used to create coloured 3D models useful for structural assessments and for locating different classes of materials. The LIF maps, which integrate knowledge about the original constituent materials and previous conservation interventions, have been used as additional layers of the three-dimensional models. The method can therefore direct possible new investigations and restoration actions, piecing together the history of the House of Diana to build it a safer future.
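As a concrete illustration of the data-fusion idea, the sketch below attaches a 2D LIF map to a 3D point cloud by angular lookup from a common scan origin. It is a minimal sketch under assumed conventions (angular field of view, nearest-neighbour sampling, NumPy arrays), not the authors' actual RGB-ITR/LIF pipeline.

```python
import numpy as np

def fuse_lif_layer(points, lif_map, az_range=(-np.pi, np.pi), el_range=(-np.pi / 2, np.pi / 2)):
    """Attach a LIF classification value to each 3D point by looking up the
    angular (azimuth, elevation) cell of a 2D LIF map acquired from the same
    scan position. Returns an (N, 4) array: x, y, z, lif_class."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.arctan2(y, x)                    # azimuth of each point
    el = np.arctan2(z, np.hypot(x, y))       # elevation of each point
    h, w = lif_map.shape
    # Map angles to pixel indices of the LIF map (nearest neighbour).
    col = np.clip(((az - az_range[0]) / (az_range[1] - az_range[0]) * (w - 1)).round().astype(int), 0, w - 1)
    row = np.clip(((el - el_range[0]) / (el_range[1] - el_range[0]) * (h - 1)).round().astype(int), 0, h - 1)
    return np.column_stack([points, lif_map[row, col]])

# Toy usage: 1000 random points and a 180x360 map of material-class labels.
pts = np.random.randn(1000, 3)
lif = np.random.randint(0, 4, size=(180, 360))
fused = fuse_lif_layer(pts, lif)
print(fused.shape)  # (1000, 4): coordinates plus a LIF material-class layer
```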
24 pages, 3306 KiB  
Article
Object Recognition and Positioning with Neural Networks: Single Ultrasonic Sensor Scanning Approach
by Ahmet Karagoz and Gokhan Dindis
Sensors 2025, 25(4), 1086; https://doi.org/10.3390/s25041086 - 11 Feb 2025
Abstract
Ultrasonic sensing is a useful technique for distance measurement and object detection when optical visibility is not available, yet research on detecting multiple target objects and locating their coordinates is limited, which makes it a valuable topic. Reflection signals from a single ultrasonic sensor may be just enough to measure distance and reflection strength; however, if extracted properly, a scanned set of signals from the same sensor holds a significant amount of information about the surrounding geometry. Evaluating such a dataset from a single scanning sensor is a natural application for convolutional neural networks (CNNs). This study proposes an imaging technique based on a scanned dataset obtained by a single low-cost ultrasonic sensor. To produce images suitable as CNN inputs, a 3D printer was converted into an ultrasonic image scanner and automated to act as a data acquisition system for the desired datasets. The deep learning model demonstrated in this work extracts object features using a CNN and performs coordinate estimation using regression layers. With the proposed solution, after converting the signals obtained from the ultrasonic sensor into images and training on a reasonable amount of data, 90% accuracy was achieved in the classification and position estimation of multiple objects.
(This article belongs to the Section Physical Sensors)
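The envelope-and-stack step that turns a set of echo traces into a CNN-ready image can be sketched as follows; this is a minimal sketch assuming the Hilbert transform for envelope extraction (a standard choice, though the paper's exact extraction may differ) and synthetic echo data.

```python
import numpy as np
from scipy.signal import hilbert

def scans_to_image(echo_signals):
    """Turn a set of 1D ultrasonic echo traces (one per scan position) into a
    2D image for a CNN: extract each trace's envelope with the Hilbert
    transform, then stack the envelopes row by row and normalize."""
    envelopes = np.abs(hilbert(echo_signals, axis=1))  # analytic-signal magnitude
    img = envelopes / envelopes.max()                  # scale to [0, 1]
    return img.astype(np.float32)

# Toy usage: 64 scan positions x 2000 samples of a synthetic 40 kHz echo.
t = np.linspace(0, 1e-3, 2000)
trace = np.sin(2 * np.pi * 40e3 * t) * np.exp(-((t - 4e-4) / 5e-5) ** 2)
signals = np.tile(trace, (64, 1)) + 0.05 * np.random.randn(64, 2000)
image = scans_to_image(signals)
print(image.shape)  # (64, 2000), ready to be resized and fed to a CNN
```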
Figures:
Figure 1. Automated data collection mechanism.
Figure 2. Improved ultrasonic scanner user interface.
Figure 3. (a) Classified objects; (b) designed ultrasonic sensor.
Figure 4. Ultrasonic sensor block diagram.
Figure 5. Method architecture flowchart.
Figure 6. Typical signal information obtained in each scan.
Figure 7. Extracting the signal envelope.
Figure 8. Sample signal information obtained (top), and combined with the others to make one image (bottom).
Figure 9. (a) Example of object positioning (single object); (b) representation of a single object on the image.
Figure 10. (a) Example of object positioning (Objects A, B, and C); (b) representation of the objects on the image.
Figure 11. Superposition of the waves from signals acquired from multiple objects; instances are at travel distances 10 mm apart.
Figure 12. Signals obtained from ultrasonic scanning during multiple-object recognition and their formation into a single image.
Figure 13. (a) Sample location map of objects; (b) placement of objects in pictures.
Figure 14. CNN architecture.
Figure 15. Object recognition CNN flowchart.
Figure 16. Object and coordinate estimates in test data for 3 objects.
Figure 17. k-fold cross-validation accuracies.
Figure 18. Distribution of coordinate prediction errors in cm.
Figure 19. Distribution of mean errors for coordinates in cm.
Figure 20. Analysis of images with errors greater than 4 cm.
Figure 21. Comparison of original and noisy images.
Figure 22. The situation where objects are very close to each other (B (x = 13 cm), C (x = 20 cm)).
Figure 23. The situation where objects are very close to each other (B (x = 17 cm), C (x = 20 cm)).
Figure 24. Object and coordinate estimates in test data for 4 objects.
Figure 25. Object and coordinate estimates in test data for 5 objects.
14 pages, 3842 KiB  
Article
Morphology-Based In-Ovo Sexing of Chick Embryos Utilizing a Low-Cost Imaging Apparatus and Machine Learning
by Daniel Zhang and Leonie Jacobs
Animals 2025, 15(3), 384; https://doi.org/10.3390/ani15030384 - 29 Jan 2025
Abstract
The routine culling of male chicks in the laying hen industry raises significant ethical, animal welfare, and sustainability concerns. Current methods to determine chick embryo sex before hatching are costly, time-consuming, and invasive. This study aimed to develop a low-cost, non-invasive solution that predicts chick embryo sex before hatching from the morphological features of eggs. A custom imaging apparatus was created using a smartphone and a light box, enabling consistent image capture of chicken eggs. Egg length, width, area, eccentricity, and extent were measured, and machine learning models were trained to predict chick embryo sex. The wide neural network model achieved the highest accuracy of 88.9%, with a mean accuracy of 81.5%. Comparison of the imaging apparatus to a high-cost industrial 3D scanner demonstrated comparable accuracy in capturing egg morphology. The findings suggest that this method could help prevent up to 6.2 billion male chicks from being culled annually by destroying male embryos before they develop the capacity to feel pain. This approach offers a feasible, ethical, and scalable alternative to current practices, with potential for further improvements in accuracy and adaptability to different industry settings.
(This article belongs to the Section Animal Welfare)
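A minimal sketch of the morphological feature extraction described above, using scikit-image region properties on a binary egg silhouette; the thresholding, pixel-to-mm calibration, and classifier training of the actual study are omitted, and the toy mask is synthetic.

```python
import numpy as np
from skimage.measure import label, regionprops

def egg_features(mask):
    """Extract the five morphological features used for sex prediction from a
    binary egg silhouette: length, width, area, eccentricity, extent."""
    props = regionprops(label(mask.astype(int)))[0]
    length = props.major_axis_length
    width = props.minor_axis_length
    return np.array([length, width, props.area, props.eccentricity, props.extent])

# Toy usage: an elliptical "egg" drawn into a boolean image.
yy, xx = np.mgrid[:200, :200]
mask = ((xx - 100) / 55.0) ** 2 + ((yy - 100) / 75.0) ** 2 <= 1.0
print(egg_features(mask))
```

In the study, features like these feed standard classifiers (e.g., the wide neural network); scikit-learn equivalents can be trained directly on the resulting feature matrix.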
Figures:
Figure 1. Imaging apparatus with labeled components. This low-cost apparatus was used to standardize image capture of chicken eggs using a smartphone camera.
Figure 2. Confusion matrix for the wide neural network. A total of 18 of the 121 eggs were reserved to test the predictive accuracy of the model; the diagonal and off-diagonal cells show the numbers of correct and incorrect predictions, and the tables to the right and below display the true positive rates (TPRs), false negative rates (FNRs), positive predictive values (PPVs), and false discovery rates (FDRs). The TPR shows how accurately the model performed in each of the two classes (88.9%), and the FNR is the error rate (11.1%).
Figure 3. Receiver Operating Characteristic (ROC) curve for the wide neural network. The Model Operating Point, in the top-left corner, represents the ideal compromise between sensitivity and specificity; the area under the curve (AUC) represents the overall quality of the model, with values closer to one indicating strong performance.
Figure A1. Confusion matrix for K-nearest neighbors (medium KNN); layout as in Figure 2.
Figure A2. ROC curve for K-nearest neighbors (medium KNN); layout as in Figure 3.
Figure A3. Confusion matrix for the decision tree (boosted trees); layout as in Figure 2. Of note, this model did not perform equally between the two sexes, leading to variations between the TPRs, FNRs, PPVs, and FDRs of each sex.
Figure A4. ROC curve for the decision tree (boosted trees); layout as in Figure 3.
Figure A5. Confusion matrix for the support vector machine (cubic SVM); layout as in Figure 2.
Figure A6. ROC curve for the support vector machine (cubic SVM); layout as in Figure 3.
Figure A7. Diagram of an egg with defined variables for eccentricity and extent calculations.
19 pages, 3375 KiB  
Article
Enhancing Cross-Modal Camera Image and LiDAR Data Registration Using Feature-Based Matching
by Jennifer Leahy, Shabnam Jabari, Derek Lichti and Abbas Salehitangrizi
Remote Sens. 2025, 17(3), 357; https://doi.org/10.3390/rs17030357 - 22 Jan 2025
Abstract
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces a new pipeline for camera–LiDAR post-registration to produce colorized point clouds. Using deep learning-based matching between 2D spherical-projection LiDAR feature layers and camera images, we map 3D LiDAR coordinates to image grey values. Various LiDAR feature layers, including intensity, bearing angle, depth, and different weighted combinations, are used to find correspondences with camera images using state-of-the-art deep learning matching algorithms, i.e., SuperGlue and LoFTR. Registration is achieved using collinearity equations and RANSAC to remove false matches. The pipeline's accuracy is tested on survey-grade terrestrial datasets from the TX5 scanner, as well as datasets from a custom-made, low-cost mobile mapping system (MMS) named Simultaneous Localization And Mapping Multi-sensor roBOT (SLAMM-BOT), across diverse scenes; in both cases the pipeline outperformed its baseline solutions. SuperGlue performed best in high-feature scenes, whereas LoFTR performed best in low-feature or sparse data scenes. The LiDAR intensity layer had the strongest matches, but combining feature layers improved matching and reduced errors.
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation)
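The spherical-projection step that turns a LiDAR cloud into a 2D feature layer matchable against a camera image can be sketched as below; the image size and vertical field of view are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def spherical_projection(xyz, intensity, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project a 3D LiDAR point cloud into a 2D intensity image (rows =
    elevation bins, columns = azimuth bins), the kind of feature layer that
    can be matched against a camera image with SuperGlue/LoFTR."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    depth = np.linalg.norm(xyz, axis=1)
    yaw = np.arctan2(xyz[:, 1], xyz[:, 0])
    pitch = np.arcsin(xyz[:, 2] / np.maximum(depth, 1e-9))
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = intensity          # last point wins; fine for a sketch
    return img

# Toy usage: 5000 synthetic points with random per-point intensities.
pts = np.random.randn(5000, 3) * [10, 10, 1]
layer = spherical_projection(pts, np.random.rand(5000))
print(layer.shape)  # (64, 1024)
```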
Figures:
Figure 1. General flowchart of the methodology.
Figure 2. Proposed optical and LiDAR data integration method.
Figure 3. Camera-to-ground coordinate system transformations. The rotational extrinsic parameters of the LiDAR sensor are represented by the angles (ω, φ, κ), which describe the orientation of the camera in 3D space. The camera's principal point is denoted by (x_p, y_p), and f represents the focal length. The ground coordinates (X, Y, Z) correspond to the real-world position in the ground reference system.
Figure 4. The experimental scenes employed in this study. The six scenes were acquired in outdoor and indoor environments, representing different object arrangements, lighting conditions, and spatial compositions.
Figure 5. Comparison of a single frame (left) vs. densified aggregated frames (right).
Figure 6. Comparison of different images: (a) optical; (b) bearing angle; (c) intensity; (d) depth.
Figure 7. Before (left) and after (right) attempts to remedy the range dispersions in the SLAMM-BOT depth image.
Figure 8. Viable matches from the intensity image (top) vs. false matches from the depth image (bottom). The color scheme represents match confidence, with red representing high confidence and blue low confidence.
17 pages, 30535 KiB  
Article
A Method to Evaluate Orientation-Dependent Errors in the Center of Contrast Targets Used with Terrestrial Laser Scanners
by Bala Muralikrishnan, Xinsu Lu, Mary Gregg, Meghan Shilling and Braden Czapla
Sensors 2025, 25(2), 505; https://doi.org/10.3390/s25020505 - 16 Jan 2025
Abstract
Terrestrial laser scanners (TLSs) are portable dimensional measurement instruments used to obtain 3D point clouds of objects in a scene. While TLSs do not require the use of cooperative targets, such targets are sometimes placed in a scene to fuse or compare data from different instruments, or data from the same instrument acquired from different positions. A contrast target is an example; it consists of alternating black/white squares that can be printed using a laser printer. Because contrast targets are planar rather than three-dimensional (like a sphere), the center of the target might suffer from errors that depend on the orientation of the target with respect to the TLS. In this paper, we discuss a low-cost method to characterize such errors and present results obtained from a short-range TLS and a long-range TLS. Our method compares the center of a contrast target against the centers of spheres and therefore does not require a reference instrument or calibrated objects. For the short-range TLS, systematic errors of up to 0.5 mm were observed in the target center as a function of the angle for the two distances (5 m and 10 m) and resolutions (30 points-per-degree (ppd) and 90 ppd) considered. For the long-range TLS, systematic errors of about 0.3 mm to 0.8 mm were observed as a function of the angle for the two distances at low resolution (28 ppd), and errors under 0.3 mm at high resolution (109 ppd).
(This article belongs to the Special Issue Laser Scanning and Applications)
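One plausible way to compute a contrast target's center, consistent with the edge-point geometry described above though not necessarily the authors' estimator, is to fit the two black/white edge lines and intersect them:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns a point on the line and a unit direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def target_center(edge_pts_1, edge_pts_2):
    """Estimate a contrast target's center as the intersection of the two
    black/white edge lines (each an array of 2D edge points in the target plane)."""
    p1, d1 = fit_line(edge_pts_1)
    p2, d2 = fit_line(edge_pts_2)
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

# Toy usage: noisy points along a near-horizontal and a near-vertical edge.
e1 = np.column_stack([np.linspace(-5, 5, 50), 0.02 * np.random.randn(50)])
e2 = np.column_stack([0.02 * np.random.randn(50), np.linspace(-5, 5, 50)])
print(target_center(e1, e2))  # approximately [0, 0]
```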
Figures:
Figure 1. (a) Commercially procured contrast target with magnetic/adhesive backing; (b) contrast target printed on cardstock using a laser printer; (c) contrast target mounted on a two-axis gimbal; (d) contrast target with a partial 38.1 mm (1.5 inch) sphere on the back.
Figure 2. Artifact comprising four spheres and a contrast target to study errors as a function of orientation.
Figure 3. Different orientations of the artifact: (a–c) rotation about the vertical axis (yaw); (d–f) rotation about the horizontal axis (pitch). Photos of the artifact oriented so that (g) yaw = 0°, pitch = 0°; (h) yaw = 40°, pitch = 0°; (i) yaw = 0°, pitch = −40°. The TLS is located directly in front of the target in part (g) at a distance of either 5 m or 10 m.
Figure 4. (a) Intensity plot of the entire artifact; (b) intensity plot of the contrast target and the edge points (transitions between the black (blue dots) and white (red dots) regions of a target).
Figure 5. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios. Text annotations correspond to the standard deviations in the X (horizontal) and Y (vertical) coordinates for the far-distance (10 m), low-resolution (30 ppd) scenario (dashed lines; bolded and italicized values in Table 1) and the near-distance (5 m), high-resolution (90 ppd) scenario (solid lines; bolded values in Table 1).
Figure 6. The 95% data ellipses from low-resolution scans (30 ppd) from TLS I for (a) 5 m and (b) 10 m distance. The ranges in the average X and Y coordinates from Table 2 are added as text annotations.
Figure 7. The 95% data ellipses from high-resolution scans (90 ppd) from TLS I for (a) 5 m and (b) 10 m distance. The ranges in the average X and Y coordinates from Table 2 are added as text annotations.
Figure 8. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios from the TLS II data. Text annotations correspond to the standard deviations in the X and Y coordinates for the far-distance (10 m), low-resolution (28 ppd) scenario (dashed lines; bolded and italicized values in Table 3) and the near-distance (5 m), high-resolution (109 ppd) scenario (solid lines; bolded values in Table 3).
Figure 9. The 95% data ellipses from low-resolution scans (28 ppd) from TLS II for (a) 5 m and (b) 10 m distance. The ranges in the average X and Y coordinates from Table 4 are added as text annotations.
Figure 10. The 95% data ellipses from high-resolution scans (109 ppd) from TLS II for (a) 5 m and (b) 10 m distance. The ranges in the average X and Y coordinates from Table 4 are added as text annotations.
17 pages, 4281 KiB  
Article
Optimizing Bacterial Protectant Composition to Enhance Baijiu Yeast Survival and Productivity During Spray Drying
by Jingyu Li, Fengkui Xiong, Zhongbin Liu, Jia Zheng, Guangzhong Hu and Zheng Feng
Fermentation 2025, 11(1), 29; https://doi.org/10.3390/fermentation11010029 - 13 Jan 2025
Abstract
The flavor substances produced by baijiu yeast during the brewing process largely determine the quality of baijiu, and the difficulty of storing and transporting high-quality baijiu yeast is a bottleneck restricting the development of China's baijiu industry. Drying microorganisms such as baijiu yeast is widely accepted as the best way to improve their storage and transport performance. Spray drying, one of the most widely used microbial drying processes thanks to its high efficiency and low cost, is a focus of current research in the field of microbial drying, but it has the inherent defect of a low drying survival rate. To address this defect, the present study targeted a high-quality baijiu yeast, Modified Sporidiobolus Johnsonii A (MSJA). First, an orthogonal experiment, a steepest-ascent (hill-climbing) experiment, and a response surface experiment were designed in sequence to optimize the type and amount of protective agent added during spray drying of MSJA. Then, the effects of transglutaminase (TGase) treatment on the drying process of MSJA were characterized with a laser particle sizer, an environmental scanning electron microscope (ESEM), and a Fourier-transform infrared spectrometer (FTIR). The results showed that adding TGase-treated soy protein isolate (SPI) plus lactic protein (LP) as an extracellular bacterial protectant, together with 14.15% trehalose + 7.10% maltose + 14.04% sucrose, improves the survival rate of MSJA during spray drying: the TGase treatment promotes cross-linking of the protective proteins, reduces the distance between MSJA cells and the protective proteins, and raises the glass transition temperature, thereby enhancing the proteins' protective effect.
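The response-surface step of such an optimization can be sketched generically: fit a quadratic model to designed experiments and search it for the optimum. The design ranges and the response below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# X: protectant levels (trehalose, maltose, sucrose, %); y: survival rate (%).
rng = np.random.default_rng(0)
X = rng.uniform([5, 2, 5], [25, 12, 25], size=(15, 3))        # design points
y = 80 - ((X - [14, 7, 14]) ** 2 / [20, 10, 20]).sum(axis=1)  # synthetic response

# Quadratic response-surface model (second-order polynomial regression).
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)

# Locate the predicted optimum on a coarse grid over the design space.
grid = np.stack(np.meshgrid(np.linspace(5, 25, 21),
                            np.linspace(2, 12, 21),
                            np.linspace(5, 25, 21)), -1).reshape(-1, 3)
best = grid[model.predict(grid).argmax()]
print("predicted optimum (trehalose, maltose, sucrose, %):", best)
```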
Figures:
Figure 1. Effect of different protectants on the survival of MSJA after drying: (A) different protein protectants; (B) different disaccharide protectants.
Figure 2. Correctness diagnostics of the proposed Box–Behnken model: (A) normal probability plot of residuals; (B) predicted vs. actual values; (C) externally standardized residuals vs. predicted values; (D) externally standardized residuals vs. yeast biomass run number.
Figure 3. Response surface and contour plots of the effect of each protectant interaction on MSJA survival rate.
Figure 4. Distribution of MSJA cell diameter before and after TGase treatment of each protective protein. The blue box marks MSJA cells separated by proteins that are not adsorbed on the yeast surface due to charge repulsion.
Figure 5. Morphology of protective protein–MSJA complexes before and after TGase treatment, after spray drying.
Figure 6. FTIR profiles of MSJA before and after TGase treatment with different protective proteins: (A) SPI; (B) LP; (C) SPI+LP.
Figure 7. Glass transition temperature of the protective protein–MSJA mixture after spray drying, before and after TGase treatment.
15 pages, 6769 KiB  
Article
Stationary 3D Scanning System for IoT Applications
by Miłosz Kowalski, Dominik Rybarczyk and Andrzej Milecki
Appl. Sci. 2024, 14(24), 11587; https://doi.org/10.3390/app142411587 - 11 Dec 2024
Abstract
In various types of industrial applications, such as reverse engineering, machine operation, technical metrology, and modern factory maintenance, it is important to have systems that enable the quick and easy scanning of selected mechanical parts. This study presents the design process and analysis of a low-cost 3D scanning system for industrial applications. The system collects point cloud data using an infrared distance sensor based on optical triangulation, controlled by a 32-bit microcontroller. Communication with the system is enabled through a serial interface and a dedicated window application, allowing users to monitor and adjust scanning parameters. The output point cloud is saved in a text file in the scanner's controller memory and then sent wirelessly to an external device, e.g., the cloud and/or a diagnostic controller. The electronic system is equipped with a radio module for communication with other devices, in line with the idea of the Internet of Things and the concept of Industry 4.0. The study evaluates the accuracy of the three-dimensional digitization of the tested object and determines the average measurement uncertainty.
(This article belongs to the Special Issue The Future of Manufacturing and Industry 4.0)
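The turntable geometry behind such a scanner (an IR triangulation sensor at a fixed radial offset from the rotation axis) reduces to a cylindrical-to-Cartesian conversion; the sketch below assumes this simple layout and illustrative units, not the paper's exact mechanics.

```python
import numpy as np

def point_from_scan(sensor_dist, table_angle_deg, z, sensor_offset):
    """Convert one reading of a fixed IR triangulation sensor into a 3D point.
    sensor_dist: measured distance from sensor to object surface;
    sensor_offset: distance from sensor to the turntable axis (same units);
    table_angle_deg: turntable rotation; z: current vertical position."""
    r = sensor_offset - sensor_dist          # radial distance of surface from axis
    theta = np.radians(table_angle_deg)
    # The table rotates the object, which is equivalent to sweeping the
    # measurement ray around the axis.
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

# One full revolution at 1-degree steps for a cylinder of radius 30 mm
# measured by a sensor mounted 100 mm from the axis.
points = np.array([point_from_scan(70.0, a, 10.0, 100.0) for a in range(360)])
print(points.shape, np.hypot(points[:, 0], points[:, 1]).mean())  # radius ~30 mm
```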
Figures:
Figure 1. Single-point distance measurement using optical triangulation.
Figure 2. Determination of a single point on the surface of the object, taking into account the rotation of the table.
Figure 3. Design of the 3D scanning system.
Figure 4. Main PCB layout.
Figure 5. The PCB that completes the entire control system.
Figure 6. Diagram of the work states of the 3D scanning system.
Figure 7. Algorithm for the operation of the state machine.
Figure 8. The main screen of the 3D scanning system control (1: section for recording and reading data and connecting to an external device; 2: console monitoring the performed activities; 3: area for entering the height of the object; 4: section for system restart and service control; 5: place of connection to the new serial port; 6: preview of individual system parameters).
Figure 9. The point cloud obtained during the scanning process and a photo of the measured object.
Figure 10. Point cloud obtained by scanning a funnel-shaped object.
Figure 11. Point cloud obtained by scanning a workshop hammer.
Figure 12. The point cloud resulting from an unsuccessful scan of a reflective object.
Figure 13. Comparison of the results of two separate scanning processes of the same cross-section plane.
16 pages, 8780 KiB  
Article
Soil Mapping of Small Fields with Limited Number of Samples by Coupling EMI and NIR Spectroscopy
by Leonardo Pace, Simone Priori, Monica Zanini and Valerio Cristofori
Soil Syst. 2024, 8(4), 128; https://doi.org/10.3390/soilsystems8040128 - 7 Dec 2024
Abstract
Precision agriculture relies on highly detailed soil maps to optimize resource use. Proximal sensing methods, such as EMI, require a certain number of soil samples and laboratory analyses to interpolate the characteristics of the soil. NIR diffuse reflectance spectroscopy offers a rapid, low-cost alternative that increases the number of datapoints and map accuracy. This study tests and optimizes a methodology for high-detail soil mapping in a 2.5 ha hazelnut grove in Grosseto, Southern Tuscany, Italy, using both an EMI sensor (GF Mini Explorer, Brno, Czech Republic) and a handheld NIR spectrometer (Neospectra Scanner, Si-Ware Systems, Menlo Park, CA, USA). In addition to two soil profiles selected by clustering, 35 topsoil augerings (0–30 cm) were collected. Laboratory analyses were performed on only five samples (the two profiles plus three samples from the augerings). Partial least squares regression (PLSR) with a national spectral library, augmented by the five local samples, predicted clay, sand, organic carbon (SOC), total nitrogen (TN), and cation exchange capacity (CEC). The 37 predicted datapoints were used for spatial interpolation, with the ECa map, elevation, and DEM derivatives as covariates. Kriging with external drift (KED) was used to spatialize the results. The errors of the predictive maps were calculated using five additional validation points analyzed by conventional methods. The validation showed good accuracy of the predictive maps, particularly for SOC and TN.
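The PLSR prediction step can be sketched with scikit-learn; the spectra, property values, and component count below are synthetic stand-ins (the study also applies Savitzky–Golay and SNV pretreatments before regression).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 377 library spectra (NIR absorbance, 256 bands)
# and a soil property (e.g., SOC in g per 100 g) to be predicted.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(377, 256)).cumsum(axis=1)   # smooth-ish curves
soc = spectra[:, 120] * 0.01 + rng.normal(0, 0.05, 377)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, spectra, soc, cv=5).ravel()
rmse = np.sqrt(np.mean((pred - soc) ** 2))
print(f"cross-validated RMSE: {rmse:.3f}")

# Fit on the full library, then predict unknown field augering spectra.
pls.fit(spectra, soc)
field_pred = pls.predict(rng.normal(size=(37, 256)).cumsum(axis=1))
```

The predicted values at the 37 augering locations would then feed the KED interpolation with ECa and DEM-derived covariates.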
Figures:
Figure 1. Framework for the study area.
Figure 2. Pretreatments applied to the spectral library with local samples (n = 377): left, the Savitzky–Golay filter; right, the standard normal variate.
Figure 3. Maps of apparent electrical conductivity (ECa) measured at different depths by the EMI sensor. The black dots show the soil profiles (P24 and P25); the polygons show the two STUs delineated by k-means clustering.
Figure 4. Digital elevation model (DEM) with selected profiles.
Figure 5. Profile P24.
Figure 6. Profile P25.
Figure 7. Total random augerings. The blue dots are the points selected for the local calibration set.
Figure 8. Maps of clay (g·100 g⁻¹), sand (g·100 g⁻¹), and SOC (g·100 g⁻¹), interpolated by KED using the values of the sampling datapoints predicted by NIR spectroscopy and the most related covariates according to Pearson's correlation index.
Figure 9. Maps of TN (g·kg⁻¹), CEC (meq·100 g⁻¹), and CaCO₃ (g·100 g⁻¹), interpolated by KED using the values of the sampling datapoints predicted by NIR spectroscopy and the most related covariates according to Pearson's correlation index.
Figure 10. Error maps of clay (g·100 g⁻¹), sand (g·100 g⁻¹), SOC (g·100 g⁻¹), TN (g·kg⁻¹), CEC (meq·100 g⁻¹), and CaCO₃ (g·100 g⁻¹), interpolated by KED as above.
Figure 11. Total random augerings. Blue dots are the points selected for the local calibration set; red dots are the points collected for the local validation set.
26 pages, 21893 KiB  
Article
An Example of Using Low-Cost LiDAR Technology for 3D Modeling and Assessment of Degradation of Heritage Structures and Buildings
by Piotr Kędziorski, Marcin Jagoda, Paweł Tysiąc and Jacek Katzer
Materials 2024, 17(22), 5445; https://doi.org/10.3390/ma17225445 - 7 Nov 2024
Abstract
This article examines the potential of low-cost LiDAR technology for 3D modeling and assessment of the degradation of historic buildings, using a section of the Koszalin city walls in Poland as a case study. Traditional terrestrial laser scanning (TLS) offers high accuracy but is expensive. The study assessed whether more accessible LiDAR options, such as those integrated into mobile devices like the Apple iPad Pro, can serve as viable alternatives. The study was conducted in two phases, first assessing measurement accuracy and then degradation detection, using tools such as the FreeScan Combo scanner and the Z+F 5016 IMAGER TLS. The results show that, while low-cost LiDAR is suitable for small-scale documentation, its accuracy decreases for larger, complex structures compared to TLS. Despite these limitations, this study suggests that low-cost LiDAR can reduce costs and improve access to heritage conservation, although further development of mobile applications is recommended.
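Both accuracy checks against a TLS reference and stage-to-stage degradation maps reduce to cloud-to-cloud distances; a minimal nearest-neighbour sketch with SciPy, assuming the two clouds are already registered to a common frame:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(test_cloud, reference_cloud):
    """Nearest-neighbour distance from every test point to the reference
    cloud: the basic quantity behind both accuracy assessment against a
    higher-class scanner and epoch-to-epoch degradation detection."""
    tree = cKDTree(reference_cloud)
    dists, _ = tree.query(test_cloud, k=1)
    return dists

# Toy usage: a TLS-like reference and a noisier mobile-LiDAR-like subset.
ref = np.random.rand(20000, 3)
test = ref[:5000] + np.random.normal(0, 0.003, (5000, 3))
d = cloud_to_cloud_distance(test, ref)
print(f"mean {d.mean()*1000:.1f} mm, 95th percentile {np.percentile(d, 95)*1000:.1f} mm")
```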
Figures:
Figure 1. Location of the object under study.
Figure 2. City plan with the existing wall sections plotted on a current orthophotomap [17].
Figure 3. Six fragments of walls that survive today, numbered from 1 to 6.
Figure 4. Workflow of the research program.
Figure 5. Dimensions and weights of the equipment used.
Figure 6. Locations of scanner positions.
Figure 7. Achieved point clouds using TLS.
Figure 8. Measurement results from 3DScannerApp for fragments D and M.
Figure 9. Location of selected measurement markers: (a) view of fragment D; (b) view of fragment M.
Figure 10. Cross-section through the acquired point clouds in relation to the reference cloud (green): (a) 3DScannerApp; (b) Pix4DCatch Captured; (c) Pix4DCatch Depth; (d) Pix4DCatch Fused.
Figure 11. Measurement results from the SiteScape application.
Figure 12. Differences between Stages 1 and 2 for city wall fragment D.
Figure 13. Differences between Stages 1 and 2 for city wall fragment M.
Figure 14. Location of selected defects where degradation has occurred.
Figure 15. Defect W1 projected onto the plane.
Figure 16. Cross-sections through defect W1.
Figure 17. Defect W2 projected onto the plane.
Figure 18. Cross-sections through defect W2.
Figure 19. Defect W3 projected onto the plane.
Figure 20. Cross-sections through defect W3.
Figure 21. Defect W4 projected onto the plane.
Figure 22. Cross-sections through defect W4.
Figure 23. Differences between Stages 1 and 2 for measurements taken with a handheld scanner.
Figure 24. Defect W2 projected onto the plane (handheld scanner).
Figure 25. Cross-sections through defect W2 (handheld scanner).
Figure 26. Defect W3 projected onto the plane (handheld scanner).
Figure 27. Cross-sections through defect W3 (handheld scanner).
Figure 28. Defect W4 projected onto the plane (handheld scanner).
Figure 29. Cross-sections through defect W4 (handheld scanner).
Figure 30. Example path of a single measurement with marked sample positions of the device.
Figure 31. Examples of errors created at corners with the device's trajectory marked: (a) SiteScape; (b) 3DScannerApp.
14 pages, 7140 KiB  
Article
Hybrid Reconstruction Approach for Polychromatic Computed Tomography in Highly Limited-Data Scenarios
by Alessandro Piol, Daniel Sanderson, Carlos F. del Cerro, Antonio Lorente-Mur, Manuel Desco and Mónica Abella
Sensors 2024, 24(21), 6782; https://doi.org/10.3390/s24216782 - 22 Oct 2024
Abstract
Conventional strategies aimed at mitigating beam-hardening artifacts in computed tomography (CT) fall into two main approaches: (1) postprocessing following conventional reconstruction and (2) iterative reconstruction incorporating a beam-hardening model. While the former fails in low-dose and/or limited-data cases, the latter substantially increases computational cost. Although deep learning-based methods have been proposed for several cases of limited-data CT, few works in the literature have dealt with beam-hardening artifacts, and none have addressed the problems caused by randomly selected projections and a highly limited span. We propose the deep learning-based prior image constrained (PICDL) framework, a hybrid method that yields CT images free of beam-hardening artifacts in different limited-data scenarios. It combines a modified version of the Prior Image Constrained Compressed Sensing (PICCS) algorithm that incorporates the L2 norm (L2-PICCS) with a prior image generated by applying a deep learning (DL) model to a preliminary FDK reconstruction. The model is based on a modification of the U-Net architecture, with ResNet-34 replacing the original encoder. Evaluation on rodent head studies in a small-animal CT scanner showed that the proposed method corrected beam-hardening artifacts, recovered patient contours, and compensated for streak and deformation artifacts in scenarios with a limited span and a limited number of randomly selected projections. Hallucinations in the prior image caused by the deep learning model were eliminated, while the target information was effectively recovered by the L2-PICCS algorithm.
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
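The prior-image-constrained idea with an L2 penalty can be sketched as a least-squares objective solved by gradient descent; this toy 1D version only illustrates the role of the prior term, not the actual L2-PICCS algorithm or a real CT system matrix.

```python
import numpy as np

def l2_piccs(A, b, x_prior, lam=0.5, step=1e-3, iters=500):
    """Minimal gradient-descent sketch of an L2 prior-image-constrained
    objective: minimize ||A x - b||^2 + lam * ||x - x_prior||^2, where A is a
    (heavily underdetermined) projection operator, b the measured projections,
    and x_prior the deep-learning prior image."""
    x = x_prior.copy()
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * (x - x_prior)
        x -= step * grad
    return x

# Toy 1D "CT": 40 random projections of a 100-pixel object.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[40:60] = 1.0
b = A @ x_true
x_prior = x_true + rng.normal(0, 0.1, 100)     # imperfect DL prior
x = l2_piccs(A, b, x_prior)
print(f"prior error {np.linalg.norm(x_prior - x_true):.2f} -> "
      f"solution error {np.linalg.norm(x - x_true):.2f}")
```

The data term pulls the solution toward consistency with the measured projections, while the prior term fills in what the limited data cannot constrain, which is exactly the division of labor described in the abstract.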
Figures:
Figure 1. Flowchart of the proposed method.
Figure 2. Proposed U-Net network architecture.
Figure 3. Central axial slice of the reconstructions of one of the test cases for the target (A) and the SD (B), LD (C), LSA (D), and LNP (E) scenarios.
Figure 4. Top: central axial slices of the FDK reconstructions and DeepBH results for the SD and LD scenarios for the two test studies. Bottom: zoomed-in images. Arrows in the zoomed-in images point to beam hardening (first column) and streaks (third column).
Figure 5. Mean and standard deviation of PSNR, SSIM, and CC values calculated for the SD and LD scenarios in each slice.
Figure 6. LNP scenario of 42 random projections with random seed = 42 (top) and random seed = 33 (bottom). Axial slices of DeepBH (A,E), prior images (B,F), SART-PICCS (C,G), and PICDL (D,H). Arrows indicate hallucinations.
Figure 7. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in the LSA scenario of 120 and 130 projections, respectively. Arrows indicate the LSA artifacts.
Figure 8. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in the LNP scenario of 49 and 42 projections, respectively. Arrows indicate hallucinations.
23 pages, 3934 KiB  
Article
A Multi-Scale Covariance Matrix Descriptor and an Accurate Transformation Estimation for Robust Point Cloud Registration
by Fengguang Xiong, Yu Kong, Xinhe Kuang, Mingyue Hu, Zhiqiang Zhang, Chaofan Shen and Xie Han
Appl. Sci. 2024, 14(20), 9375; https://doi.org/10.3390/app14209375 - 14 Oct 2024
Abstract
This paper presents a robust point cloud registration method based on a multi-scale covariance matrix descriptor and an accurate transformation estimation. Compared with state-of-the-art feature descriptors such as FPFH, 3DSC, and spin images, the proposed multi-scale covariance matrix descriptor deals better with registration problems in higher-noise environments, since the mean operation in generating the covariance matrix filters out most noise-damaged samples and outliers, making the descriptor robust to noise. Compared with transformation estimation methods such as feature matching, clustering, ICP, and RANSAC, our transformation estimation finds a better optimal transformation between a pair of point clouds because it is a multi-level estimator that includes feature matching, coarse transformation estimation based on clustering, and fine transformation estimation based on ICP. Experimental findings reveal that the proposed feature descriptor and transformation estimation outperform state-of-the-art methods, and registration based on our framework is highly successful on the Stanford 3D Scanning Repository, the SpaceTime dataset, and the Kinect dataset, where the Stanford 3D Scanning Repository is known for its comprehensive collection of high-quality 3D scans, and the SpaceTime and Kinect datasets were captured by a SpaceTime Stereo scanner and a low-cost Microsoft Kinect scanner, respectively.
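A covariance matrix descriptor over per-point geometric features, compared on the manifold of symmetric positive-definite matrices, can be sketched as follows; the log-Euclidean metric and the 6D feature vector used here are common choices and assumptions on my part, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features):
    """Covariance matrix of per-point feature vectors (rows) in a keypoint's
    neighbourhood; the averaging inside the covariance suppresses outliers,
    which is what gives the descriptor its noise robustness."""
    return np.cov(features, rowvar=False)

def log_euclidean_distance(c1, c2, eps=1e-6):
    """Compare two covariance descriptors via the log-Euclidean metric on the
    SPD manifold (small eps regularization keeps the matrices positive definite)."""
    n = c1.shape[0]
    l1 = logm(c1 + eps * np.eye(n))
    l2 = logm(c2 + eps * np.eye(n))
    return np.linalg.norm((l1 - l2).real, "fro")

# Toy usage: 6D geometric features (e.g., angles alpha, beta, gamma plus
# curvature-like values) for two neighbourhoods of ~200 points each.
f1 = np.random.randn(200, 6)
f2 = f1 + 0.05 * np.random.randn(200, 6)   # noisy copy: should be close
print(log_euclidean_distance(covariance_descriptor(f1), covariance_descriptor(f2)))
```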
Figures:
Figure 1. The framework of our point cloud registration.
Figure 2. Distribution of a boundary and a non-boundary point with their neighboring points.
Figure 3. Geometric relations α, β, and γ between a keypoint p and one of its neighboring points.
Figure 4. Samples of point clouds from our dataset.
Figure 5. Boundary points under various differences between adjacent included angles.
Figure 6. Keypoints on different point clouds: (a) keypoints illustration 1 with boundary points retained; (b) keypoints illustration 1 with boundary points removed; (c) keypoints illustration 2 with boundary points retained; (d) keypoints illustration 2 with boundary points removed.
Figure 7. Performance of the covariance matrix descriptor formed by different feature vectors under different noise conditions.
Figure 8. Performance comparison between our proposed covariance matrix descriptor and state-of-the-art feature descriptors under different noise conditions.
Figure 9. The datasets used in the experiments.
15 pages, 6871 KiB  
Article
A Trianalyte µPAD for Simultaneous Determination of Iron, Zinc, and Manganese Ions
by Barbara Rozbicka, Robert Koncki and Marta Fiedoruk-Pogrebniak
Molecules 2024, 29(20), 4805; https://doi.org/10.3390/molecules29204805 - 11 Oct 2024
Abstract
In this work, a microfluidic paper-based analytical device (µPAD) for the simultaneous detection of Fe, Zn, and Mn ions using the immobilized chromogenic reagents Ferene S, xylenol orange, and 1-(2-pyridylazo)-2-naphthol, respectively, is presented. As the effective recognition of the analytes by the respective chromogens takes place under markedly different pH conditions, the experiments reported here focus on optimizing the µPAD architecture to eliminate potential cross effects. The paper-based microfluidic device was fabricated using low-cost and well-reproducible wax-printing technology. For optical detection of the color changes, an ordinary office scanner and a self-made RGB data processing program were applied. The optimized µPADs are stable over time and allow fast, selective, and reproducible multianalyte determination at submillimolar levels of the respective heavy metal ions, as confirmed by the analysis of solutions mimicking real wastewater samples. The presented concept of simultaneously determining analytes that require markedly different detection conditions can be useful for the development of other multianalyte microfluidic paper-based devices in the µPAD format.
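The scanner-based readout reduces to averaging RGB values over each detection zone and fitting a calibration line; the sketch below uses a hypothetical zone geometry and a synthetic green-channel response, not the paper's data.

```python
import numpy as np

def zone_rgb(image, center, radius):
    """Mean R, G, B inside a circular detection zone of a scanned µPAD image
    (H x W x 3 array, e.g., loaded from an ordinary office-scanner file)."""
    yy, xx = np.mgrid[:image.shape[0], :image.shape[1]]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    return image[mask].mean(axis=0)

# Synthetic scan: a 200x200 white page with a colored detection zone.
img = np.full((200, 200, 3), 255.0)
yy, xx = np.mgrid[:200, :200]
img[(xx - 100) ** 2 + (yy - 100) ** 2 <= 30 ** 2] = [180, 120, 160]
print(zone_rgb(img, center=(100, 100), radius=25))  # ~[180, 120, 160]

# Calibration line: green-channel value vs. Mn2+ concentration (mmol/L).
conc = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
green = np.array([210.0, 190.0, 155.0, 95.0, 40.0])   # synthetic readings
slope, intercept = np.polyfit(conc, green, 1)
unknown_g = 120.0                                      # reading for an unknown
print("c_unknown =", (unknown_g - intercept) / slope)  # invert the line
```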
Figures:
Figure 1. Detection zones after colorimetric detection reactions for (a) iron(II), (b) zinc(II), and (c) manganese(II) ions (top) and corresponding calibration graphs (bottom).
Figure 2. (a) µPAD shape with two detection zones; (b) calibration curves after adding standard solutions of Mn²⁺ to the detection zone with PAR immobilized; (c) photo of the tested µPADs (concentrations of Mn²⁺ given in mmol/L).
Figure 3. (a) Scanned detection zones with PAN immobilized after Mn²⁺ standards deposition; (b) corresponding calibration curve with two linear ranges.
Figure 4. (a) Comparison of the µPADs with PAR (top) and PAN (bottom) at the Mn detection zones; each standard was tested three times (3 consecutive repetitions of the same standard in rows); on the other side of the detection zone, Ferene S was deposited. (b) The second configuration of a bianalyte µPAD with Mn and Fe detection zones and a sample division zone. Standard concentrations used for these experiments are given in Table 1.
Figure 5. (a) µPAD shape with three detection zones and a sample division zone; (b) layer-by-layer view of the designed µPAD; (c) photo of the designed µPAD after the reaction with all ions. Numbers: 1, cold laminating foil; 2, connector between sample zone and sample division zone; 3, Mn²⁺ detection zone/sample zone; 4, sample division zone; 5, Fe²⁺ detection zone.
Figure 6. Calibration curves obtained using a trianalyte µPAD for (a) Fe²⁺, (b) Zn²⁺, and (c) Mn²⁺ ions using standards containing a mix of the three ions.
Figure 7. Calibration curves obtained using systems stored for two weeks at room temperature (a–c) and at 4 °C (d–f) for (a,d) Fe²⁺, (b,e) Zn²⁺, and (c,f) Mn²⁺ ions, with the obtained equations and R² for each calibration curve.
13 pages, 3708 KiB  
Article
Nonlinear Modeling of a Piezoelectric Actuator-Driven High-Speed Atomic Force Microscope Scanner Using a Variant DenseNet-Type Neural Network
by Thi Thu Nguyen, Luke Oduor Otieno, Oyoo Michael Juma, Thi Ngoc Nguyen and Yong Joong Lee
Actuators 2024, 13(10), 391; https://doi.org/10.3390/act13100391 - 2 Oct 2024
Abstract
Piezoelectric actuators (PEAs) are extensively used for scanning and positioning in scanning probe microscopy (SPM) due to their high precision, simple construction, and fast response. However, their nonlinear properties pose significant challenges for instrument designers, making precise and accurate control difficult in cases where position feedback sensors cannot be employed. Nevertheless, the performance of PEA-driven scanners can be significantly improved without position feedback sensors if an accurate mathematical model with low computational cost is applied to reduce hysteresis and other nonlinear effects. Various methods have been proposed for modeling PEAs, but most have limitations in accuracy and computational efficiency. In this research, we propose a variant DenseNet-type neural network (NN) model for modeling PEAs in an AFM scanner where position feedback sensors are not available. To improve the performance of this model, the forward and backward directions are mapped separately. The experimental results demonstrate the efficacy of the proposed model, reducing the relative root-mean-square (RMS) error to less than 0.1%.
(This article belongs to the Section Actuator Materials)
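A DenseNet-style fully connected model, in which every layer sees the concatenated outputs of all earlier layers, can be sketched in PyTorch as below; the layer sizes, activation, and toy voltage-to-displacement data are illustrative assumptions, not the paper's architecture or measurements.

```python
import torch
import torch.nn as nn

class DenseBlockMLP(nn.Module):
    """Sketch of a DenseNet-style fully connected model: every layer receives
    the concatenation of the input and all previous layer outputs, mirroring
    the dense skip connections applied here to an MLP."""
    def __init__(self, in_dim=1, hidden=32, layers=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        dim = in_dim
        for _ in range(layers):
            self.blocks.append(nn.Sequential(nn.Linear(dim, hidden), nn.Tanh()))
            dim += hidden                     # outputs are concatenated
        self.out = nn.Linear(dim, 1)

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=-1)))
        return self.out(torch.cat(feats, dim=-1))

# Toy training pass: map drive voltage to displacement for one scan direction
# (forward and backward branches would be trained separately, as in the paper).
model = DenseBlockMLP()
v = torch.linspace(0, 100, 200).unsqueeze(1)
d = 0.1 * v + 0.002 * v ** 1.5                # synthetic monotone curve
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(v / 100), d / d.max())
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```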
Figures:
Figure 1. Experimental setup used to collect data for analyzing the hysteresis of the piezoelectric actuator.
Figure 2. Hysteresis curve between the input voltage and the displacement for the X-axis of the homemade scanner.
Figure 3. Structure of a fully connected layer model.
Figure 4. Mathematical formulation behind an ANN node.
Figure 5. Structure of the variant DenseNet-type fully connected model.
Figure 6. Identification process.
Figure 7. Hysteresis curves fitted with a DenseNet-type neural network for (a) the X-axis and (b) the Y-axis.
Figure 8. Comparison of uncompensated/compensated trajectories with desired trajectories (left) and hysteresis error (right) for different driving voltages: (a) 100 V, (b) 75 V, (c) 50 V, (d) 25 V for the X-axis; (e) 100 V, (f) 75 V, (g) 50 V, (h) 25 V for the Y-axis.
Figure 9. Tapping-mode images of DVD data tracks obtained at 1 Hz with (a) an uncompensated scanner (trace) and (b) an uncompensated scanner (retrace); (c) a line profile for the red and blue lines in (a,b). Tapping-mode images obtained at 1 Hz with (d) a compensated scanner (trace) and (e) a compensated scanner (retrace); (f) a line profile for the red and blue lines in (d,e).
Figure 10. Tapping-mode images of DVD data tracks obtained at 5 Hz with (a) an uncompensated scanner (trace), (b) an uncompensated scanner (retrace), (c) a compensated scanner (trace), and (d) a compensated scanner (retrace), and at 30 Hz with (e) an uncompensated scanner (trace), (f) an uncompensated scanner (retrace), (g) a compensated scanner (trace), and (h) a compensated scanner (retrace).
19 pages, 33004 KiB  
Article
Laboratory Tests of Metrological Characteristics of a Non-Repetitive Low-Cost Mobile Handheld Laser Scanner
by Bartosz Mitka, Przemysław Klapa and Pelagia Gawronek
Sensors 2024, 24(18), 6010; https://doi.org/10.3390/s24186010 - 17 Sep 2024
Cited by 1 | Viewed by 3600
Abstract
The popularity of mobile laser scanning systems as a surveying tool is growing among construction contractors, architects, land surveyors, and urban planners. Their user-friendliness and rapid capture of precise and complete data on places and objects make them serious competitors for traditional surveying approaches. Considering the low cost and constantly improving availability of Mobile Laser Scanning (MLS), mainly handheld surveying tools, the range of measurement applications keeps expanding. We conducted a comprehensive investigation into the quality and accuracy of a point cloud generated by a recently marketed low-cost mobile surveying system, the MandEye MLS. The purpose of the study was to conduct exhaustive laboratory tests to determine the actual metrological characteristics of the device. The test facility was the surveying laboratory of the University of Agriculture in Kraków. The results of the MLS measurements (dynamic and static) were compared with a reference base, a geometric system of reference points in the laboratory, and with a reference point cloud from a higher-class laser scanner, the Leica ScanStation P40 TLS. The authors verified the geometry of the point cloud, its technical parameters, and its data structure, and assessed whether it can be used for surveying and mapping objects by evaluating point cloud density, noise and measurement errors, and the detectability of objects in the cloud. Full article
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1. The laboratory: (a) general view, (b) survey points.
Figure 2. Test laboratory.
Figure 3. The Livox Mid-360 sensor: (a) the sensor; (b) scanning process; (c) point cloud patterns of the Livox Mid-360 accumulated over different integration times; source: Livox Mid-360 User Manual v1.2, 2024.
Figure 4. Measurement range of the scanner; source: https://www.livoxtech.com/mid-360 (accessed on 15 May 2024) [36].
Figure 5. The measuring suite: (a) MandEye MLS, source: www.datcap.eu (accessed on 1 July 2024) [39]; (b) Leica P40 TLS, source: www.leica-geosystems.com (accessed on 5 July 2024) [38].
Figure 6. Basic measurement series, data sampling: (a) the smallest part of the measurement, four measurement lines, the first measured points; (b) a single measurement series.
Figure 7. Sampling resolution: (a) vertical; (b) horizontal.
Figure 8. Noise analysis for an object at (a) 5 m; (b) 15 m.
Figure 9. Noise distribution (blue) in relation to the point cloud (red) for the (a) isometric view of the object, (b) long-section of the object, and (c) cross-section of the object.
Figure 10. MandEye scans: (a) static mode; well-defined points in the space of the object: (b) black and white targets on the walls; (c) black and white targets on the ceiling; (d) reference spheres.
Figure 11. Identification of the geometric center of black and white target T002: (a) Leica ScanStation P40, (b) MandEye, static mode, (c) MandEye, dynamic mode.
Figure 12. Identification of the geometric center of a black and white target: (a) MandEye, static mode, (b) MandEye, dynamic mode.
Figure 13. Identification of the geometric center of a reference sphere: (a) Leica ScanStation P40, (b) MandEye, static mode, (c) MandEye, dynamic mode.
10 pages, 2120 KiB  
Article
Development of a Scanning Protocol for Anthropological Remains: A Preliminary Study
by Matteo Orsi, Roberta Fusco, Alessandra Mazzucchi, Roberto Taglioretti, Maurizio Marinato and Marta Licata
Heritage 2024, 7(9), 4997-5006; https://doi.org/10.3390/heritage7090236 - 10 Sep 2024
Viewed by 858
Abstract
Structured-light scanning is a fast and efficient technique for the acquisition of 3D point clouds. However, extensive, day-to-day application of this class of scanners can be challenging because of the technical know-how needed to validate low-cost instrumentation. The challenge is worth accepting because of the large amount of data that can be collected accurately with the aid of specific technical protocols. This work is a preliminary study for the development of an acquisition protocol for anthropological remains, based on tests in two opposite and extreme contexts: one characterised by a dark environment and one located in an open area and characterised by a very bright environment. The second context showed the influence of sunlight on the acquisition process, resulting in a colourless point cloud. This is a first step towards a technical protocol for the acquisition of anthropological remains, grounded in identifying the limits and problems of the instrument. Full article
(This article belongs to the Section Archaeological Heritage)
Figures:
Figure 1. Structured-light Einscan-Portable Handheld 3D Scanner.
Figure 2. Funerary Unit 1 from the Church of Santa Maria Maggiore in Vercelli.
Figure 3. Tb10 of "Rocca di Monselice" immediately before the digital acquisition.
Figure 4. The point cloud obtained from the acquisition of Funerary Unit 1 from the Church of Santa Maria Maggiore in Vercelli.
Figure 5. The point cloud obtained from the acquisition of Tb10 from "Rocca di Monselice", showing an intermediate level during the recovery of the skeletons.
Scheme 1. Scheme of context 1 and the surrounding area with identified planes (numbered 1 to 5). The blue rectangle indicates the joint area between plane 3 and plane 1 and between plane 3 and plane 2.
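A protocol built around the sunlight failure mode reported above could include an automated flag for colourless acquisitions. The sketch below is a hypothetical check, assuming the point cloud is exported as plain `x y z r g b` rows with 8-bit colour channels; the file name and threshold are illustrative assumptions, not part of the study.

```python
# Minimal sketch of an automated colourlessness check for a structured-light
# acquisition, assuming an N x 6 text export (x y z r g b, 8-bit colours).
# The file name and the variance threshold are hypothetical.
import numpy as np

cloud = np.loadtxt("tb10_scan.xyz")  # hypothetical N x 6 array
rgb = cloud[:, 3:6]

# A washed-out (sunlight-saturated) cloud shows almost no colour variation
# across points, so a very low per-channel standard deviation flags it.
if rgb.std(axis=0).max() < 5.0:
    print("Warning: point cloud appears colourless; re-scan under shade.")
else:
    print("Colour information looks usable.")
```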