Search Results (24)

Search Parameters:
Keywords = human-centered inspection

20 pages, 11084 KiB  
Article
Computer Vision and Augmented Reality for Human-Centered Fatigue Crack Inspection
by Rushil Mojidra, Jian Li, Ali Mohammadkhorasani, Fernando Moreu, Caroline Bennett and William Collins
Sensors 2024, 24(11), 3685; https://doi.org/10.3390/s24113685 - 6 Jun 2024
Viewed by 1218
Abstract
A significant percentage of bridges in the United States are serving beyond their 50-year design life, and many of them are in poor condition, making them vulnerable to fatigue cracks that can result in catastrophic failure. However, current fatigue crack inspection practice based on human vision is time-consuming, labor-intensive, and prone to error. We present a novel human-centered bridge inspection methodology to enhance the efficiency and accuracy of fatigue crack detection by employing advanced technologies including computer vision and augmented reality (AR). In particular, a computer vision-based algorithm is developed to enable near-real-time fatigue crack detection by analyzing structural surface motion in a short video recorded by the moving camera of an AR headset. The approach monitors structural surfaces by tracking feature points and measuring variations in the distances between feature point pairs to recognize the motion pattern associated with crack opening and closing. Measuring distance changes between feature points, as opposed to the displacement changes used in the previous method, eliminates the need for camera motion compensation and enables reliable and computationally efficient fatigue crack detection using the nonstationary AR headset. In addition, an AR environment is created and integrated with the computer vision algorithm. The crack detection results are transmitted to the AR headset worn by the bridge inspector, where they are converted into holograms and anchored on the bridge surface in the 3D real-world environment. The AR environment also provides virtual menus to support human-in-the-loop decision-making to determine optimal crack detection parameters. This human-centered approach with improved visualization and human–machine collaboration aids the inspector in making well-informed decisions in the field in a near-real-time fashion. The proposed crack detection method is comprehensively assessed using two laboratory test setups for both in-plane and out-of-plane fatigue cracks. Finally, using the integrated AR environment, a human-centered bridge inspection is conducted to demonstrate the efficacy and potential of the proposed methodology. Full article
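The distance-tracking idea lends itself to a compact sketch. The following Python/OpenCV outline is illustrative, not the authors' code: Shi–Tomasi corners are tracked across frames with Lucas–Kanade optical flow, and point pairs whose mutual distance oscillates beyond a threshold are flagged as crack candidates. The video path and the 0.5 px threshold are placeholder assumptions.

```python
# Illustrative sketch of distance-based crack detection; not the authors' code.
import cv2
import numpy as np

cap = cv2.VideoCapture("headset_clip.mp4")   # hypothetical AR-headset recording
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner detection (the detector named in the figure captions).
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.01, minDistance=7)

tracks = [p0.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow carries each corner into the new frame.
    # (status flags failed tracks; ignored here for brevity.)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    tracks.append(p1.reshape(-1, 2))
    prev_gray, p0 = gray, p1

traj = np.stack(tracks)                      # shape: (frames, points, 2)

# A crack reveals itself as an oscillating distance between points that
# straddle it; in-plane rigid camera motion leaves pairwise distances
# unchanged, which is why no motion compensation is needed.
threshold = 0.5                              # pixels (placeholder parameter)
n_pts = traj.shape[1]
for i in range(n_pts):
    for j in range(i + 1, n_pts):
        d = np.linalg.norm(traj[:, i, :] - traj[:, j, :], axis=1)
        if d.max() - d.min() > threshold:
            print(f"candidate crack between feature points {i} and {j}")
```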
(This article belongs to the Special Issue Non-destructive Inspection with Sensors)
Show Figures

Figure 1: The proposed human-centered bridge inspection process empowered by AR and computer vision.
Figure 2: Illustration of the proposed crack detection algorithm based on surface distance tracking.
Figure 3: The main virtual menu of the developed AR environment.
Figure 4: Virtual menu with threshold options for human-in-the-loop decision-making.
Figure 5: (a) Test setup for in-plane fatigue crack detection in a C(T) specimen, and (b) close-up view of the test setup.
Figure 6: (a) Test setup for out-of-plane fatigue crack detection in a bridge girder specimen, and (b) close-up view of the web-gap region and the fatigue crack.
Figure 7: (a) Crack detection outcome using a low threshold value; (b) clustering of the detected feature points on a C(T) specimen.
Figure 8: Ground truth labeling: (a) C(T) specimen; (b) bridge girder specimen.
Figure 9: Illustration of different values of IOU.
Figure 10: Crack detection using a 2D video: (a) the initial frame of the 2D video with the selected ROI; (b) all feature points detected by the Shi–Tomasi algorithm; (c–f) in-plane fatigue crack detection results under different threshold values; (g,h) locations and distance–time histories of feature point pairs for cracked and uncracked regions.
Figure 11: Quantification of crack detection in the C(T) specimen using the previous displacement-based method [12]: (a) detected crack by feature points; (b) clustering result; (c) ground truth and the clustering result; and using the proposed distance-based method (this study): (d) detected crack by feature points; (e) clustering result; (f) ground truth and clustering result.
Figure 12: Crack detection using a 3D video: (a) the initial frame of the 3D video with the ROI and all feature points detected by the Shi–Tomasi algorithm; (b–f) out-of-plane fatigue crack detection results of the bridge girder specimen under various threshold values; (g,h) locations and distance–time histories of feature point pairs for cracked and uncracked regions. Note that the brightness of the images in (b–f) is enhanced to highlight the feature points.
Figure 13: Quantification of crack detection in the bridge girder specimen using the previous displacement-based method [12]: (a) detected crack by feature points; (b) clustering result; (c) ground truth and the clustering result; and using the distance-based method (this study): (d) detected crack by feature points; (e) clustering result; (f) ground truth and clustering result.
Figure 14: Experimental setup and hardware used in AR-based fatigue crack inspection.
Figure 15: Demonstration of the integrated AR environment for human-centered bridge inspection: (a) inspector starting the AR software, (b) inspector examining results for zero threshold, and (c) inspector examining the detected crack with the final threshold value.
22 pages, 14947 KiB  
Article
Modular Intelligent Control System in the Pre-Assembly Stage
by Branislav Micieta, Peter Macek, Vladimira Binasova, Luboslav Dulina, Martin Gaso and Jan Zuzik
Electronics 2024, 13(9), 1609; https://doi.org/10.3390/electronics13091609 - 23 Apr 2024
Cited by 1 | Viewed by 999
Abstract
This paper presents a novel approach to developing fully automated intelligent control systems for use within production-based organizations, with a specific focus on advancing research into intelligent production systems. This analysis underscores a prevailing deficiency in control operations preceding assembly, where single-purpose control machines are commonly utilized, thus presenting inherent limitations. Conversely, while accurate multipurpose measurement centers exist, they often fail to deliver comprehensive quality control for manufactured parts due to cost and time constraints associated with the measuring process. The primary aim in this study was to develop an intelligent modular control system capable of overseeing the production of diverse components effectively. The modular intelligent control system is designed to meticulously monitor the quality of each module during the pre-assembly phase. By integrating sophisticated sensors, diagnostic tools, and intelligent control mechanisms, this system ensures precise control over module production processes. It facilitates the monitoring of multiple parameters and critical quality features, while integrated sensors and diagnostic methods promptly identify discrepancies and inaccuracies, enabling the swift diagnosis of issues within specific modules. The system’s intelligent control algorithms optimize production processes and ensure synchronization among individual modules, thereby ensuring consistent quality and performance. Notably, the implementation of this solution reduces inspection time by an average of 40 to 60% compared to manual inspection methods. Moreover, the system enables the comprehensive archiving of measurement data, eliminating the substantial error rates introduced by human involvement in the inspection process. Furthermore, the system enhances overall project efficiency, predictability, and safety, while allowing for rapid adjustments in order to meet standards and requirements. This innovative approach represents a significant advancement in intelligent control systems for use in production organizations, offering substantial benefits in terms of efficiency, accuracy, and adaptability. Full article
Show Figures

Figure 1: (a) Virtual reality tools, (b) technology enabling workplace design, and (c) subsequent robotic workstation for virtual verification.
Figure 2: Elements of an intelligent modular control system.
Figure 3: (a) Elements of the model of control modules; (b) relational matrix of modules designed to control the selected parameters.
Figure 4: (a) Module with translational movement and 2D measurement; (b) module with translational movement and 3D measurement.
Figure 5: (a) Module with translational movement and profilometer; (b) module with rotary movement and 2D measurement.
Figure 6: (a) One of the variants of the layout of the IMCS modules; (b) conceptual schematic design of the spatial solution for the IMCS modules.
Figure 7: InMoSysQC modules.
Figure 8: A universal template for a wooden part.
Figure 9: Examples of selected measurement objects (a–l).
Figure 10: (a) FARO 3D measuring arm; (b) measurement of positioning accuracy with a laser interferometer.
Figure 11: Drawing of the part (the "zero point" of the part is shown in the circle).
Figure 12: Device database structure.
Figure 13: Setting the correct exposure and recognition algorithms.
Figure 14: More detailed determination of the method of measurement.
Figure 15: Measurement on a standard part without corrections.
Figure 16: Reference measurements after the introduction of temperature corrections.
Figure 17: Evaluation of three consecutive measurements of the same part.
Figure 18: Evaluation of three consecutive measurements of the same part.
Figure 19: Algorithm for module selection.
20 pages, 5844 KiB  
Article
Smart Detection System of Safety Hazards in Industry 5.0
by Stavroula Bourou, Apostolos Maniatis, Dimitris Kontopoulos and Panagiotis A. Karkazis
Telecom 2024, 5(1), 1-20; https://doi.org/10.3390/telecom5010001 - 22 Dec 2023
Cited by 3 | Viewed by 2377
Abstract
Safety management is a priority to guarantee human-centered manufacturing processes in the context of Industry 5.0, which aims to realize a safe human–machine environment based on knowledge-driven approaches. The traditional approaches for safety management in the industrial environment include staff training, regular inspections, warning signs, etc. Despite the fact that proactive measures and procedures have exceptional importance in the prevention of safety hazards, human–machine–environment coupling requires more sophisticated approaches able to provide automated, reliable, real-time, cost-effective, and adaptive hazard identification in complex manufacturing processes. In this context, the use of virtual reality (VR) can be exploited not only as a means of human training but also as part of the methodology to generate synthetic datasets for training AI models. In this paper, we propose a flexible and adjustable detection system that aims to enhance safety management in Industry 5.0 manufacturing through real-time monitoring and identification of hazards. The first stage of the system contains the synthetic data generation methodology, aiming to create a synthetic dataset via VR, while the second one concerns the training of AI object detectors for real-time inference. The methodology is evaluated by comparing the performance of models trained on both real-world data from a publicly available dataset and our generated synthetic data. Additionally, through a series of experiments, the optimal ratio of synthetic and real-world images is determined for training the object detector. It has been observed that even with a small amount of real-world data, training a robust AI model is achievable. Finally, we use the proposed methodology to generate a synthetic dataset of four classes as well as to train an AI algorithm for real-time detection. Full article
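As a rough illustration of the ratio experiments described above, the sketch below assembles training sets with a varying fraction of real-world versus synthetic images. The path lists, pool sizes, and the commented-out training call are hypothetical stand-ins, not the authors' pipeline.

```python
# Hedged sketch: sweep the real/synthetic mixing ratio for detector training.
import random

real_imgs = [f"real/{i:05d}.jpg" for i in range(1000)]          # placeholder paths
synth_imgs = [f"synthetic/{i:05d}.jpg" for i in range(10000)]   # placeholder paths

def mixed_training_set(real_paths, synth_paths, real_fraction, size, seed=0):
    """Sample `size` images, `real_fraction` of them drawn from the real pool."""
    rng = random.Random(seed)
    n_real = int(round(real_fraction * size))
    return (rng.sample(real_paths, n_real)
            + rng.sample(synth_paths, size - n_real))

for frac in (0.0, 0.1, 0.25, 0.5, 1.0):      # candidate ratios to compare
    train_set = mixed_training_set(real_imgs, synth_imgs, frac, size=800)
    # train_detector(train_set); then evaluate on a held-out real-world test set
    print(f"real fraction {frac:.2f}: {len(train_set)} training images")
```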
(This article belongs to the Special Issue Digitalization, Information Technology and Social Development)
Show Figures

Figure 1: The proposed flexible and adjustable detection system for real-time hazard detection.
Figure 2: Proposed methodology for synthetic data generation.
Figure 3: (a) Different variations in vest texture; (b) associations between 3D objects in a Blender environment.
Figure 4: Layout of 2 (left) and 8 (right) employees.
Figure 5: Unity 3D space. From left to right: a real-life scene as background noise, foreground 3D objects, and the camera's viewpoint, with colored arrows representing the X, Y, and Z axes in 3D space.
Figure 6: Synthetic image.
Figure 7: Number of instances per class for real-world and synthetic datasets.
Figure 8: Model's performance across different ratios of real-world and synthetic images.
Figure 9: Visual representation of the model's predictions on real-world images of the CHV test set.
16 pages, 2701 KiB  
Article
Feature Importance-Based Backdoor Attack in NSL-KDD
by Jinhyeok Jang, Yoonsoo An, Dowan Kim and Daeseon Choi
Electronics 2023, 12(24), 4953; https://doi.org/10.3390/electronics12244953 - 9 Dec 2023
Cited by 2 | Viewed by 1906
Abstract
In this study, we explore the implications of advancing AI technology for the safety of machine learning models, specifically in decision-making across diverse applications. Our research delves into the domain of network intrusion detection, covering rule-based and anomaly-based detection methods. There is a growing interest in anomaly detection within network intrusion detection systems, accompanied by an increase in adversarial attacks using maliciously crafted examples. However, the vulnerability of intrusion detection systems to backdoor attacks, a form of adversarial attack, is frequently overlooked in untrustworthy environments. This paper proposes a backdoor attack scenario, centering on the "AlertNet" intrusion detection model and utilizing the NSL-KDD dataset, a benchmark widely employed in NIDS research. The attack involves modifying features at the packet level, as network datasets are typically constructed from packets using statistical methods. Evaluation metrics include accuracy, attack success rate, baseline comparisons with clean and random data, and comparisons involving the proposed backdoor. Additionally, the study employs KL-divergence and OneClassSVM for distribution comparisons to demonstrate the backdoor's resilience to manual outlier inspection by a human expert. In conclusion, the paper outlines applications and limitations and emphasizes the direction and importance of research on backdoor attacks in network intrusion detection systems. Full article
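The distribution checks mentioned above can be sketched briefly. The snippet below is a toy illustration rather than the paper's code: it fits a OneClassSVM on a benign feature distribution and computes a histogram KL-divergence against backdoor-modified values. The synthetic data and the trigger value (same_srv_rate = 0) are assumptions.

```python
# Toy sketch of the outlier/distribution checks; all data is synthetic.
import numpy as np
from scipy.stats import entropy
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
benign = rng.beta(2.0, 5.0, size=(1000, 1))   # stand-in benign feature in [0, 1]
backdoor = np.full((100, 1), 0.0)             # assumed trigger: same_srv_rate = 0

# One-class SVM trained on benign data predicts -1 for outliers.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(benign)
flagged = (clf.predict(backdoor) == -1).mean()
print(f"backdoor samples flagged as outliers: {flagged:.1%}")

# KL-divergence between histogram estimates of the two distributions.
p, edges = np.histogram(benign, bins=20, range=(0.0, 1.0), density=True)
q, _ = np.histogram(backdoor, bins=edges, density=True)
print("KL(benign || backdoor) =", entropy(p + 1e-9, q + 1e-9))
```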
(This article belongs to the Special Issue Emerging Trends and Challenges in IoT Networks)
Show Figures

Figure 1: AlertNet model structure.
Figure 2: Flow of a backdoor attack.
Figure 3: The overall process of a decision tree-based backdoor attack.
Figure 4: Top 5 important features of the NSL-KDD dataset.
Figure 5: Various model backdoor attack results.
Figure 6: KL-divergence result: (top) flag: 0; (bottom) flag: 0.4. Training data: benign data; test data: backdoor data.
Figure 7: OneClassSVM result for training data; benign vs. backdoor, feature: same_srv_rate.
Figure 8: Testing backdoor OneClassSVM results: (a) flag: 0; (b) flag: 0.2; (c) flag: 0.8; (d) same_srv_rate: 0.
Figure 9: Outlier detection results.
11 pages, 796 KiB  
Article
A Study on the Monitoring of Toxocara spp. in Various Children’s Play Facilities in the Republic of Korea (2016–2021)
by Young-Hwan Oh, Hae-Jin Sohn, Mi-Yeon Choi, Min-Woo Hyun, Seok-Ho Hong, Ji-Su Lee, Ah-Reum Ryu, Jong-Hyun Kim and Ho-Joon Shin
Healthcare 2023, 11(21), 2839; https://doi.org/10.3390/healthcare11212839 - 27 Oct 2023
Viewed by 1298
Abstract
Toxocara spp. is a zoonotic soil-transmitted parasite that infects canids and felids and causes toxocariasis in humans, migrating to organ systems including the lungs, the ocular system, and the central nervous system. Since Toxocara spp. is usually transmitted through soil, children tend to be more susceptible to infection. In order to monitor contamination with Toxocara spp. in children's play facilities in the Republic of Korea, we investigated 11,429 samples of soil from daycare centers, kindergartens, elementary schools, and parks across the country from January 2016 to December 2021. Since the Environmental Health Act of the Republic of Korea was enacted in March 2008, there have been sporadic reports of contamination by Toxocara spp. in children's activity zones. In this study, soil from children's play facilities in regions across the Republic of Korea was monitored according to the Korean standardized procedure to serve as basic data for preventive management and public health promotion. The national average positive rate was 0.16% (18/11,429), and Seoul showed a higher rate, 0.63% (2/318), than any other region, while Incheon, Daegu, Ulsan, Kangwon-do, Jeollabuk-do, and Jeollanam-do were negative (p < 0.05). The positive rates were as follows: 0.37% (4/1089) in daycare centers, 0.13% (3/2365) in kindergartens, 0.2% (7/4193) in elementary schools, 0.09% (1/1143) in apartments, and 0.14% (3/2198) in parks. In addition, it was confirmed that 0.2% (1/498) of elementary schools and 1.17% (2/171) of parks were re-contaminated among play facilities managed with the establishment of a regular inspection cycle. Consequently, there is an essential need for continuous monitoring of Toxocara spp. contamination and regular education for preschool and school children in order to prevent soil-borne parasite infections. Full article
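As a quick plausibility check on the regional comparison, the counts quoted in the abstract can be placed in a 2x2 table and tested. This is an illustrative calculation, not the authors' statistical method, and with only two positives in Seoul the result should be read cautiously.

```python
# Illustrative check of the Seoul-vs-rest comparison using abstract counts.
from scipy.stats import chi2_contingency

seoul_pos, seoul_total = 2, 318
other_pos, other_total = 18 - 2, 11429 - 318

table = [[seoul_pos, seoul_total - seoul_pos],
         [other_pos, other_total - other_pos]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"national rate = {18 / 11429:.2%}, Seoul rate = {2 / 318:.2%}")
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # small cell counts: interpret with care
```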
Show Figures

Figure 1: The map of the administrative divisions of the Republic of Korea shows one Seoul metropolitan government, one Sejong special self-governing city, six metropolitan cities, eight provincial areas, and one Jeju special self-governing province where the sampled play facilities were located.
22 pages, 7395 KiB  
Article
Power Line Extraction and Tree Risk Detection Based on Airborne LiDAR
by Siyuan Xi, Zhaojiang Zhang, Yufen Niu, Huirong Li and Qiang Zhang
Sensors 2023, 23(19), 8233; https://doi.org/10.3390/s23198233 - 3 Oct 2023
Cited by 1 | Viewed by 1656
Abstract
Transmission lines are the basis of human production and activities. In order to ensure their safe operation, it is essential to regularly conduct transmission line inspections and identify tree risk in a timely manner. In this paper, a power line extraction and tree risk detection method is proposed. Firstly, the height difference and local dimension feature probability model are used to extract power line points, and then the Cloth Simulation Filter algorithm and neighborhood sharing method are creatively introduced to distinguish conductors and ground wires. Secondly, conductor reconstruction is realized by the approach of the linear–catenary model, and numerous non-risk points are excluded by constructing the tree risk point candidate area centered on the conductor’s reconstruction curve. Finally, the grading strategy for the safety distance calculation is used to detect the tree risk points. The experimental results show that the precision, recall, and F-score of the conductors (ground wires) classification exceed 98.05% (97.98%), 99.00% (99.14%), and 98.58% (98.56%), respectively, which presents a high classification accuracy. The Root-Mean-Square Error, Maximum Error, and Minimum Error of the conductor’s reconstruction are better than 3.67 cm, 7.13 cm, and 2.64 cm, respectively, and the Mean Absolute Error of the safety distance calculation is better than 6.47 cm, proving the effectiveness and rationality of the proposed tree risk points detection method. Full article
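The linear–catenary reconstruction can be illustrated compactly: project classified conductor points onto the X-Y plane for a line fit and onto the X-Z plane for a catenary fit. The sketch below uses synthetic points and assumed parameters; it follows the model named in the abstract, not the paper's implementation.

```python
# Sketch of linear-catenary conductor reconstruction on synthetic points.
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, z0):
    """z = z0 + a * (cosh((x - x0) / a) - 1): the catenary in the X-Z plane."""
    return z0 + a * (np.cosh((x - x0) / a) - 1.0)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 200.0, 400)                      # along-span coordinate (m)
y = 0.05 * x + 3.0 + rng.normal(0, 0.02, x.size)      # straight line in X-Y
z = catenary(x, 800.0, 100.0, 20.0) + rng.normal(0, 0.03, x.size)

slope, intercept = np.polyfit(x, y, 1)                # line fit in the X-Y plane
params, _ = curve_fit(catenary, x, z, p0=[500.0, x.mean(), z.min()])

rmse = np.sqrt(np.mean((catenary(x, *params) - z) ** 2))
print(f"reconstruction RMSE: {rmse * 100:.2f} cm")    # paper reports <= 3.67 cm
```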
(This article belongs to the Section Radar Sensors)
Show Figures

Figure 1: Overview of the method.
Figure 2: Spatial distribution of transmission lines: (a) features of the vertical spatial distribution of transmission lines; (b) differences in the number of vegetation and power line points in the same area. Different colors represent different elevations.
Figure 3: Point cloud cutting. The pink circles represent the location of the pole tower, and the width of the arrows represents the width of the cut range.
Figure 4: Point cloud elevation normalization: (a) original point cloud elevation displayed via the elevation rendering method; (b) normalized point cloud elevation displayed via the elevation rendering method.
Figure 5: Number of points in different elevation ladders.
Figure 6: Coarse extraction of power lines: (a) coarse extraction effect on normalized point clouds; (b) coarse extraction of the inverse-normalized point cloud.
Figure 7: The relationship between eigenvalue size and the local dimension features of the point cloud. The black dots represent the point cloud, and the arrows indicate the directions of the feature vectors.
Figure 8: One-dimension probability of the point cloud.
Figure 9: Refined extraction of power lines.
Figure 10: Schematic of CSF.
Figure 11: Distinction between conductors and ground wires: (a) coarse extraction of ground wires based on CSF; (b) refined extraction of ground wires based on neighborhood sharing.
Figure 12: Diagram of neighborhood sharing. The red dots belong to NN(Q_i), the blue dots belong to NN(Q_j), and the black dots are common points.
Figure 13: Conductor morphology analysis. In the X-O-Y projection plane, the red line is a straight line, and in the X-O-Z projection plane, the red line is a catenary.
Figure 14: 3D reconstruction of conductors. Existing methods [11] combined with manual classification are used to extract the point clouds of pylons.
Figure 15: Construction of the tree risk candidate area.
Figure 16: Detection results for tree risk points.
Figure 17: Diagram of slices.
Figure 18: Extraction results for ground wires and conductors: classification effect of a ground wire and conductor of (a) a 110 kV, (b) a 220 kV, and (c) a 500 kV transmission line. The points in the red circles are misclassified.
Figure 19: Reconstruction effect of conductors: matching effect of the reconstruction curve and conductor points for (a) a 110 kV, (b) a 220 kV, and (c) a 500 kV transmission line.
Figure 20: Detection of tree risk points: (a) tree risk points of 110 kV transmission lines; (b) tree risk points of 500 kV transmission lines. The rectangles show the locations of the tree risk points, and the numbers represent the counts of tree risk.
15 pages, 9380 KiB  
Technical Note
Automatic Identification of Earth Rock Embankment Piping Hazards in Small and Medium Rivers Based on UAV Thermal Infrared and Visible Images
by Renzhi Li, Zhonggen Wang, Hongquan Sun, Shugui Zhou, Yong Liu and Jinping Liu
Remote Sens. 2023, 15(18), 4492; https://doi.org/10.3390/rs15184492 - 12 Sep 2023
Cited by 1 | Viewed by 1281
Abstract
Piping is a major factor contributing to river embankment breaches, particularly during flood season in small and medium rivers. To reduce the costs of earth rock embankment inspections, avoid the need for human inspectors, and enable the quick and widespread detection of piping hazards, a UAV image-acquisition function was introduced in this study. Through the collection and analysis of thermal infrared and visible (TIR & V) images from several piping field simulation experiments, temperature increases and diffusion centered on the piping point were discovered, so an automatic algorithm for piping identification was developed to capture this phenomenon. To verify its identification capabilities, the automatic identification algorithm was applied to detect potential piping hazards during the 2022 flooding of the Dingjialiu River, Liaoning, China. The algorithm successfully identified all five piping hazard locations, demonstrating its potential for detecting embankment piping. Full article
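The core signal, a warm anomaly diffusing outward from the piping point, can be caricatured in a few lines: threshold a thermal frame against its background level and report connected warm regions. The frame, offsets, and sizes below are invented for illustration; the published algorithm is more involved and also fuses visible-image classification.

```python
# Minimal hotspot sketch for a thermal infrared frame (synthetic data).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
thermal = rng.normal(15.0, 0.3, (480, 640))   # background water/soil, deg C
thermal[200:214, 300:314] += 3.0              # injected warm "piping" anomaly

background = np.median(thermal)
mask = thermal > background + 2.0             # assumed anomaly threshold (K)

labels, count = ndimage.label(mask)           # connected warm regions
for sl in ndimage.find_objects(labels):
    cy = (sl[0].start + sl[0].stop) // 2
    cx = (sl[1].start + sl[1].stop) // 2
    print(f"possible piping outlet near pixel ({cx}, {cy})")
```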
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
Show Figures

Figure 1: The breached embankment and manual inspection in small and medium rivers.
Figure 2: Flowchart for automatic identification of earth rock embankment piping hazards: No. 1 simulation experiment, No. 2 data processing, No. 3 identification algorithm, No. 4 evaluation and improvement, No. 5 deployment and application, No. 6 display system, No. 7 alarm.
Figure 3: Technical route of earth rock embankment piping identification.
Figure 4: Confusion matrix for evaluation of piping image identification.
Figure 5: (A) Submersible pump and water hose; (B) gasoline engine generator; (C) probe thermometer.
Figure 6: UAV and thermal infrared and visible sensors.
Figure 7: Artificial simulated piping outlet device.
Figure 8: Temperature field of the piping outlet. (A1,A2) Images captured on a sunny day; (B1,B2) images captured after rain.
Figure 9: Classification and interpretation of visible images.
Figure 10: Registration and overlaying of thermal infrared and visible images.
Figure 11: (A1–E1) Original visible images; (A2–E2) original thermal infrared images; (A3–E3) classification results obtained with the RF method; (A4–E4) identification results obtained with the automatic piping identification algorithm in the field test. In the classification maps (A3–E3), dark green represents farmland, light green represents grassland, dark blue represents waterbodies, light blue represents wet soil, and yellow represents soil. In the identification result maps (A4–E4), black boxes represent the piping areas.
20 pages, 9771 KiB  
Article
Measurement Method of Interpupillary Distance and Pupil Height Based on Ensemble of Regression Trees and the BlendMask Algorithm
by Zhenkai Zhang, Huiyu Xiang, Dongyang Li and Chongjie Leng
Appl. Sci. 2023, 13(15), 8628; https://doi.org/10.3390/app13158628 - 26 Jul 2023
Cited by 1 | Viewed by 2899
Abstract
Measuring interpupillary distance and pupil height is a crucial step in the process of optometry. However, existing methods suffer from low accuracy, high cost, a lack of portability, and limited research on measuring both parameters simultaneously. To overcome these challenges, we propose a method that combines ensemble regression trees (ERT) with the BlendMask algorithm to accurately measure both interpupillary distance and pupil height. First, we train an ERT-based face keypoint model to locate the pupils and calculate their center coordinates. Then, we develop an eyeglass dataset and train a BlendMask model to obtain the coordinates of the lowest point of the lenses. Finally, we calculate the numerical values of interpupillary distance and pupil height based on their respective definitions. The experimental results demonstrate that the proposed method can accurately measure interpupillary distance (IPD) and pupil height, and the calculated IPD and pupil height values are in good agreement with the measurements obtained by an auto-refractometer. By combining the advantages of the two models, our method overcomes the limitations of traditional methods, offering high measurement accuracy, low cost, and strong portability. Moreover, this method enables fast and automatic measurement, minimizing operation time and reducing human error. Therefore, it possesses broad prospects for application, particularly in the fields of eyeglass customization and vision inspection. Full article
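The pupil-localization step can be sketched from the 68-point landmark convention shown in Figure 3, in which points 36-41 and 42-47 outline the two eyes. Below, an ERT-style landmark array is assumed to be given, and the pixel-to-millimetre calibration constant is likewise a hypothetical stand-in for the paper's procedure.

```python
# Sketch: pupil centers and IPD from 68 facial landmarks (landmarks assumed given).
import numpy as np

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 640, size=(68, 2))   # placeholder ERT detector output

left_eye = landmarks[36:42]                     # landmark indices 36..41
right_eye = landmarks[42:48]                    # landmark indices 42..47
left_pupil = left_eye.mean(axis=0)              # eye-contour centroid as pupil proxy
right_pupil = right_eye.mean(axis=0)

ipd_px = np.linalg.norm(right_pupil - left_pupil)

mm_per_px = 140.0 / 520.0   # assumed calibration: a 140 mm reference spans 520 px
print(f"IPD ~ {ipd_px * mm_per_px:.1f} mm")
```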
Show Figures

Figure 1: Definition and manual measurement: (a) definition of pupillary distance and height; (b) manual measurement using a ruler.
Figure 2: Timeline for notable techniques in instance segmentation [30].
Figure 3: Sixty-eight key points of the human face.
Figure 4: The algorithm for learning r_t in the cascade.
Figure 5: BlendMask network structure.
Figure 6: DeepLabV3+ structure.
Figure 7: Head Euler angles diagram.
Figure 8: Flowchart of the head pose estimation algorithm.
Figure 9: Program flow chart.
Figure 10: The results of keypoint localization.
Figure 11: Cases of crossed eyes: (a) completely oblique; (b) partially oblique.
Figure 12: The loss value changes with the training steps.
Figure 13: The results of glasses lens detection.
Figure 14: Fifteen pose angles of an object.
Figure 15: Line graph comparing the three measurement methods.
Figure 16: The relative errors of the values calculated by our model and those measured by a ruler, as compared to the values measured by an auto-refractometer.
33 pages, 1617 KiB  
Review
COBOT Applications—Recent Advances and Challenges
by Claudio Taesi, Francesco Aggogeri and Nicola Pellegrini
Robotics 2023, 12(3), 79; https://doi.org/10.3390/robotics12030079 - 4 Jun 2023
Cited by 24 | Viewed by 10757
Abstract
This study provides a structured literature review of recent COllaborative roBOT (COBOT) applications in industrial and service contexts. Several papers and research studies were selected and analyzed, observing the collaborative robot interactions, the control technologies, and the market impact. This review focuses on stationary COBOTs that may guarantee flexible applications, resource efficiency, and worker safety from a fixed location. COBOTs offer new opportunities to develop and integrate control techniques, environmental recognition of time-variant object locations, and user-friendly programming to interact safely with humans. Artificial Intelligence (AI) and machine learning systems enable and boost the COBOT's ability to perceive its surroundings. A deep analysis of different applications of COBOTs and their properties was carried out, spanning industrial assembly, material handling, personal service assistance, security and inspection, medical care, and supernumerary tasks. Among the observations, the analysis outlined the importance and the dependencies of the control interfaces, intention recognition, programming techniques, and virtual reality solutions. A market analysis of 195 models was developed, focusing on the physical characteristics and key features to demonstrate the relevance and growing interest in this field, highlighting the potential of COBOT adoption based on (i) degrees of freedom, (ii) reach and payload, (iii) accuracy, and (iv) energy consumption vs. tool center point velocity. Finally, a discussion of the advantages and limits is summarized, considering anthropomorphic robot applications for further investigations. Full article
Show Figures

Figure 1: COBOT collaboration with operator: safety-rated monitoring stop (a); hand guiding (b); speed and separation monitoring (c); force and torque limitation (d).
Figure 2: COBOT scatter plot of payload and reach: Anthropomorphic (a); Cartesian, SCARA, and Torso (b).
Figure 3: COBOT scatter plot of accuracy and payload: Anthropomorphic (a); Cartesian, SCARA, and Torso (b).
Figure 4: COBOT box plot of accuracy for Anthropomorphic and SCARA (minimum, Q1, median, Q3, maximum, and outlier circle).
Figure 5: COBOT scatter plot of accuracy and reach: Anthropomorphic (a); Cartesian, SCARA, and Torso (b).
Figure 6: COBOT scatter plot of power consumption vs. tool center point velocity for the Anthropomorphic architecture.
42 pages, 604 KiB  
Review
The Importance of the Slaughterhouse in Surveilling Animal and Public Health: A Systematic Review
by Juan García-Díez, Sónia Saraiva, Dina Moura, Luca Grispoldi, Beniamino Terzo Cenci-Goga and Cristina Saraiva
Vet. Sci. 2023, 10(2), 167; https://doi.org/10.3390/vetsci10020167 - 20 Feb 2023
Cited by 20 | Viewed by 8917
Abstract
From the point of view of public health, the objective of the slaughterhouse is to guarantee the safety of meat, and meat inspection represents an essential tool to control animal diseases and safeguard public health. The slaughterhouse can be used as a surveillance center for livestock diseases. However, other aspects related to animal and human health, such as epidemiology and disease control in primary production, control of animal welfare on the farm, surveillance of zoonotic agents responsible for food poisoning, and surveillance and control of antimicrobial resistance, can also be monitored. These controls should not be seen as a last defensive barrier but rather as a complement to the controls carried out on the farm. Regarding the control of diseases in livestock, scientific research is scarce and outdated, not taking advantage of the slaughterhouse's potential for disease control. Animal welfare in primary production and during transport can be monitored through ante-mortem and post-mortem inspection at the slaughterhouse, providing valuable individual data on animal welfare. Surveillance and research regarding antimicrobial resistance (AMR) at slaughterhouses are scarce, mainly in cattle, sheep, and goats; however, most of the zoonotic pathogens studied are sensitive to the antibiotics tested. Moreover, the prevalence of zoonotic and foodborne agents at the slaughterhouse seems to be low, but a lack of harmonization in terms of control and communication may lead to underestimation of their real prevalence. Full article
16 pages, 2843 KiB  
Article
Respondent Dynamic Attention to Streetscape Composition in Nanjing, China
by Zhi Yue, Ying Zhong and Zhouxiao Cui
Sustainability 2022, 14(22), 15209; https://doi.org/10.3390/su142215209 - 16 Nov 2022
Cited by 2 | Viewed by 1519
Abstract
Scholars are interested in understanding human responses and perceptions concerning the configuration of streetscape environments that serve multiple functions. However, drivers' visual attention to the streetscape has seldom been studied dynamically in multi-modal settings. By employing eye-tracking and semantic segmentation, visual attention to partitions, objects, and patterns was inspected in a per-second count along three typical roadways in Nanjing, China. In our study of 28 participants, it was found that people are likely to focus on the frame center (p-value < 0.005) in all modes of transportation. Roads and buildings are constantly observed along the roadway (p-value < 0.005), while smaller transportation objects across multi-modal conditions are noticed more in per-area counts (p-value < 0.025). In addition, vehicles receive more attention in a higher-speed driving lane (p-values < 0.005), while greenery and humans attract more attention in a slower lane (p-values < 0.005). The results indicate that previous visual engagement results should be reconsidered on several points, and that the risk of distraction from non-traffic-related elements could be overestimated. The potential of the road surface for integrating safety and information provision has been ignored in current studies. This study showed that greenery and other functional elements will not distract users in driving lanes; decreasing the calculation burden to two-ninths is possible in smart driving. These results could be helpful for future sustainable cities. Full article
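A per-area dwell metric of the kind used above can be written down directly: accumulate fixation durations per semantic class, then divide by each class's share of the frame so small objects (e.g., traffic signs) are not swamped by large ones. The segmentation map and fixation log below are invented stand-ins, not the authors' pipeline.

```python
# Sketch of a per-area dwell-time metric over a semantic segmentation map.
import numpy as np

rng = np.random.default_rng(0)
seg = rng.integers(0, 9, size=(720, 1280))        # per-pixel class ids, one frame
fixations = [(400, 360, 0.25), (900, 500, 0.40)]  # (x, y, duration in seconds)

dwell = np.zeros(9)
for x, y, dur in fixations:
    dwell[seg[y, x]] += dur                       # time attributed to fixated class

area_share = np.bincount(seg.ravel(), minlength=9) / seg.size
per_area = np.divide(dwell, area_share,
                     out=np.zeros_like(dwell), where=area_share > 0)
print(per_area)                                   # dwell time weighted by area share
```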
Show Figures

Figure 1: Map of road samples. (The base map is from Tianditu Map. Twelve roads were selected for preliminary surveys, and three with similar conditions were picked as samples.)
Figure 2: Example of road segmentation.
Figure 3: Example of the nine-rectangle grid of one-frame semantic segmentation. BLG—bottom left grid; BCG—bottom center grid; BRG—bottom right grid; MLG—middle left grid; MCG—middle center grid; MRG—middle right grid; TLG—top left grid; TCG—top center grid; TRG—top right grid.
Figure 4: Dwell-time percentage of nine objects. (The blue bullets show the dwelling period in the closed section; the orange, in the open section.)
Figure 5: Dwell-time per-area percentage of nine objects. (The blue bullets show the dwelling period in the closed section; the orange, in the open section.)
Figure 6: The common first dwell-area, second dwell-area, and first dwell-object of the subjects on the three roads. FDA—first dwell area; SDA—second dwell area; FDO—first dwell object; C section—closed section; O section—open section; R section—road crossing section.
13 pages, 3941 KiB  
Article
Domain Feature Mapping with YOLOv7 for Automated Edge-Based Pallet Racking Inspections
by Muhammad Hussain, Hussain Al-Aqrabi, Muhammad Munawar, Richard Hill and Tariq Alsboui
Sensors 2022, 22(18), 6927; https://doi.org/10.3390/s22186927 - 13 Sep 2022
Cited by 55 | Viewed by 7029
Abstract
Pallet racking is an essential element within warehouses, distribution centers, and manufacturing facilities. To guarantee its safe operation as well as stock protection and personnel safety, pallet racking requires continuous inspections and timely maintenance in the case of damage being discovered. Conventionally, a rack inspection is a manual quality inspection process completed by certified inspectors. The manual process results in operational down-time as well as inspection and certification costs and undiscovered damage due to human error. Inspired by the trend toward smart industrial operations, we present a computer vision-based autonomous rack inspection framework centered around YOLOv7 architecture. Additionally, we propose a domain variance modeling mechanism for addressing the issue of data scarcity through the generation of representative data samples. Our proposed framework achieved a mean average precision of 91.1%. Full article
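The domain-specific augmentations named in the figure captions (intensity changes, Gaussian blur, image shifts) can be sketched as below; the parameter ranges are assumptions chosen for illustration, not the paper's settings.

```python
# Sketch of domain-specific augmentations for racking images (assumed ranges).
import cv2
import numpy as np

def augment(img, rng):
    # Intensity variation (lighter/darker warehouse lighting).
    gain = rng.uniform(0.6, 1.4)
    out = np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    # Occasional Gaussian blur (camera motion on a moving vehicle).
    if rng.random() < 0.5:
        out = cv2.GaussianBlur(out, (5, 5), sigmaX=1.0)
    # Small shift, mimicking imperfect device placement.
    tx, ty = rng.integers(-20, 21, size=2)
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv2.warpAffine(out, M, (out.shape[1], out.shape[0]))

rng = np.random.default_rng(42)
image = cv2.imread("racking_sample.jpg")   # hypothetical captured frame
augmented = augment(image, rng)
```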
Show Figures

Figure 1: Data procurement strategy.
Figure 2: Data annotation strategy: (A) higher occlusion; (B) small occlusion.
Figure 3: Variance modelling strategy.
Figure 4: Strategy of device placement.
Figure 5: Data scaling: (A) shifted image; (B) implementing Gaussian blur.
Figure 6: Domain-specific augmentations: (A) high intensity; (B) low intensity.
Figure 7: Proposed system architecture.
Figure 8: Precision–recall curve for trained YOLOv7.
Figure 9: Data samples from [1].
17 pages, 8167 KiB  
Article
Moving toward Smart Manufacturing with an Autonomous Pallet Racking Inspection System Based on MobileNetV2
by Muhammad Hussain, Tianhua Chen and Richard Hill
J. Manuf. Mater. Process. 2022, 6(4), 75; https://doi.org/10.3390/jmmp6040075 - 8 Jul 2022
Cited by 22 | Viewed by 4265
Abstract
Pallet racking is a fundamental component within the manufacturing, storage, and distribution centers of companies around the world. It requires continuous inspection and maintenance to guarantee the protection of stock and the safety of personnel. At present, racking inspection is manually carried out by certified inspectors, leading to operational down-time, inspection costs, and missed damage due to human error. As companies transition toward smart manufacturing, we present an autonomous racking inspection mechanism using a MobileNetV2-SSD architecture. We propose a solution that is affixed to the adjustable cage of a forklift truck, enabling adequate coverage of racking in the immediate vicinity. Our proposed approach leads to a classifier that is optimized for deployment onto edge devices, providing real-time alerts of damage to forklift drivers, with a mean average precision of 92.7%. Full article
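Since both racking papers report mean average precision, a compact reminder of how AP is computed may help: rank detections by confidence, accumulate precision and recall, make precision monotone, and integrate over recall (mAP averages this over classes). The toy detections below are illustrative only, not the authors' evaluation code.

```python
# Illustrative average-precision computation over toy detections.
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP from confidences, per-detection TP flags, and ground-truth count."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # All-point interpolation: precision made monotonically non-increasing.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([precision[0]], precision))
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

print(average_precision(scores=[0.9, 0.8, 0.7, 0.6, 0.5],
                        is_tp=[1, 1, 0, 1, 0], n_gt=4))
```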
Show Figures

Figure 1: Abstract solution comparison.
Figure 2: Data procurement strategy.
Figure 3: Data preprocessing.
Figure 4: Considerations for bounding boxes: (A) tightly bound (correct); (B) loosely bound (incorrect); (C) occluded damage (correct); (D) occluded damage (incorrect).
Figure 5: Domain-specific augmentations: (A) original; (B) random rotation; (C) brightness adjustment (darker); (D) brightness adjustment (lighter); (E) Gaussian blur.
Figure 6: SSD coupled with MobileNetV2 as backbone.
Figure 7: Proposed system architecture.
Figure 8: Proposed device placement and resultant coverage.
Figure 9: Inference comparison between the two architectures: (A) Fahimeh Farahnakian; (B) predictions on our dataset.
17 pages, 11582 KiB  
Article
Leveraging LiDAR Intensity to Evaluate Roadway Pavement Markings
by Justin A. Mahlberg, Yi-Ting Cheng, Darcy M. Bullock and Ayman Habib
Future Transp. 2021, 1(3), 720-736; https://doi.org/10.3390/futuretransp1030039 - 1 Dec 2021
Cited by 7 | Viewed by 4086
Abstract
The United States has over 8.8 million lane miles nationwide, which require regular maintenance and evaluations of sign retroreflectivity, pavement markings, and other pavement information. Pavement markings convey crucial information to drivers as well as connected and autonomous vehicles for lane delineations. Current means of evaluation are by human inspection or semi-automated dedicated vehicles, which often capture one to two pavement lines at a time. Mobile LiDAR is also frequently used by agencies to map signs and infrastructure as well as assess pavement conditions and drainage profiles. This paper presents a case study where over 70 miles of US-52 and US-41 in Indiana were assessed, utilizing both a mobile retroreflectometer and a LiDAR mobile mapping system. Comparing the intensity data from LiDAR data and the retroreflective readings, there was a linear correlation for right edge pavement markings with an R2 of 0.87 and for the center skip line a linear correlation with an R2 of 0.63. The p-values were 0.000 and 0.000, respectively. Although there are no published standards for using LiDAR to evaluate pavement marking retroreflectivity, these results suggest that mobile LiDAR is a viable tool for network level monitoring of retroreflectivity. Full article
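The correlation analysis quoted above is ordinary least-squares regression; for readers who want to reproduce the style of result, the sketch below regresses synthetic paired readings and reports R² and the p-value. The arrays stand in for co-located retroreflectometer and LiDAR intensity samples.

```python
# Sketch: linear correlation of LiDAR intensity vs. retroreflectivity (synthetic).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
intensity = rng.uniform(10.0, 60.0, 200)                 # LiDAR intensity samples
retro = 4.0 * intensity + 30.0 + rng.normal(0, 15, 200)  # paired retroreflectivity

fit = linregress(intensity, retro)
print(f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")
# The paper reports R^2 = 0.87 (right edge line) and 0.63 (center skip line).
```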
Show Figures

Figure 1: Data collection route and equipment: (a) Indiana US-52 westbound and US-41 northbound evaluation; (b) the same evaluation concentrated on the evaluation area; (c) data collection convoy and equipment.
Figure 2: LiDAR and retroreflective data collection equipment and configuration: (a) Purdue mobile mapping system for LiDAR data collection; (b) Road Vista LLG7 mobile retroreflectometer for retroreflective data collection.
Figure 3: Lane marking extraction strategies: (a) road surface block; (b) hypothesized lane markings; (c) lane marking points after the scan line-based outlier removal; (d) lane marking segments after density-based spatial clustering; (e) lane marking segments after geometry-based outlier removal; (f) lane marking segments after local and global refinements.
Figure 4: Northbound center skip line retroreflective and LiDAR intensity readings: (a) infrared retroreflective values; (b) northbound standard retroreflective values; (c) LiDAR intensity values.
Figure 5: Northbound right edge line retroreflective and LiDAR intensity readings: (a) infrared retroreflective values; (b) standard retroreflective values; (c) LiDAR intensity values.
Figure 6: US-41 northbound intensity validation points: (a) mile marker 33.84; (b) mile marker 45.64; (c) mile marker 48.22; (d) mile marker 60.76.
Figure 7: US-52 and US-41 northbound LiDAR point clouds (colored by intensity): (a) mile marker 33.84; (b) mile marker 45.64; (c) mile marker 48.22; (d) mile marker 60.76.
Figure 8: US-52 and US-41 northbound LiDAR point clouds (colored by intensity) and retroreflective reading points: (a) intersection (no lane marking) area; (b) complete lane marking area; corresponding images for (c) location A and (d) location B.
Figure 9: Center skip line linear correlation of LiDAR intensity and standard retroreflectivity: (a) US-52 and US-41 combined; (b) US-41 only.
Figure 10: Qualitative comparison of US-52 and US-41: (a) US-52 pavement containing crack sealant on the roadway surface; (b) US-41 pavement without crack sealant on the roadway surface.
Figure 11: Right edge line linear correlation of LiDAR intensity and standard retroreflectivity: (a) US-52 and US-41 combined; (b) US-41 only.
Figure 12: Center skip line linear correlation of LiDAR intensity and infrared retroreflectivity: (a) US-52 and US-41 combined; (b) US-41 only.
Figure 13: Right edge line linear correlation of LiDAR intensity and infrared retroreflectivity: (a) US-52 and US-41 combined; (b) US-41 only.
Figure 14: Purdue mobile mapping system sensor comparison: (a) sensor locations; (b) front sensors comparison; (c) rear sensors comparison.
Figure 15: Linear correlation between different sensors on the Purdue mobile mapping system: (a) rear left HDL-32E vs. front left HDL-32E; (b) front right VLP-16 Hi-Res vs. rear right HDL-32E; (c) front right VLP-16 Hi-Res vs. front left HDL-32E; (d) rear right HDL-32E vs. rear left HDL-32E.
Figure 16: Pilot mobile mapping system for agency deployment: (a) pilot mobile mapping system; (b) components of the pilot mobile mapping system; (c) utilization of the single-sensor pilot mobile mapping system.
Figure 17: Pilot mobile mapping system data acquisition: (a) intensity profile from the single-sensor pilot mobile mapping system; (b) LiDAR cross-section; (c) LiDAR longitudinal pavement marking profile.
14 pages, 1800 KiB  
Article
Genetic Evidence of the Black Death in the Abbey of San Leonardo (Apulia Region, Italy): Tracing the Cause of Death in Two Individuals Buried with Coins
by Donato Antonio Raele, Ginevra Panzarino, Giuseppe Sarcinelli, Maria Assunta Cafiero, Anna Maria Tunzi and Elena Dellù
Pathogens 2021, 10(11), 1354; https://doi.org/10.3390/pathogens10111354 - 20 Oct 2021
Cited by 1 | Viewed by 3779
Abstract
The Abbey of San Leonardo in Siponto (Apulia, Southern Italy) was an important religious and medical center during the Middle Ages. It was a crossroads for pilgrims heading along the Via Francigena to the Sanctuary of Monte Sant'Angelo and for merchants passing through the harbor of Manfredonia. A recent excavation by the Soprintendenza Archeologica della Puglia investigated a portion of the related cemetery, confirming its chronology to be between the end of the 13th and the beginning of the 14th century. Two single graves preserved individuals accompanied by numerous coins dating back to the 14th century, hidden in clothes and in a bag tied to the waist. The human remains of the individuals were analyzed in the Laboratorio di Antropologia Fisica of the Soprintendenza ABAP della città metropolitana di Bari. Three teeth from each individual were collected and sent to the Istituto Zooprofilattico Sperimentale di Puglia e Basilicata to study infectious diseases such as malaria, plague, tuberculosis, epidemic typhus, and Maltese fever (brucellosis), potentially related to the lack of inspection of the bodies during burial procedures. DNA extracted from the six collected teeth and two additional unrelated human teeth (negative controls) was analyzed using PCR to verify the presence of human DNA (β-globulin) and of pathogens such as Plasmodium spp., Yersinia pestis, Mycobacterium spp., Rickettsia spp., and Brucella spp. The nucleotide sequence of each amplicon was determined to confirm the results. Human DNA was successfully amplified from all eight dental extracts, and two different genes of Y. pestis were amplified and sequenced in 4 out of the 6 teeth. Molecular analyses ascertained that the individuals buried in San Leonardo were victims of the Black Death (1347–1353), and the data confirmed the lack of inspection of the corpses despite the presence of numerous coins. This study provides the first molecular evidence of Southern Italy's involvement in the second wave of the plague pandemic. Full article
(This article belongs to the Special Issue Molecular Diagnostics for Infectious Diseases)
Show Figures

Figure 1: Abbey of San Leonardo: the church (green), the hospital (purple), and the monastery (yellow); the area of the archaeological investigation (red); the area where the cemetery was believed to be located [12] (crosses); and the area where the cemetery continues to develop (oblique lines). [Photo: Google Earth 2019.]
Figure 2: Abbey of San Leonardo: details of the burial site and coins with the human remains.
Figure 3: Geographical distribution of historical, molecular, and immunological data related to Black Death reports in Italy.