Search Results (2,910)

Search Parameters:
Keywords = multi-sensor systems

16 pages, 8192 KiB  
Perspective
Embedding AI-Enabled Data Infrastructures for Sustainability in Agri-Food: Soft-Fruit and Brewery Use Case Perspectives
by Milan Markovic, Andy Li, Tewodros Alemu Ayall, Nicholas J. Watson, Alexander L. Bowler, Mel Woods, Peter Edwards, Rachael Ramsey, Matthew Beddows, Matthias Kuhnert and Georgios Leontidis
Sensors 2024, 24(22), 7327; https://doi.org/10.3390/s24227327 - 16 Nov 2024
Viewed by 384
Abstract
The agri-food sector is undergoing a comprehensive transformation as it transitions towards net zero. To achieve this, fundamental changes and innovations are required, including changes in how food is produced and delivered to customers, new technologies, data and physical infrastructures, and algorithmic advancements. In this paper, we explore the opportunities and challenges of deploying AI-based data infrastructures for sustainability in the agri-food sector by focusing on two case studies: soft-fruit production and brewery operations. We investigate the potential benefits of incorporating Internet of Things (IoT) sensors and AI technologies for improving the use of resources, reducing carbon footprints, and enhancing decision-making. We identify user engagement with new technologies as a key challenge, together with issues in data quality arising from environmental volatility, difficulties in generalising models, including those designed for carbon calculators, and socio-technical barriers to adoption. We highlight and advocate for user engagement, more granular availability of sensor, production, and emissions data, and more transparent carbon footprint calculations. Our proposed future directions include semantic data integration to enhance interoperability, the generation of synthetic data to overcome the lack of real-world farm data, and multi-objective optimisation systems to model the competing interests between yield and sustainability goals. In general, we argue that AI is not a silver bullet for net zero challenges in the agri-food industry, but at the same time, AI solutions, when appropriately designed and deployed, can be a useful tool when operating in synergy with other approaches.
(This article belongs to the Special Issue Application of Sensors Technologies in Agricultural Engineering)
Show Figures

Figure 1: Temp./humidity sensor outside tunnel.
Figure 2: Temp./humidity and light sensor inside tunnel.
Figure 3: Flow meter inside tunnel.
Figure 4: Fermentation sensor.
Figure 5: Wireless electricity monitor.
17 pages, 5063 KiB  
Article
Enhancing Recovery of Structural Health Monitoring Data Using CNN Combined with GRU
by Nguyen Thi Cam Nhung, Hoang Nguyen Bui and Tran Quang Minh
Infrastructures 2024, 9(11), 205; https://doi.org/10.3390/infrastructures9110205 - 16 Nov 2024
Viewed by 271
Abstract
Structural health monitoring (SHM) plays a crucial role in ensuring the safety of infrastructure in general, especially critical infrastructure such as bridges. SHM systems allow the real-time monitoring of structural conditions and early detection of abnormalities. This enables managers to make accurate decisions during the operation of the infrastructure. However, for various reasons, data from SHM systems may be interrupted or faulty, leading to serious consequences. This study proposes using a Convolutional Neural Network (CNN) combined with Gated Recurrent Units (GRUs) to recover lost data from accelerometer sensors in SHM systems. CNNs are adept at capturing spatial patterns in data, making them highly effective for recognizing localized features in sensor signals. At the same time, GRUs are designed to model sequential dependencies over time, making the combined architecture particularly suited for time-series data. A dataset collected from a real bridge structure is used to validate the proposed method. Different cases of data loss are considered to demonstrate the feasibility and potential of the CNN-GRU approach. The results show that the CNN-GRU hybrid network effectively recovers data in both single-channel and multi-channel data loss scenarios.
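To make the hybrid concrete, here is a minimal sketch of a CNN-GRU reconstruction network in PyTorch; the layer sizes, kernel widths, and the three-intact-channels-to-one-lost-channel setup are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class CNNGRURecovery(nn.Module):
    """Reconstruct one lost accelerometer channel from intact channels."""
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        # Conv1d layers pick up local waveform patterns in the signals
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # The GRU models how those patterns evolve over time
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)  # -> (batch, time, 32)
        out, _ = self.gru(feats)
        return self.head(out).squeeze(-1)    # -> (batch, time)

model = CNNGRURecovery()
x = torch.randn(8, 3, 1024)                  # 3 intact channels, 1024 samples
print(model(x).shape)                        # torch.Size([8, 1024])
```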
Show Figures

Figure 1: Convolutional Neural Networks.
Figure 2: The structure of GRU network [44].
Figure 3: Data recovery process using CNN-GRU.
Figure 4: Thang Long Bridge: (a) side view; (b) lower floor.
Figure 5: Arrangement of measuring points at Thang Long Bridge.
Figure 6: Data collection: (a) equipment station; (b) sensors’ installation location.
Figure 7: Network training results in single-channel data recovery scenario: (a) training convergence curve; (b) mean absolute error.
Figure 8: Recovery data segment using CNN-GRU; CNN and GRU.
Figure 9: Mode shapes of two datasets.
Figure 10: Network training results in multi-channel data recovery scenario: (a) training convergence curve; (b) mean absolute error.
Figure 11: MAC values: (a) two-sensor data recovery; (b) three-sensor data recovery; (c) four-sensor data recovery.
15 pages, 941 KiB  
Article
Embedding Tree-Based Intrusion Detection System in Smart Thermostats for Enhanced IoT Security
by Abbas Javed, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi, Muhammad Jawad, Jehangir Arshad and Hadi Larijani
Sensors 2024, 24(22), 7320; https://doi.org/10.3390/s24227320 - 16 Nov 2024
Viewed by 262
Abstract
IoT devices with limited resources, and in the absence of gateways, become vulnerable to various attacks, such as denial of service (DoS) and man-in-the-middle (MITM) attacks. Intrusion detection systems (IDS) are designed to detect and respond to these threats in IoT environments. While machine learning-based IDS have typically been deployed at the edge (gateways) or in the cloud, in the absence of gateways, the IDS must be embedded within the sensor nodes themselves. Available datasets mainly contain features extracted from network traffic at the edge (e.g., Raspberry Pi/computer) or cloud servers. We developed a unique dataset, named the Intrusion Detection in the Smart Homes (IDSH) dataset, which is based on features retrievable from microcontroller-based IoT devices. In this work, a Tree-based IDS is embedded into a smart thermostat for real-time intrusion detection. The results demonstrated that the IDS achieved an accuracy of 98.71% for binary classification with an inference time of 276 microseconds, and an accuracy of 97.51% for multi-classification with an inference time of 273 microseconds. Real-time testing showed that the smart thermostat is capable of detecting DoS and MITM attacks without relying on a gateway or cloud.
(This article belongs to the Special Issue Sensor Data Privacy and Intrusion Detection for IoT Networks)
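As an illustration of why tree models suit such constrained deployments, the sketch below trains a small decision tree on randomly generated stand-in features (the real IDSH features are not reproduced here) and times single-sample inference, mirroring how an on-device classifier would be invoked; the tree depth and feature count are assumptions.

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))            # stand-in device-level features
y = rng.integers(0, 2, size=5000)          # 0 = benign, 1 = DoS/MITM

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=8).fit(X_tr, y_tr)

sample = X_te[:1]
t0 = time.perf_counter()
clf.predict(sample)                         # single-sample, on-device style
print(f"inference: {(time.perf_counter() - t0) * 1e6:.0f} us")
print(f"accuracy:  {clf.score(X_te, y_te):.3f}")
```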
Show Figures

Figure 1: Proposed architecture of embedded IDS for smart thermostats.
Figure 2: Dataset collection on smart thermostats.
Figure 3: Comparison of IDS implemented with quantization and without quantization.
Figure 4: Comparison of IDS implemented with CatBoost and XGBoost on the smart thermostat.
21 pages, 11350 KiB  
Article
A Fast Obstacle Detection Algorithm Based on 3D LiDAR and Multiple Depth Cameras for Unmanned Ground Vehicles
by Fenglin Pang, Yutian Chen, Yan Luo, Zigui Lv, Xuefei Sun, Xiaobin Xu and Minzhou Luo
Drones 2024, 8(11), 676; https://doi.org/10.3390/drones8110676 - 15 Nov 2024
Viewed by 293
Abstract
With the advancement of technology, unmanned ground vehicles (UGVs) have shown increasing application value in various tasks, such as food delivery and cleaning. A key capability of UGVs is obstacle detection, which is essential for avoiding collisions during movement. Current mainstream methods use point cloud information from onboard sensors, such as light detection and ranging (LiDAR) and depth cameras, for obstacle perception. However, the substantial volume of point clouds generated by these sensors, coupled with the presence of noise, poses significant challenges for efficient obstacle detection. Therefore, this paper presents a fast obstacle detection algorithm designed to ensure the safe operation of UGVs. Building on multi-sensor point cloud fusion, an efficient ground segmentation algorithm based on multi-plane fitting and plane combination is proposed in order to prevent ground points from being considered as obstacles. Additionally, instead of point cloud clustering, a vertical projection method is used to count the distribution of the potential obstacle points through converting the point cloud to a 2D polar coordinate system. Points in the fan-shaped area with a density lower than a certain threshold are considered noise. To verify the effectiveness of the proposed algorithm, a cleaning UGV equipped with one LiDAR sensor and four depth cameras is used to test the performance of obstacle detection in various environments. Several experiments have demonstrated the effectiveness and real-time capability of the proposed algorithm. The experimental results show that the proposed algorithm achieves an over 90% detection rate within a 20 m sensing area and has an average processing time of just 14.1 ms per frame.
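The vertical-projection filtering step can be sketched in a few lines of NumPy: non-ground points are projected to polar (range, sector) bins, and points falling in sparse bins are discarded as noise. The bin counts, sensing radius, and density threshold below are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

def filter_obstacles(points, r_max=20.0, n_r=40, n_theta=72, min_pts=5):
    """Keep points that land in sufficiently dense polar bins."""
    x, y = points[:, 0], points[:, 1]          # drop z: vertical projection
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                   # fan-shaped sectors
    keep = r < r_max
    ri = (r[keep] / r_max * n_r).astype(int).clip(0, n_r - 1)
    ti = ((theta[keep] + np.pi) / (2 * np.pi) * n_theta).astype(int).clip(0, n_theta - 1)
    counts = np.zeros((n_r, n_theta), int)
    np.add.at(counts, (ri, ti), 1)             # per-bin point density
    dense = counts[ri, ti] >= min_pts          # low-density bins = noise
    return points[keep][dense]

cloud = np.random.uniform(-25, 25, size=(100000, 3))
print(filter_obstacles(cloud).shape)
```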
Show Figures

Figure 1: Overall process of the proposed algorithm.
Figure 2: The schematic diagram of coordinate transformation. $T_{C_i L}$ represents the relative pose relationship from the i-th depth camera to the LiDAR sensor.
Figure 3: The schematic diagram of ground point cloud segmentation.
Figure 4: Schematic diagram of fan-shaped area retrieval.
Figure 5: The loaded sensors. (a) Leishen C32W LiDAR; (b) Orbbec DaBai DCW2; (c) Orbbec Dabai MAX.
Figure 6: The cleaning UGV equipped with these sensors.
Figure 7: The vertical view of the fused point cloud in the main coordinate system (warehouse).
Figure 8: The vertical view of the fused point cloud in the main coordinate system (parking).
Figure 9: The performance of the ground segmentation effect by Patchwork++ in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 10: The performance of the ground segmentation effect by DipG-Seg in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 11: The performance of the ground segmentation effect by the proposed method in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 12: The performance of the ground segmentation effect by Patchwork++ in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 13: The performance of the ground segmentation effect by DipG-Seg in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 14: The performance of the ground segmentation effect by the proposed method in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 15: Detailed image of the ground segmentation effect of the proposed algorithm. (a) Warehouse; (b) parking.
Figure 16: The vertical view of the obstacle detection effect using Euclidean clustering (warehouse).
Figure 17: The vertical view of the obstacle detection effect using CenterPoint (warehouse).
Figure 18: The vertical view of the obstacle detection effect using the proposed algorithm in smaller hyperparameter settings (warehouse).
Figure 19: The vertical view of the obstacle detection effect using the proposed algorithm in larger hyperparameter settings (warehouse).
Figure 20: The vertical view of the obstacle detection effect using Euclidean clustering (parking).
Figure 21: The vertical view of the obstacle detection effect using CenterPoint (parking).
Figure 22: The vertical view of the obstacle detection effect using the proposed algorithm in smaller hyperparameter settings (parking).
Figure 23: The vertical view of the obstacle detection effect using the proposed algorithm in larger hyperparameter settings (parking).
32 pages, 11087 KiB  
Article
Path Planning and Motion Control of Robot Dog Through Rough Terrain Based on Vision Navigation
by Tianxiang Chen, Yipeng Huangfu, Sutthiphong Srigrarom and Boo Cheong Khoo
Sensors 2024, 24(22), 7306; https://doi.org/10.3390/s24227306 - 15 Nov 2024
Viewed by 516
Abstract
This article delineates the enhancement of an autonomous navigation and obstacle avoidance system for a quadruped robot dog. Part one of this paper presents the integration of a sophisticated multi-level dynamic control framework, utilizing Model Predictive Control (MPC) and Whole-Body Control (WBC) from MIT Cheetah. The system employs an Intel RealSense D435i depth camera for depth vision-based navigation, which enables high-fidelity 3D environmental mapping and real-time path planning. A significant innovation is the customization of the EGO-Planner to optimize trajectory planning in dynamically changing terrains, coupled with the implementation of a multi-body dynamics model that significantly improves the robot’s stability and maneuverability across various surfaces. The experimental results show that the RGB-D system exhibits superior velocity stability and trajectory accuracy to the SLAM system, with a 20% reduction in the cumulative velocity error and a 10% improvement in path tracking precision. The experimental results also show that the RGB-D system achieves smoother navigation, requiring 15% fewer iterations for path planning, and a 30% faster success rate recovery in challenging environments. The successful application of these technologies in simulated urban disaster scenarios suggests promising future applications in emergency response and complex urban environments. Part two of this paper presents the development of a robust path planning algorithm for a robot dog on rough terrain based on attached binocular vision navigation. We use a commercial off-the-shelf (COTS) robot dog. An optical CCD binocular vision dynamic tracking system is used to provide environment information. Likewise, the pose and posture of the robot dog are obtained from the robot’s own sensors, and a kinematics model is established. Then, a binocular vision tracking method is developed to determine the optimal path, provide a proposal (commands to actuators) of the position and posture of the bionic robot, and achieve stable motion on tough terrains. The terrain is assumed to be a gentle uneven terrain to begin with and subsequently proceeds to a rougher surface. This work consists of four steps: (1) pose and position data are acquired from the robot dog’s own inertial sensors, (2) terrain and environment information is input from onboard cameras, (3) information is fused (integrated), and (4) path planning and motion control proposals are made. Ultimately, this work provides a robust framework for future developments in the vision-based navigation and control of quadruped robots, offering potential solutions for navigating complex and dynamic terrains.
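As a rough illustration of the B-spline trajectory representation used by the EGO-Planner stage (see Figure 26 below), the sketch fits a smoothing B-spline through a handful of invented 2D waypoints with SciPy; it stands in for, and is far simpler than, the planner's actual gradient-based local optimization.

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Invented 2D waypoints standing in for a planner's coarse path
waypoints = np.array([[0, 0], [1, 2], [3, 2.5], [5, 4], [7, 4.5]], float)

tck, _ = splprep(waypoints.T, s=0.5)   # fit a smoothing cubic B-spline
u = np.linspace(0, 1, 50)
x, y = splev(u, tck)                   # densely sample the smooth path
print(np.column_stack([x, y])[:3])
```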
Show Figures

Figure 1: Simplified box model of the Lite3P quadruped robotic dog.
Figure 2: Internal sensor arrangement of the quadruped robotic dog.
Figure 3: Dynamic control flowchart.
Figure 4: MPC flowchart.
Figure 5: WBC flowchart [30].
Figure 6: Robot coordinates and joint point settings [30].
Figure 7: Intel D435i and Velodyne LIDAR.
Figure 8: ICP diagram.
Figure 9: Comparison of before and after modifying the perception region.
Figure 10: Point cloud processing flowchart.
Figure 11: {p, v} generation: (a) the creation of {p, v} pairs for collision points; (b) the process of generating anchor points and repulsive vectors for dynamic obstacle avoidance [41].
Figure 12: Overall framework of 2D EGO-Planner.
Figure 13: Robot initialization and control process in Gazebo simulation: (a) Gazebo environment creation, (b) robot model import, (c) torque balance mode activation, and (d) robot stepping and rotation in simulation.
Figure 14: Joint rotational angles of FL and RL legs.
Figure 15: Joint angular velocities of FL and RL legs.
Figure 16: Torque applied to FL and RL joints during the gait cycle.
Figure 17: The robot navigating in a simple environment using a camera.
Figure 18: The robot navigating in a complex environment using a camera.
Figure 19: A 2D trajectory showing start and goal positions, obstacles, and rough path.
Figure 20: Initial environment setup.
Figure 21: The robot starts navigating in a simple environment with a static obstacle (brown box).
Figure 22: Dynamic Obstacle 1 introduced: the robot detects a new obstacle and recalculates its path.
Figure 23: Dynamic Obstacle 2 introduced: after avoiding the first obstacle, a second obstacle is introduced and detected by the planner.
Figure 24: Approaching the target: the robot adjusts its path to approach the target point as the distance shortens.
Figure 25: Reaching the target: the robot completes its path and reaches the designated target point.
Figure 26: Real-time B-spline trajectory updates in response to dynamic obstacles. Set 1 (orange) shows the initial path avoiding static obstacles. When the first dynamic obstacle is detected, the EGO-Planner updates the path (Set 2, blue) using local optimization. A second obstacle prompts another adjustment (Set 3, green), guiding the robot smoothly towards the target as trajectory updates become more frequent.
Figure 27: The robot navigating a simple environment using SLAM.
Figure 28: The robot navigating a complex environment using SLAM.
Figure 29: A 2D trajectory showing start and goal positions, obstacles, and the planned path in a complex environment using SLAM.
Figure 30: Navigation based on RGB-D camera.
Figure 31: Navigation based on SLAM.
Figure 32: Velocity deviation based on RGB-D camera.
Figure 33: Velocity deviation based on SLAM.
Figure 34: Cumulative average iterations.
Figure 35: Cumulative success rate.
17 pages, 2380 KiB  
Article
Nondestructive Detection of Litchi Stem Borers Using Multi-Sensor Data Fusion
by Zikun Zhao, Sai Xu, Huazhong Lu, Xin Liang, Hongli Feng and Wenjing Li
Agronomy 2024, 14(11), 2691; https://doi.org/10.3390/agronomy14112691 - 15 Nov 2024
Viewed by 271
Abstract
To enhance lychee quality assessment and address inconsistencies in post-harvest pest detection, this study presents a multi-source fusion approach combining hyperspectral imaging, X-ray imaging, and visible/near-infrared (Vis/NIR) spectroscopy. Traditional single-sensor methods are limited in detecting pest damage, particularly in lychees with complex skins, as they often fail to capture both external and internal fruit characteristics. By integrating multiple sensors, our approach overcomes these limitations, offering a more accurate and robust detection system. Significant differences were observed between pest-free and infested lychees. Pest-free lychees exhibited higher hardness, soluble sugars (11% higher in flesh, 7% higher in peel), vitamin C (50% higher in flesh, 2% higher in peel), polyphenols, anthocyanins, and ORAC values (26%, 9%, and 14% higher, respectively). The Vis/NIR data processed with SG+SNV+CARS yielded a partial least squares regression (PLSR) model with an R2 of 0.82, an RMSE of 0.18, and accuracy of 89.22%. The hyperspectral model, using SG+MSC+SPA, achieved an R2 of 0.69, an RMSE of 0.23, and 81.74% accuracy, while the X-ray method with support vector regression (SVR) reached an R2 of 0.69, an RMSE of 0.22, and 76.25% accuracy. Through feature-level fusion, Recursive Feature Elimination with Cross-Validation (RFECV), and dimensionality reduction using PCA, we optimized hyperparameters and developed a Random Forest model. This model achieved 92.39% accuracy in pest detection, outperforming the individual methods by 3.17%, 10.25%, and 16.14%, respectively. The multi-source fusion approach also improved the overall accuracy by 4.79%, highlighting the critical role of sensor fusion in enhancing pest detection and supporting the development of automated non-destructive systems for lychee stem borer detection.
(This article belongs to the Section Precision and Digital Agriculture)
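The feature-level fusion pipeline the abstract describes (RFECV, then PCA, then a Random Forest) can be outlined with scikit-learn; the synthetic features below stand in for the concatenated Vis/NIR, hyperspectral, and X-ray descriptors, and all estimator settings are assumptions rather than the paper's tuned hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# 60 synthetic features standing in for fused multi-sensor descriptors
X, y = make_classification(n_samples=400, n_features=60, n_informative=10,
                           random_state=0)

pipe = make_pipeline(
    # recursive feature elimination with cross-validation
    RFECV(RandomForestClassifier(n_estimators=50, random_state=0),
          step=5, cv=3),
    PCA(n_components=0.95),                  # keep 95% of the variance
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(pipe, X, y, cv=3).mean())
```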
Show Figures

Figure 1: Schematic diagram of the visible/near-infrared spectroscopy acquisition device.
Figure 2: Schematic diagram of the hyperspectral imaging acquisition device.
Figure 3: Schematic diagram of the X-ray image acquisition system.
Figure 4: Multi-source information fusion flowchart.
Figure 5: (a) Raw visible/near-infrared spectrum, (b) visible/near-infrared spectrum after SG+SNV preprocessing.
Figure 6: (a) Raw hyperspectral spectrum, (b) hyperspectral spectrum after SG+MSC preprocessing.
Figure 7: PCA classification of grayscale values in X-ray imaging feature regions for stem-borer-infested and non-infested fruit.
Figure 8: (a) Litchi fruit without pests, (b) litchi fruit with pests.
14 pages, 6553 KiB  
Article
An Arteriovenous Bioreactor Perfusion System for Physiological In Vitro Culture of Complex Vascularized Tissue Constructs
by Florian Helms, Delia Käding, Thomas Aper, Arjang Ruhparwar and Mathias Wilhelmi
Bioengineering 2024, 11(11), 1147; https://doi.org/10.3390/bioengineering11111147 - 14 Nov 2024
Viewed by 322
Abstract
Background: The generation and perfusion of complex vascularized tissues in vitro requires sophisticated perfusion techniques. For multiscale arteriovenous networks, not only the arterial, but also the venous, biomechanical and biochemical conditions that physiologically exist in the human body must be accurately emulated. For this, we here present a modular arteriovenous perfusion system for the in vitro culture of a multi-scale bioartificial vascular network. Methods: The custom-built perfusion system consisted of two circuits: in the arterial circuit, physiological arterial biomechanical and biochemical conditions were simulated using a modular set-up with a pulsatile peristaltic pump, compliance chambers, and resistors. In the venous circuit, venous conditions were emulated accordingly. In the center of the system, a bioartificial multi-scale vascularized fibrin-based tissue was perfused by both circuits simultaneously under biomimetic arteriovenous conditions. Culture conditions were monitored continuously using a multi-sensor monitoring system. Results: The physiological arterial and venous pressure- and flow-curves, as well as the microvascular arteriovenous oxygen partial pressure gradient, were accurately emulated in the perfusion system. The multi-sensor monitoring system facilitated live monitoring of the respective parameters and data-logging. In a proof-of-concept experiment, vascularized three-dimensional fibrin tissues showed sustained cell viability and homogenous microvessel formation after culture in the perfusion system. Conclusions: The arteriovenous perfusion system facilitated the in vitro culture of a multiscale vascularized tissue under physiological pressure-, flow-, and oxygen-gradient conditions. With that, it presents a promising technique for the in vitro generation and culture of complex large-scale vascularized tissues.
Show Figures

Graphical abstract

Figure 1: Generation of the vascularized fibrin-based matrix. (A) Schematic cross-section of the targeted multi-scale vasculature. Venous and arterial fibrin-based macrovessels (1 + 2) were placed in parallel to each other and interconnected via four microchannels (3). Vascular sprouts arising from the microchannels (4) were intended to interconnect the microchannels to a capillary network built up by the co-culture of human umbilical vein derived endothelial cells (HUVECs) and adipogenous stem cells (5) seeded throughout a low-density fibrin matrix (6). Both macrovessels and microchannels were endothelialized by a HUVEC monolayer (7). Black arrows indicate the media flow direction during perfusion. (B) Perfusion chamber with the integrated fibrin-based tissue construct. Two hose nozzles on each side facilitated connection of the integrated macrovessels to the respective arterial and venous perfusion circuit, and the perforated sheath on the bottom allowed for insertion of needles during the molding process for the generation of the microchannels. (C) Macroscopic morphology of the explanted fibrin-based tissue matrix after 48 h of culture in the arteriovenous perfusion system. Scale bar = 1 cm.
Figure 2: (A) Schematic representation of the arteriovenous perfusion system setup and desired pressure and flow curves. 1: Pulsatile peristaltic pump; 2: upstream compliance chamber; 3: pressure sensor; 4: flow sensor; 5: perfusion chamber with the integrated fibrin-based matrix and vessels; 6: variable resistor; 7: reservoir; 8: dissolved oxygen sensor; 9: oxygen inflow canula; 10: downstream arterial compliance chamber; 11: backflow line. (B) Photographic top-view of the assembled system.
Figure 3: Pressure curve analysis. (A) Pressure curve monitored in the arterial circuit; (B) systolic (black) and diastolic (grey) pressures observed in the arterial circuit over 48 h. (C) Pressure curve monitored in the venous circuit; (D) systolic (black) and diastolic (grey) pressures observed in the venous circuit over 48 h.
Figure 4: Flow curve analysis. (A) Flow curve monitored in the arterial circuit. (B) Flow curve monitored in the venous circuit.
Figure 5: Arterial (black) and venous (grey) oxygen partial pressure monitored in the system over 48 h.
Figure 6: (A) Fluorescence microscopic view of the fibrin-based tissue matrix. Capillary tubes were visualized based on red fluorescent protein expression of human umbilical vein endothelial cells. (B) Angiotool analysis of the capillary network depicted in (A). Crossing points were marked by blue dots, capillary tubes were depicted in red, and outlines were marked in yellow. Scale bar = 100 µm.
46 pages, 4014 KiB  
Article
Robust Human Activity Recognition for Intelligent Transportation Systems Using Smartphone Sensors: A Position-Independent Approach
by John Benedict Lazaro Bernardo, Attaphongse Taparugssanagorn, Hiroyuki Miyazaki, Bipun Man Pati and Ukesh Thapa
Appl. Sci. 2024, 14(22), 10461; https://doi.org/10.3390/app142210461 - 13 Nov 2024
Viewed by 803
Abstract
This study explores Human Activity Recognition (HAR) using smartphone sensors to address the challenges posed by position-dependent datasets. We propose a position-independent system that leverages data from accelerometers, gyroscopes, linear accelerometers, and gravity sensors collected from smartphones placed either on the chest or in the left/right leg pocket. The performance of traditional machine learning algorithms (Decision Trees (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Classifier (SVC), and XGBoost) is compared against deep learning models (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN), and Transformer models) under two sensor configurations. Our findings highlight that the Temporal Convolutional Network (TCN) model consistently outperforms other models, particularly in the four-sensor non-overlapping configuration, achieving the highest accuracy of 97.70%. Deep learning models such as LSTM, GRU, and Transformer also demonstrate strong performance, showcasing their effectiveness in capturing temporal dependencies in HAR tasks. Traditional machine learning models, including RF and XGBoost, provide reasonable performance but do not match the accuracy of deep learning models. Additionally, incorporating data from linear accelerometers and gravity sensors led to slight improvements over using accelerometer and gyroscope data alone. This research enhances the recognition of passenger behaviors for intelligent transportation systems, contributing to more efficient congestion management and emergency response strategies.
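Before any of the compared models run, the raw signals must be segmented into windows; a minimal sketch of the non-overlapping versus 50%-overlapping segmentation is shown below, with an assumed window length of 128 samples.

```python
import numpy as np

def segment(signal, win=128, overlap=0.0):
    """Split a (time, channels) array into fixed-length windows."""
    step = max(1, int(win * (1 - overlap)))
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

sig = np.random.randn(1000, 6)             # e.g., 3-axis acc + 3-axis gyro
print(segment(sig, overlap=0.0).shape)     # (7, 128, 6)  non-overlapping
print(segment(sig, overlap=0.5).shape)     # (14, 128, 6) 50% overlap
```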
Show Figures

Figure 1: Running Activity Accelerometer Data: acceleration values along the x, y, and z axes recorded between 80 and 85 s.
Figure 2: Running Activity Gyroscope Data: angular velocity along the x, y, and z axes recorded between 80 and 85 s.
Figure 3: Running Activity Linear Accelerometer Data: linear acceleration values along the x, y, and z axes recorded between 80 and 85 s.
Figure 4: Running Activity Gravity Sensor Data: gravitational acceleration values along the x, y, and z axes recorded between 80 and 85 s.
Figure 5: Methodological framework for assessing machine learning and deep learning techniques.
Figure 6: Architecture of a Gated Recurrent Unit (GRU) Network used in Activity Recognition. Adapted from [37], showing the flow through the reset and update gates, facilitating efficient sequential data processing.
Figure 7: Architecture of a Long Short-Term Memory (LSTM) Network utilized in Activity Recognition. Adapted from [38], showing the flow of information through the forget, input, and output gates to manage long-term dependencies in sequential data.
Figure 8: Architecture of a Temporal Convolutional Network (TCN) for Activity Recognition, adapted from [28]. Dilated causal convolutions capture long-term dependencies, with dropout layers to prevent overfitting.
Figure 9: Architecture of the Transformer Model used in Activity Recognition, illustrating the multi-head attention and feed-forward layers, adapted from [29]. The positional encoding enables handling of sequential data without recurrence.
Figure 10: Confusion matrices for models using a two-sensor configuration with non-overlapping data segments: (a) DT, (b) KNN, (c) RF, (d) SVC, (e) XGBoost, (f) GRU, (g) LSTM, (h) TCN, (i) Transformer.
Figure 11: Confusion matrices for models using a two-sensor configuration with 50% overlapping data segments: (a) DT, (b) KNN, (c) RF, (d) SVC, (e) XGBoost, (f) GRU, (g) LSTM, (h) TCN, (i) Transformer.
Figure 12: Confusion matrices for models using a four-sensor configuration with non-overlapping data segments: (a) DT, (b) KNN, (c) RF, (d) SVC, (e) XGBoost, (f) GRU, (g) LSTM, (h) TCN, (i) Transformer.
Figure 13: Confusion matrices for models using a four-sensor configuration with 50% overlapping data segments: (a) DT, (b) KNN, (c) RF, (d) SVC, (e) XGBoost, (f) GRU, (g) LSTM, (h) TCN, (i) Transformer.
41 pages, 6420 KiB  
Article
Analyzing Autonomous Vehicle Collision Types to Support Sustainable Transportation Systems: A Machine Learning and Association Rules Approach
by Ehsan Kohanpour, Seyed Rasoul Davoodi and Khaled Shaaban
Sustainability 2024, 16(22), 9893; https://doi.org/10.3390/su16229893 - 13 Nov 2024
Viewed by 460
Abstract
The increasing presence of autonomous vehicles (AVs) in transportation, driven by advances in AI and robotics, requires a strong focus on safety in mixed-traffic environments to promote sustainable transportation systems. This study analyzes AV crashes in California using advanced machine learning to identify patterns among various crash factors. The main objective is to explore AV crash mechanisms by extracting association rules and developing a decision tree model to understand interactions between pre-crash conditions, driving states, crash types, severity, locations, and other variables. A multi-faceted approach, including statistical analysis, data mining, and machine learning, was used to model crash types. The SMOTE method addressed data imbalance, with models like CART, Apriori, RF, XGB, SHAP, and Pearson’s test applied for analysis. Findings reveal that rear-end crashes are the most common, making up over 50% of incidents. Side crashes at night are also frequent, while angular and head-on crashes tend to be more severe. The study identifies high-risk locations, such as complex unsignalized intersections, and highlights the need for improved AV sensor technology, AV–infrastructure coordination, and driver training. Technological advancements like V2V and V2I communication are suggested to significantly reduce the number and severity of specific types of crashes, thereby enhancing the overall safety and sustainability of transportation systems.
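The association-rule step can be illustrated with mlxtend's Apriori implementation; the toy one-hot crash records below are invented stand-ins for the CA DMV data, and the support and confidence thresholds are assumptions.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Invented crash records: each record is the set of attributes present
records = [
    {"rear_end", "autonomous_mode", "intersection"},
    {"rear_end", "autonomous_mode", "daylight"},
    {"side", "night", "unsignalized"},
    {"rear_end", "daylight", "intersection"},
]
items = sorted(set().union(*records))
df = pd.DataFrame([[i in r for i in items] for r in records], columns=items)

freq = apriori(df, min_support=0.5, use_colnames=True)   # frequent itemsets
rules = association_rules(freq, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```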
Show Figures

Figure 1: Conceptual framework. Process of crash data extraction to modeling.
Figure 2: The heat map of AV crashes in the test areas.
Figure 3: The sample OL-316 form for the AV collision report provided by the CA DMV. (a) First page of form OL-316; (b) second page of form OL-316; (c) third page of form OL-316.
Figure 4: Word cloud of points of interest with the highest number of crashes.
Figure 5: Descriptive statistics of CA DMV data as of 31 December 2023.
Figure 6: Descriptive statistics of CA DMV data. (a) Types of ADS disengagement; (b) type of intersection at the collision site; (c) intersections with traffic signals; (d) types of AV collisions; (e) AV driving mode; (f) collision severity.
Figure 7: Decision tree for classification and regression for the variable of collision type.
Figure 8: Association rules bubble chart.
Figure 9: Variable importance for collision type using XGB, CART, and RF algorithms.
Figure 10: Feature importance with SHAP. (a) Impact on model output; (b) average impact on model output.
Full article ">
17 pages, 7504 KiB  
Article
Multi-Frequency Microwave Sensing System with Frequency Selection Method for Pulverized Coal Concentration
by Haoyu Tian, Feng Gao, Yuwei Meng, Xiaoyan Jia, Rongdong Yu, Zhan Wang and Zicheng Liu
Sensors 2024, 24(22), 7245; https://doi.org/10.3390/s24227245 - 13 Nov 2024
Viewed by 279
Abstract
The accurate measurement of pulverized coal concentration (PCC) is crucial for optimizing the production efficiency and safety of coal-fired power plants. Traditional microwave attenuation methods typically rely on a single frequency for analysis while neglecting valuable information in the frequency domain, making them susceptible to the varying sensitivity of the signal at different frequencies. To address this issue, we proposed an innovative frequency selection method based on principal component analysis (PCA) and orthogonal matching pursuit (OMP) algorithms and implemented a multi-frequency microwave sensing system for PCC measurement. This method transcended the constraints of single-frequency analysis by employing a developed hardware system to control multiple working frequencies and signal paths. It measured insertion loss data across the sensor cross-section at various frequencies and utilized PCA to reduce the dimensionality of high-dimensional full-path insertion loss data. Subsequently, the OMP algorithm was applied to select the optimal frequency signal combination based on the contribution rates of the eigenvectors, enhancing the measurement accuracy through multi-dimensional fusion. The experimental results demonstrated that the multi-frequency microwave sensing system effectively extracted features from the high-dimensional PCC samples and selected the optimal frequency combination. Field experiments conducted on five coal mills showed that, within a common PCC range of 0–0.5 kg/kg, the system achieved a minimum mean absolute error (MAE) of 1.41% and a correlation coefficient of 0.85. These results indicate that the system could quantitatively predict PCC and promptly detect PCC fluctuations, highlighting its immediacy and reliability.
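A minimal sketch of the two-stage selection idea on invented data: PCA summarizes the high-dimensional insertion-loss matrix, and OMP then recovers a small set of frequency columns that best explain a stand-in concentration target; feature sizes and the sparsity level are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))              # samples x (path, frequency) features
w = np.zeros(64)
w[[3, 17, 42]] = [1.0, -0.5, 0.8]           # only 3 frequencies truly matter
y = X @ w + 0.01 * rng.normal(size=200)     # stand-in PCC values

pca = PCA(n_components=10).fit(X)
print(pca.explained_variance_ratio_.sum())  # variance captured by 10 PCs

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X, y)
print(np.flatnonzero(omp.coef_))            # -> [ 3 17 42 ]
```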
Show Figures

Figure 1: Two-port waveguide system for coal pipeline.
Figure 2: Overall framework of the multi-frequency microwave sensing system.
Figure 3: Schematic structure: (a) typical microstrip line; (b) microwave sensor.
Figure 4: (a) Schematic diagram of the 2-by-16 switch matrix; (b) electrode pair combinations at 16 electrodes.
Figure 5: The air–coal loop setup: (a) simulation model; (b) actual installation.
Figure 6: Measurement of different concentrations: (a) raw signal attenuation; (b) relative signal attenuation.
Figure 7: Distribution of L1 distance values at each frequency: (a) prototype experiments; (b) field experiments.
Figure 8: Confusion matrix using the SVM method.
Figure 9: Installation outcomes. (a) Overall system overview; (b) hardware of the high-speed microwave signal routing module.
Figure 10: (a) Distribution of frequencies by number of eigenvalues; (b) cumulative explained variance chart for different frequencies.
Figure 11: Eigenvector distribution for frequency combinations.
Figure 12: (a) Effect of frequency combination size on prediction result; (b) comparison of prediction results at different stages of frequency selection methods.
Figure 13: Test results on the C# coal mill.
Figure 14: Test results on the other coal mills: (a) A#; (b) B#; (c) D#; (d) E#.
Full article ">
15 pages, 3407 KiB  
Article
Minimalist Design for Multi-Dimensional Pressure-Sensing and Feedback Glove with Variable Perception Communication
by Hao Ling, Jie Li, Chuanxin Guo, Yuntian Wang, Tao Chen and Minglu Zhu
Actuators 2024, 13(11), 454; https://doi.org/10.3390/act13110454 - 13 Nov 2024
Viewed by 239
Abstract
Immersive human–machine interaction relies on comprehensive sensing and feedback systems, which enable transmission of multiple pieces of information. However, the integration of increasing numbers of feedback actuators and sensors causes a severe issue in terms of system complexity. In this work, we propose a pressure-sensing and feedback glove that enables multi-dimensional pressure sensing and feedback with a minimalist design of the functional units. The proposed glove consists of modular strain and pressure sensors based on films of liquid metal microchannels and coin vibrators. Strain sensors located at the finger joints can simultaneously project the bending motion of the individual joint into the virtual space or robotic hand. For subsequent tactile interactions, the design of two symmetrically distributed pressure sensors and vibrators at the fingertips possesses capabilities for multi-directional pressure sensing and feedback by evaluating the relationship of the signal variations between two sensors and tuning the feedback intensities of two vibrators. Consequently, both dynamic and static multi-dimensional pressure communication can be realized, and the vibrational actuation can be monitored by a liquid-metal-based sensor via a triboelectric sensing mechanism. A demonstration of object interaction indicates that the proposed glove can effectively detect dynamic force in varied directions at the fingertip while offering the reconstruction of a similar perception via the haptic feedback function. This device introduces an approach that adopts a minimalist design to achieve a multi-functional system, and it can benefit commercial applications in a more cost-effective way.
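A toy signal model (entirely assumed, not the paper's calibration) shows how two symmetric fingertip sensors can encode both magnitude and direction: the sum of the two readings tracks total force, while their normalized difference tracks where on the tip the press lands.

```python
import numpy as np

def press_state(left, right, eps=1e-9):
    """Return (total force, balance) from two fingertip pressure readings."""
    total = left + right                      # magnitude of the press
    balance = (right - left) / (total + eps)  # -1 = left edge, +1 = right edge
    return total, balance

for l, r in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(press_state(l, r))
```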
Show Figures

Figure 1: Multi-dimensional pressure-sensing and feedback glove and its intelligent interaction system. Schematic diagram of the glove’s application in enhanced spatial immersive interaction, including (i) the structural diagram of the pressure sensor, (ii) the components of the vibration haptic feedback module, and (iii) the structural diagram of the bending sensor.
Figure 2: Sensors of the multi-dimensional pressure-sensing and feedback glove. (a) Optical image of the pressure sensor; (b) optical image of the bending sensor; and (c) optical image of the interactive glove and the corresponding components.
Figure 3: Working mechanism of the pressure sensor and the bending sensor. (a) The (i) schematic diagram of the pressure sensor, (ii) dimensional changes of the liquid metal electrodes in the normal and pressurized states, and (iii) changes in the A-A’ cross-section of the liquid metal electrodes; and (b) the (i) schematic diagram of the bending sensor, (ii) changes in the liquid metal electrodes in the normal and bending states, and (iii) dimensional changes in the bending sensors observed from view B.
Figure 4: Characterization of the pressure sensor. (a) Schematic of the characterization method; (b) relationship between the sensor’s output signal and the pressure under loading conditions; (c) relationship between the pressure sensor’s output signal and the pressure under loading and unloading conditions; (d) real-time monitoring of the output signal changes during one cycle of pressure increase and decrease; (e) response and recovery times of the sensor; (f) repeatability test over 2000 cycles at 55 kPa; (g) relationship between the driven voltage of a coin vibrator and the collected triboelectric voltage signal of the sensor; and (h) real-time triboelectric voltage signal as the driven voltage continues to increase.
Figure 5: Characterization of the bending sensor. (a) Schematic of the characterization method; (b) relationship between the sensor’s output signal and the strain under tensile conditions; (c) relationship between the bending sensor’s output signal and the pressure under loading and unloading conditions; (d) response and recovery times of the sensor; (e) repeatability test over 2000 cycles at 20% strain; (f) response of the sensor to strain with a given initial torsion angle; and (g) response of the sensor to strain with a given initial curvature.
Figure 6: Demonstration application of the multi-dimensional pressure-sensing and feedback glove. (a) Schematic of fingertip pressing status, including (i) left side contact, (ii) right side contact, (iii) intermediate contact, and (iv) rolling from left to right; (b) real-time output signals of the pressure sensor at different pressing angles; (c) feedback from the coin vibrators at different pressing angles, with the single-vibrator running condition marked in grey and the both-vibrators running condition marked in pale yellow; (d) output signals from the bending sensor measuring the stepped bending of the finger at an angle of 10 degrees each time up to 90 degrees; (e) response of the bending sensor under different bending methods; (f) various hand gestures labelled from ① to ⑧ used to test the bending sensor; (g) output signals corresponding to different hand gestures labelled from ② to ⑧; (h) demonstration of grasping a test tube; (i) feedback from coin vibrators during the grasping process; (j) real-time signal output during the grasp; and (k) snapshot of pressure and bending angles before and after grasping.
Full article ">
18 pages, 7562 KiB  
Article
Reliable and Resilient Wireless Communications in IoT-Based Smart Agriculture: A Case Study of Radio Wave Propagation in a Corn Field
by Blagovest Nikolaev Atanasov, Nikolay Todorov Atanasov and Gabriela Lachezarova Atanasova
Telecom 2024, 5(4), 1161-1178; https://doi.org/10.3390/telecom5040058 - 12 Nov 2024
Viewed by 560
Abstract
In the past few years, one of the largest industries in the world, the agriculture sector, has faced many challenges, such as climate change and the depletion of limited natural resources. Smart Agriculture, based on IoT, is considered a transformative force that will play a crucial role in the further advancement of the agri-food sector. Furthermore, in IoT-based Smart Agriculture systems, radio wave propagation faces unique challenges (such as attenuation in vegetation and soil and multiple reflections) because of sensor nodes deployed in agriculture fields at or slightly above the ground level. In our study, we present, for the first time, several models (Multi-slope, Weissberger, and COST-235) suitable for planning radio coverage in a corn field for Smart Agriculture applications. We measured the received signal level as a function of distance in a corn field (R3 corn stage) at 0.9 GHz and 2.4 GHz using two transmitting and two receiving antenna heights, with both horizontal and vertical polarization. The results indicate that radio wave propagation in a corn field is influenced not only by the surrounding environment (i.e., corn), but also by the antenna polarization and the positions of the transmitting and receiving antennas relative to the ground.
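Of the compared models, the Weissberger modified exponential decay model has a simple closed form; the sketch below uses its commonly cited coefficients (frequency in GHz, vegetation depth in metres) and should be checked against the variant fitted in the paper.

```python
def weissberger_loss_db(f_ghz, d_m):
    """Excess vegetation loss (dB), commonly cited Weissberger MED form."""
    if d_m <= 14:
        return 0.45 * f_ghz**0.284 * d_m
    return 1.33 * f_ghz**0.284 * d_m**0.588   # quoted as valid up to ~400 m

for d in (5, 20, 50):
    print(d, "m:", round(weissberger_loss_db(2.4, d), 2), "dB")
```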
Show Figures

Figure 1: Measurements in a corn field in the agricultural area near Dabravata Village, Yablanitsa Municipality, Bulgaria: (a) Google Earth image; (b) photo from the corn field.
Figure 2: Configuration of the measurement setup.
Figure 3: Measurement site: (a) Google Earth image with Tx and Rx antenna locations in the corn field; (b) photo of Rx antenna placed below the corn height; (c) photo of Rx antenna placed above the corn height; (d) direction of measurement at the experimental corn field.
Figure 4: Measured reflection coefficients for the two dipoles: (a) reference dipole used for measurements at 0.9 GHz; (b) reference dipole used for measurements at 2.4 GHz.
Figure 5: Received signal level variation with distance at 0.9 GHz for co-polarized H-H and V-V antennas: (a) transmitting antenna is placed at a height of λ/3 m above ground (h_Tx = 0.11 m); (b) transmitting antenna is placed at a height of 0.5 m above ground (h_Tx = 0.5 m).
Figure 6: Received signal level variation with distance at 2.4 GHz for co-polarized H-H and V-V antennas: (a) transmitting antenna is placed at a height of λ/3 m above ground (h_Tx = 0.04 m); (b) transmitting antenna is placed at a height of 0.5 m above ground (h_Tx = 0.5 m).
Figure 7: Received signal level variation with distance at 0.9 GHz for co-polarized H-H and V-V antennas: (a) receiving antenna height below the corn height (h_Rx = 2.0 m); (b) receiving antenna height above the corn height (h_Rx = 3.4 m).
Figure 8: Received signal level variation with distance at 2.4 GHz for co-polarized H-H and V-V antennas: (a) receiving antenna height below the corn height (h_Rx = 2.0 m); (b) receiving antenna height above the corn height (h_Rx = 3.4 m).
Figure 9: Comparison between losses for co-polarized H-H antennas at transmitting antenna height λ/3 m with existing models: (a) 0.9 GHz, h_Rx = 2.0 m; (b) 0.9 GHz, h_Rx = 3.4 m; (c) 2.4 GHz, h_Rx = 2.0 m; (d) 2.4 GHz, h_Rx = 3.4 m.
Figure 10: Comparison between losses for co-polarized V-V antennas at transmitting antenna height λ/3 m with existing models: (a) 0.9 GHz, h_Rx = 2.0 m; (b) 0.9 GHz, h_Rx = 3.4 m; (c) 2.4 GHz, h_Rx = 2.0 m; (d) 2.4 GHz, h_Rx = 3.4 m.
Figure 11: Comparison between losses for co-polarized H-H antennas at transmitting antenna height 0.5 m with existing models: (a) 0.9 GHz, h_Rx = 2.0 m; (b) 0.9 GHz, h_Rx = 3.4 m; (c) 2.4 GHz, h_Rx = 2.0 m; (d) 2.4 GHz, h_Rx = 3.4 m.
Figure 12: Comparison between losses for co-polarized V-V antennas at transmitting antenna height 0.5 m with existing models: (a) 0.9 GHz, h_Rx = 2.0 m; (b) 0.9 GHz, h_Rx = 3.4 m; (c) 2.4 GHz, h_Rx = 2.0 m; (d) 2.4 GHz, h_Rx = 3.4 m.
Full article ">
17 pages, 6898 KiB  
Article
SLAM Algorithm for Mobile Robots Based on Improved LVI-SAM in Complex Environments
by Wenfeng Wang, Haiyuan Li, Haiming Yu, Qiuju Xie, Jie Dong, Xiaofei Sun, Honggui Liu, Congcong Sun, Bin Li and Fang Zheng
Sensors 2024, 24(22), 7214; https://doi.org/10.3390/s24227214 - 11 Nov 2024
Viewed by 536
Abstract
The foundation of autonomous robot movement is quickly grasping the robot's position and surroundings, for which SLAM technology provides important support. In complex and dynamic environments, single-sensor SLAM methods often suffer from degeneracy. In this paper, a multi-sensor fusion SLAM method based on the LVI-SAM framework is proposed. First, the state-of-the-art feature detection algorithm SuperPoint is used to extract feature points in the visual-inertial system, enhancing feature detection in complex scenarios. In addition, to improve loop-closure detection in complex scenarios, scan context is used to optimize the loop-closure step. The experimental results show that the RMSE of the trajectory on the 05 sequence of the KITTI dataset and the Street07 sequence of the M2DGR dataset is reduced by 12% and 11%, respectively, compared to LVI-SAM. In simulated complex environments of animal farms, the error of this method at the starting and ending points of the trajectory is also smaller than that of LVI-SAM. These comparisons demonstrate that the proposed method achieves higher precision and robustness in localization and mapping within the complex environments of animal farms. Full article
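Scan context, the loop-closure descriptor adopted here, is a published method (Kim and Kim, 2018, cited as [30] in the figure list below): each LiDAR scan is reduced to a ring-by-sector polar grid whose cells store the maximum point height, and two scans are matched by comparing columns over all rotational shifts. The sketch below is a minimal NumPy rendition of that idea, not the authors' implementation; grid sizes and function names are illustrative.

```python
import numpy as np

def scan_context(points: np.ndarray, num_rings: int = 20,
                 num_sectors: int = 60, max_range: float = 80.0) -> np.ndarray:
    """Build a ring x sector descriptor storing the max point height per cell.

    points: (N, 3) array of x, y, z from one LiDAR scan; cells default to 0,
    so heights are assumed measured above ground.
    """
    desc = np.zeros((num_rings, num_sectors))
    r = np.hypot(points[:, 0], points[:, 1])                 # planar range
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi   # angle in [0, 2*pi)
    keep = r < max_range
    ring = (r[keep] / max_range * num_rings).astype(int).clip(0, num_rings - 1)
    sector = (theta[keep] / (2 * np.pi) * num_sectors).astype(int).clip(0, num_sectors - 1)
    np.maximum.at(desc, (ring, sector), points[keep, 2])     # cell = max z
    return desc

def sc_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Rotation-invariant distance: best mean column-wise cosine distance
    over all circular sector shifts of descriptor b."""
    best = np.inf
    for shift in range(b.shape[1]):
        shifted = np.roll(b, shift, axis=1)
        num = np.sum(a * shifted, axis=0)
        den = np.linalg.norm(a, axis=0) * np.linalg.norm(shifted, axis=0) + 1e-9
        best = min(best, float(np.mean(1.0 - num / den)))
    return best
```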
Show Figures
Figure 1: Experimental platform for the livestock inspection robot.
Figure 2: Experimental conditions and scenarios (the numbers correspond to Table 3).
Figure 3: The system architecture of the proposed method. The red solid-line boxes (SuperPoint) and the orange solid-line boxes (scan context) are the innovative parts of the method compared with LVI-SAM.
Figure 4: Comparison of Shi–Tomasi, ORB, and SuperPoint feature detection: (a) Shi–Tomasi algorithm; (b) ORB algorithm; (c) SuperPoint algorithm. The red circles indicate the extracted feature points.
Figure 5: Frame 265 of KITTI sequence 05 scan context transformation: (a) 3D point cloud; (b) scan context.
Figure 6: Scan context algorithm overview [30]. Copyright © 2018, IEEE.
Figure 7: Frames 61 and 1105 of KITTI sequence 05 scan context transformation: (a) the frame 61 scan context; (b) the frame 1105 scan context; (c) the scan context after translation of frame 1105.
Figure 8: Schematic diagram of the translation search method with prior information.
Figure 9: The mapping effects of different methods on KITTI sequence 05: (a) 3D mapping of the proposed method; (b) 3D mapping of LVI-SAM; (c) 3D map construction details of the proposed method; (d) 3D map construction details of LVI-SAM.
Figure 10: Comparison of trajectories using different methods on KITTI sequence 05: (a) trajectories on the x-y plane; (b) trajectories in the x-, y-, and z-directions.
Figure 11: Comparison of APE at various time points on KITTI sequence 05 (m).
Figure 12: Comparison of trajectories using different methods on the M2DGR sequence Street07.
Figure 13: Comparison of APE at various time points on the M2DGR sequence Street07 (m).
Figure 14: The mapping effects of different methods in real-world scenarios: (a) 3D mapping of the proposed method; (b) 3D mapping of LVI-SAM; (c) 3D map construction details of the proposed method; (d) 3D map construction details of LVI-SAM.
Figure 15: Comparison of trajectories using different methods in real-world scenarios.
Figure 16: Movement speed using different methods at various times in real-world scenarios.
46 pages, 19002 KiB  
Article
3Cat-8 Mission: A 6-Unit CubeSat for Ionospheric Multisensing and Technology Demonstration Test-Bed
by Luis Contreras-Benito, Ksenia Osipova, Jeimmy Nataly Buitrago-Leiva, Guillem Gracia-Sola, Francesco Coppa, Pau Climent-Salazar, Paula Sopena-Coello, Diego Garcín, Juan Ramos-Castro and Adriano Camps
Remote Sens. 2024, 16(22), 4199; https://doi.org/10.3390/rs16224199 - 11 Nov 2024
Viewed by 751
Abstract
This paper presents the mission analysis of 3Cat-8, a 6-Unit CubeSat mission being developed by the UPC NanoSat Lab for ionospheric research. The primary objective of the mission is to monitor the ionospheric scintillation of the aurora, and to perform several technological demonstrations. The satellite incorporates several novel systems, including a deployable Fresnel Zone Plate Antenna (FZPA), an integrated PocketQube deployer, a dual-receiver GNSS board for radio occultation and reflectometry experiments, and a polarimetric multi-spectral imager for auroral emission observations. The mission design, the suite of payloads, and the concept of operations are described in detail. This paper discusses the current development status of 3Cat-8, with several subsystems already developed and others in the final design phase. It is expected that the data gathered by 3Cat-8 will contribute to a better understanding of ionospheric effects on radio wave propagation and demonstrate the feasibility of compact remote sensors in a CubeSat platform. Full article
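For context on the deployable Fresnel Zone Plate Antenna (FZPA) demonstrated by the mission, the zone boundaries of a classical zone plate follow from standard diffraction theory; the relation below is textbook material rather than a result from this paper, with F denoting an assumed focal distance and λ the design wavelength.

```latex
% Radius of the n-th Fresnel zone boundary for a zone plate with
% focal length F at design wavelength \lambda (standard result):
r_n = \sqrt{\, n \lambda F + \left( \tfrac{n \lambda}{2} \right)^{2} }
% e.g., at GPS L1 (f = 1575.42 MHz, \lambda \approx 0.19 m) with an
% assumed F = 1 m, the first zone boundary is
% r_1 \approx \sqrt{0.190 + 0.009} \approx 0.45 m.
```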
(This article belongs to the Special Issue Advances in CubeSats for Earth Observation)
Show Figures
Figure 1: 3Cat-8 overall mission timeline.
Figure 2: 3Cat-8 Mission Launch and Early Orbit Phase.
Figure 3: General view of the 3Cat-8 satellite.
Figure 4: 3Cat-8 in fully deployed configuration.
Figure 5: PoCat deployment and early operations phase.
Figure 6: 3Cat-8 Mission Operational Phase.
Figure 7: Exploded view of the 3Cat-8 satellite's subsystems.
Figure 8: 3Cat-8 data connection architecture.
Figure 9: Modeled camera quantum efficiency compared to auroral emissions.
Figure 10: Flight model of the SUSIE imager.
Figure 11: OBDH and OBC PCBs of the C3SatP's qualification and flight models.
Figure 12: Schematic view of the GNSS-R/RO payload architecture.
Figure 13: SiLeX down-looking antenna, which includes the L1 patch array and the stacked S-band (2× single patch) and X-band (2×2 array) antennas.
Figure 14: FZPA pathfinders: (left) 3U model after a deployment test; (center) single-crown FZPA in the anechoic chamber; (right) scale model of the FZPA membrane for 2.45 GHz.
Figure 15: Radiation pattern at L1 for a full-crown plus partial segments FZPA, from CST simulation (left) and anechoic chamber tests (right).
Figure 16: Deployment sequence of the FZPA from the 3Cat-8 spacecraft.
Figure 17: CuPID models for two and three PocketQube units.
Figure 18: 3Cat-8 power distribution block diagram.
Figure 19: 3Cat-8 OBC application layer element overview.
Figure 20: 3Cat-8 structure with a 3P version of CuPID (see Figure 17a).
Figure 21: Block diagram of the spacecraft ADCS.
Figure 22: Attitude control actuators on 3Cat-8.
Figure 23: Link budget of the different communication subsystems.
Figure 24: Pass duration distribution for four communication lines over the GS in Montsec (over 1 year).
Figure 25: Temperature distribution of 3Cat-8 during Post-Standby in the coldest (left) and hottest (right) orbit areas.
Figure 26: 3Cat-8 subsystem average temperature variations during Post-Standby.
Figure 27: Temperature distribution of 3Cat-8 in Nominal Mode in the coldest orbit area.
Figure 28: Temperature distribution of 3Cat-8 in Nominal Mode in the hottest orbit area.
Figure 29: 3Cat-8 subsystem average temperature variations in Nominal Mode.
Figure 30: 3Cat-8 simplified geometry.
Figure 31: Modal shapes for modes with the greatest Effective Mass Ratio.
Figure 32: Equivalent stress for the Random Vibrations simulation at GEVS levels.
Figure 33: Angular body rates and actuator torques during the complete detumbling operation.
Figure 34: Total attitude error, disturbance torques, and reaction wheel angular rate during hybrid-actuation nadir pointing with momentum unloading.
Figure 35: Altitude change of 3Cat-8 in Fully Deployed Configuration.
Figure 36: Altitude change of 3Cat-8 in Partially Deployed Configuration.
45 pages, 24880 KiB  
Article
Future Low-Cost Urban Air Quality Monitoring Networks: Insights from the EU’s AirHeritage Project
by Saverio De Vito, Antonio Del Giudice, Gerardo D’Elia, Elena Esposito, Grazia Fattoruso, Sergio Ferlito, Fabrizio Formisano, Giuseppe Loffredo, Ettore Massera, Paolo D’Auria and Girolamo Di Francia
Atmosphere 2024, 15(11), 1351; https://doi.org/10.3390/atmos15111351 - 10 Nov 2024
Viewed by 531
Abstract
The last decade has seen significant growth in the adoption of low-cost air quality monitoring systems (LCAQMSs), mostly driven by the need to overcome the spatial density limitations of traditional regulatory-grade networks. However, urban air quality monitoring scenarios have proved extremely challenging for their operative deployment. These scenarios need pervasive, accurate, personalized monitoring solutions along with powerful data management technologies and targeted communication tools; otherwise, they can lead to a lack of stakeholder trust and awareness and, consequently, to environmental inequalities. The AirHeritage project, funded by the EU's Urban Innovative Actions (UIA) program, addressed these issues by integrating intelligent LCAQMSs with conventional monitoring systems and engaging the local community in multi-year measurement strategies. Its implementation allowed us to explore the benefits and limitations of citizen science approaches, the logistic and functional impacts of IoT infrastructures and calibration methodologies, and the integration of AI and geostatistical sensor fusion algorithms for mobile and opportunistic air quality measurements and reporting. Similar research and operational projects have been implemented in the recent past, often focusing on a limited subset of these challenges; unfortunately, detailed reports as well as recorded and/or curated data are often not publicly available, limiting the development of the field. This work openly reports on the lessons learned and experiences from the AirHeritage project, including device accuracy variance, field recording assessments, and high-resolution mapping outcomes, aiming to guide future implementations in similar contexts and to support repeatability and further research by delivering an open datalake. By sharing these insights along with the gathered datalake, we aim to inform stakeholders, including researchers, citizens, public authorities, and agencies, about effective strategies for deploying and utilizing LCAQMSs to enhance air quality monitoring and public awareness of this challenging urban environment issue. Full article
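The figure list below refers to MLR-based data-driven calibration of the MONICA nodes, with R² and MAE as accuracy indices. This is a standard procedure for low-cost sensors (fit a multiple linear regression on co-location data against a reference analyzer, then apply it in the field), and a minimal sketch is given here; the regressor set, data split, and names are illustrative assumptions, not details taken from the project.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

def calibrate_mlr(X_colocation, y_reference, X_field):
    """Fit an MLR calibration on co-location data and apply it in the field.

    X_colocation: (N, k) raw node readings (e.g., electrode voltages,
                  temperature, relative humidity) next to the reference analyzer.
    y_reference:  (N,) reference concentrations from the regulatory station.
    X_field:      (M, k) raw readings recorded during mobile deployment.
    """
    model = LinearRegression().fit(X_colocation, y_reference)
    y_hat = model.predict(X_colocation)
    # Report the same short-term accuracy indices used in the figures below.
    print(f"co-location fit: R2 = {r2_score(y_reference, y_hat):.2f}, "
          f"MAE = {mean_absolute_error(y_reference, y_hat):.2f}")
    return model.predict(X_field)

# Illustrative call with synthetic data (5 regressors, hourly averages):
rng = np.random.default_rng(0)
X_co, y_ref = rng.normal(size=(500, 5)), rng.normal(loc=30, scale=10, size=500)
X_mobile = rng.normal(size=(100, 5))
estimates = calibrate_mlr(X_co, y_ref, X_mobile)
```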
(This article belongs to the Special Issue Air Quality and Energy Transition: Interactions and Impacts)
Show Figures
Figure 1: The path from the goals to the selection of architectural design and technology for LCAQMS network deployment projects; connections illustrate possible routes through the project design choices.
Figure 2: MONICA™ node diagram.
Figure 3: Front and back pictures of the MONICA node.
Figure 4: Synthetic schema of the complete software architecture for the AirHeritage project.
Figure 5: Status of air quality from fixed stations.
Figure 6: Interactive map for a MONICA registered session. Mobility paths are highlighted using a color code based on the European Air Quality Index (EAQI).
Figure 7: The positions of the three co-location campaigns on a map, as performed in the AirHeritage project, and details of the assembly and the USB-based multiple-device power supply unit.
Figure 8: Scheme of the IoT architecture in the stationary setup.
Figure 9: The 7 fixed stations as deployed near the reference mobile station during calibration data gathering (co-location periods).
Figure 10: Lognormal-fitted pollutant concentrations as recorded in the first co-location period by the mobile ARPAC air quality monitoring laboratory, providing reference values for data-driven calibration.
Figure 11: Lognormal-fitted concentrations of CO as recorded during the first co-location period by the mobile ARPAC air quality monitoring laboratory, providing reference values for data-driven calibration.
Figure 12: Distribution, across the 30 MONICA™ devices, of R² (first column) and MAE (second column) short-term performance values for NO2 (first row), O3 (second row), and CO (third row), as estimated by MLR-based data-driven calibration in deployment period 1. The distributions appear to be skewed by a few outliers; checks show that the anomalously low performance is due to transients in raw sensor responses when the sensors were first switched on.
Figure 13: R² (first column) and MAE (second column) short-term performance for PM2.5 (first row) and PM10 (second row), as estimated by MLR-based data-driven calibration in deployment period 1, across the 30 MONICA devices.
Figure 14: Histogram of the PM2.5 R² accuracy index (violet) along with a Gaussian distribution fit (blue); three different device performance clusters are observable, each corresponding to a co-location batch.
Figure 15: Time series of PM10 and PM2.5 concentrations, as measured by the mobile laboratory, during the initial co-location period.
Figure 16: Trend in the measured hourly mean NO2 concentrations.
Figure 17: The hourly mean concentration of ozone (O3) (black line) and the 8 h moving average (yellow line).
Figure 18: Hourly average carbon monoxide (CO) concentration time series.
Figure 19: Time series of PM10 and PM2.5 concentrations, as measured by the mobile laboratory, during the 2nd co-location period.
Figure 20: Trend in the measured hourly mean NO2 concentrations during the 2nd co-location.
Figure 21: The hourly mean concentration of ozone (O3) (black line) and the 8 h moving average (yellow line), presented along with the daily average temperature graph.
Figure 22: Hourly average carbon monoxide (CO) concentration time series in the 2nd co-location.
Figure 23: Time series of PM10 and PM2.5 concentrations, as measured by the mobile laboratory, during the 3rd co-location period.
Figure 24: NO2 hourly average concentrations during the 3rd co-location.
Figure 25: Hourly average concentration of ozone (O3) (black line) and the 8 h moving average (yellow line) (top), with the daily average temperature plot (bottom).
Figure 26: CO hourly average concentrations recorded by the mobile station during the 3rd co-location period.
Figure 27: (a) An illustrative example of a user session as displayed on the webpage, with an indication of the location and the pollutant levels; (b) an illustrative example of a user session as displayed in the MONICA app.
Figure 28: A schematic representation of the data flow in a mobile application scenario.
Figure 29: The workflow performed in the AirHeritage project.
Figure 30: Site suitability map for networks of low-cost traffic-oriented stations for air pollutant monitoring across the city of Portici.
Figure 31: Map of one of the optimal locations (red triangle within the red circle), with the related geographical coordinates (marked in red in the table) and an image of the mounting pole where the NOx and PM2.5 sensors are to be installed.
Figure 32: Maps of the mobile monitoring campaigns along the selected monitoring route.
Figure 33: Comparison between MONICA (blue line) and SIRANE (orange line) for CO on 5 and 21 June. Triangles are street canyons and circles are open roads; receptor IDs are grouped by monitored road segment. Panels (a,c,e) show the comparisons at 9 a.m., 1 p.m., and 5 p.m. on 5 June; panels (b,d,f) show the comparisons at 9 a.m., 1 p.m., and 5 p.m. on 21 June.
Figure 34: Maps of the PM2.5 measurement density for each 25 m bin in the summer (a) and winter (b) campaigns.
Figure 35: Maps of the distribution (median value) of the recorded PM2.5 concentrations within the 25 m bins in the summer (a) and winter (b) campaigns.