Article

Portable Sensors for Dynamic Exposure Assessments in Urban Environments: State of the Science

by Jelle Hofman 1,*, Borislav Lazarov 1, Christophe Stroobants 2, Evelyne Elst 2, Inge Smets 2 and Martine Van Poppel 2

1 Environmental Intelligence Unit, Flemish Institute for Technological Research (VITO), Vlasmeer 5, 2400 Mol, Belgium
2 Flanders Environmental Agency (VMM), Kronenburgstraat 45, 2000 Antwerp, Belgium
* Author to whom correspondence should be addressed.
Sensors 2024, 24(17), 5653; https://doi.org/10.3390/s24175653
Submission received: 20 June 2024 / Revised: 20 August 2024 / Accepted: 28 August 2024 / Published: 30 August 2024
(This article belongs to the Special Issue Novel Sensing Technologies for Environmental Monitoring and Detection)
Graphical Abstract

Figure 1. Considered sensor systems (10) with (upper panel, left to right): PAM (2BTech, Broomfield, CO, USA), GeoAir, Observair (DSTech, Pohang-si, Republic of Korea), SODAQ Air (SODAQ, Hilversum, The Netherlands), PMscan (TERA Sensor, Rousset, France), OPEN SENECA (open-seneca.org), and ATMOTUBE Pro (ATMOTECH Inc., San Francisco, CA, USA). Lower panel, left to right: SODAQ NO2 (SODAQ, Hilversum, The Netherlands), Habitatmap Airbeam (Habitatmap, Brooklyn, NY, USA), and BCmeter (BCmeter.org).
Figure 2. PM exposure chamber in the lab (left), mobile field test with cargo bike (middle), and field co-location campaign at an urban background monitoring station (right).
Figure 3. Mobile field trajectory (10.4 km) in the city center of Antwerp, Belgium (upper left), and applied cargo bike setup (upper right). Lower pictures show the variety of urban landscape and road traffic along the cycling route.
Figure 4. Stepwise PM2.5 concentrations generated during the lack-of-fit test and measured concentrations by the different sensor systems (1-3; green-blue-red) and the reference monitor (Grimm; purple/green).
Figure 5. Coarse PM testing procedure with consecutive 5-min generation periods of coarse (7.75 µm) and fine (1.18 µm) PM peaks (upper panel; measured by Grimm REF monitor) and resulting ATMOTUBE and OPEN SENECA sensor response (µg/m3) in the lower panels.
Figure 6. Stepwise NO2 concentrations generated during the lack-of-fit tests and measured raw (left) and lab-calibrated (right) concentrations by the SODAQ NO2 (1-3; upper in red), PAM (middle in red), Observair (lower in red), and the reference monitor (Thermo NOx analyzer in purple/green).
Figure 7. (Left): GPS tracks of the considered sensor systems (dots) and reference GPS track (blue line). (Right): Accuracy calculation by means of horizontal distance to reference GPS track (blue line).
Figure 8. Location of the exposure shelter on top of R801 urban background monitoring station (left), detail of the exposure shelter (middle), and positioning of the sensor systems at the different platforms inside the shelter (right).
Figure 9. Hourly timeseries of PM2.5, NO2, and BC concentrations measured by the respective sensor systems and the reference monitors at the R801 reference background monitoring station.

Abstract

This study presents a fit-for-purpose lab and field evaluation of commercially available portable sensor systems for PM, NO2, and/or BC. The main aim of the study is to identify portable sensor systems that are capable of reliably quantifying dynamic exposure gradients in urban environments. After an initial literature and market study resulting in 39 sensor systems, 10 sensor systems were ultimately purchased and benchmarked under laboratory and real-world conditions. We evaluated the comparability to reference analyzers, sensor precision, and sensitivity towards environmental confounders (temperature, humidity, and O3). Moreover, we evaluated whether the sensor accuracy can be improved by applying a lab or field calibration. Because the targeted application of the sensor systems under evaluation is mobile monitoring, we conducted a mobile field test in an urban environment to evaluate the GPS accuracy and potential impacts from vibrations on the resulting sensor signals. Results of the considered sensor systems indicate that out-of-the-box performance is relatively good for PM (R2 = 0.68–0.9, Uexp = 16–66%, BSU = 0.1–0.7 µg/m3) and BC (R2 = 0.82–0.83), but maturity of the tested NO2 sensors is still low (R2 = 0.38–0.55, Uexp = 111–614%) and additional efforts are needed in terms of signal noise and calibration, as proven by the performance after multilinear calibration (R2 = 0.75–0.83, Uexp = 37–44%). The horizontal accuracy of the built-in GPS was generally good, achieving <10 m accuracy for all sensor systems. More accurate and dynamic exposure assessments in contemporary urban environments are crucial to study real-world exposure of individuals and the resulting impacts on potential health endpoints. A greater availability of mobile monitoring systems capable of quantifying urban pollutant gradients will further boost this line of research.


1. Introduction

Air quality has improved significantly over the past decades. Yet, exposure to particulate matter and nitrogen dioxide in Europe still causes an estimated 253,000 and 52,000 premature deaths per year [1]. Moreover, continuous worldwide urbanization results in megacities with intrinsic hotspots, highlighting the importance of proper air pollution monitoring. Currently, the exposure of the population to air pollution is still determined based on home address (static exposure). However, research has shown that people are exposed to the highest air pollution peaks at times when they are in transit (e.g., during commutes) [2,3,4,5,6]. Studies applying activity-based models or personal monitors demonstrated that transit activities, although short in duration, can be responsible for quite a large part of the integrated personal exposure to combustion-related pollutants [2,4,7,8,9]. Research based on an extensive dataset of 20,000 citizens confirmed that this in-transit (dynamic) exposure is often (64% of the individuals) higher than the respective static residence-based exposure [10]. To better assess dynamic exposure on a wider scale, mobile monitoring systems are needed that (i) can easily be used by study participants (e.g., citizens) and (ii) produce reliable data.
Recent advances in sensor and Internet of Things (IoT) technologies have resulted in a wide range of commercially available “low-cost” sensor systems that allow for quantification of urban pollutants, e.g., particulate matter (PMx), nitrogen dioxide (NO2), and ozone (O3), at an unprecedented scale [11]. Portable air quality sensors enable quantification of dynamic exposure while raising awareness among citizens about their personal exposure, in turn driving behavioral change [12,13,14,15,16]. Moreover, the obtained mobile data can be used to construct urban exposure maps, offering policy makers the right tools for evidence-based policy measures [11,17,18,19,20,21,22]. As Helbig et al. [23] stated, wearable sensing has two aspects: firstly, the exposure of an individual is recorded, and secondly, individuals act as explorers of the urban area. While many stationary sensor systems have been evaluated and benchmarked in previous years [24,25], mobile sensor systems have different requirements, e.g., power autonomy (battery), a high monitoring resolution, and accurate positioning (GPS). Also, the sensor signal noise and between-sensor variability should be low enough to measure the spatial concentration variability at a high temporal resolution (with multiple sensors). Today, many commercially available portable sensor systems are already on the market, but it is hard to determine their fit-for-purpose. This is one of the first studies benchmarking commercially available portable sensor systems for mobile applications. This study includes an evaluation of the data quality performance of different sensor systems under lab and field conditions, as well as during a mobile field test to evaluate GPS performance, the impact of vibrations on the sensor signal, and the overall potential to capture spatial pollutant gradients in urban environments. In doing so, we evaluate the applicability of these sensor systems in real-world urban environments.

2. Materials and Methods

2.1. Sensor System Selection

Based on an earlier literature market study on air quality sensors [26], expert network consultation (RIVM, VMM, NPL, Ineris, US EPA), reports of independent sensor performance studies (AIRlab, AQ-SPEC, SamenMeten, EPA Air Sensor Toolbox), recent sensor-based citizen science studies [13,27,28,29,30,31,32,33], and a new literature search on Web of Science (~90 publications with search criteria “mobile”, “sensor”, “pollution”, “exposure”), we compiled a longlist of 39 sensor systems with the following criteria:
  • commercially available
  • wireless/power solution (battery or via car)
  • weatherproof housing
  • data transmission/logging solution (internal, USB, Bluetooth, LTE-M, LoRA, wifi)
In addition to the criteria above, we defined a set of quantitative (R2, slope, intercept, accuracy, and between-sensor uncertainty) and qualitative criteria (price, monitored pollutants, additional variables (temperature, relative humidity, pressure, noise, …), monitoring resolution, GPS localization, autonomy, display/LED, user-friendliness (portability, mounting options, size, weight)) to differentiate between the longlist sensor systems. This longlist was narrowed down based on the following:
  • sensor capability to monitor PM, NO2, and/or BC
  • availability of particle mass concentration (µg/m3; instead of particle number concentration)
  • power autonomy (battery instead of car-powered systems)
  • GPS localization (internal or via smartphone)
This resulted in a final shortlist of 12 suitable portable sensor systems for which quotation requests were sent out. Ultimately, 10 sensor systems were purchased (Table S1), of which 8/10 contained a PM2.5 and PM10 sensor, and 3/10 sensor systems contained an additional NO2 sensor (SODAQ NO2, DST Observair, and 2BTech PAM). All 10 sensor systems can be regarded as portable air quality sensor systems, with power autonomy (battery), data storage and/or transmission, and GPS localization (Figure 1).
As no commercial low-cost sensor systems were available for BC, we considered a mid-end instrument that also includes NO2 (DST Observair) and a research prototype (BCmeter) for stationary measurements (wifi, power cable) in the field co-location campaign. In order to obtain a portable BCmeter, additional hardware/software developments will be needed.

2.2. Benchmarking Protocol

The purchased sensor systems were evaluated under controlled (laboratory) and real-life (field) conditions (Figure 2). Field benchmarking included a mobile test on a cargo bike and a 3-month co-location campaign at a regulatory urban background (R801) air quality monitoring station in Antwerp, Belgium.

2.2.1. Laboratory Test Protocol

Laboratory tests were performed for both PM and NO2. Test levels and test conditions for NO2 were based on the CEN/TS 17660-1:2022. For PM, we included a laboratory test to evaluate the potential of the sensor to measure the coarse fraction (PM2.5–10 = PM10 − PM2.5) because it is known that some low-cost sensors calculate PM10 concentrations based on the measured concentrations of PM2.5, and sensors can have various response characteristics regarding size selectivity [34,35]. For PM2.5 and PM10, we evaluated:
  • Lack-of-fit (linearity) at setpoints 0, 30, 40, 60, 130, 200, and 350 µg/m3 (PM10, dolomite dust). This concentration range can be considered representative for exhibited PM levels in typical urban environments [2,36,37,38,39,40,41,42]. A Palas Particle dispenser (RBG 100) system connected to a fan-based dilution system and aluminum PM exposure chamber was used.
  • Sensitivity of PM sensor to the coarse (2.5–10 µm) particle fraction. We dosed, sequentially, 7.750 µm and 1.180 µm-sized monodisperse dust (silica nanospheres with density of 2 g/cm3) using an aerosolizer (from the Grimm 7.851 aerosol generator) system connected to a fan-based dilution system and an aluminum PM exposure chamber with fans to have homogeneous PM concentrations. This testing protocol is currently considered to be included in the CEN/TS 17660-2 (in preparation) on performance targets for PM sensors.
Based on the lack-of-fit results, the comparability against the reference is evaluated from the resulting linearity (R2), accuracy (A; %), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Bias Error (MBE), and Expanded Uncertainty (Uexp). As reference instrument, we used a Grimm 11-D with heated sampling inlet line (EDM 264, Grimm). The accuracy is calculated per concentration setpoint as (1) in the lack-of-fit test and evaluated between the sensors as an overall average of all setpoints (mean of means):
$$A\ (\%) = 100 - \frac{\left|\overline{sensor} - \overline{REF}\right|}{\overline{REF}} \times 100 \quad (1)$$
The comparability between the sensors can be regarded as the observed variability between sensors of the same type and is calculated by the between-sensor uncertainty (BSU (2)):
$$BSU_{sensor} = \sqrt{\frac{\sum_{i=1}^{n}\sum_{j=1}^{k}\left(sensor_{ij} - average_{i}\right)^{2}}{n-1}} \quad (2)$$
with $n$ the number of sensors (3), $k$ the number of measurements over time, $sensor_{ij}$ the measurement of sensor $j$ for period $i$, and $average_{i}$ the mean result for period $i$.
In addition, we calculated the minimal and maximal observed Pearson correlation (r) and MAE (µg/m3) between the sensors of the same brand in order to evaluate the intra-sensor comparability.
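The metrics above reduce to a few lines of code. The following is a minimal sketch (function names and the array layout are our own, not part of the study's toolchain) of the setpoint accuracy (1), the error metrics, and the between-sensor uncertainty (2):

```python
import numpy as np

def accuracy_pct(sensor, ref):
    """Setpoint accuracy per Eq. (1): 100 - |mean(sensor) - mean(ref)| / mean(ref) * 100."""
    return 100 - abs(np.mean(sensor) - np.mean(ref)) / np.mean(ref) * 100

def error_metrics(sensor, ref):
    """RMSE, MAE, and MBE between paired sensor and reference series."""
    d = np.asarray(sensor, float) - np.asarray(ref, float)
    return {"RMSE": float(np.sqrt(np.mean(d ** 2))),
            "MAE": float(np.mean(np.abs(d))),
            "MBE": float(np.mean(d))}

def between_sensor_uncertainty(readings):
    """BSU per Eq. (2): 'readings' is a (periods x sensors) array; deviations of
    each sensor from the per-period mean are pooled over all periods."""
    x = np.asarray(readings, float)
    n = x.shape[1]                                 # number of sensors
    dev = x - x.mean(axis=1, keepdims=True)        # deviation from per-period mean
    return float(np.sqrt((dev ** 2).sum() / (n - 1)))
```

Pearson correlation and MAE between sensor pairs of the same brand can then be computed with, e.g., `np.corrcoef` on the paired series.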
For NO2, we evaluated the following:
  • Lack-of-fit (linearity) at setpoints of 0, 40, 100, 140, and 200 μg/m3.
  • Sensor sensitivity to relative humidity at 15, 50, 70, and 90% (±5%) during stable temperature conditions of 20 ± 1 °C.
  • Sensor sensitivity to temperature at −5, 10, 20, and 30 °C (±3 °C) during stable relative humidity conditions of 50 ± 5%.
  • Sensor cross-sensitivity to ozone (120 µg/m3) at zero and 100 µg/m3.
  • Sensor response time under rapidly changing NO2 concentrations (from 0 to 200 µg/m3).
From the lack-of-fit tests, the comparability against the reference was evaluated from the resulting linearity (R2), accuracy (%), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Bias Error (MBE), and Expanded Uncertainty (Uexp).
In addition, we evaluated sensor stability (mean of exhibited standard deviations at each (stable) concentration setpoint in the lack-of-fit test) and intra-sensor comparability by calculating the between-sensor uncertainty (BSU). As reference instrument, we applied a Thermo Scientific 42iQ-TL chemiluminescence monitor (Thermo Fisher, Waltham, MA, USA).

2.2.2. Mobile Field Test

The mobile field test aimed at testing the GPS accuracy of the sensor systems along a ~10 km trajectory within the varying urban landscape (street canyons, open parks, tunnels, …) of Antwerp, Belgium (Figure 3). GPS accuracy was evaluated by calculating the average horizontal distance (m) of the high-resolution mobile GPS measurements to a reference GPS track. The reference GPS track was determined by evaluating 3 different GPS platforms (TomTom Runner2, Garmin Edge 810, and Komoot smartphone application) and selecting the best performing one as the reference GPS trajectory.
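The horizontal-accuracy calculation can be sketched as follows. This is a simplified illustration (hypothetical function names; point-to-vertex rather than the true point-to-line distance used in QGIS), assuming WGS84 latitude/longitude coordinates:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6_371_000.0  # mean Earth radius (m)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def mean_horizontal_error(track, reference):
    """Mean distance (m) from each sensor GPS fix to its nearest reference vertex.
    A point-to-vertex simplification of the point-to-line calculation."""
    errs = [min(haversine_m(la, lo, rla, rlo) for rla, rlo in reference)
            for la, lo in track]
    return sum(errs) / len(errs)
```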

2.2.3. Field Co-Location Campaign

During the field co-location campaign, the considered sensor systems were exposed for a period of 3 months (7 September 2022–5 December 2022) to ambient pollutant concentrations in an actively vented outdoor shelter, deployed on top (near the air inlets) of a regulatory urban background monitoring station (R801) in Antwerp, Belgium. Sensor systems were evenly distributed across the three shelter levels. Regulatory data included NO2 (Thermo 42C; µg/m3), O3 (Teledyne API400E; µg/m3), PM1, PM2.5, PM10 (Palas FIDAS 200; µg/m3), BC (Thermo MAAP; µg/m3), relative humidity (%), and temperature (°C) and exhibited good hourly data coverage (n = 2132) of 96.7, 96.6, and 92.9% for, respectively, PM, BC, and NO2. The collected raw (RAW) and lab-calibrated (LAB CAL; linear calibration based on lack-of-fit) sensor data were subsequently evaluated for the following:
  • Hourly data coverage (%)
  • Timeseries plot: RAW & LAB CAL
  • Scatter plot: RAW & LAB CAL
  • Comparability between sensors: between-sensor uncertainty (BSU)
  • Comparability with reference (hourly): R2, RMSE, MAE, MBE
  • Expanded uncertainty (non-parametric): Uexp (%)
In addition, we evaluated the sensitivity of the sensors (R2, RMSE, MAE, MBE) towards the (real-life) coarse particulate fraction (PM10 − PM2.5) and the exhibited meteorological conditions (temperature and relative humidity). Moreover, we tested the impact of a 2-week field co-location calibration (FIELD CAL; linear calibration for PM and multilinear for NO2) on the resulting sensor performance and compared the field calibration performance to the lab calibration performance.
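The linear (PM) field calibration step can be sketched as follows; function names are illustrative and the study's actual processing chain is not published, but the idea is to derive slope and intercept on the co-location training window and apply them to the remaining data:

```python
import numpy as np

def fit_linear_calibration(sensor_train, ref_train):
    """Least-squares slope/intercept mapping raw sensor readings onto the
    reference (ref ~ a * sensor + b), fitted on the training window."""
    a, b = np.polyfit(np.asarray(sensor_train, float),
                      np.asarray(ref_train, float), deg=1)
    return a, b

def apply_calibration(sensor, a, b):
    """Apply the derived slope/intercept to (new) raw sensor readings."""
    return a * np.asarray(sensor, float) + b
```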

3. Results

3.1. Laboratory Test

3.1.1. PM

Due to the varying monitoring resolutions of the sensor systems (2 s–5 min; Supplementary S1), all data were temporally aggregated to a 1-min resolution and merged with the reference (Grimm 11D) data. The SODAQ Air and NO2 apply a 5-min resolution when stationary and change automatically to ~10 s when mobile, resulting in fewer datapoints in the laboratory test. The GeoAir experienced power supply issues during the lack-of-fit measurements (insufficient amperage from the applied USB hubs), resulting in data loss for all sensors (NA in Table 1). Setpoint averages (µg/m3) were calculated from the most stable concentration periods (final 15 min of each 1-h setpoint) and are shown in Supplementary S2. From these setpoint averages, lack-of-fit (linear regression) curves were generated (Supplementary S3), linearity (R2) and regression coefficients (slope + intercept (y = a*x + b) and slope only (y = a*x)) were determined, and sensor accuracy (%) was calculated. All results are shown per sensor system and subsequently presented in an overview table.
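The aggregation and merging step might look as follows in pandas; this is an illustrative sketch (column and function names are assumptions), not the study's actual code:

```python
import pandas as pd

def align_to_reference(sensor_df, ref_df):
    """Aggregate irregular sensor data (2 s - 5 min) to 1-min means and
    inner-join it with the 1-min reference series on timestamp."""
    sensor_1min = sensor_df.resample("1min").mean()
    ref_1min = ref_df.resample("1min").mean()
    return sensor_1min.join(ref_1min, how="inner",
                            lsuffix="_sensor", rsuffix="_ref")
```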
All sensor systems responded well to the increasing particle concentrations inside the PM exposure chamber (Figure 4), resulting in a generally good linearity between sensor and reference (R2 = 0.96–1). Nevertheless, most of the sensor systems underestimated the actual PM2.5 and PM10 concentrations, while overestimating the PM1 particle size fraction. Mean setpoint accuracy (mean of the different setpoint accuracies) varied from 82–85% for PM1, 63–69% for PM2.5, and 28–31% for PM10 (ATMOTUBE); 12–28% for PM1, 76–84% for PM2.5, and 45–51% for PM10 (TERA PMscan); 80–86% for PM1, 53–56% for PM2.5, and 22–23% for PM10 (OPEN SENECA); 31–94% for PM1, 48–95% for PM2.5, and 20–43% for PM10 (SODAQ Air); 60–77% for PM1, 35–70% for PM2.5, and 13–29% for PM10 (SODAQ NO2); and 63% for PM1, 29% for PM2.5, and 13% for PM10 (2BTECH PAM). Quantitative performance statistics (R2, MAE, BSU, and Uexp) were calculated based on all 1-min averaged lack-of-fit data for each sensor system and particle size fraction and are shown in Table 1.
From Figure 4, it can be observed that the between-sensor uncertainty (BSU) is larger for the SODAQ Air (3.96 µg/m3) and NO2 (no simultaneous data) when compared to ATMOTUBE (1.52 µg/m3), OPEN SENECA (1.21 µg/m3), and TERA PM (1.64 µg/m3). For the 2BTech PAM, this could not be evaluated, as we had only one device available.
After applying a linear lab calibration (based on the lack-of-fit regression coefficients), all sensor systems achieved an expanded uncertainty <50% for PM2.5, which is the data quality objective for indicative (Class 1) sensor systems (cf. CEN/TS 17660-1 for gases).
Recent research showed that particle sensors exhibit low sensitivity in the coarse particle size range (2.5–10 µm) [43,44]. Therefore, a test procedure was developed to evaluate sensor sensitivity to the coarse fraction and to evaluate whether sensors really measure PM10 rather than extrapolating it from the PM2.5 signal. We exposed the sensors to monodisperse dust (silica microspheres) of, consecutively, 7.75 µm and 1.18 µm (fine) diameters. We fine-tuned the settings of the aerosolizer to reach representative (~100–150 µg/m3) PM10 concentrations by generating dust pulses every 30 s during a 5-min period. The idea is to simulate conditions with mainly fine ('Fine test cond.') and mainly coarse ('Coarse test cond.') aerosol, respectively. Two representative 5-min periods (1 coarse test, 1 fine test) were subsequently selected and evaluated by calculating the dust composition (% coarse), the PM10, PM2.5, and PMcoarse sensor/REF ratios, and two relative change metrics, (3) and (4) (%):
  • Relative change (%) in fractional (coarse vs. fine) sensor/REF ratio during respective fine and coarse test conditions:
$$Rel\ PM_{fractional}\ (\%) = \frac{\dfrac{PM_{10-2.5}\,(sen,\ COARSE)}{PM_{10-2.5}\,(REF,\ COARSE)} - \dfrac{PM_{2.5}\,(sen,\ FINE)}{PM_{2.5}\,(REF,\ FINE)}}{\dfrac{PM_{2.5}\,(sen,\ FINE)}{PM_{2.5}\,(REF,\ FINE)}} \times 100 \quad (3)$$
  • Relative change (%) in PM10 sensor/REF ratio between fine and coarse test conditions:
$$Rel\ PM_{10}\ (\%) = \frac{\dfrac{PM_{10}\,(sen,\ COARSE)}{PM_{10}\,(REF,\ COARSE)} - \dfrac{PM_{10}\,(sen,\ FINE)}{PM_{10}\,(REF,\ FINE)}}{\dfrac{PM_{10}\,(sen,\ FINE)}{PM_{10}\,(REF,\ FINE)}} \times 100 \quad (4)$$
The sensor systems tend to visually pick up fine particle spikes but appeared far less responsive to the coarse fraction spikes (Figure 5). Note that in both fine and coarse generation spikes, PM2.5 is present. Similar responses are observed between the different sensor systems, which is not surprising, as all sensors are ultimately based on three original equipment manufacturer (OEM) sensors, namely Sensirion SPS30, Plantower PMS, and TERA next-PM. From the calculated change ratios in Supplementary S4, the sensor/REF ratio changed significantly between the considered particle size conditions (73–100%), with all sensors showing very low sensitivity towards the coarse particle size fraction (PMcoarse sensor/REF ratio from 0–0.11 as shown in Supplementary S4).
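The change metrics (3) and (4) reduce to simple ratio arithmetic. A sketch with illustrative parameter names (per-condition sensor and reference values, e.g., as tabulated in Supplementary S4):

```python
def rel_change_fractional(pm10_sen_c, pm25_sen_c, pm10_ref_c, pm25_ref_c,
                          pm25_sen_f, pm25_ref_f):
    """Eq. (3): relative change (%) of the coarse sensor/REF ratio under coarse
    test conditions vs. the fine sensor/REF ratio under fine test conditions."""
    coarse_ratio = (pm10_sen_c - pm25_sen_c) / (pm10_ref_c - pm25_ref_c)
    fine_ratio = pm25_sen_f / pm25_ref_f
    return (coarse_ratio - fine_ratio) / fine_ratio * 100

def rel_change_pm10(pm10_sen_c, pm10_ref_c, pm10_sen_f, pm10_ref_f):
    """Eq. (4): relative change (%) of the PM10 sensor/REF ratio between
    coarse and fine test conditions."""
    rc = pm10_sen_c / pm10_ref_c
    rf = pm10_sen_f / pm10_ref_f
    return (rc - rf) / rf * 100
```

A sensor that is completely blind to the coarse fraction yields a fractional relative change of −100%.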

3.1.2. NO2

For all sensors containing a NO2 sensor (3/10), lack-of-fit tests were conducted on three days (August 12th, 14th, and 15th) at concentrations ramping between 0 and 200 µg/m3 (Figure 6). Due to the varying monitoring resolutions of the sensor systems (2 s–5 min), all data were temporally aggregated to 1-min resolution and merged with the reference data (Thermo NOx analyzer). Setpoint averages were calculated based on steady-state conditions (final 1.5-h considering a 15-min buffer period before each setpoint change). From these setpoint averages, lack-of-fit (linear regression) plots were generated, linearity (R2) and regression coefficients (slope + intercept (y = a*x+b) and slope only (y = a*x)) determined and sensor stability (µg/m3) and accuracy (%) were calculated. The SODAQ NO2 showed significant noise and data connectivity issues, resulting in a low stability (5–80 µg/m3) and setpoint accuracy (−113–254%). Moreover, sensor readings were inversely correlated (R2 = 0.03–0.18) to the actual NO2 concentrations (Figure 6), with a poor between-sensor uncertainty (BSU) of 125 µg/m3. This out-of-the-box performance can be considered as inadequate. Potential calibration is hindered by the high signal noise, while sensor boxes showed connectivity issues and high BSU. The 2BTech PAM (only one unit available) was positively correlated with the generated NO2 concentrations, with a mean setpoint accuracy of 72%, but exhibited significant noise and extreme peak values during the lack-of-fit test, resulting in low sensor stability of 27 µg/m3. The DST Observair (one unit available) is not pre-calibrated by the supplier and relies on co-location calibration in the field. The uncalibrated sensor readings during the lack-of-fit test varied between -0.03 and 0.03 µg/m3 and showed a negative linear response to the increasing NO2 concentration steps. 
Compared to the SODAQ NO2 and PAM, the Observair exhibits much lower signal noise, resulting in better stability (<0.01 µg/m3) and better calibration potential. After calibration, the expanded uncertainty (Uexp) of the Observair (65%) outperforms that of the SODAQ NO2 (415–490%) and PAM (80%). Nevertheless, none of the considered NO2 sensors meet the Class 1 uncertainty objective of <25% (CEN/TS 17660-1 [45]).
The impact of changing relative humidity (0–50–75–90%) at zero and span concentration resulted in similar responses (Supplementary S7): an initial peak response at every setpoint change, followed by subsequent stabilization (transient effect), under different levels of noisiness (Observair < PAM < SODAQ NO2). The similar responses can be explained by the underlying OEM sensor (Alphasense NO2-B43F), which is the same for all NO2 sensor systems. Similar transient effects (Supplementary S8) were observed under varying temperatures (−5, 10, 20, and 30 °C), both at zero and span concentration.
To evaluate response time to rapidly changing NO2 concentrations, sensors were placed in glass tubes that allowed for rapid concentration changes from 0–200 µg/m3 (Supplementary S9). The smaller volume of the glass tubes (compared to the NO2 exposure chamber) only allowed evaluation of the Observair and PAM sensors as the SODAQ NO2 boxes did not fit in the glass tubes. Thirty-min setpoints (0 and 200 µg/m3) were considered, and lab-calibrated sensor data were compared to the 1-min data from the Thermo NOx analyzer. Averages and 90-percentiles (90% of max concentration) concentrations were determined for each 200 µg/m3 plateau, and the associated response time, i.e., time needed to reach 90% concentration, was calculated for each sensor system (and reference analyzer). The resulting response times derived from the 3 consecutive 0–200 plateaus are provided in Supplementary S9 and varied from 1–2 min for the sensor systems and 3 min for the Thermo NOx reference analyzer. Quantitative performance statistics (R2, MAE, BSU, and Uexp) are calculated based on all 1 min averaged lack-of-fit data for each sensor system and are shown in Table 1.
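The t90 response-time derivation can be sketched as a simple first-crossing criterion on the 90% plateau level (hypothetical function name; the study's exact procedure may differ in detail):

```python
def response_time_t90(times_s, conc, plateau_max=None):
    """Time (s) from the start of the concentration step until the signal
    first reaches 90% of the plateau concentration; None if never reached."""
    target = 0.9 * (plateau_max if plateau_max is not None else max(conc))
    for t, c in zip(times_s, conc):
        if c >= target:
            return t
    return None
```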

3.2. Mobile Field Test

All sensors were mounted on top (in the free airflow) of a cargo bike. Package sleeves were applied to damp vibrations of the cargo bike while cycling. Besides the sensors, two mid-range instruments, namely a Grimm 11D (PM; without heated inlet) and MA200 (BC), were placed inside the cargo bike with air inlets at the height of the sensors. Finally, the cargo bike was equipped with 3 different GPS instruments (Garmin 810 Edge, TomTom Runner 2, Komoot smartphone application). The TomTom track showed the highest monitoring resolution (1 s) and horizontal accuracy and was, therefore, selected as reference track. The exhibited PM2.5 concentration variability (measured by the Grimm) ranged between 4.8 and 133.3 µg/m3, while the BC (measured by the MA200) varied between 0.4 and 4.4 µg/m3 (Supplementary S10). While the highest PM2.5 concentrations were observed at a housing façade construction site, the highest BC concentrations were obtained when cycling downwind of a busy highway (E313/E34). When plotting all sensor tracks on a map (Figure 7), the GPS accuracy performed visually better in open areas compared to narrow and/or high street canyons. A higher height/width ratio seems to result in lower GPS accuracy, while GPS accuracy deteriorates as well when moving through tunnels, which are well-described phenomena in the literature [27,46,47].
When calculating the average horizontal accuracy (m) as average distance to the reference track in QGIS (Figure 7), the horizontal accuracy was generally good, achieving a <10 m horizontal accuracy for all sensor systems (Supplementary S11). The highest horizontal accuracy (2.28 m) was obtained for the TERA PMscan, while the lowest horizontal accuracy (8.15 m) was observed for the GeoAir.
With regard to the measured raw sensor signals (PM/NO2/BC), the mobile deployment (and related vibrations) did not seem to result in additional instrument noise or outliers when compared to stationary conditions. Moreover, similar hotspots were identified when comparing the sensor systems to the high-grade (MA200 and Grimm) monitors (Supplementary S12).

3.3. Field Co-Location Campaign

All sensor systems were deployed for 3 months (7 September 2022–5 December 2022) in an actively vented exposure shelter on top of an urban background monitoring station (R801) in the city center of Antwerp (Figure 8). Different data storage and transmission protocols were used, including automatic cloud upload via GPRS/4G (SODAQ) and internal SD card storage (GeoAir), while some sensor systems relied on a smartphone application (TERA PMscan, ATMOTUBE) or a combination of these data transmission protocols (PAM, OPEN SENECA, Airbeam, Observair). Some sensor systems were not designed for continuous, long-term monitoring. The TERA PMscan relies on a smartphone application for operation, which resulted in forced automatic shutdowns by the smartphone software after some time (~1–2 days) and a lack of continuous long-term data. The Observair relies on filter replacements for its BC measurement. As the filter saturates quickly, the instrument turned to error mode and did not collect any BC or NO2 data. The BCmeter also relies on filter replacements. A dedicated 1.5-week campaign (16–30 November) was therefore set up to evaluate BC (and NO2 from the Observair). The Airbeams arrived later and became operational on the 9th of November. Sensor data were offloaded (remotely via web dashboards and on-site via SD card readout) weekly to avoid data loss, and a logbook was created to keep track of the status and encountered issues.
From the regulatory data, PM2.5 concentrations ranged from 1–51 µg/m3 (mean = 10.85 µg/m3), while NO2 exhibited 2–111 µg/m3 (mean = 26 µg/m3). Atmospheric temperature varied between 1 and 27 °C (mean = 13 °C), while relative humidity was within 42 and 100% (mean = 83.5%). Temporal pollutant variability reflects typical urban pollution dynamics (Supplementary S13), with morning and evening rush hour peaks for NO2 and BC, slightly delayed PM peaks with a regional background character, and O3 that is photochemically formed at low NO2 concentrations and high solar radiation conditions (inversely related to NO2).
For each of the sensor systems, hourly data coverage, linearity (R2), accuracy, expanded uncertainty, impacts from lab and field calibration, and sensor drift (sensor/REF ratio) over time were evaluated (Table 2). For the PM sensor systems, the sensitivity towards the coarse particle fraction (PM10 − PM2.5) and the impact of, respectively, lab and field calibrations were additionally evaluated. The PM field calibration was similar to the lab calibration, with linear slope/intercept derivation based on a training period (first 2 weeks: 7 September 2022–21 September 2022) and evaluation (R2 and MAE) based on the remaining 2.5 months of data (22 September 2022–5 December 2022). For the NO2 sensor systems, a multilinear field calibration model was trained with covariates for sensor response, temperature, RH, and O3, following earlier sensor calibration studies [48,49,50]. Model training was based on 2 weeks of co-location data (to fit the model and derive regression parameters), and the calibration performance (R2 and MAE) was tested on the remaining 2 months of test data. This multilinear field calibration outperformed the raw and lab calibrations for all NO2 sensor systems. Lab calibrations did not hold under field conditions, which is not surprising, as field conditions differ in terms of PM composition and meteorology (temperature, relative humidity). Compared to the observed PM2.5 performance in Table 2, performance decreases for PM10 (R2 = 0.6–0.62, MAE = 12.6 µg/m3), and the association is entirely lost (R2 = 0–0.01) when focusing on the coarse fraction (PMcoarse = PM10 − PM2.5), confirming the lack of sensitivity in the coarse particle size fraction. This was also observed in an earlier field study with six different low-cost PM sensors [43]. For PM2.5, generally good correlations (R2 = 0.7–0.9), varying accuracies (MAE = 3–4.7 µg/m3), and low between-sensor uncertainties (0.1–0.7 µg/m3) were observed.
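The multilinear NO2 field calibration can be sketched with ordinary least squares. The covariate choice follows the text (sensor response, temperature, RH, and O3), while the function names and OLS implementation are our own assumptions, not the study's published code:

```python
import numpy as np

def fit_multilinear_no2(ref, sensor, temp, rh, o3):
    """Fit NO2_ref ~ b0 + b1*sensor + b2*T + b3*RH + b4*O3 on the 2-week
    co-location training window via ordinary least squares."""
    X = np.column_stack([np.ones(len(sensor)), sensor, temp, rh, o3])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ref, float), rcond=None)
    return coef

def predict_no2(coef, sensor, temp, rh, o3):
    """Apply the fitted coefficients to (new) sensor and covariate data."""
    X = np.column_stack([np.ones(len(sensor)), sensor, temp, rh, o3])
    return X @ coef
```

In practice the model is fitted on the training weeks only and evaluated (R2, MAE) on the held-out test period, mirroring the procedure described above.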
Applying the lab calibration worsened the accuracy, whereas the field calibration improved it further for all sensor systems. No distinct aging effect (gradual deviation in the sensor/REF ratio) was observed over the 3-month co-location period. The considered PM sensor systems were sensitive to relative humidity (Supplementary S14), with exponentially increasing sensor/REF ratios under increasing humidity (mainly impacting data quality above a relative humidity of 85%). This phenomenon is caused by condensational particle growth due to particle hygroscopicity and is well documented in the literature [43,48,51,52,53,54,55,56,57]. An overview of the quantitative performance metrics based on the hourly-averaged data for each of the sensor systems during the field co-location campaign is provided in Table 2.
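One simple way such an exponential humidity dependence could be compensated empirically is a log-linear fit of the sensor/reference ratio against RH, which is then divided out of the sensor signal. The sketch below uses hypothetical data and is only an illustration of the approach; the study itself applied no RH correction:

```python
import numpy as np

def fit_rh_ratio(rh_train, ratio_train):
    """Log-linear fit of the sensor/reference ratio vs relative humidity:
    ratio ≈ exp(a + b * RH)."""
    b, a = np.polyfit(rh_train, np.log(ratio_train), 1)
    return a, b

def correct_for_rh(pm_sensor, rh, a, b):
    # Divide out the fitted humidity growth factor
    return pm_sensor / np.exp(a + b * rh)
```

Physically motivated alternatives (e.g., κ-Köhler-based growth corrections, as in the cited literature) require assumptions about aerosol hygroscopicity that this empirical fit avoids.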
Hourly PM2.5, NO2, and BC timeseries of the considered sensor systems and reference data are provided in Figure 9.

4. Discussion

During this lab and field benchmarking campaign, we collected quantitative and qualitative evidence on the fitness for purpose of currently commercially available dynamic exposure sensor systems. An overview is provided of the observed sensor system performance (hourly coverage, accuracy, R2, MAE, BSU, stability, Uexp) for the considered pollutants under laboratory (Table 1) and real-world (Table 2) conditions.
For the considered PM sensor systems, out-of-the-box performance is already quite good and close to the Class 1 data quality objective (Uexp < 50%). In addition, the sensors showed high precision, <0.4 µg/m3 in the lab and <0.6 µg/m3 in the field, which allows for multi-sensor (network) applications (e.g., [13,58,59]). Whether the obtained accuracy is sufficient to characterize PM gradients in urban environments (which are typically not that steep) will vary from city to city and should be further investigated. In our mobile field test, Grimm measurements showed PM2.5 concentrations along the 10.4 km trajectory ranging from 4.8 to 133 µg/m3. This exposure variability is therefore quantifiable by the considered sensor systems, which achieved MAEs of 3–4.7 µg/m3. The highest accuracy was observed for PM1, followed by PM2.5 and PM10. The considered sensor systems do not reliably detect the coarse particle size fraction and are sensitive to relative humidity. TERA is the only sensor system that seems to pick up some coarse particles (R2 = 0.3), while all other sensors show an R2 of ~0. The accuracy of PM sensors can be further improved by linear slope/intercept calibration. However, we showed that lab calibrations do not hold in the field, as previously shown in other studies [11,13,48,60]. A local field calibration (in a representative pollutant and meteorological environment) therefore seems crucial to obtain the most reliable sensor data. In general, the assessed PM performance and observed sensitivities (drift/RH) are very similar between the benchmarked PM sensors, which can be explained by similar underlying sensor technology (Sensirion SPS30 + Plantower) and the lack of applied factory algorithms. The sensor systems showed elevated sensor/REF ratios under increasing relative humidity, which can be explained by hygroscopic effects documented in previous literature [48,51,52,53,54,55,57,61,62].
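The between-sensor precision quoted above can be quantified with the duplicate-sampler formula commonly used in sensor evaluation guidance, u_bs = sqrt(Σ(y1 − y2)² / 2n) for two identical units run in parallel. Whether the authors used exactly this estimator is an assumption; the sketch below illustrates the formula on hypothetical data:

```python
import numpy as np

def between_sensor_uncertainty(y1, y2):
    """Between-sensor uncertainty of two parallel, identical units:
    u_bs = sqrt(sum((y1 - y2)^2) / (2 * n))."""
    d = np.asarray(y1, dtype=float) - np.asarray(y2, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))
```

Identical readings give u_bs = 0; a constant 2 µg/m3 disagreement gives u_bs = sqrt(2) µg/m3.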
Regarding NO2, out-of-the-box performance was unsatisfactory for direct application, as the sensor systems suffered from noise (stability) and calibration (negative association) issues. Although the 2BTech PAM showed the best raw accuracy, the Observair exhibited a stronger (albeit negative) association (R2) and better stability; following a linear laboratory calibration, the best performance was therefore achieved by the Observair. As with the PM sensors, linear lab calibrations do not hold in the field. For NO2, a local, multilinear field calibration (incorporating covariates for temperature, relative humidity, and O3 sensitivity) yielded acceptable sensor performance (R2 = 0.75–0.83, MAE = 6–44 µg/m3), which shows the potential of the considered NO2 sensor systems. Further research and development work should therefore focus on implementing research-proven noise reduction and calibration procedures [11,48,49,60,63,64,65] in commercial instruments to increase the level of maturity on the market. Recent sensor studies applying multilinear [48,60] or machine-learning-based [64,66] calibrations (co-location or network-based) have provided evidence on sensor sensitivities and data quality improvements for a variety of sensors. Applying this knowledge in commercial products is crucial in order to obtain reliable and actionable air quality data.
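A multilinear field calibration of the kind described above can be sketched as an ordinary least-squares fit of the reference NO2 on the raw sensor response plus temperature, RH, and O3 covariates. This is an illustrative minimal version (hypothetical variable names and data), not the authors' exact model:

```python
import numpy as np

def fit_multilinear(sensor, temp, rh, o3, ref):
    """Fit NO2_ref ≈ b0 + b1*sensor + b2*T + b3*RH + b4*O3 on co-location data."""
    X = np.column_stack([np.ones_like(sensor), sensor, temp, rh, o3])
    beta, *_ = np.linalg.lstsq(X, ref, rcond=None)
    return beta

def apply_multilinear(beta, sensor, temp, rh, o3):
    # Apply the trained coefficients to new (test-period) data
    X = np.column_stack([np.ones_like(sensor), sensor, temp, rh, o3])
    return X @ beta
```

In practice, the coefficients would be fitted on the 2-week co-location window and applied to the remaining test period, with R2 and MAE evaluated against the reference.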
Regarding BC, both considered sensor systems showed good field performance (R2 = 0.82–0.83, MAE = 0.2–0.3 µg/m3); however, we should note that the BCmeter cannot yet be used in mobile applications (due to wired power and Wi-Fi connectivity). The measurement principle of light attenuation on filter strips has proven to be a robust methodology for measuring black carbon [15,67,68,69,70,71,72] and can be miniaturized into portable, lower-cost instruments. Moreover, the spatial BC exposure variability measured by the Observair in the mobile field test was in good agreement with the Aethlabs MA200 measurements (Supplementary S12). In general, all sensor systems showed good horizontal accuracy (<10 m), with no vibration impacts on the sensor readings for any pollutant during the mobile field test, confirming the suitability of portable sensor systems for mobile applications.
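As background to the attenuation principle, the textbook aethalometer relation converts the attenuation increase ATN = 100·ln(I0/I) over a sampling interval into a BC mass concentration via the filter spot area, the sampled air volume, and a mass attenuation cross-section σATN. The sketch below is a generic illustration; the σATN value, spot area, and flow are instrument-specific and the numbers used here are purely illustrative:

```python
import math

def bc_from_attenuation(i0, i, spot_area_cm2, flow_lpm, dt_min, sigma_atn=7.77):
    """BC (µg/m3) from light attenuation on a filter spot:
    ATN = 100*ln(I0/I);  BC = (dATN/100) * A / (sigma_ATN * V),
    with A the spot area (m2), V the sampled volume (m3),
    and sigma_ATN the mass attenuation cross-section (m2/g)."""
    atn = 100.0 * math.log(i0 / i)           # attenuation gained this interval
    volume_m3 = flow_lpm * dt_min / 1000.0   # sampled air volume
    area_m2 = spot_area_cm2 * 1e-4
    bc_g_per_m3 = (atn / 100.0) * area_m2 / (sigma_atn * volume_m3)
    return bc_g_per_m3 * 1e6                 # g/m3 → µg/m3
```

Real instruments additionally correct for filter loading and multiple scattering (e.g., the dual-spot compensation of [69]), which this sketch omits.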

5. Conclusions

This study evaluated the fitness for purpose of commercially available portable sensor systems for dynamic exposure assessments in urban environments. We evaluated 10 sensor systems measuring PM, NO2, and/or BC under both laboratory and real-world conditions. Besides the quantitative performance assessments, qualitative experience on portability, data transmission/storage, and user-friendliness was obtained throughout the experiments. Autonomous operation with internal GPS (no reliance on app connectivity) and data storage redundancy (SD storage besides cloud or app transmission), for example, proved to be valuable assets in terms of data coverage. The results indicate that out-of-the-box performance is relatively good for PM and BC, but the maturity of the tested NO2 sensors is still low, and additional effort is needed in terms of signal noise and calibration; multivariate calibration under field conditions, however, showed promising potential for real-world applications. Future directions for PM and BC should focus on the applicability (pollutant gradients in urban environments), added value, and user-friendliness (day-to-day use) of real-world applications, while for NO2, research-proven noise reduction and calibration procedures [11,48,49,60,63,64,65] should be implemented in commercial instruments to increase the level of maturity on the market. More accurate and dynamic exposure assessments in contemporary urban environments are crucial to study the real-world exposure of individuals and the impact on potential health endpoints [17,73,74,75,76,77,78].
This research domain will be boosted by the greater availability of mobile monitoring systems capable of quantifying urban pollutant gradients and enabling personal exposure assessments, identification of hotspot locations, and new air quality mapping applications, in turn driving awareness, behavior change, and evidence-based air quality policies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s24175653/s1, Supplementary S1 Purchased portable sensor systems for the lab and field benchmarking study. *PT = prototype. Supplementary S2 Obtained setpoint averages (µg/m3) for each sensor (1–3) and brand during the lack-of-fit testing for PM1, PM2.5, and PM10. Supplementary S3 Obtained lack-of-fit curves and associated linear functions for each sensor (ATMO1–3, TERA1–3, OPEN1–3, AIR1–3, and NO2_1–2; upper to lower) for the corresponding PM1, PM2.5, and PM10 particle size fractions (left to right), Supplementary S4 Coarse test results obtained on 14/7 (ATMOTUBE, OPEN SENECA, GeoAir, and SODAQ Air) and 2/9 (TERA, PAM, and SODAQ NO2) with observed coarse composition (% coarse), PM10, PM2.5, and PMcoarse sensor/REF ratios, fine/coarse change ratio (%; between highlighted columns), and PM10 change ratio (%). *Faulty results due to peak mismatch. Supplementary S5 Obtained setpoint averages for the considered NO2 sensor systems. Supplementary S6 Obtained lack-of-fit curves and associated linear functions for each sensor system; SODAQ NO2 (1–3), PAM, and Observair. Supplementary S7 Lab-calibrated NO2 sensor response to varying relative humidity steps (0–90–75–50–0%) under zero (upper) and span (lower) concentrations. Supplementary S8 Lab-calibrated NO2 sensor response to varying temperature steps (−5, 10, 20, and 30 °C) under zero (upper) and span (lower) concentrations. Supplementary S9 Response test setup, NO2 average (AVG), 90-percentile (90%) concentration, and associated response time (t_90), calculated for the Observair and PAM sensor systems and Thermo NOx analyzer. Supplementary S10 Observed PM2.5 (left) and BC (right) concentrations experienced by, respectively, the Grimm 11D and Aethlabs MA200 during the mobile field test. Supplementary S11 Average horizontal accuracy (m) and number of datapoints (n) of the considered sensor systems during the mobile field test. 
Supplementary S12 Black carbon (µg/m3) concentration maps generated from the mobile measurements conducted by the Aethlabs MA200 and Observair during the mobile field test in Antwerp, Belgium. Supplementary S13 Temporal pollutant variability of PM, BC, NO2, and O3 at R801 during the field co-location campaign. Shadings denote 95% confidence intervals. Supplementary S14 Sensitivity of the considered PM sensor systems towards relative humidity (%) as observed during the field campaign by elevated sensor/REF PM2.5 ratios under increasing relative humidity (%). Mind the different ranges in relative humidity between the sensors, resulting from the varying data availabilities for some of the sensor systems (explained in Section 3.3).

Author Contributions

Conceptualization: J.H., C.S., E.E. and M.V.P.; Methodology and validation: J.H., B.L. and M.V.P.; Formal analysis: J.H.; writing—original draft preparation: J.H.; writing—review and editing: J.H., B.L., E.E., I.S. and M.V.P.; funding acquisition: C.S., I.S. and E.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by European Union’s Horizon 2020 Research and Innovation Program (RI-URBANS; grant number 101036245) and Innovative Public Procurement Program (PIO; grant number 3846).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The research data of this study will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. EEA. Harm to human health from air pollution in Europe: Burden of disease 2023. ETC HE Rep. 2023, 7, 104. [Google Scholar]
  2. Morales Betancourt, R.; Galvis, B.; Balachandran, S.; Ramos-Bonilla, J.P.; Sarmiento, O.L.; Gallo-Murcia, S.M.; Contreras, Y. Exposure to fine particulate, black carbon, and particle number concentration in transportation microenvironments. Atmos. Environ. 2017, 157, 135–145. [Google Scholar] [CrossRef]
  3. Knibbs, L.D.; Cole-Hunter, T.; Morawska, L. A review of commuter exposure to ultrafine particles and its health effects. Atmos. Environ. 2011, 45, 2611–2622. [Google Scholar] [CrossRef]
  4. Vandeninden, B.; Vanpoucke, C.; Peeters, O.; Hofman, J.; Stroobants, C.; De Craemer, S.; Hooyberghs, H.; Dons, E.; Van Poppel, M.; Panis, L.I.; et al. Uncovering Spatio-temporal Air Pollution Exposure Patterns During Commutes to Create an Open-Data Endpoint for Routing Purposes. In Hidden Geographies; Krevs, M., Ed.; Springer International Publishing: Cham, Germany, 2021; pp. 115–151. [Google Scholar]
  5. Moreno, T.; Reche, C.; Rivas, I.; Cruz Minguillón, M.; Martins, V.; Vargas, C.; Buonanno, G.; Parga, J.; Pandolfi, M.; Brines, M.; et al. Urban air quality comparison for bus, tram, subway and pedestrian commutes in Barcelona. Environ. Res. 2015, 142, 495–510. [Google Scholar] [CrossRef]
  6. de Nazelle, A.; Fruin, S.; Westerdahl, D.; Martinez, D.; Ripoll, A.; Kubesch, N.; Nieuwenhuijsen, M. A travel mode comparison of commuters’ exposures to air pollutants in Barcelona. Atmos. Environ. 2012, 59, 151–159. [Google Scholar] [CrossRef]
  7. Beckx, C.; Int Panis, L.; Uljee, I.; Arentze, T.; Janssens, D.; Wets, G. Disaggregation of nation-wide dynamic population exposure estimates in The Netherlands: Applications of activity-based transport models. Atmos. Environ. 2009, 43, 5454–5462. [Google Scholar] [CrossRef]
  8. Fruin, S.A.; Winer, A.M.; Rodes, C.E. Black carbon concentrations in California vehicles and estimation of in-vehicle diesel exhaust particulate matter exposures. Atmos. Environ. 2004, 38, 4123–4133. [Google Scholar] [CrossRef]
  9. Dons, E.; Int Panis, L.; Van Poppel, M.; Theunis, J.; Wets, G. Personal exposure to Black Carbon in transport microenvironments. Atmos. Environ. 2012, 55, 392–398. [Google Scholar] [CrossRef]
  10. Dons, E.; De Craemer, S.; Huyse, H.; Vercauteren, J.; Roet, D.; Fierens, F.; Lefebvre, W.; Stroobants, C.; Meysman, F. Measuring and modelling exposure to air pollution with citizen science: The CurieuzeNeuzen project. In Proceedings of the ISEE 2020 Virtual Conference: 32nd Annual Conference of the International Society of Environmental Epidemiology, Virtual, 24 August 2020; ISEE Conference Abstracts. Volume 2020. [Google Scholar] [CrossRef]
  11. Hofman, J.; Peters, J.; Stroobants, C.; Elst, E.; Baeyens, B.; Laer, J.V.; Spruyt, M.; Essche, W.V.; Delbare, E.; Roels, B.; et al. Air Quality Sensor Networks for Evidence-Based Policy Making: Best Practices for Actionable Insights. Atmosphere 2022, 13, 944. [Google Scholar] [CrossRef]
  12. Van Poppel, M.; Hoek, G.; Viana, M.; Hofman, J.; Theunis, J.; Peters, J.; Kerckhoffs, J.; Moreno, T.; Rivas, I.; Basagaña, X.; et al. Deliverable D13 (D2.5): Description of Methodology for Mobile Monitoring and Citizen Involvement; RI-URBANS Project Deliverable D2.5. 2022. Available online: https://riurbans.eu/wp-content/uploads/2022/10/RI-URBANS_D13_D2.5.pdf (accessed on 15 May 2024).
  13. Wesseling, J.; Hendricx, W.; de Ruiter, H.; van Ratingen, S.; Drukker, D.; Huitema, M.; Schouwenaar, C.; Janssen, G.; van Aken, S.; Smeenk, J.W.; et al. Assessment of PM2.5 Exposure during Cycle Trips in The Netherlands Using Low-Cost Sensors. Int. J. Environ. Res. Public Health 2021, 18, 6007. [Google Scholar] [CrossRef]
  14. Carreras, H.; Ehrnsperger, L.; Klemm, O.; Paas, B. Cyclists’ exposure to air pollution: In situ evaluation with a cargo bike platform. Environ. Monit. Assess. 2020, 192, 470. [Google Scholar] [CrossRef] [PubMed]
  15. Hofman, J.; Samson, R.; Joosen, S.; Blust, R.; Lenaerts, S. Cyclist exposure to black carbon, ultrafine particles and heavy metals: An experimental study along two commuting routes near Antwerp, Belgium. Environ. Res. 2018, 164, 530–538. [Google Scholar] [CrossRef] [PubMed]
  16. Dons, E.; Laeremans, M.; Orjuela, J.P.; Avila-Palencia, I.; Carrasco-Turigas, G.; Cole-Hunter, T.; Anaya-Boig, E.; Standaert, A.; De Boever, P.; Nawrot, T.; et al. Wearable Sensors for Personal Monitoring and Estimation of Inhaled Traffic-Related Air Pollution: Evaluation of Methods. Environ. Sci. Technol. 2017, 51, 1859–1867. [Google Scholar] [CrossRef] [PubMed]
  17. Blanco, M.N.; Bi, J.; Austin, E.; Larson, T.V.; Marshall, J.D.; Sheppard, L. Impact of Mobile Monitoring Network Design on Air Pollution Exposure Assessment Models. Environ. Sci. Technol. 2023, 57, 440–450. [Google Scholar] [CrossRef] [PubMed]
  18. Fu, X.; Cai, Q.; Yang, Y.; Xu, Y.; Zhao, F.; Yang, J.; Qiao, L.; Yao, L.; Li, W. Application of Mobile Monitoring to Study Characteristics of Air Pollution in Typical Areas of the Yangtze River Delta Eco-Green Integration Demonstration Zone, China. Sustainability 2023, 15, 205. [Google Scholar] [CrossRef]
  19. Hofman, J.; Do, T.H.; Qin, X.; Bonet, E.R.; Philips, W.; Deligiannis, N.; La Manna, V.P. Spatiotemporal air quality inference of low-cost sensor data: Evidence from multiple sensor testbeds. Environ. Model. Softw. 2022, 149, 105306. [Google Scholar] [CrossRef]
  20. Chen, Y.; Gu, P.; Schulte, N.; Zhou, X.; Mara, S.; Croes, B.E.; Herner, J.D.; Vijayan, A. A new mobile monitoring approach to characterize community-scale air pollution patterns and identify local high pollution zones. Atmos. Environ. 2022, 272, 118936. [Google Scholar] [CrossRef]
  21. Messier, K.P.; Chambliss, S.E.; Gani, S.; Alvarez, R.; Brauer, M.; Choi, J.J.; Hamburg, S.P.; Kerckhoffs, J.; LaFranchi, B.; Lunden, M.M.; et al. Mapping Air Pollution with Google Street View Cars: Efficient Approaches with Mobile Monitoring and Land Use Regression. Environ. Sci. Technol. 2018, 52, 12563–12572. [Google Scholar] [CrossRef]
  22. Van den Bossche, J. Towards High Spatial Resolution Air Quality Mapping: A Methodology to Assess Street-Level Exposure Based on Mobile Monitoring. Ph.D. Thesis, Ghent University, Ghent, Belgium, 2016. [Google Scholar]
  23. Helbig, C.; Ueberham, M.; Becker, A.M.; Marquart, H.; Schlink, U. Wearable Sensors for Human Environmental Exposure in Urban Settings. Curr. Pollut. Rep. 2021, 7, 417–433. [Google Scholar] [CrossRef]
  24. Kang, Y.; Aye, L.; Ngo, T.D.; Zhou, J. Performance evaluation of low-cost air quality sensors: A review. Sci. Total Environ. 2022, 818, 151769. [Google Scholar] [CrossRef]
  25. Karagulian, F.; Barbiere, M.; Kotsev, A.; Spinelle, L.; Gerboles, M.; Lagler, F.; Redon, N.; Crunaire, S.; Borowiak, A. Review of the Performance of Low-Cost Sensors for Air Quality Monitoring. Atmosphere 2019, 10, 506. [Google Scholar] [CrossRef]
  26. Peters, J.; Van Poppel, M. Literatuurstudie, Marktonderzoek en Multicriteria-Analyse Betreffende Luchtkwaliteitssensoren en Sensorboxen; VITO: Mol, Belgium, 2020. [Google Scholar]
  27. Park, Y.M.; Sousan, S.; Streuber, D.; Zhao, K. GeoAir-A Novel Portable, GPS-Enabled, Low-Cost Air-Pollution Sensor: Design Strategies to Facilitate Citizen Science Research and Geospatial Assessments of Personal Exposure. Sensors 2021, 21, 3761. [Google Scholar] [CrossRef]
  28. Varaden, D.; Leidland, E.; Lim, S.; Barratt, B. “I am an air quality scientist”—Using citizen science to characterise school children’s exposure to air pollution. Environ. Res. 2021, 201, 111536. [Google Scholar] [CrossRef] [PubMed]
  29. Wesseling, J.; de Ruiter, H.; Blokhuis, C.; Drukker, D.; Weijers, E.; Volten, H.; Vonk, J.; Gast, L.; Voogt, M.; Zandveld, P.; et al. Development and implementation of a platform for public information on air quality, sensor measurements, and citizen science. Atmosphere 2019, 10, 445. [Google Scholar] [CrossRef]
  30. Lim, C.C.; Kim, H.; Vilcassim, M.J.R.; Thurston, G.D.; Gordon, T.; Chen, L.-C.; Lee, K.; Heimbinder, M.; Kim, S.-Y. Mapping urban air quality using mobile sampling with low-cost sensors and machine learning in Seoul, South Korea. Environ. Int. 2019, 131, 105022. [Google Scholar] [CrossRef]
  31. Volten, H.; Devilee, J.; Apituley, A.; Carton, L.; Grothe, M.; Keller, C.; Kresin, F.; Land-Zandstra, A.; Noordijk, E.; van Putten, E.; et al. Enhancing national environmental monitoring through local citizen science. In Citizen Science; Hecker, S., Haklay, M., Bowser, A., Makuch, Z., Vogel, J., Bonn, A., Eds.; UCL Press: London, UK, 2018; pp. 337–352. [Google Scholar]
  32. Fishbain, B.; Lerner, U.; Castell, N.; Cole-Hunter, T.; Popoola, O.; Broday, D.M.; Iñiguez, T.M.; Nieuwenhuijsen, M.; Jovasevic-Stojanovic, M.; Topalovic, D.; et al. An evaluation tool kit of air quality micro-sensing units. Sci. Total Environ. 2016, 575, 639–648. [Google Scholar] [CrossRef]
  33. Jiang, Q.; Kresin, F.; Bregt, A.K.; Kooistra, L.; Pareschi, E.; van Putten, E.; Volten, H.; Wesseling, J. Citizen Sensing for Improved Urban Environmental Monitoring. J. Sens. 2016, 2016, 5656245. [Google Scholar] [CrossRef]
  34. Molina Rueda, E.; Carter, E.; L’Orange, C.; Quinn, C.; Volckens, J. Size-Resolved Field Performance of Low-Cost Sensors for Particulate Matter Air Pollution. Environ. Sci. Technol. Lett. 2023, 10, 247–253. [Google Scholar] [CrossRef]
  35. Kuula, J.; Mäkelä, T.; Aurela, M.; Teinilä, K.; Varjonen, S.; Gonzales, O.; Timonen, H. Laboratory evaluation of particle size-selectivity of optical low-cost particulate matter sensors. Atmos. Meas. Tech. 2020, 13, 2413–2423. [Google Scholar] [CrossRef]
  36. Languille, B.; Gros, V.; Nicolas, B.; Honoré, C.; Kaufmann, A.; Zeitouni, K. Personal Exposure to Black Carbon, Particulate Matter and Nitrogen Dioxide in the Paris Region Measured by Portable Sensors Worn by Volunteers. Toxics 2022, 10, 33. [Google Scholar] [CrossRef]
  37. Lauriks, T.; Longo, R.; Baetens, D.; Derudi, M.; Parente, A.; Bellemans, A.; van Beeck, J.; Denys, S. Application of Improved CFD Modeling for Prediction and Mitigation of Traffic-Related Air Pollution Hotspots in a Realistic Urban Street. Atmos. Environ. 2021, 246, 118127. [Google Scholar] [CrossRef]
  38. Zeb, B.; Alam, K.; Sorooshian, A.; Blaschke, T.; Ahmad, I.; Shahid, I. On the morphology and composition of particulate matter in an urban environment. Aerosol Air Qual. Res. 2018, 18, 1431–1447. [Google Scholar] [CrossRef]
  39. Kumar, P.; Patton, A.P.; Durant, J.L.; Frey, H.C. A review of factors impacting exposure to PM2.5, ultrafine particles and black carbon in Asian transport microenvironments. Atmos. Environ. 2018, 187, 301–316. [Google Scholar] [CrossRef]
  40. Peters, J.; Theunis, J.; Poppel, M.V.; Berghmans, P. Monitoring PM10 and Ultrafine Particles in Urban Environments Using Mobile Measurements. Aerosol Air Qual. Res. 2013, 13, 509–522. [Google Scholar] [CrossRef]
  41. Pirjola, L.; Lähde, T.; Niemi, J.V.; Kousa, A.; Rönkkö, T.; Karjalainen, P.; Keskinen, J.; Frey, A.; Hillamo, R. Spatial and temporal characterization of traffic emissions in urban microenvironments with a mobile laboratory. Atmos. Environ. 2012, 63, 156–167. [Google Scholar] [CrossRef]
  42. Kaur, S.; Nieuwenhuijsen, M.J.; Colvile, R.N. Fine particulate matter and carbon monoxide exposure concentrations in urban street transport microenvironments. Atmos. Environ. 2007, 41, 4781–4810. [Google Scholar] [CrossRef]
  43. Vercauteren, J. Performance Evaluation of Six Low-Cost Particulate Matter Sensors in the Field; VAQUUMS: VMM: Antwerp, Belgium, 2021. [Google Scholar]
  44. Weijers, E.; Vercauteren, J.; van Dinther, D. Performance Evaluation of Low-Cost Air Quality Sensors in the Laboratory and in the Field; VAQUUMS: VMM: Antwerp, Belgium, 2021. [Google Scholar]
  45. CEN: CEN/TS 17660-1:2022; Air Quality—Performance Evaluation of Air Quality Sensor Systems—Part 1: Gaseous Pollutants in Ambient Air. CEN: Brussels, Belgium, 2022.
  46. Ma, L.; Zhang, C.; Wang, Y.; Peng, G.; Chen, C.; Zhao, J.; Wang, J. Estimating Urban Road GPS Environment Friendliness with Bus Trajectories: A City-Scale Approach. Sensors 2020, 20, 1580. [Google Scholar] [CrossRef] [PubMed]
  47. Merry, K.; Bettinger, P. Smartphone GPS accuracy study in an urban environment. PLoS ONE 2019, 14, e0219890. [Google Scholar] [CrossRef] [PubMed]
  48. Hofman, J.; Nikolaou, M.; Shantharam, S.P.; Stroobants, C.; Weijs, S.; La Manna, V.P. Distant calibration of low-cost PM and NO2 sensors; evidence from multiple sensor testbeds. Atmos. Pollut. Res. 2022, 13, 101246. [Google Scholar] [CrossRef]
  49. Mijling, B.; Jiang, Q.; de Jonge, D.; Bocconi, S. Field calibration of electrochemical NO2 sensors in a citizen science context. Atmos. Meas. Tech. 2018, 11, 1297–1312. [Google Scholar] [CrossRef]
  50. Karagulian, F.; Borowiak, W.; Barbiere, M.; Kotsev, A.; Van den Broecke, J.; Vonk, J.; Signironi, M.; Gerboles, M. Calibration of AirSensEUR Boxes during a Field Study in the Netherlands; European Commission: Ispra, Italy, 2020. [Google Scholar]
  51. Tagle, M.; Rojas, F.; Reyes, F.; Vásquez, Y.; Hallgren, F.; Lindén, J.; Kolev, D.; Watne, Å.K.; Oyola, P. Field performance of a low-cost sensor in the monitoring of particulate matter in Santiago, Chile. Environ. Monit. Assess. 2020, 192, 171. [Google Scholar] [CrossRef]
  52. Crilley, L.R.; Singh, A.; Kramer, L.J.; Shaw, M.D.; Alam, M.S.; Apte, J.S.; Bloss, W.J.; Hildebrandt Ruiz, L.; Fu, P.; Fu, W.; et al. Effect of aerosol composition on the performance of low-cost optical particle counter correction factors. Atmos. Meas. Tech. 2020, 13, 1181–1193. [Google Scholar] [CrossRef]
  53. Badura, M.; Batog, P.; Drzeniecka-Osiadacz, A.; Modzel, P. Evaluation of Low-Cost Sensors for Ambient PM2.5 Monitoring. J. Sens. 2018, 2018, 5096540. [Google Scholar] [CrossRef]
  54. Di Antonio, A.; Popoola, O.A.M.; Ouyang, B.; Saffell, J.; Jones, R.L. Developing a Relative Humidity Correction for Low-Cost Sensors Measuring Ambient Particulate Matter. Sensors 2018, 18, 2790. [Google Scholar] [CrossRef] [PubMed]
  55. Feenstra, B.; Papapostolou, V.; Hasheminassab, S.; Zhang, H.; Boghossian, B.D.; Cocker, D.; Polidori, A. Performance evaluation of twelve low-cost PM2.5 sensors at an ambient air monitoring site. Atmos. Environ. 2019, 216, 116946. [Google Scholar] [CrossRef]
  56. Jayaratne, R.; Liu, X.; Thai, P.; Dunbabin, M.; Morawska, L. The Influence of Humidity on the Performance of Low-Cost Air Particle Mass Sensors and the Effect of Atmospheric Fog. Atmos. Meas. Tech. Discuss. 2018, 11, 4883–4890. [Google Scholar] [CrossRef]
  57. Wang, Y.; Li, J.; Jing, H.; Zhang, Q.; Jiang, J.; Biswas, P. Laboratory Evaluation and Calibration of Three Low-Cost Particle Sensors for Particulate Matter Measurement. Aerosol Sci. Technol. 2015, 49, 1063–1077. [Google Scholar] [CrossRef]
  58. Hofman, J.; Panzica La Manna, V.; Ibarrola-Ulzurrun, E.; Peters, J.; Escribano Hierro, M.; Van Poppel, M. Opportunistic mobile air quality mapping using sensors on postal service vehicles: From point clouds to actionable insights. Front. Environ. Health 2023, 2, 1232867. [Google Scholar] [CrossRef]
  59. deSouza, P.; Anjomshoaa, A.; Duarte, F.; Kahn, R.; Kumar, P.; Ratti, C. Air quality monitoring using mobile low-cost sensors mounted on trash-trucks: Methods development and lessons learned. Sustain. Cities Soc. 2020, 60, 102239. [Google Scholar] [CrossRef]
  60. van Zoest, V.; Osei, F.B.; Stein, A.; Hoek, G. Calibration of low-cost NO2 sensors in an urban air quality network. Atmos. Environ. 2019, 210, 66–75. [Google Scholar] [CrossRef]
  61. Byrne, R.; Ryan, K.; Venables, D.S.; Wenger, J.C.; Hellebust, S. Highly local sources and large spatial variations in PM2.5 across a city: Evidence from a city-wide sensor network in Cork, Ireland. Environ. Sci. Atmos. 2023, 3, 919–930. [Google Scholar] [CrossRef]
  62. Peters, D.R.; Popoola, O.A.M.; Jones, R.L.; Martin, N.A.; Mills, J.; Fonseca, E.R.; Stidworthy, A.; Forsyth, E.; Carruthers, D.; Dupuy-Todd, M.; et al. Evaluating uncertainty in sensor networks for urban air pollution insights. Atmos. Meas. Tech. 2022, 15, 321–334. [Google Scholar] [CrossRef]
  63. Hofman, J.; La Manna, V.P.; Ibarrola, E.; Hierro, M.E.; van Poppel, M. Opportunistic Mobile Air Quality Mapping Using Service Fleet Vehicles: From point clouds to actionable insights. In Proceedings of the Air Sensors International Conference (ASIC) 2022, Pasadena, CA, USA, 11–13 May 2022; ASIC: Pasadena, CA, USA, 2022. [Google Scholar]
  64. Cui, H.; Zhang, L.; Li, W.; Yuan, Z.; Wu, M.; Wang, C.; Ma, J.; Li, Y. A new calibration system for low-cost Sensor Network in air pollution monitoring. Atmos. Pollut. Res. 2021, 12, 101049. [Google Scholar] [CrossRef]
  65. Mijling, B.; Jiang, Q.; de Jonge, D.; Bocconi, S. Practical field calibration of electrochemical NO2 sensors for urban air quality applications. Atmos. Meas. Tech. Discuss. 2017, 43, 1–25. [Google Scholar] [CrossRef]
  66. Vikram, S.; Collier-Oxandale, A.; Ostertag, M.H.; Menarini, M.; Chermak, C.; Dasgupta, S.; Rosing, T.; Hannigan, M.; Griswold, W.G. Evaluating and improving the reliability of gas-phase sensor system calibrations across new locations for ambient measurements and personal exposure monitoring. Atmos. Meas. Tech. 2022, 12, 4211–4239. [Google Scholar] [CrossRef]
  67. Backman, J.; Schmeisser, L.; Virkkula, A.; Ogren, J.A.; Asmi, E.; Starkweather, S.; Sharma, S.; Eleftheriadis, K.; Uttal, T.; Jefferson, A.; et al. On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic. Atmos. Meas. Tech. 2017, 10, 5039–5062. [Google Scholar] [CrossRef]
  68. Viana, M.; Rivas, I.; Reche, C.; Fonseca, A.S.; Pérez, N.; Querol, X.; Alastuey, A.; Álvarez-Pedrerol, M.; Sunyer, J. Field comparison of portable and stationary instruments for outdoor urban air exposure assessments. Atmos. Environ. 2015, 123, 220–228. [Google Scholar] [CrossRef]
  69. Drinovec, L.; Močnik, G.; Zotter, P.; Prévôt, A.S.H.; Ruckstuhl, C.; Coz, E.; Rupakheti, M.; Sciare, J.; Müller, T.; Wiedensohler, A.; et al. The “dual-spot” Aethalometer: An improved measurement of aerosol black carbon with real-time loading compensation. Atmos. Meas. Tech. 2015, 8, 1965–1979. [Google Scholar] [CrossRef]
  70. Cai, J.; Yan, B.; Ross, J.; Zhang, D.; Kinney, P.L.; Perzanowski, M.S.; Jung, K.; Miller, R.; Chillrud, S.N. Validation of MicroAeth® as a Black Carbon Monitor for Fixed-Site Measurement and Optimization for Personal Exposure Characterization. Aerosol Air Qual. Res. 2014, 14, 1–9. [Google Scholar] [CrossRef]
  71. Park, S.S.; Hansen, A.D.A.; Cho, S.Y. Measurement of real time black carbon for investigating spot loading effects of Aethalometer data. Atmos. Environ. 2010, 44, 1449–1455. [Google Scholar] [CrossRef]
  72. Weingartner, E.; Saathoff, H.; Schnaiter, M.; Streit, N.; Bitnar, B.; Baltensperger, U. Absorption of light by soot particles: Determination of the absorption coefficient by means of aethalometers. J. Aerosol. Sci. 2003, 34, 1445–1463. [Google Scholar] [CrossRef]
  73. Liu, J.; Clark, L.P.; Bechle, M.J.; Hajat, A.; Kim, S.Y.; Robinson, A.L.; Sheppard, L.; Szpiro, A.A.; Marshall, J.D. Disparities in Air Pollution Exposure in the United States by Race/Ethnicity and Income, 1990–2010. Environ. Health Perspect. 2021, 129, 127005. [Google Scholar] [CrossRef]
  74. Van den Bossche, J.; De Baets, B.; Botteldooren, D.; Theunis, J. A spatio-temporal land use regression model to assess street-level exposure to black carbon. Environ. Model. Softw. 2020, 133, 104837. [Google Scholar] [CrossRef]
  75. Li, M.; Gao, S.; Lu, F.; Tong, H.; Zhang, H. Dynamic Estimation of Individual Exposure Levels to Air Pollution Using Trajectories Reconstructed from Mobile Phone Data. Int. J. Environ. Res. Public Health 2019, 16, 4522. [Google Scholar] [CrossRef]
  76. Kutlar Joss, M.; Boogaard, H.; Samoli, E.; Patton, A.P.; Atkinson, R.; Brook, J.; Chang, H.; Haddad, P.; Hoek, G.; Kappeler, R.; et al. Long-Term Exposure to Traffic-Related Air Pollution and Diabetes: A Systematic Review and Meta-Analysis. Int. J. Public Health 2023, 68, 1605718. [Google Scholar] [CrossRef]
  77. Blanco, M.N.; Doubleday, A.; Austin, E.; Marshall, J.D.; Seto, E.; Larson, T.V.; Sheppard, L. Design and evaluation of short-term monitoring campaigns for long-term air pollution exposure assessment. J. Expo. Sci. Environ. Epidemiol. 2023, 33, 465–473. [Google Scholar] [CrossRef]
  78. Kim, S.-Y.; Blanco, M.N.; Bi, J.; Larson, T.V.; Sheppard, L. Exposure assessment for air pollution epidemiology: A scoping review of emerging monitoring platforms and designs. Environ. Res. 2023, 223, 115451. [Google Scholar] [CrossRef]
Figure 1. The ten sensor systems considered (upper panel, left to right): PAM (2BTech, Broomfield, CO, USA), GeoAir, Observair (DSTech, Pohang-si, Republic of Korea), SODAQ Air (SODAQ, Hilversum, The Netherlands), PMscan (TERA Sensor, Rousset, France), OPEN SENECA (open-seneca.org), and ATMOTUBE Pro (ATMOTECH Inc., San Francisco, CA, USA). Lower panel, left to right: SODAQ NO2 (SODAQ, Hilversum, The Netherlands), Habitatmap Airbeam (Habitatmap, Brooklyn, NY, USA), and BCmeter (BCmeter.org).
Figure 2. PM exposure chamber in the lab (left), mobile field test with cargo bike (middle), and field co-location campaign at an urban background monitoring station (right).
Figure 3. Mobile field trajectory (10.4 km) in the city center of Antwerp, Belgium (upper left), and applied cargo bike setup (upper right). Lower pictures show the variety of urban landscape and road traffic along the cycling route.
Figure 4. Stepwise PM2.5 concentrations generated during the lack-of-fit test and measured concentrations by the different sensor systems (1-3; green-blue-red) and the reference monitor (Grimm; purple/green).
Figure 5. Coarse PM testing procedure with consecutive 5-min generation periods of coarse (7.75 µm) and fine (1.18 µm) PM peaks (upper panel; measured by Grimm REF monitor) and resulting ATMOTUBE and OPEN SENECA sensor response (µg/m3) in the lower panels.
Figure 6. Stepwise NO2 concentrations generated during the lack-of-fit tests and measured raw (left) and lab-calibrated (right) concentrations by the SODAQ NO2 (1-3; upper in red), PAM (middle in red), Observair (lower in red), and the reference monitor (Thermo NOx analyzer in purple/green).
Figure 7. (Left): GPS tracks of the considered sensor systems (dots) and reference GPS track (blue line). (Right): Accuracy calculation by means of horizontal distance to reference GPS track (blue line).
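The accuracy metric shown in Figure 7 (right) — the horizontal distance from each sensor GPS fix to the reference GPS track — can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: `haversine_m` and `horizontal_error_m` are hypothetical helper names, and the reference track is assumed to be a densely sampled list of (lat, lon) points so that the distance to the nearest track point approximates the distance to the track itself.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 (lat, lon) points."""
    R = 6371000.0  # mean Earth radius (m)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def horizontal_error_m(fix, ref_track):
    """Horizontal error (m) of one sensor GPS fix: distance to the nearest
    point of the reference track (list of (lat, lon) tuples)."""
    lat, lon = fix
    return min(haversine_m(lat, lon, rlat, rlon) for rlat, rlon in ref_track)
```

Averaging `horizontal_error_m` over all fixes of a sensor system then gives a single per-device GPS accuracy figure comparable across devices.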
Figure 8. Location of the exposure shelter on top of R801 urban background monitoring station (left), detail of the exposure shelter (middle), and positioning of the sensor systems at the different platforms inside the shelter (right).
Figure 9. Hourly timeseries of PM2.5, NO2, and BC concentrations measured by the respective sensor systems and the reference monitors at the R801 reference background monitoring station.
Table 1. Summary table of out-of-the-box performance (setpoint accuracy, setpoint stability, MAE, R2, Uexp, and BSU) obtained for each considered sensor system and pollutant (PM and NO2) during the laboratory tests.
PM:

| Sensor system | Accuracy PM1 (%) | Accuracy PM2.5 (%) | Accuracy PM10 (%) | MAE (µg/m³) | R² | Uexp (%) | BSU (µg/m³) |
|---|---|---|---|---|---|---|---|
| ATMOTUBE (3) | 84 | 65 | 29 | 10.0 | 0.98 | 47 | 1.5 |
| OPEN SENECA (3) | 83 | 54 | 22 | 12.6 | 0.99 | 55 | 1.2 |
| TERA (3) | 18 | 79 | 47 | 5.2 | 1.00 | 25 | 1.6 |
| SODAQ Air (3) | 64 | 70 | 31 | 8.9 | 0.99 | 40 | 4.0 |
| SODAQ NO2 (3) | 68 | 52 | 21 | 10.9 | 0.99 | 45 | NA |
| GeoAir (3) | NA | NA | NA | NA | NA | NA | NA |
| PAM (1) | 63 | 29 | 13 | 17.3 | 0.96 | 79 | NA |
NO2:

| Sensor system | Accuracy (%) | Stability (µg/m³) | MAE (µg/m³) | R² | Uexp (%) | BSU (µg/m³) |
|---|---|---|---|---|---|---|
| SODAQ NO2 (3) | −166 | 51 | 270.3 | 0.11 | 304 | 124.7 |
| PAM (1) | 72 | 27 | 49.5 | 0.13 | 110 | NA |
| Observair (1) | 0 | 0 | 79.0 | 0.98 | 112 | NA |
Table 2. Summary table of quantitative performance metrics (accuracy, stability, MAE, R2, Uexp, and BSU) obtained for each sensor system and pollutant (PM and NO2) during the field co-location campaign (hourly data). * As the PAM only consisted of one instrument, BSU could not be calculated (NA).
| Pollutant | Sensor system | Data coverage (%) | MAE (µg/m³) | R² | Uexp (%) | BSU (µg/m³) |
|---|---|---|---|---|---|---|
| PM2.5 | ATMOTUBE (3) | 76 | 4.3 | 0.88 | 48 | 0.6 |
| PM2.5 | OPEN SENECA (3) | 100 | 3.7 | 0.90 | 35 | 0.3 |
| PM2.5 | TERA (3) | 17 | 4.4 | 0.87 | 64 | 0.1 |
| PM2.5 | SODAQ Air (3) | 44 | 3.1 | 0.68 | 16 | 0.7 |
| PM2.5 | SODAQ NO2 (3) | 44 | 3.8 | 0.67 | 40 | 0.4 |
| PM2.5 | AIRBEAM (3) | 53 | 3.9 | 0.87 | 36 | 0.7 |
| PM2.5 | GeoAir (3) | 96 | 3.0 | 0.89 | 28 | 0.6 |
| PM2.5 | PAM (1) | 100 | 4.7 | 0.89 | 66 | NA * |
| NO2 | SODAQ NO2_raw (3) | 44 | 190.3 | 0.42 | 614 | |
| NO2 | SODAQ NO2_cal (1) | 44 | 27.1 | 0.62 | 108 | |
| NO2 | SODAQ NO2_mlcal (1) | 44 | 5.6 | 0.83 | 37 | |
| NO2 | PAM (3) | 100 | 84.1 | 0.55 | 284 | |
| NO2 | PAM_cal (1) | 100 | 349.0 | 0.55 | 1225 | |
| NO2 | PAM_calml (1) | 100 | 44.2 | 0.75 | 44 | |
| NO2 | Observair_raw | 78 | 28.4 | 0.38 | 111 | |
| NO2 | Observair_cal | 78 | 28.8 | 0.38 | 95 | |
| NO2 | Observair_mlcal | 78 | NA | NA | NA | |
| BC | Observair | 78 | 0.3 | 0.82 | | |
| BC | BCmeter | 78 | 0.2 | 0.83 | | |
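The core metrics reported in Tables 1 and 2 (MAE, R², and between-sensor uncertainty, BSU) can be computed from co-located hourly sensor and reference series along the following lines. This is a minimal sketch, not the authors' code: the function names are hypothetical, and the relative expanded uncertainty Uexp is omitted because its exact formula depends on the evaluation protocol applied (e.g. CEN/TS-style sensor evaluation guidance).

```python
import numpy as np

def mae(sensor, ref):
    """Mean absolute error (µg/m³) of hourly sensor values vs. the reference."""
    return float(np.mean(np.abs(sensor - ref)))

def r2(sensor, ref):
    """Coefficient of determination, taken here as the squared Pearson
    correlation between sensor and reference series."""
    return float(np.corrcoef(sensor, ref)[0, 1] ** 2)

def bsu(replicates):
    """Between-sensor uncertainty (µg/m³): the standard deviation across
    simultaneously operated identical units, averaged over time.
    `replicates` is an (n_times, n_units) array; needs n_units >= 2,
    which is why single-unit systems report NA."""
    return float(np.mean(np.std(replicates, axis=1, ddof=1)))
```

For example, three co-located ATMOTUBE units would enter `bsu` as a three-column array of their simultaneous hourly readings.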
Share and Cite

MDPI and ACS Style

Hofman, J.; Lazarov, B.; Stroobants, C.; Elst, E.; Smets, I.; Van Poppel, M. Portable Sensors for Dynamic Exposure Assessments in Urban Environments: State of the Science. Sensors 2024, 24, 5653. https://doi.org/10.3390/s24175653
