
Search Results (2,318)

Search Parameters:
Keywords = near-real-time

35 pages, 4500 KiB  
Article
The CHEWMA Chart: A New Statistical Control Approach for Microclimate Monitoring in Preventive Conservation of Cultural Heritage
by Ignacio Díaz-Arellano and Manuel Zarzo
Sensors 2025, 25(4), 1242; https://doi.org/10.3390/s25041242 - 18 Feb 2025
Abstract
A new statistical control chart denoted as CHEWMA (Cultural Heritage EWMA) is proposed for microclimate monitoring in preventive conservation. This tool is a real-time detection method inspired by the EN 15757:2010 standard, serving as an alternative to its common adaptations. The proposed control chart is intended to detect short-term fluctuations (STFs) in temperature (T) and relative humidity (RH), which would enable timely interventions to mitigate the risk of mechanical damage to collections. The CHEWMA chart integrates the Exponentially Weighted Moving Average (EWMA) control chart with a weighting mechanism that prioritizes fluctuations occurring near extreme values. The methodology was validated using RH time series recorded by seven dataloggers installed at the Alava Fine Arts Museum, and, from these, seventy simulated time series were generated to enhance the robustness of the analyses. Sensitivity analyses demonstrated that, for the studied dataset, the CHEWMA chart exhibits stronger similarity to the application of EN 15757:2010 than other commonly used real-time STF detection methods in the literature. Furthermore, it provides a flexible option for real-time applications, enabling adaptation to specific conservation needs while remaining aligned with the general framework established by the standard. To the best of our knowledge, this is the first statistical process control chart designed for the field of preventive conservation of cultural heritage. Beyond assessing CHEWMA’s performance, this study reveals that, when adapting the procedures of the European norm by developing a new real-time approach based on a simple moving average (herein termed SMA-FT), a window of approximately 14 days is more appropriate for STF detection than the commonly assumed 30-day period in the literature. Full article
(This article belongs to the Special Issue Metrology for Living Environment 2024)
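At the core of the chart described above is the classical EWMA recursion, Z_t = λX_t + (1 − λ)Z_{t−1}, with control limits placed around the in-control mean. The sketch below shows only a plain EWMA chart as an illustration; the parameter values are assumptions, and it omits the CHEWMA-specific weighting of fluctuations near RH extremes:

```python
import numpy as np

def ewma_chart(x, lam=0.05, L=3.0):
    """Plain EWMA control chart: returns the monitoring statistic Z and
    boolean alarm flags. Illustrative sketch only; the paper's CHEWMA
    additionally weights fluctuations occurring near extreme RH values."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x)
    z[0] = mu  # start the recursion at the in-control mean
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    # Steady-state control limits of the standard EWMA chart
    half_width = L * sigma * np.sqrt(lam / (2 - lam))
    alarms = np.abs(z - mu) > half_width
    return z, alarms
```

Because λ is small, Z smooths out hourly noise and reacts to sustained short-term fluctuations, which is the behavior the chart exploits for STF detection.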
Figures

Figure 1. Summary of the phases for constructing and deploying the CHEWMA chart based on classical SPC methodology. First, a monitoring campaign is conducted, followed by the statistical estimation and adjustment of the constants. The chart is then validated using historical data to ensure that it generates the desired number of alarm signals aligned with the cultural heritage institution's needs. During the deployment phase, the necessary statistics for calculating the control limits are estimated, and the monitoring statistic Z is computed. In the event of a signal, the established protocol is triggered to mitigate adverse conditions. Periodic validation is essential to ensure that the process remains stable and the chart continues to be appropriate for microclimate monitoring.

Figure 2. Steps for the calculation of the statistics derived from ΔX and X_t^24h based on the recorded time series X. The estimate of σ_ΔX is obtained during Phase 1. The estimates of the mean and standard deviation of X_t^24h, on the other hand, are calculated upon the application of CHEWMA: in Phase 2, during CHEWMA's operational phase aimed at STF detection, and in Phase 1, when the SPC chart is applied to historical data to adjust the constants λ, β, p, and L. The chart assumes that outliers have been removed from the differenced time series (ΔX) and the deseasonalized series (X_t^desest). It is also assumed that the recording frequency of X is one observation per hour.

Figure 3. CHEWMA chart with λ = 0.05 and β = 0.5 on a real time series. Since the annual mean of X is greater than the RH reference point p, the shifted mean μ′_Zt is smaller than the actual mean μ_Zt of the Z statistic. Consequently, the UCL and LCL, centered around μ′_Zt, are positioned asymmetrically relative to the actual mean. The Z_t values that exceed the UCL and LCL are marked as signals in red. Optional limits corresponding to ±10% RH are added to the observed time series.

Figure 4. Main flowchart outlining the recommended steps to undertake when an alarm signal is triggered in the CHEWMA chart.

Figure 5. Fictitious example illustrating the calculation of SDI* on the signals generated by methods A and B. Despite the apparent similarity between methods A and B, SDI = 0.22 (Equation (14)), since there is only one exact match and the two methods together generate a total of nine signals. As indicated by the distance calculations in the figure, SDI* = 0.55 (Equation (15)), a value that may better quantify the similarity between the two series.

Figure 6. Hotelling's T² statistic plots for each PCA model fitted with four components. In each plot, the statistic for the observed series (e.g., d06) is displayed first, followed by the ten simulated series (e.g., d06.1 to d06.10). The time series marked with a circle in d02 correspond to the series in Figure 7. All Hotelling's T² statistics remain below the 95% confidence limit and exhibit no specific patterns.

Figure 7. Examples of the application of the CHEWMA chart (λ = 0.02) on an observed time series (d02, left) and a simulated series derived from it (d02.7, right). The upper panels show the control charts monitoring the Z statistic (in black), with the mean of the statistic equal to zero (μ_Z = 0, in blue), the upper and lower limits (in red), and the signals crossing the limits (in red). The lower panels display the alarm signals (in red) superimposed on the observed time series (X, in black) and the 14-day moving average (in blue). The control limits were set to detect 0.5% of the observations as alarms. This percentage was chosen to highlight a reduced number of STFs, attempting to simulate real-world usage.

Figure 8. Similarity of the methods with respect to the CHEWMA chart and SMA-FT using simulated and real data, calculated using SDI* (Equation (15)). Each cell value represents the average index calculated over the set of simulated series in (a) and real time series in (b). The cells outlined with dashed lines compare matching configurations with the highest similarity. For a better comparison, CHEWMA and SMA-FT were adjusted to identify ~14% of the observations as signals. To improve clarity, values below 0.55 are omitted in the rows of SMA-FT and EN 15757:2010, as are values below 0.28 and 0.06 in the rows of Δ5 and Δ10, respectively.

Figure 9. Comparison between the application of the EN 15757:2010 standard, SMA-FT (win_days = 30), and CHEWMA (λ = 0.06) on the RH time series recorded by datalogger d08. This time series was selected to highlight the differences between the three methods, which are less noticeable when applied to other time series. For a better comparison, CHEWMA and SMA-FT were adjusted to identify approximately 14% of the observations as signals. Charts in the left column depict the application of each method's procedures: for EN 15757:2010 and SMA-FT, the fluctuations and the SB derived from percentiles; for the CHEWMA chart, the monitored Z statistic and its limits. Charts in the right column represent the application of the generated signals to the original time series. The blue areas indicate discrepancies with the application of the EN 15757:2010 standard. The gray areas indicate the periods discarded by EN 15757:2010 and SMA-FT, which are required for the calculation of moving windows.

Figure 10. Monthly distribution of the number of signals detected by each method evaluated in this study, applied to each of the real time series. The comparison is conducted between methods with similar configurations: EN 15757:2010, SMA-FT (win_days = 14), and CHEWMA (λ = 0.012). For a better comparison, CHEWMA and SMA-FT were adjusted to identify approximately 14% of the observations as signals. Although all time series were generated from dataloggers within the same museum and therefore exhibit very similar seasonal patterns, each of them presents unique characteristics. Specifically, two of them, d04 and d06, are located in a different building with distinct microclimatic conditions compared to the rest [22].

Figure 11. Analysis of the logarithmic relationship between win_days and the average SBW for EN 15757:2010 (red) and SMA-FT (orange). Temporal windows are applied within the range win_days = (1, 3, 7, 14, 22, 30, 45). This relationship is analyzed in two different ways to construct the SB. First, without altering the percentiles used to determine the limits, keeping them at the 7th and 93rd percentiles, as conducted in [106] (circles and solid line). Second, by adjusting the percentiles to identify approximately 14% of the observations as signals, which slightly differs from setting the limits at the 7th and 93rd percentiles when applying a moving temporal window. The horizontal dashed lines help to visualize the relationship between EN 15757:2010 adjusted with win_days = 30 and SMA-FT adjusted with win_days = 14.

Figure 12. Comparison between the application of the EN 15757:2010 standard, SMA-FT (win_days = 30), and CHEWMA (λ = 0.06) applying the 10% criterion to the RH time series recorded by datalogger d08. As in Figure 9, for a better comparison, CHEWMA and SMA-FT were adjusted to identify approximately 14% of the observations as signals. Charts in the left column depict the application of each method's procedures: for EN 15757:2010 and SMA-FT, the fluctuations and the SB derived from percentiles; for the CHEWMA chart, the monitored Z statistic and its limits. Charts in the right column represent the application of the generated signals to the original time series. The CHEWMA chart displays signals generated by the chart limits, shown in gray, that fall below the 10% threshold. The gray areas indicate the periods discarded by EN 15757:2010 and SMA-FT, which are required for the calculation of moving windows.

Figure 13. Percentage of RH signals detected by the CHEWMA chart (λ = 0.1, p = 50%) as a function of changes in β. The chart was applied to the simulated time series with β = 0, 0.25, 0.5, 0.75, 1. The number of resulting signals has been averaged per datalogger. The chart limits were adjusted to detect 5% of the observations as signals. As β increases, a higher proportion of signals is distributed towards extreme values. In this case, the distribution skews more towards high extremes (near 100% RH) than towards low extremes (near 0% RH) due to the greater number of fluctuations occurring near 100% RH in this time series. For clarity, the time series from each datalogger are labeled with their respective number (e.g., d02.1, …, d02.10 are averaged and correspond to label 2).
23 pages, 21288 KiB  
Article
Analysis of Detailed Series Based on the Estimation of Hydrogeological Parameters by Indirect Methods Based on Fluvial and Piezometric Fluctuations
by José Luis Herrero-Pacheco, Javier Carrasco and Pedro Carrasco
Water 2025, 17(4), 576; https://doi.org/10.3390/w17040576 - 17 Feb 2025
Abstract
Piezometers located near watercourses experiencing periodic fluctuations provide a means to analyse soil properties and derive key hydrogeological parameters through pressure wave transmission analysis, which is affected in amplitude and time (lag). These techniques are invaluable for hydrogeological characterizations, such as assessing pollutant diffusion, conducting construction projects below the water table, and evaluating flood zones. While traditionally applied to study tidal influences in coastal areas, this research introduces their application to channels indirectly affected by tidal oscillations due to downstream confluences with tidal waterways. This innovative approach combines the analysis of tidal barriers with the effects of storms and droughts. This study synthesises findings from an experimental monitoring field equipped with advanced recording technologies, allowing for high-resolution, long-term analysis. The dataset, spanning dry periods, major storms, and channel overflows, offers unprecedented precision and insight into aquifer responses. This study analyses the application of wave transmission calculations using continuous level recording in a river and in observation piezometers. Two methods of analysis are applied to the series generated, one based on the variation in the amplitude and the other based on the phase shift produced by the transmission of the wave through the aquifer, both related to the hydrogeological characteristics of the medium. This study concludes that the determination of the fluctuation period is key in the calculation, being particularly more precise in the analysis of the amplitude than in the analysis of the phase difference, which has led to disparate results in previous studies. The results obtained make it possible to reconstruct and extrapolate real or calculated series of rivers and piezometers as a function of distance from the diffusivity obtained. 
Using the fluctuation period and diffusivity, it is possible to construct the wave associated with any event based on data from just one river or piezometer. Full article
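The amplitude-damping and phase-lag relations described above are the classical periodic-wave (Ferris-type) solutions for a confined aquifer; both yield the hydraulic diffusivity D. A minimal sketch (function names and round-trip values are illustrative, not taken from the paper):

```python
import math

def diffusivity_from_amplitude(x, period_s, amp_ratio):
    """Hydraulic diffusivity D (m^2/s) from the damping of a periodic
    stage fluctuation, using the classical Ferris solution:
        A(x)/A(0) = exp(-x * sqrt(pi / (period * D)))
    where amp_ratio = A(x)/A(0) is observed in a piezometer at distance x (m)."""
    return math.pi * x**2 / (period_s * math.log(amp_ratio)**2)

def diffusivity_from_lag(x, period_s, lag_s):
    """Same diffusivity obtained from the phase lag of the wave:
        lag = x * sqrt(period / (4 * pi * D))."""
    return x**2 * period_s / (4 * math.pi * lag_s**2)
```

With D known, the wave at any distance x can be reconstructed by scaling the river (or piezometer) record by exp(-x·sqrt(π/(period·D))) and shifting it by the corresponding lag, which is the extrapolation idea the abstract describes.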
Figures

Figure 1. Location of the geological framework research area (geology adapted from EVE [27]).
Figure 2. Lithological columns of the boreholes and observed piezometric levels (ordered by distance from the river).
Figure 3. Location of the research conducted.
Figure 4. Electrical tomography profile and borehole BB-2.
Figure 5. Seismic refraction profile and borehole BB-2.
Figure 6. Location of control points.
Figure 7. Control series of the Herrerías river in Sodupe versus rainfall.
Figure 8. Control series in the Bilbao estuary versus rainfall.
Figure 9. Evolution of piezometer BB-2 between February 2018 and July 2024.
Figure 10. Historical rainfall event recorded at piezometer BB-2.
Figure 11. Evolution of piezometer BB-2 and the Cadagua river between March and May 2021 (top) and between June and August 2021 (bottom).
Figure 12. Detail of the spring tide interval in the Cadagua river and piezometer BB-2 in July.
Figure 13. Amplitudes and time delays in the river and piezometer BB-2.
Figure 14. River wave analysis and piezometer.
Figure 15. Comparison of river and piezometer waves with the estuary.
Figure 16. Evolution of all piezometers in the period considered.
Figure 17. Evolution of piezometers in storm surge (b) and amplitude ratio (a).
Figure 18. Evolution of piezometers at neap tide with major storm (b) and amplitude ratio (a).
Figure 19. Observed and calculated maximum river discharge values from the BB-2 piezometer data.
Figure 20. Maximum piezometric level values observed and calculated from the river level data.
Figure 21. Adjustments obtained in the comparison between observed and calculated maximum values. (a) River water table; (b) piezometric level in borehole BB-2.
Figure 22. Minimum values observed in the river and piezometer BB-2 versus maximum water level.
Figure 23. Evolution of the correlation between the observed and calculated minimum river discharge from the percentage approximation of the amplitude ratio.
Figure 24. Calculated minimum values. (a) River water table; (b) piezometric level BB-2.
Figure 25. (a) Observed maximum and minimum levels; (b) calculated maximum and minimum levels.
Figure 26. Maximum and minimum values of the river sheet calculated and extrapolated to the rest of the year 2021.
Figure 27. Calculated maximum and minimum values of the river water table compared with the oscillations observed in the Bilbao estuary.
30 pages, 11416 KiB  
Article
Predictive Model for Erosion Rate of Concrete Under Wind Gravel Flow Based on K-Fold Cross-Validation Combined with Support Vector Machine
by Yanhua Zhao, Kai Zhang, Aojun Guo, Fukang Hao and Jie Ma
Buildings 2025, 15(4), 614; https://doi.org/10.3390/buildings15040614 - 17 Feb 2025
Abstract
In the Gobi region, concrete structures frequently suffer erosion from wind gravel flow. This erosion notably impairs their longevity. Therefore, creating a predictive model for wind gravel flow-related concrete damage is crucial to proactively address and manage this problem. Traditional theoretical models often fail to predict the erosion rate of concrete (CER) structures accurately. This issue arises from oversimplified assumptions and the failure to account for environmental variations and complex nonlinear relationships between parameters. Consequently, a single traditional model is inadequate for predicting the CER under wind gravel flow conditions in this region. To address this, the study utilized a machine learning (ML) model for a more precise prediction and evaluation of CER. The support vector machine (SVM) model demonstrates superior predictive performance, evidenced by its R2 value nearing one and a notable reduction in RMSE 1.123 and 1.573 less than the long short-term memory network (LSTM) and BP neural network (BPNN) models, respectively. Ensuring that the training set comprises at least 80% of the total data volume is crucial for the SVM model’s prediction accuracy. Moreover, erosion time is identified as the most significant factor affecting the CER. An enhanced theoretical erosion model, derived from the Bitter and Oka framework and integrating concrete strength and erosion parameters, was formulated. It showed average relative errors of 22% and 31.6% for the Bitter and Oka models, respectively. The SVM model, however, recorded a minimal average relative error of just −0.5%, markedly surpassing these improved theoretical models in terms of prediction accuracy. Theoretical models often rely on simplifying assumptions, such as linear relationships and homogeneous material properties. In practice, however, factors like concrete materials, wind gravel flow, and climate change are nonlinear and non-homogeneous. 
This significantly limits the applicability of these models in real-world environments. Ultimately, the SVM algorithm is highly effective in developing a reliable prediction model for CER. This model is crucial for safeguarding concrete structures in wind gravel flow environments. Full article
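The modeling pipeline described (SVM regression evaluated with K-fold cross-validation) can be sketched with scikit-learn; the data below are synthetic stand-ins, and the features, hyperparameters, and fold count are assumptions rather than the paper's setup:

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in features; the paper's inputs include erosion time,
# wind speed, erosion angle, gravel flow rate, and concrete strength.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(120, 5))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=120)

# Standardize features, then fit an RBF-kernel SVR; score each fold by R^2.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(round(scores.mean(), 3))
```

Averaging the R² over folds, as above, is what makes the reported accuracy robust to a single lucky train/test split.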
Graphical abstract

Figure 1. The scene of the Lanzhou–Xinjiang Railway accident in the Gobi windy region.
Figure 2. Damage to concrete in the Gobi gale zone.
Figure 3. Schematic diagram of the airflow sand transport test device.
Figure 4. Variation rule of CER with strength.
Figure 5. Variation rule of erosion rate with gravel flow rate for specimen A-3.
Figure 6. Variation rule of erosion rate with erosion angle for specimen A-3.
Figure 7. Variation rule of erosion rate with erosion wind speed for specimen A-3.
Figure 8. Variation rule of erosion rate with erosion time for specimen A-3.
Figure 9. Damage mechanism of concrete under the erosion of sand and gravel with different particle sizes. Red dashed circles indicate chiseling.
Figure 10. Erosion macroscopic morphology of the concrete surface under different erosion parameters.
Figure 11. Nonlinear-to-linear mapping of the SVM model.
Figure 12. Schematic diagram of the K-fold cross-validation procedure.
Figure 13. Parameter optimization result.
Figure 14. Architecture of the SVM model.
Figure 15. Comparison of SVM model prediction results.
Figure 16. Schematic diagram of the BPNN model structure.
Figure 17. Schematic diagram of the LSTM model structure.
Figure 18. Schematic diagram of the RF model structure.
Figure 19. Comparison of predicted values and true values of different ML models.
Figure 20. Representation of the prediction accuracy of different ML models by radar charts.
Figure 21. SHAP interpretability analysis of the SVM model.
Figure 22. SVM model prediction results under different combinations.
Figure 23. Comparison of test results of CER with the Oka- and Bitter-improved models for different erosion times.
Figure 24. Comparison of SVM model and Oka- and Bitter-improved model prediction results.
19 pages, 5366 KiB  
Article
Integration of Color Analysis, Firmness Testing, and visNIR Spectroscopy for Comprehensive Tomato Quality Assessment and Shelf-Life Prediction
by Sotirios Tasioulas, Jessie Watson, Dimitrios S. Kasampalis and Pavlos Tsouvaltzis
Agronomy 2025, 15(2), 478; https://doi.org/10.3390/agronomy15020478 - 16 Feb 2025
Abstract
This study evaluates the potential of integrating visible and near-infrared (visNIR) spectroscopy, color analysis, and firmness testing for non-destructive tomato quality assessment and shelf-life prediction. Tomato fruit (cv. HM1823) harvested at four ripening stages were monitored over 12 days at 22 °C to investigate ripening stage-specific variations in key quality parameters, including color (hue angle), firmness (compression), and nutritional composition (pH, soluble solids content, and titratable acidity ratio). Significant changes in these parameters during storage highlighted the need for advanced tools to monitor and predict quality attributes. Spectral data (340–2500 nm) captured using advanced and cost-effective portable spectroradiometers, coupled with chemometric models such as partial least squares regression (PLSR), demonstrated reliable predictions of shelf-life and nutritional quality. The near-infrared spectrum (900–1700 nm) was particularly effective, with variable selection methods such as genetic algorithm (GA) and variable importance in projection (VIP) scores enhancing model accuracy. This study highlights the promising role of visNIR spectroscopy as a rapid, non-destructive tool for optimizing postharvest management in tomato. By enabling real-time quality assessments, these technologies support sustainable agricultural practices through improved decision-making, reduced postharvest losses, and enhanced consumer satisfaction. The findings also validate the utility of affordable spectroradiometers, offering practical solutions for stakeholders aiming to balance cost efficiency and reliability in postharvest quality monitoring. Full article
Show Figures

Figure 1
<p>Color (h°) (<b>A</b>) and firmness (compression) (<b>B</b>) of tomato fruit that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days. Each value in each ripening stage and storage day represents the mean of 80 fruit. The vertical bar represents the least significant difference (LSD) at <span class="html-italic">p</span> &lt; 0.05.</p>
Full article ">Figure 2
<p>The pH (<b>A</b>) and the ratio of soluble solids content to titratable acidity (SSC/TA) (<b>B</b>) of tomato fruit that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days. Each column represents the mean of the four replications with five fruit in each replication within each ripening stage and storage day. The vertical bar represents the least significant difference (LSD) at <span class="html-italic">p</span> &lt; 0.05.</p>
Full article ">Figure 3
<p>The predicted storage duration in relation to the actual one of the tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days, based on color (h°). Each box shows the interquartile range of the predicted data based on the linear regression of color (h°) versus storage period. The vertical line that splits the box in half is the median, which shows where 50% of the data falls, and the single points on the plot indicate outliers. Each box shows the results from the data captured on 80 fruits within each ripening stage and storage day. The equation represents a linear regression between the predicted and actual storage period, and the Rcv is the regression coefficient of the cross-validated data based on the random subset algorithm. ‘***’ represents the significance of the regression analysis (<span class="html-italic">p</span> &lt; 0.001).</p>
Full article ">Figure 4
<p>The predicted storage duration in relation to the actual one of the tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days, based on firmness (compression). Each box shows the interquartile range of the predicted data based on the linear regression of compression versus storage period. The vertical line that splits the box in half is the median, which shows where 50% of the data falls, and the single points on the plot indicate outliers. Each box shows the results from the data captured on 80 fruits within each ripening stage and storage day. The equation represents a linear regression between the predicted and actual storage period, and the Rcv is the regression coefficient of the cross-validated data based on the random subsets algorithm. ‘***’ represents the significance of the regression analysis (<span class="html-italic">p</span> &lt; 0.001), whereas ‘ns’ implies no significant regression between compression and shelf-life period.</p>
Full article ">Figure 5
<p>The spectral reflectance that was measured with a portable spectroradiometer (PSR+ 3500) in the region 350–2500 nm on tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days. Each line represents the mean of 80 fruits in each ripening stage and storage day.</p>
Full article ">Figure 6
<p>The spectral reflectance that was measured with a portable spectroradiometer (PSR+ 3500) in the region 900–1700 nm on the tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days. Each line represents the mean of the 80 fruits in each ripening stage and storage day.</p>
Full article ">Figure 7
<p>The spectral reflectance that was measured with a portable spectroradiometer (DLP NIR Nano Scan) in the region 900–1700 nm on the tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for up to 12 days. Each line represents the mean of the 80 fruits in each ripening stage and storage day.</p>
Full article ">Figure 8
<p>The average spectral reflectance in the visNIR part of the spectrum (340–2500 nm) of the tomato fruits that were harvested at four ripening stages and stored at shelf-life conditions for 12 days (green line). The wavelength regions with the most significant impact on assessing the shelf life period of a fruit, irrespective of ripening stage at harvest, that were detected using the genetic algorithm (GA) are highlighted with a pale green color.</p>
Full article ">Figure 9
<p>The variable importance in projection (VIP) scores of the spectral reflectance data in the visNIR part of the spectrum (340–2500 nm) of the tomato fruits with the most significant impact on assessing the SSC/TA, irrespective of ripening stage at harvest or storage duration. The vertical lines correspond to the variables with the most significant effect in the prediction model.</p>
Full article ">Figure 10
<p>Regression coefficients and root mean square errors of calibration (Rc, RMSEc) and cross validation (Rcv, RMSEcv) prediction models of storage period (in days) based on spectral reflectance data in the visNIR part of the spectrum (340–2500 nm) of tomato fruit, irrespective of ripening stage at harvest or storage duration.</p>
Full article ">Figure 11
<p>Regression coefficients and root mean square errors of calibration (Rc, RMSEc) and cross validation (Rcv, RMSEcv) prediction models of pH (<b>A</b>) and SSC/TA (<b>B</b>) based on spectral reflectance data in the visNIR part of the spectrum (340–2500 nm) of tomato fruit, irrespective of ripening stage at harvest or storage duration.</p>
Full article ">
101 pages, 7201 KiB  
Systematic Review
Challenging Cognitive Load Theory: The Role of Educational Neuroscience and Artificial Intelligence in Redefining Learning Efficacy
by Evgenia Gkintoni, Hera Antonopoulou, Andrew Sortwell and Constantinos Halkiopoulos
Brain Sci. 2025, 15(2), 203; https://doi.org/10.3390/brainsci15020203 - 15 Feb 2025
Viewed by 700
Abstract
Background/Objectives: This systematic review integrates Cognitive Load Theory (CLT), Educational Neuroscience (EdNeuro), Artificial Intelligence (AI), and Machine Learning (ML) to examine their combined impact on optimizing learning environments. It explores how AI-driven adaptive learning systems, informed by neurophysiological insights, enhance personalized education for K-12 students and adult learners. This study emphasizes the role of Electroencephalography (EEG), Functional Near-Infrared Spectroscopy (fNIRS), and other neurophysiological tools in assessing cognitive states and guiding AI-powered interventions to refine instructional strategies dynamically. Methods: This study reviews n = 103 papers related to the integration of principles of CLT with AI and ML in educational settings. It evaluates the progress made in neuroadaptive learning technologies, especially the real-time management of cognitive load, personalized feedback systems, and the multimodal applications of AI. In addition, this research examines key hurdles such as data privacy, ethical concerns, algorithmic bias, and scalability issues while pinpointing best practices for robust and effective implementation. Results: The results show that AI and ML significantly improve Learning Efficacy by automatically managing cognitive load, providing personalized instruction, and dynamically adapting learning pathways based on real-time neurophysiological data. Models such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Support Vector Machines (SVMs) improve classification accuracy, making AI-powered adaptive learning systems more efficient and scalable. Multimodal approaches that combine EEG with fMRI, Electrocardiography (ECG), and Galvanic Skin Response (GSR) enhance system robustness by mitigating signal variability and noise-related limitations.
Despite these advances, practical implementation challenges remain, including ethical considerations, data security risks, and accessibility disparities across learner demographics. Conclusions: AI and ML hold substantial potential to redefine learning efficacy, but that potential must be guided by solid ethical frameworks, inclusive design, and scalable methodologies. Future studies will be necessary for refining pre-processing techniques, expanding the variety of datasets, and advancing multimodal neuroadaptive learning to develop high-accuracy, affordable, and ethically responsible AI-driven educational systems. The future of AI-enhanced education should be inclusive, equitable, and effective across diverse learning populations, surmounting technological limitations and ethical dilemmas. Full article
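This review repeatedly pairs EEG with real-time cognitive-load estimation. One simple, widely used proxy in the broader EEG literature (an illustration, not a method attributed to any specific reviewed study) is the theta/alpha band-power ratio: frontal theta power tends to rise and alpha power to drop under load. A sketch on synthetic signals:

```python
# Illustrative sketch only: theta/alpha band-power ratio as a crude
# cognitive-load index. Signals below are synthetic sinusoids, not EEG.
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power in [lo, hi] Hz via the periodogram of a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def load_index(signal, fs):
    theta = band_power(signal, fs, 4, 8)    # 4-8 Hz
    alpha = band_power(signal, fs, 8, 13)   # 8-13 Hz
    return theta / alpha

fs = 128                                    # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)                 # 4 s of signal
low_load = 0.5 * np.sin(2 * np.pi * 6 * t) + 2.0 * np.sin(2 * np.pi * 10 * t)
high_load = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

print(load_index(high_load, fs) > load_index(low_load, fs))  # True
```

A real neuroadaptive system would compute such an index over sliding windows and feed it to the adaptation logic; artifact rejection and per-subject baselining are omitted here.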
Show Figures

Figure 1
<p>Flowchart of PRISMA methodology.</p>
Full article ">Figure 2
<p>Risk of bias assessment visualization.</p>
Full article ">Figure 3
<p>Circular network graph of CLT, EdNeuro.</p>
Full article ">Figure 4
<p>Heatmap of AI, ML, CLT, EdNeuro, and LE alignment in Learning Optimization.</p>
Full article ">Figure 5
<p>Related methodological approaches across metrics.</p>
Full article ">Figure 6
<p>Conceptual framework: AI-powered AL for cognitive load optimization.</p>
Full article ">Figure 7
<p>Heatmap: relationship between cognitive load optimization and learning outcomes.</p>
Full article ">Figure 8
<p>Heatmap: AI-driven personalization metrics in STEM and professional education.</p>
Full article ">Figure 9
<p>Layered spider chart: AI impact on high cognitive load learning domains.</p>
Full article ">Figure 10
<p>Ethical considerations in AI and ML education.</p>
Full article ">Figure 11
<p>Cognitive state detection accuracy over time.</p>
Full article ">Figure 12
<p>Correlation between cognitive monitoring methods.</p>
Full article ">Figure 13
<p>Key study outcomes related to AI/ML applications in education.</p>
Full article ">Figure 14
<p>Three-dimensional scatter plot depicting 103 research studies across three dimensions: AI complexity, learning outcome impact, and scalability potential.</p>
Full article ">Figure 15
<p>Intrinsic, extraneous, and GCL types.</p>
Full article ">Figure 16
<p>Heatmap of ethical and practical challenges across study domains.</p>
Full article ">
14 pages, 3730 KiB  
Article
Near-Real-Time Event-Driven System for Calculating Peak Ground Acceleration (PGA) in Earthquake-Affected Areas: A Critical Tool for Seismic Risk Management in the Campi Flegrei Area
by Claudio Martino, Pasquale Cantiello and Rosario Peluso
GeoHazards 2025, 6(1), 8; https://doi.org/10.3390/geohazards6010008 - 15 Feb 2025
Viewed by 257
Abstract
Peak Ground Acceleration (PGA) is a measure of the maximum ground shaking intensity during an earthquake. The estimation of PGA in areas affected by earthquakes is a fundamental task in seismic hazard assessment and emergency response. This paper presents an automated service capable of rapidly calculating PGA values in regions impacted by seismic events and publishing the results on an interactive website. The importance of such a service is discussed, focusing on its contribution to timely response efforts and infrastructure resilience. The necessity for automatic and real-time systems in earthquake-prone areas is emphasized, enabling decision-makers to assess damage potential and deploy resources efficiently. Thanks to a collaboration agreement with the Civil Protection Department, we are able to acquire accelerometric data from the Italian National Accelerometric Network (RAN) in real time at the monitoring center of the Osservatorio Vesuviano. These data, in addition to those normally acquired by the INGV network, enable us to utilize all available accelerometric data in the Campi Flegrei area, enhancing our capacity to provide timely and accurate PGA estimates during seismic events in this highly active volcanic region. Full article
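The core quantity the service computes is simple to state: PGA is the peak absolute value of the ground-acceleration trace. A minimal sketch under that assumption follows; station handling, instrument-response removal, and filtering (which a production system like the one described would need) are omitted, and the baseline correction shown is a deliberately crude mean removal.

```python
# Sketch of the core PGA computation (assumption: PGA = peak absolute value
# of the mean-removed acceleration trace; real pipelines also deconvolve the
# instrument response and filter the signal).

def peak_ground_acceleration(acc):
    """acc: iterable of ground-acceleration samples (e.g. in cm/s^2)."""
    acc = list(acc)
    mean = sum(acc) / len(acc)               # crude baseline correction
    return max(abs(a - mean) for a in acc)

trace = [0.1, 0.3, -2.7, 1.9, -0.4, 0.2]     # synthetic accelerogram
print(peak_ground_acceleration(trace))       # ~2.6 for this trace
```

Per-station values like this are what a PGA map service would interpolate and publish for each event.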
Show Figures

Figure 1
<p>The red triangles represent the INGV-OV seismic stations of the permanent seismic network, the blue triangles indicate the RAN accelerometers, and the yellow ones represent the accelerometers temporarily installed by INGV, which integrate the seismic network with proprietary dataloggers and high-quality MEMS accelerometers.</p>
Full article ">Figure 2
<p>Architecture of UrbanSM.</p>
Full article ">Figure 3
<p>Main page of UrbanSM. The page is also available on mobile devices with a simpler interface. The red star represents event location.</p>
Full article ">Figure 4
<p>Example of Pseudo-Spectral Acceleration for the POZA site. The double asterisks (**) denote the square power.</p>
Full article ">Figure 5
<p>Popup to show information on a single seismic station. The red star represents event location.</p>
Full article ">
24 pages, 8709 KiB  
Article
Optical Remote Sensing Analysis of Exhaust Emissions During Aircraft Taxiing at Hefei Xinqiao International Airport
by Yusheng Qin, Xin Han, Xiangxian Li, Huaqiao Gui, Weiwei Xue, Minguang Gao, Jingjing Tong, Yujun Zhang and Zheng Shi
Remote Sens. 2025, 17(4), 664; https://doi.org/10.3390/rs17040664 - 15 Feb 2025
Viewed by 192
Abstract
The taxiing stage of an aircraft is characterized by its long duration, low operating thrust, and low combustion efficiency, resulting in substantial emissions of CO, CO2, and VOCs, which adversely affect air quality near airports. This study has developed an open-path Fourier transform infrared spectroscopy (OP-FTIR) monitor with second-level time resolution to enable the optical remote monitoring of pollutants during taxiing. Measurements of CO, CO2, and VOCs were conducted over one month at Hefei Xinqiao International Airport (HXIA). The generalized additive model (GAM) is used for data analysis to reveal complex nonlinear relationships between aircraft emission concentrations and meteorological factors, aircraft models, and their corresponding registration numbers. The GAM analysis shows that among meteorological factors, humidity, and atmospheric pressure have the most significant impact on aircraft exhaust monitoring, with a relative average contribution value as high as approximately six. The explanatory power of aircraft models for emissions is low (R2 < 0.18), whereas that of registration numbers is high (R2 > 0.6), suggesting that individual differences between aircrafts play a crucial role in emission concentration variations. Furthermore, a noticeable correlation was found between the CO/CO2 ratio and volatile organic compound (VOC) concentrations (R2 > 0.63), indicating that combustion efficiency significantly affects VOC emissions. This study not only advances the real-time remote sensing monitoring of pollutants during aircraft taxiing but also underscores the crucial role of the GAM in identifying the key drivers of emissions, providing a scientific basis for precise environmental protection management and policy-making. Full article
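The abstract reports a correlation (R² > 0.63) between the CO/CO₂ ratio, a proxy for combustion efficiency, and VOC concentrations. As a plain illustration of that kind of check (not the authors' GAM, and with invented numbers), a simple R² for a linear fit:

```python
# Illustrative sketch only: R^2 of a linear fit between the CO/CO2 ratio and
# VOC concentration. All concentration values below are synthetic.

def r_squared(x, y):
    """Squared Pearson correlation, i.e. R^2 of the simple linear fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

co = [1.2, 1.8, 2.5, 3.1, 3.9, 4.4]          # ppm, synthetic
co2 = [410, 430, 445, 460, 470, 480]         # ppm, synthetic
voc = [0.20, 0.33, 0.44, 0.55, 0.69, 0.74]   # ppm, synthetic
ratio = [a / b for a, b in zip(co, co2)]     # combustion-efficiency proxy

print(r_squared(ratio, voc) > 0.63)          # True for these synthetic data
```

A GAM generalizes this by replacing the single linear term with smooth functions of each predictor (humidity, pressure, aircraft identity, etc.), which is how the study attributes nonlinear contributions.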
(This article belongs to the Section Urban Remote Sensing)
Show Figures

Figure 1
<p>Schematic diagram of OP-FTIR measurement.</p>
Full article ">Figure 2
<p>HXIA information. (<b>a</b>) The location of HXIA; (<b>b</b>) aircraft runway map.</p>
Full article ">Figure 3
<p>Schematic diagram of airport runway monitoring.</p>
Full article ">Figure 4
<p>OP-FTIR actual online monitoring.</p>
Full article ">Figure 5
<p>The relative average contribution value of each meteorological variable.</p>
Full article ">Figure 6
<p>The dynamic changes are affected by various meteorological variables.</p>
Full article ">Figure 7
<p>PDPs of emission components on different aircraft models. (<b>a</b>) The PDP of CO<sub>2</sub> impact on different aircraft models. (<b>b</b>) The PDP of CO impact on different aircraft models. (<b>c</b>) The PDP of VOCs’ impact on different aircraft models.</p>
Full article ">Figure 8
<p>The actual observed emissions and model-predicted emissions for different aircraft models.</p>
Full article ">Figure 9
<p>PDPs for emission components for different aircraft of the 320 model.</p>
Full article ">Figure 9 Cont.
<p>PDPs for emission components for different aircraft of the 320 model.</p>
Full article ">Figure 10
<p>The APD for emission components for different aircraft of the A320 model.</p>
Full article ">Figure 10 Cont.
<p>The APD for emission components for different aircraft of the A320 model.</p>
Full article ">Figure 11
<p>The APD for emission components based on different aircraft registration numbers.</p>
Full article ">
20 pages, 1820 KiB  
Article
Hybrid Solution Through Systematic Electrical Impedance Tomography Data Reduction and CNN Compression for Efficient Hand Gesture Recognition on Resource-Constrained IoT Devices
by Salwa Sahnoun, Mahdi Mnif, Bilel Ghoul, Mohamed Jemal, Ahmed Fakhfakh and Olfa Kanoun
Future Internet 2025, 17(2), 89; https://doi.org/10.3390/fi17020089 - 14 Feb 2025
Viewed by 233
Abstract
The rapid advancement of edge computing and Tiny Machine Learning (TinyML) has created new opportunities for deploying intelligence in resource-constrained environments. With the growing demand for intelligent Internet of Things (IoT) devices that can efficiently process complex data in real-time, there is an urgent need for innovative optimisation techniques that overcome the limitations of IoT devices and enable accurate and efficient computations. This study investigates a novel approach to optimising Convolutional Neural Network (CNN) models for Hand Gesture Recognition (HGR) based on Electrical Impedance Tomography (EIT), which requires complex signal processing, energy efficiency, and real-time processing, by simultaneously reducing input complexity and using advanced model compression techniques. By systematically reducing and halving the input complexity of a 1D CNN from 40 to 20 Boundary Voltages (BVs) and applying an innovative compression method, we achieved remarkable model size reductions of 91.75% and 97.49% for 40 and 20 BVs EIT inputs, respectively. Additionally, the Floating-Point operations (FLOPs) are significantly reduced, by more than 99% in both cases. These reductions have been achieved with a minimal loss of accuracy, maintaining the performance of 97.22% and 94.44% for 40 and 20 BVs inputs, respectively. The most significant result is the 20 BVs compressed model. In fact, at only 8.73 kB and a remarkable 94.44% accuracy, our model demonstrates the potential of intelligent design strategies in creating ultra-lightweight, high-performance CNN-based solutions for resource-constrained devices with near-full performance capabilities specifically for the case of HGR based on EIT inputs. Full article
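The headline numbers (91.75% and 97.49% model-size reduction) come from shrinking both the EIT input (40 → 20 boundary voltages) and the CNN itself. The bookkeeping behind such figures is just parameter counting; the sketch below uses hypothetical layer shapes, not the paper's architecture, to show how input halving cascades into the dense-layer count.

```python
# Sketch of parameter bookkeeping for a 1-D CNN; layer shapes are invented.

def conv1d_params(in_ch, out_ch, kernel):
    """Weights + biases of a 1-D convolution layer."""
    return (in_ch * kernel + 1) * out_ch

def dense_params(n_in, n_out):
    """Weights + biases of a fully connected layer."""
    return (n_in + 1) * n_out

def reduction(before, after):
    """Percentage of parameters removed."""
    return 100.0 * (before - after) / before

# Hypothetical "full" model on 40 boundary voltages vs. a compressed model
# on 20 boundary voltages with fewer channels and smaller dense layers.
big = conv1d_params(1, 64, 5) + dense_params(64 * 40, 128) + dense_params(128, 10)
small = conv1d_params(1, 8, 3) + dense_params(8 * 20, 16) + dense_params(16, 10)

print(f"{reduction(big, small):.2f}% fewer parameters")
```

Because the flatten-to-dense layer dominates the count, halving the input length roughly halves that layer's weights before any channel pruning is applied, which is why input reduction and compression compound so strongly.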
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)
Show Figures

Figure 1
<p>EIT data collection and measurement system.</p>
Full article ">Figure 2
<p>ASL set of gestures.</p>
Full article ">Figure 3
<p>Methodology of EIT data reduction.</p>
Full article ">Figure 4
<p>Methodology for compensating the effect of reduced input dimensionality.</p>
Full article ">Figure 5
<p>Comparison of different data reduction methods in terms of accuracy for 10 subjects.</p>
Full article ">Figure 6
<p>Effects of input reduction from 40 to 20 BVs and CNN model compression on model size (<b>a</b>), Accuracy (<b>b</b>), FLOPS (<b>c</b>), Loss (<b>d</b>), and F1 Score (<b>e</b>).</p>
Full article ">
27 pages, 2258 KiB  
Review
The Medical Basis for the Photoluminescence of Indocyanine Green
by Wiktoria Mytych, Dorota Bartusik-Aebisher and David Aebisher
Molecules 2025, 30(4), 888; https://doi.org/10.3390/molecules30040888 - 14 Feb 2025
Viewed by 176
Abstract
Indocyanine green (ICG), a near-infrared (NIR) fluorescent dye with unique photoluminescent properties, is a helpful tool in many medical applications. ICG produces fluorescence when excited by NIR light, enabling accurate tissue visualization and real-time imaging. This study investigates the fundamental processes behind ICG’s photoluminescence as well as its present and possible applications in treatments and medical diagnostics. Fluorescence-guided surgery (FGS) has been transformed by ICG’s capacity to visualize tumors, highlight blood flow, and facilitate lymphatic mapping, all of which have improved surgical accuracy and patient outcomes. Furthermore, the fluorescence of the dye is being studied for new therapeutic approaches, like photothermal therapy, in which NIR light can activate ICG to target and destroy cancer cells. We go over the benefits and drawbacks of ICG’s photoluminescent qualities in therapeutic contexts, as well as current studies that focus on improving its effectiveness, safety, and adaptability. More precise disease detection, real-time monitoring, and tailored therapy options across a variety of medical specialties are made possible by the ongoing advancement of ICG-based imaging methods and therapies. The main part of this work draws on the latest reports and therefore uses clinical articles published from 2020 onward; for the theoretical background, the oldest article cited dates from 1995. Full article
(This article belongs to the Special Issue Chemiluminescence and Photoluminescence of Advanced Compounds)
Show Figures

Figure 1
<p>Indocyanine green (manufactured by Carl Roth, Karlsruhe, Germany). Original photograph taken by the co-authors.</p>
Full article ">Figure 2
<p>Indocyanine green chemical structure. Original work created by co-authors in bioRender (Toronto, ON, Canada).</p>
Full article ">Figure 3
<p>Photoluminescence mechanism. Original work created by the co-authors in bioRender (Toronto, ON, Canada). The colors of the arrows are random and have no scientific significance.</p>
Full article ">Figure 4
<p>Sentinel node biopsy. Dye injected from the side of the tumor spreads to the sentinel nodes, making the nodes clearly visible. ICG dye contrasts between healthy and abnormal tissue, allowing for the removal of the necessary tissue. In the laboratory, the tissues are tested to determine if they are malignant. Original work created by the co-authors in Chem Draw 20.1.</p>
Full article ">Figure 5
<p>Transrectal ICG angiography imaging the blood supply to the mucosa and anastomosis so that tissue perfusion defects that can lead to anastomotic failure can be detected. Pink color—poor tissue perfusion, green—good tissue perfusion. Original work created by the co-authors in Chem Draw 20.1.</p>
Full article ">
22 pages, 4177 KiB  
Article
Vibration Control of Light Bridges Under Moving Loads Using Nonlinear Semi-Active Absorbers
by Hamed Saber, Farhad S. Samani, Francesco Pellicano, Moslem Molaie and Antonio Zippo
Math. Comput. Appl. 2025, 30(1), 19; https://doi.org/10.3390/mca30010019 - 14 Feb 2025
Viewed by 255
Abstract
The dynamic response of light bridges to moving loads presents significant challenges in controlling vibrations that can impact structural integrity and user comfort. This study investigates the effectiveness of nonlinear semi-active absorbers in mitigating these vibrations on light bridges, which are particularly susceptible to human-induced vibrations due to their inherent low damping and flexibility, especially under near-resonance conditions. Traditional passive vibration control methods, such as dynamic vibration absorbers (DVAs), may not be entirely adequate for mitigating vibrations, as they require adjustments in damping and stiffness when operating conditions change over time. Therefore, suitable strategies are needed to dynamically adapt DVA parameters and ensure optimal performance. This paper explores the effectiveness of linear and nonlinear DVAs in reducing vertical vibrations of lightweight beams subjected to moving loads. Using the Bubnov-Galerkin method, the governing partial differential equations are reduced to a set of ordinary differential equations, and a novel nonlinear DVA with a variable damping dashpot is investigated, showing better performance than traditional constant-parameter DVAs. The nonlinear viscous damping device enables real-time adjustments, making the DVA semi-active and more effective. A footbridge case study demonstrates significant vibration reductions using optimized nonlinear DVAs for lightweight bridges, showing broader frequency effectiveness than linear ones. The quadratic nonlinear DVA is the most efficient, achieving a 92% deflection reduction in the 1.5–2.5 Hz range and a 42% reduction under running and jumping. Full article
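The constant-parameter DVAs this study uses as a baseline are typically tuned with the classical Den Hartog formulas for a linear absorber attached to a single resonant mode; the paper's point is that nonlinear semi-active damping outperforms such fixed settings. A sketch of that classical baseline (standard textbook formulas, not the paper's method):

```python
# Classical Den Hartog tuning of a *linear* passive DVA on one resonant mode.
# Shown as the fixed-parameter baseline that semi-active schemes improve on.
import math

def den_hartog(mass_ratio):
    """Return (optimal frequency ratio, optimal damping ratio) for a linear DVA.

    mass_ratio: absorber mass divided by the modal mass of the structure.
    """
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                               # tuning ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

f, z = den_hartog(0.05)                                    # 5% absorber mass
print(round(f, 4), round(z, 4))
```

These values are fixed at design time; a semi-active device with a variable damping dashpot can instead adjust its effective damping in real time as the excitation frequency drifts, which is what widens the effective bandwidth reported in the abstract.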
Show Figures

Figure 1
<p>A bridge structure featuring a vibration absorber connected to it, exposed to a dynamic force.</p>
Full article ">Figure 2
<p>Flowchart outlining the steps of the computer algorithm used to solve the equations of the human-structure interaction system with attached DVA.</p>
Full article ">Figure 3
<p>The force-time response produced by a walking person with a frequency of 2 Hz; ──: the model presented in Equation (14); filled circles <span style="color:#2F5496">●</span>: extracted data from [<a href="#B25-mca-30-00019" class="html-bibr">25</a>].</p>
Full article ">Figure 4
<p>Verification of the bridges subjected to a moving load with constant amplitude and an attached DVA (cubic stiffness and linear damping) against the data extracted from Ref. [<a href="#B1-mca-30-00019" class="html-bibr">1</a>].</p>
Full article ">Figure 5
<p>Parameters optimized for DVA. (<b>A</b>) Optimization method based on energy approach; (<b>B</b>) Optimization method based on deflection approach.</p>
Full article ">Figure 6
<p>(<b>A</b>) The quantity of energy dissipated by linear DVAs; (<b>B</b>) Largest amount of displacement experienced by footbridges when subjected to linear DVAs in the frequency domain.</p>
Full article ">Figure 7
<p>A footbridge connected to a Dynamic Vibration Absorber characterized by quadratic stiffness and linear damping in the frequency domain; (<b>A</b>) The quantity of energy dissipated by DVA (<b>B</b>) Maximum deflection of footbridge.</p>
Full article ">Figure 8
<p>A footbridge equipped with a Dynamic Vibration Absorber exhibiting quadratic stiffness and quadratic damping characteristics in the frequency domain; (<b>A</b>) The energy dissipated by DVA, (<b>B</b>) Maximum deflection.</p>
Full article ">Figure 9
<p>The initial four DLFs for (<b>A</b>) running pedestrian, and (<b>B</b>) jumping pedestrian.</p>
Full article ">Figure 10
<p>A DVA with geometric nonlinearity.</p>
Full article ">Figure 11
<p>Schematic diagram of the VDVD: (<b>A</b>) 3D model; (<b>B</b>) oil flow.</p>
Full article ">Figure 12
<p>Evaluation of the nonlinearity of the DVAs on higher modes; (<b>A</b>): DVA with quadratic stiffness and linear damping elements; (<b>B</b>): DVA with both quadratic stiffness and damping elements.</p>
Full article ">Figure 13
<p>Comparison of a footbridge deflection equipped with various types of DVAs.</p>
Full article ">
33 pages, 3673 KiB  
Article
REO: Revisiting Erase Operation for Improving Lifetime and Performance of Modern NAND Flash-Based SSDs
by Beomjun Kim and Myungsuk Kim
Electronics 2025, 14(4), 738; https://doi.org/10.3390/electronics14040738 - 13 Feb 2025
Viewed by 325
Abstract
This work investigates a new erase scheme in NAND flash memory to improve the lifetime and performance of modern solid-state drives (SSDs). In NAND flash memory, an erase operation applies a high voltage (e.g., >20 V) to flash cells for a long time (e.g., >3.5 ms), which degrades cell endurance and potentially delays user I/O requests. While a large body of prior work has proposed various techniques to mitigate the negative impact of erase operations, no work has yet investigated how erase latency and voltage should be set to fully exploit the potential of NAND flash memory; most existing techniques use a fixed latency and voltage for every erase operation, which is set to cover the worst-case operating conditions. To address this, we propose Revisiting Erase Operation (REO), a new erase scheme that dynamically adjusts erase latency and voltage depending on the cells’ current erase characteristics. We design REO around two key approaches. First, REO accurately predicts a near-optimal erase latency based on the number of fail bits observed during an erase operation. To maximize its benefits, REO aggressively yet safely reduces erase latency by leveraging a large reliability margin present in modern SSDs. Second, REO applies near-optimal erase voltage to each WL based on its unique erase characteristics. We demonstrate the feasibility and reliability of REO using 160 real 3D NAND flash chips, showing that it enhances SSD lifetime over the conventional erase scheme by 43% without changes to existing NAND flash chips. Our system-level evaluation using eleven real-world workloads shows that an REO-enabled SSD improves average I/O performance by 12% and reduces read tail latency by 38%, on average, over a state-of-the-art technique. Full article
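The first of REO's two approaches maps an observed fail-bit count to a shorter-than-worst-case erase-pulse latency. A hypothetical sketch of that mapping idea follows; the thresholds and latencies in the table are invented for illustration and are not the paper's calibrated values.

```python
# Hypothetical sketch of fail-bit-count-based erase latency prediction (the
# idea behind FELP): blocks whose cells erase quickly show few fail bits and
# can finish with a shorter pulse. All numbers below are invented.

def predict_erase_latency_us(fail_bits,
                             table=((1_000, 700),      # fast-erasing block
                                    (10_000, 1_400),
                                    (50_000, 2_100))):
    """Return a predicted erase-pulse latency (us) for a fail-bit count."""
    for threshold, latency_us in table:
        if fail_bits <= threshold:
            return latency_us
    return 3_500                  # fall back to the worst-case fixed latency

print(predict_erase_latency_us(500),      # healthy block: short pulse
      predict_erase_latency_us(80_000))   # slow block: worst-case latency
```

In a real controller the table would be calibrated per chip generation and P/E-cycle range, and mispredictions would be caught by the verify-read step of the ISPE loop.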
(This article belongs to the Section Computer Science & Engineering)
Show Figures

Figure 1
<p>An organizational overview of NAND flash memory [<a href="#B44-electronics-14-00738" class="html-bibr">44</a>].</p>
Full article ">Figure 2
<p>V<sub>TH</sub> distributions of 2<sup><span class="html-italic">m</span></sup>-state multi-level cell NAND flash memory.</p>
Full article ">Figure 3
<p>Illustration of organizational difference between 2D and 3D NAND flash memory.</p>
Full article ">Figure 4
<p>Schematic diagram of the flash cell of 3D NAND flash memory.</p>
Full article ">Figure 5
<p>Schematic diagram of a 3D NAND flash cell and its operations.</p>
Full article ">Figure 6
<p>Endurance impact of erase and program operations.</p>
Full article ">Figure 7
<p>Incremental Step Pulse Erasure (ISPE) scheme.</p>
Full article ">Figure 8
<p>Details of verify-read (VR) step in ISPE scheme.</p>
Full article ">Figure 9
<p>Erase latency variation under different P/E cycles.</p>
Full article ">Figure 10
<p>Erase speed measurement of flash cells in different WLs.</p>
Full article ">Figure 11
<p>Erase speed variation within the block under different P/E cycles.</p>
Full article ">Figure 12
<p>High-level overview of existing ISPE optimizations.</p>
Full article ">Figure 13
<p>Fail-bit-count-based Erase Latency Prediction (FELP).</p>
Full article ">Figure 14
<p>An overview of Selective Erase Voltage Adjustment (SEVA).</p>
Full article ">Figure 15
<p>Impact of erase latency on the fail-bit count.</p>
Full article ">Figure 16
<p>Erase-pulse latency depending on the fail-bit count.</p>
Full article ">Figure 17
<p>Reliability margin depending on erase status.</p>
Full article ">Figure 18
<p>Impact of WL gate voltage on erase speed.</p>
Full article ">Figure 19
<p>Erase characteristics of other chip types.</p>
Full article ">Figure 20
<p>Operational overview of R<span class="html-small-caps">EO</span>FTL.</p>
Full article ">Figure 21
<p>Comparison of SSD lifetime and reliability.</p>
Full article ">Figure 22
<p>Average I/O performance of erase schemes normalized to baseline.</p>
Full article ">Figure 23
<p>Distribution of the number of read retries in R<span class="html-small-caps">EO</span> and R<span class="html-small-caps">EO</span>+ at PEC = 4.5 K.</p>
Full article ">Figure 24
<p>The 99.99th (❶) and 99.9999th (❷) percentile read latency at PEC = (<b>a</b>) 0.5 K, (<b>b</b>) 2.5 K, and (<b>c</b>) 4.5 K.</p>
Full article ">Figure 25
<p>Impact of misprediction rate on R<span class="html-small-caps">EO</span>’s benefits.</p>
Full article ">Figure 26
<p>Impact of RBER requirement on R<span class="html-small-caps">EO</span>’s benefits.</p>
Full article ">
20 pages, 8870 KiB  
Article
Near Real-Time 3D Reconstruction of Construction Sites Based on Surveillance Cameras
by Aoran Sun, Xuehui An, Pengfei Li, Miao Lv and Wenzhe Liu
Buildings 2025, 15(4), 567; https://doi.org/10.3390/buildings15040567 - 12 Feb 2025
Abstract
The 3D reconstruction of construction sites is of great importance for construction progress, quality, and safety management. Currently, most of the existing 3D reconstruction methods are unable to conduct continuous and uninterrupted perception, and it is difficult to achieve registration with real coordinates and dimensions. This study proposes a hierarchical registration framework for 3D reconstruction of construction sites based on surveillance cameras. This method can quickly perform on-site 3D reconstruction and restoration by taking surveillance camera images as inputs. It combines 2D and 3D features and does not need transfer learning or camera calibration. By experimenting on one construction site, we found that this framework can complete the 3D point cloud estimation and registration of construction sites within an average of 3.105 s through surveillance images. The average RMSE of the point cloud within the site is 0.358 m, which is better than most point cloud registration methods. Through this method, 3D data within the scope of surveillance cameras can be quickly obtained, and the connection between 2D and 3D can be effectively established. Combined with visual information, it is beneficial to the digital twin management of construction sites. Full article
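The scale-registration step implied by Figure 7 (the median of per-pair scaling factors is taken as the scale) can be sketched as follows. This is a minimal illustration under our own naming assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_scale(src_pts, ref_pts):
    """Median ratio of pairwise distances between matched 3D points.

    src_pts, ref_pts: (N, 3) arrays; row i of each is one matched pair.
    """
    ratios = []
    n = len(src_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_src = np.linalg.norm(src_pts[i] - src_pts[j])
            d_ref = np.linalg.norm(ref_pts[i] - ref_pts[j])
            if d_src > 1e-9:  # skip degenerate (coincident) pairs
                ratios.append(d_ref / d_src)
    return float(np.median(ratios))

# Toy example: the reference cloud is exactly 2.5x the estimated cloud.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0]], dtype=float)
ref = 2.5 * src
scale = estimate_scale(src, ref)
```

Using the median rather than the mean keeps the estimate robust to a few mismatched feature pairs, which is the same motivation behind the RANSAC-style filtering in the framework.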
Show Figures

Figure 1
<p>3D reconstruction framework based on surveillance cameras.</p>
Full article ">Figure 2
<p>Steps of sparse point cloud reconstruction.</p>
Full article ">Figure 3
<p>DUSt3R framework.</p>
Full article ">Figure 4
<p>Initial point cloud with dimensional differences from the reference point cloud.</p>
Full article ">Figure 5
<p>Flowchart of the 2DFM-RANSAC.</p>
Full article ">Figure 6
<p>Two-dimensional feature point-matching effect. The same color dots indicate matching points.</p>
Full article ">Figure 7
<p>The distribution of the calculated scaling factors, where the red line represents the median of the scale factor.</p>
Full article ">Figure 8
<p>Schematic diagrams of the experimental site: (<b>a</b>) the experimental site and the layout of surveillance cameras; (<b>b</b>) the digital orthophoto map of the experimental site; (<b>c</b>) the sparse point cloud of the experimental site obtained through oblique photography and SfM reconstruction.</p>
Full article ">Figure 9
<p>Steps of point cloud error calculation.</p>
Full article ">Figure 10
<p>Qualitative result of comparison experiment. (<b>a</b>–<b>e</b>) different camera images and the results of the compared point clouds and reference point clouds by different registration methods.</p>
Full article ">Figure 11
<p>Error distributions of different point cloud registration methods.</p>
Full article ">Figure 12
<p>Running time distribution of the initial point cloud reconstruction by DUSt3R.</p>
Full article ">Figure 13
<p>Distribution of the calculation duration of different registration methods.</p>
Full article ">
17 pages, 6429 KiB  
Article
Impacts of Reference Precipitation on the Assessment of Global Precipitation Measurement Precipitation Products
by Ye Zhang, Leizhi Wang, Yilan Li, Yintang Wang, Fei Yao and Yiceng Chen
Remote Sens. 2025, 17(4), 624; https://doi.org/10.3390/rs17040624 - 12 Feb 2025
Abstract
Reference precipitation (RP) serves as a benchmark for evaluating the accuracy of precipitation products; thus, the selection of RP considerably affects the evaluation. In order to quantify this impact and provide guidance for RP selection, three interpolation methods, namely inverse distance weighting (IDW), ordinary kriging (OK), and geographical weighted regression (GWR), along with six groups of station densities, were adopted to generate different RPs, taking super-high-density rainfall observations as true values; we then analyzed the errors of different RPs and the impacts of RP selection on the assessment of GPM precipitation products. Results indicate that the RPs from IDW and GWR both approached the true values as the station density increased (CC > 0.90), while the RP from OK showed some differences (CC < 0.80): it was similar to GWR when the station density was low, but its accuracy improved at first and then worsened as the station density continued to increase. The evaluation results based on different RPs showed remarkable differences even under the same conditions. When the average distance between the rainfall gauges used to generate RPs was below the median value (i.e., d < 20 km), the evaluation based on RPs derived from IDW and GWR was close enough to that based on the true precipitation, which indicates their feasibility in evaluating satellite precipitation products. Full article
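Of the three interpolation methods compared, IDW is the simplest to illustrate. A minimal sketch follows; the power parameter p = 2 is a common default, not necessarily the paper's setting:

```python
import numpy as np

def idw(stations, values, target, p=2.0):
    """Inverse-distance-weighted estimate at `target`.

    stations: (N, 2) gauge coordinates; values: (N,) observed rainfall.
    """
    d = np.linalg.norm(stations - target, axis=1)
    if np.any(d < 1e-12):  # target coincides with a gauge: return it directly
        return float(values[np.argmin(d)])
    w = 1.0 / d**p         # closer gauges get larger weights
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([10.0, 20.0, 30.0])
at_gauge = idw(stations, values, np.array([0.0, 0.0]))  # exact at a gauge
between = idw(stations, values, np.array([0.5, 0.0]))   # weighted mix
```

IDW is an exact interpolator (it reproduces the gauge value at a gauge location), which is one reason densifying the station network pulls the IDW-based RP toward the true field.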
Show Figures

Figure 1
<p>The geography survey of the study area.</p>
Full article ">Figure 2
<p>Technical scheme.</p>
Full article ">Figure 3
<p>Boxplots of the area-averaged <span class="html-italic">MAE</span> of RP.</p>
Full article ">Figure 4
<p>Spatial distribution of the accumulated precipitation when <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>A</mi> <mi>E</mi> <mo>=</mo> <msub> <mrow> <mi>M</mi> <mi>A</mi> <mi>E</mi> </mrow> <mrow> <mi mathvariant="normal">M</mi> <mi mathvariant="normal">e</mi> <mi mathvariant="normal">d</mi> <mi mathvariant="normal">i</mi> <mi mathvariant="normal">a</mi> <mi mathvariant="normal">n</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Boxplots of the evaluation index of interpolated precipitation relative to the true precipitation.</p>
Full article ">Figure 6
<p>Comprehensive evaluation results of the median and mean values of the total composite index in multi-group experiments.</p>
Full article ">Figure 7
<p>Normalized median distributions of the evaluation indices of the RP and true precipitation for evaluating near-real-time satellite precipitation products: (a), (b), and (c) present the RP obtained by the IDW, OK, and GWR methods to evaluate the IMERG early, and (d), (e), and (f) present the RP obtained by the IDW, OK, and GWR methods to evaluate the IMERG late. The circular symbols represent the average station distance <span class="html-italic">d</span> of 32.6, 23.1 km, 16.3 km, 13.3 km, 10.3 km, and 8.7 km. The smaller the average distance is, the larger the diameter of the circles is. The blue and red lines represent the results of the IMERG early and IMERG late products relative to the true precipitation.</p>
Full article ">
31 pages, 3473 KiB  
Article
Deep Reinforcement Learning-Driven Hybrid Precoding for Efficient Mm-Wave Multi-User MIMO Systems
by Adeb Salh, Mohammed A. Alhartomi, Ghasan Ali Hussain, Chang Jing Jing, Nor Shahida M. Shah, Saeed Alzahrani, Ruwaybih Alsulami, Saad Alharbi, Ahmad Hakimi and Fares S. Almehmadi
J. Sens. Actuator Netw. 2025, 14(1), 20; https://doi.org/10.3390/jsan14010020 - 12 Feb 2025
Abstract
High path loss and line-of-sight requirements are two of the fundamental challenges of millimeter-wave (mm-wave) communications that are mitigated by incorporating sensor technology. Sensing gives the deep reinforcement learning (DRL) agent comprehensive environmental feedback, which helps it better predict channel fluctuations and modify beam patterns accordingly. For multi-user massive multiple-input multiple-output (mMIMO) systems, hybrid precoding requires sophisticated real-time low-complexity power allocation (PA) approaches to achieve near-optimal capacity. This study presents a unique angular-based hybrid precoding (AB-HP) framework that minimizes radio frequency (RF) chain usage and channel estimation overhead while optimizing energy efficiency (EE) and spectral efficiency (SE). DRL enables the adaptive and intelligent decision-making that is essential for mm-wave technology, effectively transforming wireless communication systems. In an AB-HP architecture, DRL optimizes RF chain usage to maintain excellent SE while drastically lowering hardware complexity and energy consumption by dynamically learning optimal precoding methods using environmental angular information. This article proposes enabling dual optimization of EE and SE while drastically lowering beam training overhead by incorporating maximum-reward beam training (RBT) into the DRL. The proposed RBT-DRL improves system performance and flexibility by dynamically modifying the number of active RF chains in dynamic network situations. The simulation results show that RBT-DRL-driven beam training guarantees good EE performance for mobile users while increasing SE in mm-wave structures. Even though total power consumption rises by 45%, the SE improves by 39%, increasing from 14 dB to 20 dB, suggesting that this strategy could successfully achieve a balance between performance and EE in upcoming B5G networks. Full article
(This article belongs to the Section Communications and Networking)
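The core trade-off the agent learns, activating more RF chains raises SE but costs power, can be illustrated with a toy epsilon-greedy bandit. The reward model and every constant below are our illustrative assumptions, not the paper's formulation:

```python
import math
import random

def reward(n_rf):
    """Toy objective: concave SE gain minus a linear power cost."""
    se_gain = 10.0 * math.log2(1 + n_rf)   # diminishing returns per RF chain
    power_cost = 3.0 * n_rf                # each active chain draws power
    return se_gain - power_cost

def epsilon_greedy_train(actions, episodes=500, eps=0.2, seed=0):
    """Estimate action values by incremental averaging; return the best action."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    pulls = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(actions)            # explore
        else:
            a = max(q, key=q.get)              # exploit current estimate
        pulls[a] += 1
        q[a] += (reward(a) - q[a]) / pulls[a]  # incremental mean update
    return max(q, key=q.get)

# Candidate numbers of active RF chains: 1 through 8.
best_n_rf = epsilon_greedy_train(actions=list(range(1, 9)))
```

The agent settles on an intermediate number of chains rather than the maximum, mirroring the paper's point that dynamically adjusting active RF chains balances SE against power.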
Show Figures

Figure 1
<p>Channel estimation in OFDM systems using a DNN [<a href="#B61-jsan-14-00020" class="html-bibr">61</a>].</p>
Full article ">Figure 2
<p>A representation of the DNN training in mm-wave sets for training [<a href="#B50-jsan-14-00020" class="html-bibr">50</a>].</p>
Full article ">Figure 3
<p>Massive MIMO system, the BS receiver employs 1-bit ADC converters with DL [<a href="#B32-jsan-14-00020" class="html-bibr">32</a>].</p>
Full article ">Figure 4
<p>SE versus the number of iterations.</p>
Full article ">Figure 5
<p>SE versus factor.</p>
Full article ">Figure 6
<p>EE versus factor.</p>
Full article ">Figure 7
<p>SE versus SNR.</p>
Full article ">Figure 8
<p>EE versus SNR.</p>
Full article ">Figure 9
<p>Reward versus SNR.</p>
Full article ">Figure 10
<p>SE versus the number of antennas.</p>
Full article ">Figure 11
<p>SE versus a sample of an mm-wave channel.</p>
Full article ">Figure 12
<p>EE versus time step for memory and processing optimization.</p>
Full article ">Figure 13
<p>SE versus time step for memory and processing optimization.</p>
Full article ">
21 pages, 2061 KiB  
Article
Hardware Acceleration of Division-Free Quadrature-Based Square Rooting Approach for Near-Lossless Compression of Hyperspectral Images
by Amal Altamimi and Belgacem Ben Youssef
Sensors 2025, 25(4), 1092; https://doi.org/10.3390/s25041092 - 12 Feb 2025
Abstract
Recent advancements in hyperspectral imaging have significantly increased the acquired data volume, creating a need for more efficient compression methods for handling the growing storage and transmission demands. These challenges are particularly critical for onboard satellite systems, where power and computational resources are limited, and real-time processing is essential. In this article, we present a novel FPGA-based hardware acceleration of a near-lossless compression technique for hyperspectral images by leveraging a division-free quadrature-based square rooting method. In this regard, the two division operations inherent in the original approach were replaced with pre-computed reciprocals, multiplications, and a geometric series expansion. Optimized for real-time applications, the synthesis results show that our approach achieves a high throughput of 1611.77 Mega Samples per second (MSps) and a low power requirement of 0.886 Watts on the economical Cyclone V FPGA. This results in an efficiency of 1819.15 MSps/Watt, which, to the best of our knowledge, surpasses recent state-of-the-art hardware implementations in the context of near-lossless compression of hyperspectral images. Full article
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)
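The division-free idea the abstract describes, replacing a division with a precomputed reciprocal plus multiplications and a truncated geometric series, can be sketched as follows. The choice of expansion point and the use of three series terms are illustrative, not the paper's parameters:

```python
def reciprocal_no_div(d, d0, inv_d0, terms=3):
    """Approximate 1/d using only multiply/add, given inv_d0 = 1/d0.

    Write d = d0 * (1 + e); then 1/d = inv_d0 / (1 + e), and the factor
    1/(1 + e) expands as the geometric series 1 - e + e**2 - e**3 + ...
    """
    e = (d - d0) * inv_d0  # e = d/d0 - 1, computed without division
    series, term = 1.0, 1.0
    for _ in range(terms):
        term *= -e         # next alternating power of e
        series += term
    return inv_d0 * series

# Example: approximate 1/1.03 from the stored reciprocal of d0 = 1.0.
approx = reciprocal_no_div(d=1.03, d0=1.0, inv_d0=1.0)
# truncating after three terms leaves a residual of order e**4
```

Because the series converges only for |e| < 1, a hardware implementation would pick the stored reciprocal nearest the operand so that e stays small and a few terms suffice.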
Show Figures

Figure 1
<p>Flow diagram of the near-lossless compression of HSI employing the quadrature-based square rooting method.</p>
Full article ">Figure 2
<p>Geometric construction of a square with the same area <span class="html-italic">x</span> using the quadrature of rectangle ABCD.</p>
Full article ">Figure 3
<p>Skewed distribution of <math display="inline"><semantics> <msub> <mi>s</mi> <mn>0</mn> </msub> </semantics></math> across the range of <math display="inline"><semantics> <mrow> <mo form="prefix">sin</mo> <mi>θ</mi> </mrow> </semantics></math> values for <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>∈</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> <mo>−</mo> <mn>1</mn> <mo>]</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Near-lossless compression of CASI (t0477f06, U) band 70.</p>
Full article ">Figure 5
<p>Near-lossless compression of AIRS (granule 16, U) band 208.</p>
Full article ">Figure 6
<p>Near-lossless compression of AVIRIS Yellowstone (sc10, C) band 106.</p>
Full article ">Figure 7
<p>Block diagram showing the computation process of the initial estimate of the square root value, <math display="inline"><semantics> <msub> <mi>s</mi> <mn>0</mn> </msub> </semantics></math>.</p>
Full article ">Figure 8
<p>A nine-stage pipeline employed to bypass the two division operations in the quadrature-based method.</p>
Full article ">Figure 9
<p>Example illustrating the computation of the index value for the seed (<math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>) within the set corresponding to a <math display="inline"><semantics> <mrow> <mo form="prefix">sin</mo> <mi>θ</mi> </mrow> </semantics></math> of 0.006. A value of 2 for this index is obtained by summing the ones up to bit position 6.</p>
Full article ">