

Eng. Proc., 2024, Volume 68: ITISE 2024

The 10th International Conference on Time Series and Forecasting

Gran Canaria, Spain | 15–17 July 2024

Volume Editors:
Olga Valenzuela, University of Granada, Spain
Fernando Rojas, University of Granada, Spain
Luis Javier Herrera, University of Granada, Spain
Hector Pomares, University of Granada, Spain
Ignacio Rojas, University of Granada, Spain

Number of Papers: 65
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The ITISE 2024 (10th International Conference on Time Series and Forecasting) seeks to provide a discussion forum for scientists, engineers, educators and students about the latest ideas and [...]

8 pages, 455 KiB  
Proceeding Paper
Explaining When Deep Learning Models Are Better for Time Series Forecasting
by Martín Solís and Luis-Alexander Calvo-Valverde
Eng. Proc. 2024, 68(1), 1; https://doi.org/10.3390/engproc2024068001 - 27 Jun 2024
Cited by 1 | Viewed by 703
Abstract
There is a knowledge gap regarding the conditions that explain why one method has better forecasting performance than another. Specifically, this research aims to find the factors that can influence deep learning models to work better with time series. We generated linear regression models to analyze whether 11 time series characteristics influence the performance of deep learning models versus statistical models and other machine learning models. For the analyses, 2000 time series from the M4 competition were selected. The results show findings that help explain why a pretrained deep learning model outperforms other kinds of models. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Average MAPE percentage change between the deep learning models and the other models.
11 pages, 1006 KiB  
Proceeding Paper
The Cassandra Method: Dystopian Visions as a Basis for Responsible Design
by Sarah Diefenbach and Daniel Ullrich
Eng. Proc. 2024, 68(1), 2; https://doi.org/10.3390/engproc2024068002 - 27 Jun 2024
Viewed by 421
Abstract
Innovative technologies often have unforeseen negative consequences on an individual, societal, or environmental level. To minimize these, the Cassandra method aims to foresee such negative effects by systematically investigating dystopian visions. Starting with the activation of a (self-)critical mindset, the next steps are collecting a maximum number of negative effects and assessing their relevance. Finally, the envisioned impairments are used to improve the product concepts in a responsible way. This paper broadly outlines the method and its applications during product development and research, and reports on experiences from an expert workshop. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Sketch of the Cassandra method.
Figure 2: Cassandra diagram of negative effects for the example discussed in the expert workshop, a smart heating system that automatically reduces the temperature when nobody is at home. Note: circle size represents the severity of effects.
8 pages, 571 KiB  
Proceeding Paper
Forecasting Methods for Road Accidents in the Case of Bucharest City
by Cristina Oprea, Eugen Rosca, Ionuț Preda, Anamaria Ilie, Mircea Rosca and Florin Rusca
Eng. Proc. 2024, 68(1), 3; https://doi.org/10.3390/engproc2024068003 - 27 Jun 2024
Viewed by 512
Abstract
This paper aims to emphasize the necessity for policy reform, improvements in vehicle design and enhanced public awareness through the projection of future trends in road accidents, injuries and fatalities. The statistical methods used in this study are the empirical laws of Smeed and Andreassen. The main gap the authors identify is the lack of a standardized methodology for choosing the appropriate forecasting method in the area of traffic accidents. In the present study, the authors propose such a methodology, which can be generalized and is suitable for use in any urban agglomeration at the micro and macro level. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: The functional diagram of the methodology used.
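Smeed's empirical law, one of the two methods named in the abstract above, has a well-known closed form. The sketch below uses the classic 1949 calibration and invented inputs, not the Bucharest data or the paper's own fit; the paper may recalibrate the coefficients.

```python
# Smeed's law: D = 0.0003 * (n * p^2)^(1/3), where D is annual road deaths,
# n the number of registered vehicles and p the population.
# Classic 1949 coefficients; purely illustrative, not the paper's calibration.

def smeed_deaths(vehicles: float, population: float) -> float:
    """Expected annual road fatalities under Smeed's law."""
    return 0.0003 * (vehicles * population ** 2) ** (1 / 3)

# Invented figures, not Bucharest data:
print(round(smeed_deaths(1_000_000, 2_000_000)))  # → 476
```

Andreassen's model follows the same fitting pattern with country-specific exponents on vehicles and population.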
8 pages, 880 KiB  
Proceeding Paper
Deep Learning for Crime Forecasting of Multiple Regions, Considering Spatial–Temporal Correlations between Regions
by Martín Solís and Luis-Alexander Calvo-Valverde
Eng. Proc. 2024, 68(1), 4; https://doi.org/10.3390/engproc2024068004 - 28 Jun 2024
Viewed by 565
Abstract
Crime forecasting has gained popularity in recent years; however, the majority of studies have been conducted in the United States, which may result in a bias towards areas with a substantial population. In this study, we generated different models capable of forecasting the number of crimes in 83 regions of Costa Rica. These models include the spatial–temporal correlation between regions. The findings indicate that the architecture based on an LSTM encoder–decoder achieved superior performance. This model performed best in regions where crimes occurred more frequently; however, in more secure regions, its performance decayed. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: CD diagram of the mean absolute percentage error. Note: tcn = Temporal Convolutional Network, lstm = long short-term memory, encDec = LSTM encoder–decoder, arimaxCorr = SARIMAX, transEnc = encoder transformer.
Figure 2: CD diagram of the mean absolute error. Note: same abbreviations as in Figure 1.
Figure 3: The average MAPE of the regions, according to the LSTM encoder–decoder with and without (baseline) spatial–temporal correlation.
9 pages, 282 KiB  
Proceeding Paper
Optimizing Social Security Contributions for Spanish Self-Employed Workers: Combining Data Preprocessing and Ensemble Models for Accurate Revenue Estimation
by Luis Palomero, Vicente García and José Salvador Sánchez
Eng. Proc. 2024, 68(1), 5; https://doi.org/10.3390/engproc2024068005 - 28 Jun 2024
Viewed by 455
Abstract
The Real Decreto-ley 13/2022 has amended the framework governing the calculation of Social Security contributions for Spanish self-employed workers. This framework obliges taxpayers to project their annual revenue; in the case of deviations, they may effectively lend money for free or pay unexpected taxes at the end of the year. To address this issue, the Declarando firm has developed an algorithm to recommend the optimal contributions, which combines a Simple Moving Average forecasting method with an offset-adjustment technique. This paper examines how this strategy can be improved by cleaning the input data and combining different forecasts using an ensemble-based approach. After experimentally testing various alternatives, a promising strategy involves employing a median-based ensemble on preprocessed data. Although this ensemble-based approach significantly reduces forecasting errors, the improvements are diluted when the predictions are combined with the offset-adjustment process. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Averaged ranked values of the preprocessing methods evaluated on the training subset.
Figure 2: Average ranked values of the four variations and the SMA results used as a baseline.
Figure 3: Averaged ranked values of the preprocessing methods compared on the training subset.
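The median-based ensemble mentioned in the abstract above can be illustrated in a few lines. This is a hedged sketch: the component forecasters (SMA, naïve, drift) and the revenue figures are invented assumptions, not Declarando's algorithm or data.

```python
# Sketch of a median-based forecast ensemble over invented monthly revenue.
from statistics import mean, median

monthly_revenue = [1000, 1100, 900, 1200, 1050, 1150]

sma = mean(monthly_revenue[-3:])   # Simple Moving Average of the last 3 months
naive = monthly_revenue[-1]        # repeat the last observation
drift = monthly_revenue[-1] + (monthly_revenue[-1] - monthly_revenue[0]) / (len(monthly_revenue) - 1)

# The ensemble takes the median of the individual forecasts,
# which is robust to one badly misbehaving component.
ensemble = median([sma, naive, drift])
print(ensemble)  # → 1150
```

The paper's finding is that such a median ensemble on cleaned data beats the SMA alone, although the offset-adjustment step dilutes the gain.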
7 pages, 668 KiB  
Proceeding Paper
Studying LF and HF Time Series to Characterize Cardiac Physiological Responses to Mental Fatigue
by Alexis Boffet, Veronique Deschodt Arsac and Eric Grivel
Eng. Proc. 2024, 68(1), 6; https://doi.org/10.3390/engproc2024068006 - 28 Jun 2024
Viewed by 460
Abstract
Heart rate variability (HRV) has been widely used to evaluate the psychophysiological status of humans at rest as well as during cognitive tasks, for both healthy subjects and patients. Among the approaches used to assess cardiac autonomic control from HRV analysis, biomarkers such as the power in the low and high frequencies (LF-HF) are often extracted from short-term recordings lasting 2 to 5 min. Although they correctly reflect the average psychophysiological state of a subject in a given situation, they fail to capture how cardiac autonomic control evolves over time. For this reason, we suggest investigating the LF-HF biomarkers over time to identify mental fatigue and determine different physiological profiles. The following step consists of defining the set of parameters that characterise the LF-HF time series and that can be interpreted easily by physiologists. In this work, polynomial models are considered to describe the trends of the LF-HF time series. The latter are then decomposed into decreasing (d) and increasing (i) parts. Finally, the proportions of the i parts of the polynomial trends of the LF and HF powers over time are combined with classically used metrics to define individual profiles in response to mental fatigue. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Experimental protocol. INITIAL: perceived mental fatigue (VAS); FINAL: perceived mental fatigue (VAS) and perceived workload (NASA-TLX).
Figure 2: Illustration of the approach with patient 29. The HF process (in blue) is modelled by a 5th-degree polynomial (in red), with resulting signed parameter iHF = 0.311. A − sign is added before the percentage value to indicate a decreasing trend.
Figure 3: Evolution of perceived mental fatigue before and after CVT. Score 1: "absence of mental fatigue"; Score 10: "extreme level of mental fatigue"; ***: p-value < 0.001.
Figure 4: K-means clustering performed on a dataset of 42 volunteers using the averages of RR, RMSSD, LF power, HF power, LF/HF ratio, and RCMSE calculated over the first and last 10 min of CVT, combined with the time-course markers iLF and iHF. For simplicity, the clusters are represented with their respective characteristics along each dimension (Dim1 and Dim2).
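The "proportion of increasing parts" marker described above can be computed from a fitted polynomial trend by checking the sign of its derivative along the recording. The sketch below is illustrative only: the coefficients, the unit time interval, and the sampling grid are all assumptions, not the study's data.

```python
# Sketch: fraction of the observation window during which a polynomial
# trend (coefficients given highest-degree first) is increasing.

def increasing_proportion(coeffs, t0=0.0, t1=1.0, n=1000):
    """Return the fraction of [t0, t1] where the polynomial's derivative is positive."""
    deg = len(coeffs) - 1
    deriv = [c * (deg - i) for i, c in enumerate(coeffs[:-1])]  # derivative, highest first

    def dpoly(t):                      # Horner evaluation of the derivative
        val = 0.0
        for c in deriv:
            val = val * t + c
        return val

    ts = [t0 + (t1 - t0) * k / (n - 1) for k in range(n)]
    return sum(dpoly(t) > 0 for t in ts) / n

# Trend x(t) = t^2 - t increases on (0.5, 1), so the proportion is about 0.5:
print(round(increasing_proportion([1.0, -1.0, 0.0]), 2))  # → 0.5
```

Applied to the LF and HF trend polynomials, this yields the iLF and iHF markers.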
9 pages, 11746 KiB  
Proceeding Paper
Annual Runoff Forecasting through Bayesian Causality
by Santiago Zazo, Jose-Luis Molina, Carmen Patino-Alonso, Fernando Espejo and Juan Carlos García-Prieto
Eng. Proc. 2024, 68(1), 7; https://doi.org/10.3390/engproc2024068007 - 28 Jun 2024
Viewed by 393
Abstract
This contribution focuses on the forecasting ability of Bayesian causality (BC) for annual runoff series. To that end, the time series was synthesized through a Bayesian network, in which probability propagation over time was performed. The analytical ability of BC identified the hidden logical structure of the hydrological records that describes their overall behavior. This allowed us to quantify the runoff, through a novel dependence matrix, as two fractions, one conditional on time (temporally conditioned runoff) and one not (temporally nonconditioned runoff). This conditionality allowed the development of two predictive models, one for each fraction, whose reliability was analyzed under a dual probabilistic and metrological approach. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Upper: case study locations. Lower: runoff time series and temporal correlograms for the Mijares (left) and Voltoya (right) case studies.
Figure 2: Methodological scheme applied.
Figure 3: DMG graphics. Mijares River (upper); Voltoya River, Serones reservoir (lower) [9,10].
Figure 4: Causal models, marginal dependence graphs. Mijares (left) and Voltoya–Serones reservoir (right). Graphs obtained using HUGIN Expert software version 7.3 [9,10].
Figure 5: Independence matrices (values are p-values) and dependence matrices (values are DRs); see Table 1 for the color code. Mijares (upper) and Voltoya–Serones reservoir (lower) [9,10].
Figure 6: Predictive models. F(x): cumulative probability functions [9,10].
7 pages, 2414 KiB  
Proceeding Paper
Towards Resolving the Ambiguity in Low-Field, All-Optical Magnetic Field Sensing with High NV-Density Diamonds
by Ludwig Horsthemke, Jens Pogorzelski, Dennis Stiegekötter, Frederik Hoffmann, Ann-Sophie Bülter, Sarah Trinschek, Markus Gregor and Peter Glösekötter
Eng. Proc. 2024, 68(1), 8; https://doi.org/10.3390/engproc2024068008 - 1 Jul 2024
Viewed by 541
Abstract
In all-optical magnetic field sensing using nitrogen-vacancy-center-rich diamonds, an ambiguity in the range of 0–8 mT can be observed. We propose a way to resolve this ambiguity using the magnetic-field-dependent fluorescence lifetime. We therefore recorded the frequency response of the fluorescence upon modulation of the excitation intensity in a frequency range of 1–100 MHz. The magnetic-field-dependent decay dynamics led to different response characteristics for magnetic fields below and above 3 mT, allowing us to resolve the ambiguity. We used a physics-based model function to extract fit parameters, which we used for regression, and compared it to an alternative approach purely based on an artificial neural network. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Schematic of the machine learning approach. A data set of labeled spectra at different magnetic fields up to 15 mT is captured and processed in two ways. The spectra consist of the magnitude and phase of the fluorescence from a frequency sweep of the optical excitation up to 100 MHz. They are processed either by fitting a model function with two degrees of freedom followed by a simple regression neural network, or by a more complex neural network trained on the raw spectra.
Figure 2: (a) Schematic of the optical and electrical setup used for frequency-domain measurements. (b) Response of the system at port 2 of the VNA to a frequency sweep at port 1 at B = 0.
Figure 3: Subset of measured frequency responses with magnitude |H_r| (a) and phase ∠H_r (b) at different B-fields, and corresponding fits of H_r with constant τ1,B=0 = 6.04 ns and τ2,B=0 = 11.89 ns.
Figure 4: (a) Magnetic-field-dependent changes in lifetimes from fits to the frequency responses at τ1,B=0 = 6.04 ns and τ2,B=0 = 11.89 ns. (b) Root-mean-square error of the predictions of the simple FCNN on the validation set as a function of the number of hidden nodes n_h. (c) Overlay of the predictions of the FCNN (n_h = 45) on the test set compared to the optimum linear relationship. (d) Differences between the predictions in (c) and the linear function. (e) Mean average error on the training and test sets during training of the FCNN.
Figure 5: (a) Overlay of the predictions of the FCNN (n_h,1 = 75, n_h,2 = 50, n_h,3 = 20) on the test set compared to the optimum linear relationship. (b) Differences between the predictions in (a) and the linear function. (c) Mean average error on the training and test sets during training of the FCNN.
8 pages, 942 KiB  
Proceeding Paper
Modeling a Set of Variables with Different Attributes on a Quantitative Dependent Variable: An Application of Dichotomous Variables
by Gerardo Covarrubias and Xuedong Liu
Eng. Proc. 2024, 68(1), 9; https://doi.org/10.3390/engproc2024068009 - 1 Jul 2024
Viewed by 378
Abstract
This study outlines the methodology employed to model the effect of a set of dichotomous variables, which represent attributes measured on a nominal scale, on a quantitative dependent variable measured on a ratio scale. This approach allows for the quantification of the impact of these attributes and their significance in shaping the behavior of the entity possessing them. The estimation method employed is ordinary least squares. However, it is crucial to note that interpreting the estimators in the resulting model requires a nuanced perspective, distinguishing it from the conventional interpretation of slope or rate of change in a classic model. To clarify, these estimators correspond to the average behavior of the dependent variable with respect to the binary characteristics, and the outcomes are consistent with the analysis of variance. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Model residuals.
Figure 2: Jarque–Bera test for the errors.
8 pages, 1495 KiB  
Proceeding Paper
A Python Module for Implementing Cointegration Tests with Multiple Endogenous Structural Breaks
by Abdulnasser Hatemi-J and Alan Mustafa
Eng. Proc. 2024, 68(1), 10; https://doi.org/10.3390/engproc2024068010 - 2 Jul 2024
Viewed by 595
Abstract
Testing for long-run relationships between time series variables with short-run adjustments is an integral part of many empirical studies nowadays. Allowing for structural breaks in the estimations is a pertinent issue within this context. The purpose of this paper is to provide a user-friendly module, created in Python, for implementing three residual-based cointegration tests with two unknown regime shifts. The timing of each shift is revealed endogenously. The software is easy to use via a Graphical User Interface (GUI). In addition to implementing the cointegration tests, the software also estimates the underlying parameters along with the standard errors and significance tests for the parameters. An application using real data is also provided to demonstrate how the software can be used. To the best of our knowledge, this is the first software component created in Python that implements cointegration tests with structural breaks. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: A view of the pseudocode for the PMCT2ES module.
Figure 2: A snapshot of the Graphical User Interface used to ease data entry into, and output generation from, the system [9].
Figure 3: One of the output formats generated by the module, in text form [9].
Figure 4: Chart of both break points for the gold and World Stock Price Index series.
8 pages, 1592 KiB  
Proceeding Paper
Big Data Techniques Applied to Forecast Photovoltaic Energy Demand in Spain
by J. Tapia-García, L. G. B. Ruiz, D. Criado-Ramón and M. C. Pegalajar
Eng. Proc. 2024, 68(1), 11; https://doi.org/10.3390/engproc2024068011 - 3 Jul 2024
Viewed by 466
Abstract
Renewable energies play an important role in our society's development, addressing the challenges presented by climate change. Specifically, in countries like Spain, technologies such as solar energy assume crucial significance, enabling the generation of clean energy. This study addresses the critical need to accurately predict photovoltaic (PV) energy demand in Spain. Using data collected from the Spanish Electricity System, four models (Linear Regression, Random Forest, Recurrent Neural Network, and LightGBM) were implemented, with adaptations for Big Data. The LR model proved unsuitable, while the LGBM emerged as the most accurate and timely performer. The incorporation of Big Data adaptations amplifies the significance of our findings, highlighting the effectiveness of the LGBM in forecasting PV energy demand with both accuracy and efficiency. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Overall trend of photovoltaic demand over a long period.
Figure 2: The cyclical period of the PV demand time series.
Figure 3: The lower (a) and upper (b) portions of the PV demand cycle.
10 pages, 6368 KiB  
Proceeding Paper
Detecting Trend Turning Points in PS-InSAR Time Series: Slow-Moving Landslides in Province of Frosinone, Italy
by Ebrahim Ghaderpour, Benedetta Antonielli, Francesca Bozzano, Gabriele Scarascia Mugnozza and Paolo Mazzanti
Eng. Proc. 2024, 68(1), 12; https://doi.org/10.3390/engproc2024068012 - 3 Jul 2024
Cited by 1 | Viewed by 509
Abstract
Detecting slow-moving landslides is a crucial task for mitigating potential risk to human lives and infrastructures. In this research, Persistent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) time series, provided by the European Ground Motion Service (EGMS), for the province of Frosinone in Italy are employed, and Sequential Turning Point Detection (STPD) is applied to them to estimate the dates when the displacement rates change. The estimated dates are classified based on the land cover/use of the province. Moreover, local precipitation time series are employed to investigate how precipitation rate changes might have triggered the landslides. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: The study region. (a) A map of Italy showing the study region in red, (b) a Google map of the province of Frosinone, and (c) the CORINE land cover/use map of the study region (100 m).
Figure 2: Spatiotemporal maps of STPD results for both ascending and descending PS-InSAR time series of EGMS.
Figure 3: The STPD trend results of two pairs of PS-InSAR time series whose TPs in calendar month (blue arrows) are shown in Figure 2.
Figure 4: Bar charts of TPs located within 20 km of the towns of Cassino (a), Sora (b), Frosinone (c), and Ausonia (d), classified based on the CORINE land use/cover map. Panel (e) shows the bar chart of TPs for all polygons susceptible to landslides.
Figure 5: Monthly precipitation bar charts and accumulated precipitation time series with STPD linear trend results for Cassino (a), Sora (b), Frosinone (c), and Ausonia (d).
9 pages, 2175 KiB  
Proceeding Paper
Measuring the Efficiency of Introducing Businesses’ Digitalization Elements over Time in Relation to Their Performance
by Jarmila Horváthová and Martina Mokrišová
Eng. Proc. 2024, 68(1), 13; https://doi.org/10.3390/engproc2024068013 - 3 Jul 2024
Viewed by 449
Abstract
The introduction of digitalization elements into the life of companies is significant in terms of achieving better economic results. The aim of the research was to determine the technical efficiency, as well as the efficiency change and technological change, in the digital transformation of companies in EU countries in relation to their performance. The Malmquist index was used to measure these parameters over time. The results of the research indicate the significance of the dynamic measurement of the efficiency of digital transformation. The results also point to the importance of evaluating the efficiency of the use of already established elements, as well as evaluating the introduction of new technological changes. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Development of medians of (<b>a</b>) E-commerce sales in EU countries and (<b>b</b>) cloud computing services in EU countries. Source: authors.</p>
Figure 2">
Figure 2
<p>Development of medians of (<b>a</b>) social media use by type, internet advertising in EU countries and (<b>b</b>) websites and functionalities in EU countries. Source: authors.</p>
Figure 3">
Figure 3
<p>Comparison of MI DEA results in analyzed years. Source: authors.</p>
Figure 4">
Figure 4
<p>Comparison of ECH results in analyzed years. Source: authors.</p>
Figure 5">
Figure 5
<p>Comparison of FS results in analyzed years. Source: authors.</p>
">
9 pages, 915 KiB  
Proceeding Paper
Evaluation of the University of Lagos Waste Generation Trend
by Charles A. Mbama, Austin Otegbulu, Iain Beverland and Tara K. Beattie
Eng. Proc. 2024, 68(1), 14; https://doi.org/10.3390/engproc2024068014 - 4 Jul 2024
Viewed by 624
Abstract
This study examines waste generation patterns at the University of Lagos (UoL), Nigeria, to inform decision-making towards improving the efficiency of the university’s management strategies in line with Sustainable Development Goal 12, target 12.5, which aims to reduce waste generation through prevention, reduction, recycling, and reuse by 2030. The moving average of the waste generation was studied using time series data. From October 2014 to October 2016, the UoL generated an average of 877.5 tons of waste every month, with the lowest observed value being 496.6 tons and the highest being 1250.5 tons. The trend result indicates a gradual decrease in the generation of waste over time. There is also a noticeable negative cyclical pattern with seasonal variations, where the highest generation point is observed in March and the lowest in June, in the latter half of the second quarter. Although there is a reduction in the amount of waste generated over time, it is crucial to persist in evaluating diverse waste management strategies that could further reduce the amount of waste generated in the case study area. Full article
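The moving-average trend analysis described above can be sketched as follows; the monthly tonnages here are illustrative placeholders, not the actual UoL data:

```python
import numpy as np

# A trailing moving average smooths month-to-month fluctuations so the
# underlying trend in waste generation becomes visible. Values are
# illustrative stand-ins for the monthly tonnage series.
waste = np.array([900.0, 1250.5, 1100.0, 850.0, 700.0, 496.6, 800.0, 950.0])

def moving_average(series, window=3):
    """Trailing moving average; returns one value per complete window."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

trend = moving_average(waste, window=3)
```

Each entry of `trend` averages three consecutive months, so the smoothed series is two points shorter than the input.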
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Map of University of Lagos, Akoka Campus, showing the location of the four campus zones (A–D) (source: Mbama et al., 2023 [<a href="#B17-engproc-68-00014" class="html-bibr">17</a>]).</p>
Figure 2">
Figure 2
<p>Monthly waste generation and moving average forecasting the trend from October 2014 to October 2016 in the Akoka Campus, University of Lagos.</p>
">
9 pages, 834 KiB  
Proceeding Paper
Modeling the Asymmetric and Time-Dependent Volatility of Bitcoin: An Alternative Approach
by Abdulnasser Hatemi-J
Eng. Proc. 2024, 68(1), 15; https://doi.org/10.3390/engproc2024068015 - 4 Jul 2024
Viewed by 724
Abstract
Volatility as a measure of financial risk is a crucial input for hedging, portfolio diversification, option pricing and the calculation of the value at risk. In this paper, we estimate the asymmetric and time-varying volatility for Bitcoin as the dominant cryptocurrency in the world market. A novel approach that explicitly separates the falling markets from the rising ones is utilized for this purpose. The empirical results have important implications for investors and financial institutions. Our approach provides a position-dependent measure of risk for Bitcoin. This is essential since the source of risk for an investor with a long position is the falling prices, while the source of risk for an investor with a short position is the rising prices. Thus, providing a separate risk measure in each case is expected to increase the efficiency of the underlying risk management in both cases compared to the existing methods in the literature. Full article
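One way to separate falling markets from rising ones, in the spirit of the cumulative partial-sum decomposition used in the asymmetric-volatility literature, is to split price changes into positive and negative components. A sketch on illustrative prices (not actual Bitcoin data):

```python
import numpy as np

# Split a price series into a component that accumulates only rises and
# one that accumulates only falls; the former matters to short positions,
# the latter to long positions. Prices below are illustrative.
price = np.array([100.0, 105.0, 103.0, 108.0, 102.0, 110.0])

dp = np.diff(price)
pos = np.concatenate(([0.0], np.cumsum(np.maximum(dp, 0.0))))  # cumulative rises
neg = np.concatenate(([0.0], np.cumsum(np.minimum(dp, 0.0))))  # cumulative falls

# The two components plus the initial price reconstruct the series exactly.
reconstructed = price[0] + pos + neg
```

A volatility model can then be fitted to each component separately, giving a position-dependent risk measure.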
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Time plot of the exchange rate for Bitcoin.</p>
Figure A1">
Figure A1
<p>Time plot of the exchange rate for the positive component of Bitcoin.</p>
Figure A2">
Figure A2
<p>Time plot of the exchange rate for the negative component of Bitcoin.</p>
">
7 pages, 1376 KiB  
Proceeding Paper
Modelling the Daily Concentration of Airborne Particles Using 1D Convolutional Neural Networks
by Ivan Gudelj, Mario Lovrić and Emmanuel Karlo Nyarko
Eng. Proc. 2024, 68(1), 16; https://doi.org/10.3390/engproc2024068016 - 4 Jul 2024
Viewed by 470
Abstract
This paper focuses on improving the prediction of the daily concentration of the pollutants, PM10 and nitrogen oxides (NO, NO2) in the air at urban monitoring sites using 1D convolutional neural networks (CNN). The results show that the 1D CNN model outperforms the other machine learning models (LSTM and Random Forest) in terms of the coefficients of determination and absolute errors. Full article
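The core operation of a 1D CNN is a kernel sliding along the time axis. A minimal numpy sketch of one such convolution; the kernel weights stand in for trained parameters and the pollutant values are illustrative:

```python
import numpy as np

# Valid 1D cross-correlation, as applied channel-wise in a Conv1D layer:
# each output is a weighted combination of a short window of the series.
def conv1d(series, kernel):
    k = len(kernel)
    return np.array([np.dot(series[i:i + k], kernel)
                     for i in range(len(series) - k + 1)])

no2 = np.array([30.0, 42.0, 38.0, 55.0, 47.0, 60.0, 52.0])  # illustrative daily values
kernel = np.array([0.25, 0.5, 0.25])  # stand-in for a learned filter

features = conv1d(no2, kernel)
```

A trained 1D CNN stacks many such filters (with nonlinearities) and learns the kernel weights from data.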
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>A city map of Graz with the five measurement sites marked [<a href="#B11-engproc-68-00016" class="html-bibr">11</a>].</p>
Figure 2">
Figure 2
<p>Test data NO<sub>2</sub> concentration time series plots for Graz Nord. The plots present a 7-day moving average (for better visibility). (<b>a</b>) Actual values and the values predicted by the 1D CNN model; (<b>b</b>) 1D CNN model prediction error.</p>
">
16 pages, 2534 KiB  
Proceeding Paper
Reservoir Neural Network Computing for Time Series Forecasting in Aerospace: Potential Applications to Predictive Maintenance
by Juan Manuel Rodríguez Riesgo and Juan Luis Cabrera Fernández
Eng. Proc. 2024, 68(1), 17; https://doi.org/10.3390/engproc2024068017 - 4 Jul 2024
Viewed by 703
Abstract
By coupling a reservoir neural network with a Grey Wolf optimization algorithm, the hyperparameter space of the system is explored to find the configuration best suited to forecasting the input sensor data from the NASA CMAPSS dataset. In such a framework, the application to the problem of predictive maintenance is considered. The necessary requirements for the system to generate satisfactory predictions are established, with specific suggestions as to how a forecast can be improved through reservoir computing. The obtained results are used to determine common rules that improve the quality of the predictions and to focus the optimization towards hyperparameter solutions that may allow for a faster approach to predictive maintenance. This research is a starting point for developing methods that could accurately inform on the remaining useful life of a component in aerospace systems. Full article
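A minimal echo state network illustrates the reservoir computing idea: a fixed random reservoir is driven by the input signal and only a linear readout is trained. The reservoir size, leak rate, spectral radius, and ridge penalty below are illustrative choices, not the hyperparameters found by the Grey Wolf optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 100, 1.0, 0.9

u = np.sin(np.linspace(0, 8 * np.pi, 400))  # toy "sensor" signal
W_in = rng.uniform(-0.5, 0.5, n_res)        # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho

# Drive the reservoir and collect its internal states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t, ut in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * ut)
    states[t] = x

# Train only the linear readout (ridge regression) to predict u[t+1].
X, y = states[:-1], u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Only `W_out` is learned; the reservoir itself stays fixed, which is what makes the hyperparameters (size, spectral radius, leak rate) so influential and worth optimizing.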
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>RC-GWO system flowchart. The sensor data enters as an input for pre-processing and hyperparameters optimization to find a compatible reservoir to generate the prediction upon.</p>
Figure 2">
Figure 2
<p>CMAPSS dataset Test 1, Unit 49 (from top to bottom): OS 1: Altitude, OS 2: Mach Number, Sensor 2: Total temperature at LPC outlet, Sensor 7: Total pressure at HPC outlet. Time units are in operating cycles.</p>
Figure 3">
Figure 3
<p>Internal states of a reservoir. <b>Left</b>: Internal states for each neuron. <b>Right</b>: histogram of the internal states. Results obtained at an arbitrary step during training. <b>Top</b>: a saturated reservoir. <b>Bottom</b>: an unsaturated reservoir.</p>
Figure 4">
Figure 4
<p>CMAPSS dataset Test 1, Unit 49 time series prediction for Set 3 (from <b>top</b> to <b>bottom</b>): OS 1: Altitude, OS 2: Mach Number, Sensor 2: Total temperature at LPC outlet and, Sensor 7: total pressure at HPC outlet. Time units are in operating cycles. Green: target signal; Blue: predicted signal. Results for the additional sets are available as <a href="#app1-engproc-68-00017" class="html-app">Supplementary Materials</a>.</p>
Figure 5">
Figure 5
<p>Zoom detail of the forecasting along the 10 initial cycles. Details are the same as in <a href="#engproc-68-00017-f004" class="html-fig">Figure 4</a>.</p>
">
8 pages, 278 KiB  
Proceeding Paper
Comparison of Inferential Methods for a Novel CMP Model
by Yuvraj Sunecher and Naushad Mamode Khan
Eng. Proc. 2024, 68(1), 18; https://doi.org/10.3390/engproc2024068018 - 4 Jul 2024
Viewed by 295
Abstract
In many real-life instances, time series of counts are often exposed to the dispersion phenomenon while at the same time being influenced by some explanatory variables. This paper takes into account these two issues by assuming that the series of counts follow an observation-driven first-order integer-valued moving-average structure (INMA(1)) where the innovation terms are COM-Poisson (CMP) distributed under a link function characterized by time-independent covariates. The second part of the paper constitutes estimating the regression effects, dispersion and the serial parameters using popular estimation methods, mainly the Conditional Least Squares (CLS), Generalized Method of Moments and Generalized Quasi-Likelihood (GQL) approaches. The performance of these estimation methods is compared via simulation experiments under different levels of dispersion. Additionally, the suggested model is used to examine time series accident data from actual incidents in several Mauritius locations. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
7 pages, 730 KiB  
Proceeding Paper
Foreign Exchange Forecasting Models: LSTM and BiLSTM Comparison
by Fernando García, Francisco Guijarro, Javier Oliver and Rima Tamošiūnienė
Eng. Proc. 2024, 68(1), 19; https://doi.org/10.3390/engproc2024068019 - 4 Jul 2024
Viewed by 623
Abstract
Knowledge of foreign exchange rates and their evolution is fundamental to firms and investors, both for hedging exchange rate risk and for investment and trading. The ARIMA model has been one of the most widely used methodologies for time series forecasting. Nowadays, neural networks have surpassed this methodology in many aspects. For short-term stock price prediction, neural networks in general, and recurrent neural networks such as the long short-term memory (LSTM) network in particular, perform better than classical econometric models. This study presents a comparative analysis of LSTM and BiLSTM models. There is evidence for an improvement in the bidirectional model for predicting foreign exchange rates. In this case, we analyse whether this efficiency is consistent in predicting different currencies as well as the bitcoin futures contract. Full article
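The difference between an LSTM and a BiLSTM is that the latter processes the sequence in both directions and concatenates the two hidden states per time step. A framework-free numpy sketch, with a simple non-gated recurrent cell standing in for the LSTM cell and random weights standing in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(4)
hidden, T = 8, 30
Wx = rng.normal(scale=0.3, size=hidden)          # input-to-hidden weights
Wh = rng.normal(scale=0.3, size=(hidden, hidden))  # hidden-to-hidden weights

def run_rnn(seq):
    """Run a simple recurrent cell over the sequence, returning all states."""
    h = np.zeros(hidden)
    out = []
    for u in seq:
        h = np.tanh(Wx * u + Wh @ h)
        out.append(h)
    return np.array(out)

rates = np.sin(np.linspace(0, 3 * np.pi, T))  # toy exchange-rate series
fwd = run_rnn(rates)
bwd = run_rnn(rates[::-1])[::-1]              # backward pass, re-aligned in time
bidir = np.concatenate([fwd, bwd], axis=1)    # (T, 2*hidden) bidirectional features
```

At every time step the bidirectional features encode both past and future context, which is the intuition behind the BiLSTM's reported edge.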
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Cell structure of an LSTM network. Source: “CC” by E. A. Santos. Licensed under CC BY-SA 4.0.</p>
Figure 2">
Figure 2
<p>BiLSTM structure. Source: [<a href="#B25-engproc-68-00019" class="html-bibr">25</a>]. Licensed under CC BY 4.0.</p>
">
8 pages, 336 KiB  
Proceeding Paper
Extraction and Forecasting of Trends in Cases of Signal Rank Overestimation
by Nina Golyandina and Pavel Dudnik
Eng. Proc. 2024, 68(1), 20; https://doi.org/10.3390/engproc2024068020 - 5 Jul 2024
Cited by 1 | Viewed by 356
Abstract
Singular spectrum analysis allows automated extraction of trends of arbitrary shape. Here, we study how the estimation of signal ranks influences the accuracy of trend extraction and forecasting. It is numerically shown that the trend estimates and their forecasting change only slightly if the signal rank is overestimated. If the trend is not of finite rank, the trend estimates are still stable, while forecasting may be unstable. The automated trend extraction method includes an important step for improving the separability of time series components to avoid their mixture. However, the better the separability improvement, the larger the forecasting variability, since the noise components become separated and can be similar to trend ones. Full article
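Basic SSA trend extraction embeds the series into a trajectory (Hankel) matrix, truncates its SVD, and averages anti-diagonals back into a series. A minimal sketch on simulated data; the window length and number of kept components are illustrative, and the separability-improvement step discussed above is omitted:

```python
import numpy as np

def ssa_trend(series, L=20, r=1):
    """Reconstruct a series from the r leading SVD components of its trajectory matrix."""
    N = len(series)
    K = N - L + 1
    X = np.column_stack([series[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                          # rank-r approximation
    # Anti-diagonal (Hankel) averaging back to a length-N series.
    out = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return out / cnt

t = np.arange(100, dtype=float)
rng = np.random.default_rng(1)
series = 0.05 * t + rng.normal(scale=0.2, size=100)  # linear trend + noise

trend = ssa_trend(series, L=20, r=2)  # a linear trend has rank 2
```

Choosing `r` larger than the true signal rank corresponds to the rank-overestimation scenario studied in the paper.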
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Flowchart of intelligent trend extraction and prediction.</p>
Figure 2">
Figure 2
<p>Example (<a href="#FD2-engproc-68-00020" class="html-disp-formula">2</a>): RMSE: AutoSSA (<b>top</b>) and AutoEOSSA (<b>bottom</b>).</p>
Figure 3">
Figure 3
<p>Example ‘Red wine’: AutoSSA (<b>top</b>) and AutoEOSSA (<b>bottom</b>).</p>
Figure 4">
Figure 4
<p>Example ‘Red wine’: Polynomial regression.</p>
">
10 pages, 2110 KiB  
Proceeding Paper
Forecasting Stock Market Dynamics Using Market Cap Time Series of Firms and Fluctuating Selection
by Hugo Fort
Eng. Proc. 2024, 68(1), 21; https://doi.org/10.3390/engproc2024068021 - 5 Jul 2024
Viewed by 982
Abstract
Evolutionary economics has been instrumental in explaining the nature of innovation processes and providing valuable heuristics for applied research. However, quantitative tests in this field remain scarce. A significant challenge is accurately estimating the fitness of companies. We propose estimating the financial fitness of a company from its market capitalization (MC) time series, using Malthusian fitness and the selection equation of evolutionary biology. This definition of fitness implies that all companies, regardless of their industry, compete for investors’ money through their stocks. The resulting fluctuating selection from market capitalization (FSMC) formula allows forecasting companies’ shares of total MC through this selection equation. We validate the method using the daily MC of publicly owned Fortune 100 companies over the period 2000–2021. Full article
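The selection equation referred to above is, in discrete time, a replicator-type update of market-cap shares: each firm's share grows in proportion to its fitness relative to the market average. A sketch with illustrative shares and fitness values (stand-ins for quantities estimated from MC series):

```python
import numpy as np

def step_shares(x, fitness):
    """One selection step: x_i' = x_i * w_i / sum_j x_j * w_j, with w = exp(fitness)."""
    w = np.exp(fitness)
    grown = x * w
    return grown / grown.sum()

x0 = np.array([0.5, 0.3, 0.2])           # initial MC shares (sum to 1)
fitness = np.array([0.02, 0.00, -0.02])  # per-step Malthusian fitness (illustrative)

x = x0
for _ in range(10):  # iterate the selection dynamics forward
    x = step_shares(x, fitness)
```

Shares remain normalized at every step, and firms with above-average fitness gain share at the expense of the others, which is the mechanism the FSMC forecast exploits.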
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p><b>Estimation of fitness: instantaneous vs. smoothed fitness.</b> Data corresponding to Apple (AAPL) for the second and third quarters of 2021. The rapidly varying black full line is the instantaneous fitness produced by Equation (6) for each day of the validation period. The thick gray line is the smoothed fitness, obtained through Equation (7) with a running time window of length <span class="html-italic">T</span><sub>T</sub> = 63 days (the number of market days of a quarter); it shows much smaller variations and slightly departs from the constant fitness value (red-dotted line) used by the FSMC forecasting.</p>
Figure 2">
Figure 2
<p><b>The absolute error of FSMC predictions for each firm over <span class="html-italic">T<sub>v</sub></span> = 252 days (a market year).</b> Each of the 252 × 78 cells corresponds to the absolute error for the forecasted day and company number <span class="html-italic">i</span> averaged over the 5536 − 2<span class="html-italic">T</span><sub>V</sub> = 5536 − 2 × 252 = 5032 validation instances (Equation (10)). The color code is as follows: blue indicates errors smaller than the value of the mean fraction, mean{<span class="html-italic">x<sub>i</sub></span>(<span class="html-italic">t</span>)} = 0.025.</p>
Figure 3">
Figure 3
<p><b>The absolute percentage errors yielded by the FSMC method for each firm over <span class="html-italic">T<sub>v</sub></span> = 21 days (a market month).</b> Each of the 21 × 78 cells corresponds to the percentage error for the forecasted day and company number <span class="html-italic">i</span> averaged over the 5536 − 2<span class="html-italic">T</span><sub>V</sub> = 5536 − 2 × 21 = 5494 validation instances (Equation (11)). The color code is as follows: blue indicates small average relative errors (&lt;5%), while red corresponds to large relative errors (&gt;20%).</p>
Figure 4">
Figure 4
<p>MAPE of FSMC forecast (Equation (13)) over <span class="html-italic">T</span><sub>V</sub> = 21 days (a month) for each firm.</p>
Figure 5">
Figure 5
<p><b>The evolution of the market caps of AIG and Fannie Mae from 2000 to 2021.</b> The inset is zoomed in on the corresponding fractions of both companies (filled) and the FSMC predictions (dashed and dotted). <span>$</span> corresponds to USD.</p>
">
7 pages, 812 KiB  
Proceeding Paper
Using Dichotomous Variables to Model Structural Changes in Time Series: An Application to International Trade
by Gerardo Covarrubias and Xuedong Liu
Eng. Proc. 2024, 68(1), 22; https://doi.org/10.3390/engproc2024068022 - 5 Jul 2024
Viewed by 349
Abstract
This research aimed to elucidate the methodology employed in econometric estimations by utilizing dichotomous variables. These variables served a dual purpose: firstly, they denoted an attribute designed to discern structural changes within a linear relationship, and secondly, they quantified the impact and statistical significance of one or more quantitative independent variables on a dependent variable in the presence of such structural changes. The core of this paper presents the methodology; additionally, an applied analysis of Mexico’s foreign trade is included. This analysis delved into estimating the impact and statistical significance of exports on economic growth across different periods, reflecting significant structural changes in the development of the Mexican economy. Full article
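The dummy-variable specification described above can be sketched with simulated data: a dichotomous variable D shifts both the intercept and the export coefficient after a hypothetical break date. The series below are simulated, not Mexico's actual trade data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, break_at = 80, 40
exports = np.linspace(1.0, 5.0, n)
D = (np.arange(n) >= break_at).astype(float)  # 1 after the structural break

# Simulated process: the export coefficient rises from 0.5 to 0.9 after the break.
gdp = 2.0 + 0.5 * exports + 0.4 * D * exports + rng.normal(scale=0.05, size=n)

# Regression with intercept, exports, and the dummy's level and slope interactions.
Z = np.column_stack([np.ones(n), exports, D, D * exports])
beta, *_ = np.linalg.lstsq(Z, gdp, rcond=None)
# beta[3] estimates the post-break change in the export coefficient.
```

A t-test on `beta[3]` (not shown) is what assesses whether the structural change in the export effect is statistically significant.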
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>CUSUM test.</p>
Figure 2">
Figure 2
<p>CUSUMQ test.</p>
Figure 3">
Figure 3
<p>Linear regression in three different periods.</p>
">
11 pages, 4292 KiB  
Proceeding Paper
A Global Deep Learning Perspective on Australia-Wide Monthly Precipitation Prediction
by Luyi Shen, Guoqi Qian and Antoinette Tordesillas
Eng. Proc. 2024, 68(1), 23; https://doi.org/10.3390/engproc2024068023 - 8 Jul 2024
Viewed by 389
Abstract
Gaining a deep understanding of precipitation patterns is beneficial for enhancing Australia’s adaptability to climate change. Driven by this motivation, we present a specific spatiotemporal deep learning model that integrates matrix factorization and temporal convolutional networks, along with essential year-month covariates and key climatic drivers, to analyze and forecast monthly precipitation in Australia. We name this the spatiotemporal TCN-MF method. Our approach employs the precipitation profiler-observation fusion and estimation (PPrOFusE) method for data input, synthesizing monthly precipitation readings from the gauge measurement of the Bureau of Meteorology (BoM), the JAXA Global Satellite Mapping of Precipitation (GSMaP), and the NOAA Climate Prediction Center Morphing (CMORPH) technique. The input dataset spans from April 2000 to March 2021 and covers 1391 Australian grid locations. To evaluate the model’s effectiveness, particularly in regions prone to severe flooding, we employ the empirical dynamic quantiles (EDQ) technique. This method ranks cumulative rainfall levels, enabling focused analysis on areas most affected by extreme weather events. Our assessment from April 2021 to March 2022 highlights the model’s proficiency in identifying significant rainfall, especially in flood-impacted locations. Through the analysis across various climatic zones, the spatiotemporal TCN-MF model contributes to the field of continent-wide precipitation forecasting, providing valuable insights that may enhance climate change adaptability strategies in Australia. Full article
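The temporal convolutional network component relies on dilated causal convolutions: the output at time t depends only on inputs at t, t−d, t−2d, ..., so no future information leaks into the forecast. A minimal sketch (the kernel and dilation values are illustrative):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """Left-padded dilated convolution: output[t] uses x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # zero-pad the past only
    return np.array([sum(kernel[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(10, dtype=float)
y = causal_dilated_conv(x, kernel=np.array([1.0, 1.0]), dilation=2)
# With this kernel, y[t] = x[t] + x[t-2] (zero before the series starts).
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN captures long seasonal dependencies.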
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>(<b>a</b>) Precipitation data for Australia in March 2022. From top to bottom: Fused, BoM, JAXA, and NOAA data. (<b>b</b>) Time series representation of climate drivers from April 2000 to March 2022. From top to bottom: Dipole Mode Index (DMI), Southern Annular Mode (SAM), and Southern Oscillation Index (SOI).</p>
Figure 2">
Figure 2
<p>Architecture of the spatiotemporal TCN-MF model. <b>Left</b>: The input and the techniques required for the global model, namely TCN and MF. The base TCN diagram is sourced from [<a href="#B26-engproc-68-00023" class="html-bibr">26</a>]. <b>Right</b>: The local TCN model, integrating transformed precipitation, seasonal and climate covariates, and global model outputs, is shown. By integrating both local and global models, a unique model is executed for each location. Consequently, the final output presents a holistic precipitation forecast across Australia.</p>
Figure 3">
Figure 3
<p>Overview of precipitation data for 1391 locations from April 2021 to March 2022. (<b>Top</b>) Fused precipitation data using the PPrOFusE method, representing observed rainfall. (<b>Bottom</b>) Monthly precipitation forecasts using the spatiotemporal TCN-MF Model.</p>
Figure 3 Cont.">
Figure 4">
Figure 4
<p>Central heatmap illustrating the ranking of 1391 cumulative monthly precipitation time series across Australia using the EDQ method. Surrounding line graphs display comparative analysis at eight selected locations, with the quantiles corresponding to <math display="inline"><semantics> <mrow> <mo>{</mo> <mn>0.001</mn> <mo>,</mo> <mn>0.503</mn> <mo>,</mo> <mn>0.953</mn> <mo>,</mo> <mn>0.965</mn> <mo>,</mo> <mn>0.971</mn> <mo>,</mo> <mn>0.983</mn> <mo>,</mo> <mn>0.991</mn> <mo>,</mo> <mn>0.996</mn> <mo>}</mo> </mrow> </semantics></math>. Observed rainfall is shown in black, while blue dashed lines represent forecasts from a seasonal VAR model, and red dashed lines indicate predictions from the spatiotemporal TCN-MF model. The x-axis spans the prediction period from April 2021 to March 2022.</p>
">
10 pages, 7610 KiB  
Proceeding Paper
Prediction of the Characteristics of Concrete Containing Crushed Brick Aggregate
by Marijana Hadzima-Nyarko, Miljan Kovačević, Ivanka Netinger Grubeša and Silva Lozančić
Eng. Proc. 2024, 68(1), 24; https://doi.org/10.3390/engproc2024068024 - 8 Jul 2024
Viewed by 446
Abstract
The construction industry faces the challenge of conserving natural resources while maintaining environmental sustainability. This study investigates the feasibility of using recycled materials, particularly crushed clay bricks, as replacements for conventional aggregates in concrete. The research aims to optimize the performance of both single regression tree models and ensembles of regression trees in predicting concrete properties. The study focuses on optimizing key parameters like the minimum leaf size in the models. By testing various minimum leaf sizes and ensemble methods such as Random Forest and TreeBagger, the study evaluates metrics including Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R2). The analysis indicates that the most influential factors on concrete characteristics are the concrete’s age, the amount of superplasticizer used, and the size of crushed brick particles exceeding 4 mm. Additionally, the water-to-cement ratio significantly impacts the predictions. The regression tree models showed optimal performance with an optimized minimum leaf size, achieving an RMSE of 4.00, an MAE of 2.95, an MAPE of 0.10, and an R2 of 0.96. Full article
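The role of the minimum-leaf-size parameter can be illustrated with a single regression-tree split: only splits that leave at least `min_leaf` samples on each side are admissible, so a larger value forces coarser (more regularized) partitions. The data below are synthetic stand-ins, not the concrete-mixture dataset of the paper:

```python
import numpy as np

def best_split(x, y, min_leaf):
    """Best single split by squared error, subject to a minimum leaf size."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, None)
    for i in range(min_leaf, len(xs) - min_leaf + 1):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (xs[i - 1] + xs[i]) / 2)  # midpoint threshold
    return best[1]

age = np.array([1.0, 3, 7, 14, 28, 56, 90, 120])        # illustrative curing age, days
strength = np.array([10.0, 14, 22, 30, 38, 41, 43, 44])  # illustrative strength, MPa

threshold = best_split(age, strength, min_leaf=2)
```

Raising `min_leaf` shrinks the set of admissible split points, which can move the chosen threshold entirely; full trees and ensembles apply this constraint recursively at every node.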
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Segmentation of the input space into distinct regions and the corresponding 3D regression surface represented within the framework of a regression tree [<a href="#B9-engproc-68-00024" class="html-bibr">9</a>].</p>
Figure 2">
Figure 2
<p>Creating regression tree ensembles using the Bagging algorithm [<a href="#B15-engproc-68-00024" class="html-bibr">15</a>].</p>
Figure 3">
Figure 3
<p>Assessment of the accuracy of RF and TB models based on the number of randomly selected splitting variables and minimum leaf size: (<b>a</b>) RMSE, (<b>b</b>) MAE, (<b>c</b>) MAPE, (<b>d</b>) R.</p>
Figure 3 Cont.">
Figure 4">
Figure 4
<p>Importance of predictors (input variables’ importance).</p>
Figure A1">
Figure A1
<p>Optimal regression tree model.</p>
">
9 pages, 1234 KiB  
Proceeding Paper
Enabling Diffusion Model for Conditioned Time Series Generation
by Frédéric Montet, Benjamin Pasquier, Beat Wolf and Jean Hennebert
Eng. Proc. 2024, 68(1), 25; https://doi.org/10.3390/engproc2024068025 - 8 Jul 2024
Viewed by 896
Abstract
Synthetic time series generation is an emerging field of study in the broad spectrum of data science, addressing critical needs in diverse fields such as finance, meteorology, and healthcare. In recent years, diffusion methods have shown impressive results for image synthesis thanks to models such as Stable Diffusion and DALL·E, defining the new state of the art. In time series generation, their potential exists but remains largely unexplored. In this work, we demonstrate the applicability and suitability of diffusion methods for time series generation on several datasets with a rigorous evaluation procedure. Our proposal, inspired by an existing diffusion model, achieved better performance than a reference model based on generative adversarial networks (GANs). We also propose a modification of the model to allow for guiding the generation with respect to conditioning variables. This conditioned generation is successfully demonstrated on meteorological data. Full article
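Diffusion models are trained against a fixed forward noising process with a closed form for q(x_t | x_0); the generator then learns to reverse it. A sketch of that forward process applied to a toy series (the schedule length and beta range are common illustrative choices, not necessarily those of the paper):

```python
import numpy as np

# DDPM-style variance schedule: alpha_bar_t is the cumulative product of (1 - beta_t),
# so q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Draw x_t directly from q(x_t | x_0) without simulating all steps."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

rng = np.random.default_rng(3)
x0 = np.sin(np.linspace(0, 4 * np.pi, 64))  # a toy "real" series
x_mid = q_sample(x0, 200, rng)              # partially noised
x_end = q_sample(x0, T - 1, rng)            # nearly pure Gaussian noise
```

Training pairs (x_t, noise) drawn this way are what the denoising network learns from; generation runs the learned reverse process from pure noise.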
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1
<p>Data pipelines for unconditional generation, where squared blocks represent train/test/ generated data, models, or scores, and rounded blocks represent processes. The pipeline to train and use the generator is illustrated on the top (white), while the pipelines for evaluation are on the bottom (yellow for DS and red for PS).</p>
Figure 2">
Figure 2
<p>Data pipelines for conditional generation and evaluation procedures. The conventions are similar as in <a href="#engproc-68-00025-f001" class="html-fig">Figure 1</a>.</p>
Figure 3">
Figure 3
<p>t-SNE visualizations of the real and generated data for the considered datasets, showing the distribution of the data in a 2D space.</p>
Figure 4">
Figure 4
<p>Averaged daily temperature in Bern (Switzerland) until the year 2050. Solid lines are ground-truth values, while dashed lines are data generated by the diffusion model.</p>
Figure 5">
Figure 5
<p>Averaged generated sequences on a selection of features, compared to the real sequences. Standard deviation is represented by the shaded area.</p>
">
8 pages, 1557 KiB  
Proceeding Paper
Estimation and Prediction of Cereal Production Using Normalized Difference Vegetation Index Time Series (Sentinel-2) Data in Central Spain
by César Sáenz, Alfonso Bermejo-Saiz, Víctor Cicuéndez, Tomás Pugni, Diego Madruga, Alicia Palacios-Orueta and Javier Litago
Eng. Proc. 2024, 68(1), 26; https://doi.org/10.3390/engproc2024068026 - 8 Jul 2024
Viewed by 445
Abstract
Estimating production in cereal fields gives farmers information for improving management in their following campaigns and avoiding losses. The main objective of this work was to estimate grain production in cereals (wheat and barley) in the 2019 and 2020 campaigns in three provinces of Central Spain. The model was based on predicting the maximum values of the Sentinel-2 Normalized Difference Vegetation Index (NDVI) time series with ARIMA and multiple linear regression models. The highest correlations were found between grain yield and two variables: five-month cumulative rainfall and maximum greenness (NDVImax). Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
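The yield model above regresses grain yield on predictors such as NDVImax and cumulative rainfall. As an illustrative sketch of the least-squares step (not the authors' fitted model; the NDVImax and yield numbers below are hypothetical), the single-predictor case looks like:

```python
def linear_fit(x, y):
    """Ordinary least squares for y ≈ a + b * x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical NDVImax values and grain yields (kg/ha) for a few plots
ndvi_max = [0.55, 0.62, 0.70, 0.78]
yield_kg_ha = [2100.0, 2600.0, 3200.0, 3750.0]
intercept, slope = linear_fit(ndvi_max, yield_kg_ha)
predicted = intercept + slope * 0.65  # estimated yield for NDVImax = 0.65
```

The paper's model extends this to multiple predictors, with NDVImax itself forecast by an ARIMA model before the regression is applied.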
Figure 1: Study area: different Sentinel-2 tiles: Burgos (30TVM), Palencia (30TUN), and Soria (30TWM).
Figure 2: Climodiagrams of the study areas: (a) Burgos, (b) Palencia, and (c) Soria.
Figure 3: Workflow for estimating cereal production using NDVImax.
Figure 4: Flowchart of the Box–Jenkins methodology.
Figure 5: Time series of observed and predicted NDVI, with NDVImax for 2019 and 2020.
Figure 6: Observed and estimated yields (kg/ha) for 2019 in the provinces of Burgos (A,D), Palencia (B,E), and Soria (C,F), compared to the observed and predicted NDVI for 2020 in the provinces of Burgos (G,J), Palencia (H,K), and Soria (I,L). The estimated yields were obtained from the multiple linear regression.
9 pages, 1594 KiB  
Proceeding Paper
Exploring Optimal Strategies for Small Hydro Power Forecasting: Training Periods and Methodological Variations
by Duarte Lopes, Isabel Preto and David Freire
Eng. Proc. 2024, 68(1), 27; https://doi.org/10.3390/engproc2024068027 - 9 Jul 2024
Viewed by 446
Abstract
This study investigates optimal training intervals for small hydro power regression models, crucial for accurate forecasts in diverse conditions, particularly focusing on Portugal’s small hydro portfolio. Utilizing a regression model based on kernel density estimation, historical hourly production values, and calendar variables, forecasts are generated. Various approaches, including dynamic time warping (DTW), “K-Means Alike,” and traditional K-means clustering, are assessed for determining the most effective historical training periods. Results highlight the “K-Means Alike” approach, which, with a 2-month training period, outperforms conventional methods, offering enhanced accuracy while minimizing computational resources. Despite promising results, DTW exhibits increased computational demands without consistent performance superiority. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
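Of the historical-period selection methods compared above, dynamic time warping is the easiest to make concrete. A minimal sketch of the classic DTW dynamic program (a generic implementation, not the paper's code):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    computed with the classic O(len(a) * len(b)) dynamic program."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW allows elastic alignment, a time-stretched copy of a production profile has distance zero from the original, which is why it can match similar hydrological periods at different speeds, at the cost of the quadratic runtime noted in the abstract.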
Figure 1: Graphic visualization of the KDE model learning and predicting stages.
Figure 2: Graphic visualization of how the DTW dynamic history operates.
Figure 3: Graphic visualization of how K-means alike operates: creation of the AD Info centroid and data point selection.
Figure 4: Graphic visualization of how traditional K-means works: creation of available data clusters and AD Info assignment.
6 pages, 344 KiB  
Proceeding Paper
Assessing the Preprocessing Benefits of Data-Driven Decomposition Methods for Phase Permutation Entropy—Application to Econometric Time Series
by Erwan Pierron and Meryem Jabloun
Eng. Proc. 2024, 68(1), 28; https://doi.org/10.3390/engproc2024068028 - 9 Jul 2024
Viewed by 414
Abstract
This paper investigates the efficacy of various data-driven decomposition methods combined with Phase Permutation Entropy (PPE) to form a promising complexity metric for analyzing time series. PPE is a variant of classical permutation entropy (PE), while the examined data-driven decomposition methods include Empirical Mode Decomposition (EMD), Variational Mode Decomposition (VMD), Empirical Wavelet Transform (EWT), Seasonal and Trend decomposition using Loess (STL), and Singular Spectrum Analysis-based decomposition (SSA). To our knowledge, this combination has not been explored yet. Our primary aim is to assess how these preprocessing methods affect PPE’s ability to capture temporal structural complexities within time series. This evaluation encompasses the analysis of both simulated and econometric time series. Our results reveal that combining SSA with PPE provides superior results for measuring the complexity of seasonal time series. Conversely, VMD combined with PPE proves to be the least advantageous strategy. Overall, our study illustrates that combining data-driven preprocessing methods with PPE offers greater benefits than combining them with traditional PE in quantifying time series complexity. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
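Classical permutation entropy, the baseline that PPE refines, reduces a series to the relative ordering of values in short windows. A minimal sketch of the generic Bandt–Pompe PE (not the paper's PPE variant):

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe): Shannon entropy of
    the ordinal patterns of length `order`, divided by log(order!)."""
    counts = Counter()
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = [x[i + j * delay] for j in range(order)]
        # Ordinal pattern: argsort of the window values
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(order))
```

A monotonic series yields entropy 0 (a single pattern), while white noise approaches 1 (all order! patterns equally likely); PPE replaces the raw values with the instantaneous phase before extracting patterns.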
Figure 1: Data-driven signal decomposition combined with PE (left column) and PPE (right column) applied to 4 signal-generating models: WGN, an AR(16) model, and 2 random Fourier series (RFS) with reduced fundamental frequencies, namely ν0 = 0.013 and 0.0013. The simulated results are based on the average of 100 Monte Carlo simulations for each model. IMF PE and PPE (y-axis) are displayed as a function of the reduced frequency (x-axis) of each obtained IMF.
Figure 2: Data-driven signal decomposition combined with PE (left column) and PPE (right column) applied to real data: (a,b) EHHT1 data and (c,d) M3Forecast data from the M3 competition dataset.
7 pages, 764 KiB  
Proceeding Paper
Smart Belay Device for Sport Climbing—An Analysis about Falling
by Heiko Oppel and Michael Munz
Eng. Proc. 2024, 68(1), 29; https://doi.org/10.3390/engproc2024068029 - 9 Jul 2024
Viewed by 2805
Abstract
Although sport climbing is generally considered a comparatively safe sport, the injuries that do occur can be more severe than in other sports. With its increased popularity over the last few decades, sport climbing has started to attract people who are unfamiliar with the sport. The belayer, in particular, is responsible for the climber’s safety. For this reason, addressing the issue of safe belaying is necessary. Using an instrumented belay device, we recorded a climber’s fall into the rope with varying configurations. By combining these data with machine learning algorithms, we were able to categorize the recordings and extract information about the severity of the fall for the climber. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
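Locating the impact in a recorded fall is essentially a threshold-crossing problem on the acceleration trace. A crude, hypothetical sketch (not the authors' detection method; the trace and threshold values are illustrative):

```python
def detect_impact(signal, threshold):
    """Return the index of the first sample whose absolute value reaches
    the threshold, or None if the signal never crosses it."""
    for i, value in enumerate(signal):
        if abs(value) >= threshold:
            return i
    return None

# Hypothetical acceleration trace (in g): quiet phase, then the fall impact
trace = [0.1, 0.2, 0.1, 3.8, 2.4, 0.9]
impact_index = detect_impact(trace, 2.0)  # index 3
```

Features extracted around such a detected impact (peak force, duration, decay) are what the classifiers described above consume.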
Figure 1: Comparison between the two conducted configurations of dynamic and non-dynamic belaying. In case of a fall, the belayer is always pulled in the direction of the rope (first quickdraw), regardless of the type of belaying. The difference is marked by the movement of the belayer as soon as the fall takes place. From there, the belayer must jump in the direction of the rope; this is referred to as dynamic belaying. Without the additional jump, the belayer would be passively standing by, which is referred to as non-dynamic belaying.
Figure 2: Example of a fall with 0.5 m of slack in the system and a non-dynamic belayer. The timestamp of the impact force (green) and the start time (purple) are marked by vertical lines.
Figure 3: Visualization of the two fall configurations: slack of 0.5 m (a) and no slack (b). Each subgraph displays a summary of the resulting accelerations, once from dynamic and once from non-dynamic belayed falls. The lines highlighted in the legend represent the average over all of the recordings for each configuration; the shaded area around the curves represents the standard deviation.
Figure 4: Feature importance based on the permutation feature importance from the classification results using a support vector machine.
Figure 5: Visualization of the regression results using a support vector machine and manually extracted features. Subfigure (a) visualizes the predicted label over the true label, whereas subfigure (b) shows the predicted label over the absolute deviation from the true label.
10 pages, 333 KiB  
Proceeding Paper
Energy Efficiency Evaluation of Frameworks for Algorithms in Time Series Forecasting
by Sergio Aquino-Brítez, Pablo García-Sánchez, Andrés Ortiz and Diego Aquino-Brítez
Eng. Proc. 2024, 68(1), 30; https://doi.org/10.3390/engproc2024068030 - 9 Jul 2024
Viewed by 828
Abstract
In this study, the energy efficiency of time series forecasting algorithms is addressed in a broad context, highlighting the importance of optimizing energy consumption in computational applications. The purpose of this study is to compare the energy efficiency and accuracy of algorithms implemented in different frameworks, specifically Darts, TensorFlow, and Prophet, using the ARIMA technique. The experiments were conducted on a local infrastructure. The Python library CodeCarbon and the physical energy consumption measurement device openZmeter were used to measure the energy consumption. The results show significant differences in energy consumption and algorithm accuracy depending on the framework and execution environment. We conclude that it is possible to achieve an optimal balance between energy efficiency and accuracy in time series forecasting, which has important implications for developing more sustainable and efficient applications. This study provides valuable guidance for researchers and professionals interested in the energy efficiency of forecasting algorithms. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Average energy consumption (kWh) of the best models and libraries, as measured by openZmeter and CodeCarbon.
Figure 2: Comparison of RMSE and MAE across different models and libraries.
Figure 3: Comparison of CO2 emissions using different models and libraries.
9 pages, 636 KiB  
Proceeding Paper
Multimodal Model Based on LSTM for Production Forecasting in Oil Wells with Rod Lift System
by David Esneyder Bello Angulo and Elizabeth León Guzmán
Eng. Proc. 2024, 68(1), 31; https://doi.org/10.3390/engproc2024068031 - 10 Jul 2024
Viewed by 478
Abstract
This paper presents a novel multimodal recurrent model for time series forecasting leveraging LSTM architecture, with a focus on production forecasting in oil wells equipped with rod lift systems. The model is specifically designed to handle time series data with diverse types, incorporating both images and numerical data at each time step. This capability enables a comprehensive analysis over specified temporal windows. The architecture consists of distinct submodels tailored to process different data modalities. These submodels generate a unified concatenated feature vector, providing a holistic representation of the well’s operational status. This representation is further refined through a dense layer to facilitate non-linear transformation and integration. Temporal analysis forms the core of the model’s functionality, facilitated by a Long Short-Term Memory (LSTM) layer, which excels at capturing long-range dependencies in the data. Additionally, a fully connected layer with linear activation output enables one-shot multi-step forecasting, which is necessary because the input and output have different modalities. Experimental results show that the proposed multimodal model achieved the best performance in the studied cases, with a Mean Absolute Percentage Error (MAPE) of 8.2%, outperforming univariate and multivariate deep learning-based models, as well as ARIMA implementations, which yielded results with a MAPE greater than 9%. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
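The MAPE figures quoted above are the mean absolute percentage deviation of forecasts from actual production. A minimal sketch of the generic metric (the production numbers below are hypothetical):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error in percent; zero actuals are skipped
    to avoid division by zero."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

# Hypothetical daily production versus model forecasts
actual = [100.0, 200.0, 150.0]
forecast = [110.0, 180.0, 150.0]
error = mape(actual, forecast)  # (10% + 10% + 0%) / 3 ≈ 6.67%
```

On this scale, the multimodal LSTM's 8.2% versus the baselines' >9% is a relative error reduction of roughly one percentage point per forecast step.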
Figure 1: Representative images distributed over time. Dynacard image of 120 × 100 pixels on the left and valve test image of 120 × 200 pixels on the right. Each image source represents a different modality of the measured data.
Figure 2: Schematic representation of the multimodal time series forecasting model, demonstrating the concatenation of submodels inside the time-distributed layer corresponding to each modality in the dataset. In this specific case, two modalities are utilized, namely images and numerical data.
Figure 3: Time-distributed framework used to feed the submodels for each time step, then pass the output to the LSTM layers.
Figure 4: Example of prediction in a well. It is observed that the best result is obtained with the multimodal LSTM model.
Figure 5: Example of prediction in a well with low dispersion. A better fit of the ARIMA model is observed.
7 pages, 17163 KiB  
Proceeding Paper
Spectral Characteristics of Strong Ground Motion Time Series for Low to Medium Seismicity Regions with Deep Soil Atop Deep Geological Sediments
by Silva Lozančić, Borko Đ. Bulajić, Gordana Pavić, Ivana Bulajić and Marijana Hadzima-Nyarko
Eng. Proc. 2024, 68(1), 32; https://doi.org/10.3390/engproc2024068032 - 10 Jul 2024
Viewed by 332
Abstract
In this study, we examine the features of horizontal and vertical strong motion time series for sites with deep soil above deep geological deposits, in low-to-medium seismicity zones. New empirical regional equations were derived for spectral and peak ground acceleration attenuation based only on the strong-motion time series that were recorded in the north-western Balkans region. Results show that short-period horizontal spectral amplitudes at rock locations can be larger than those resulting from the combination of the deep soil and the deep geological deposits. However, the results also show that there is a significant increase in spectral amplitudes for larger vibration periods. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1: Pannonian Basin (top left), epicenters of Mw ≥ 3 regional earthquakes (top right), the investigated area, the epicenters of two recent damaging earthquakes, and the strongest historical earthquake close to Osijek [5].
Figure 2: Two typical geotechnical profiles and the geological map for the analyzed region.
Figure 3: Regional horizontal pseudo-acceleration spectral records at deep soil sites (sL = 2) and the corresponding empirical predictions based on the new scaling equations for horizontal ground motion [5].
Figure 4: Regional vertical pseudo-acceleration spectral records at deep soil sites (sL = 2) and the corresponding empirical predictions based on the new scaling equations for vertical ground motion [8].
Figure 5: Microzonation maps for 6 horizontal PSA amplitudes and probabilities of p in t years analogous to return periods of 95 years (top left), 475 years (top right), 975 years (bottom left), and 2475 years (bottom right), for the deep soil and deep geological sediments; the location for which the UHS were calculated is shown by full circles [5].
Figure 6: UHS for the location 45°32′ N, 18°23′ E for four different probabilities [5], and the Eurocode 8 (2004) spectra for Ground Type C and Type 2.
Figure 7: UHS vertical spectra for four distinct probabilities [8] and the corresponding Eurocode 8 (2004) spectra for Ground Type C and Type 2, for the location 45°32′ N, 18°23′ E.
10 pages, 678 KiB  
Proceeding Paper
Performance of an End-to-End Inventory Demand Forecasting Pipeline Using a Federated Data Ecosystem
by Henrique Duarte Moura, Els de Vleeschauwer, Gerald Haesendonck, Ben De Meester, Lynn D’eer, Tom De Schepper, Siegfried Mercelis and Erik Mannens
Eng. Proc. 2024, 68(1), 33; https://doi.org/10.3390/engproc2024068033 - 10 Jul 2024
Viewed by 518
Abstract
One of the key challenges for (fresh produce) retailers is achieving optimal demand forecasting, as it plays a crucial role in operational decision-making and dampens the Bullwhip Effect. Improved forecasts hold the potential to strike a balance between minimizing waste and avoiding shortages. Different retailers have partial views on the same products, which, when combined, can improve the forecasting of individual retailers’ inventory demand. However, retailers are hesitant to share all their individual data. Therefore, we propose an end-to-end graph-based time series forecasting pipeline using a federated data ecosystem to predict inventory demand for supply chain retailers. Graph deep learning forecasting can comprehend intricate relationships and seamlessly tunes into the diverse, multi-retailer data present in a federated setup. The system aims to create a unified data view without centralization, addressing technical and operational challenges, which are discussed throughout the text. We test this pipeline using real-world data across large and small retailers, and discuss the performance obtained and how it can be further improved. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
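The spatio-temporal graph model above propagates information between related products at each time step. The core message-passing idea can be sketched generically (a simple mean aggregation, not the authors' architecture; the adjacency matrix and feature values below are hypothetical):

```python
def aggregate_neighbours(adj, features):
    """One mean-aggregation step: each node averages its own feature vector
    with those of its neighbours (adj is a 0/1 adjacency matrix)."""
    out = []
    for i, row in enumerate(adj):
        group = [features[j] for j, edge in enumerate(row) if edge]
        group.append(features[i])  # include the node itself
        out.append([sum(col) / len(group) for col in zip(*group)])
    return out

# Hypothetical 3-product graph: product 0 is related to products 1 and 2
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
features = [[10.0], [20.0], [30.0]]  # e.g. last observed demand per product
smoothed = aggregate_neighbours(adj, features)  # [[20.0], [15.0], [20.0]]
```

Stacking such aggregation layers and feeding their outputs through a temporal model is the general shape of the spatio-temporal architecture in Figure 3.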
Figure 1: The BWE (red) generates an amplification of the inventory swings along the supply chain in response to changes in consumer demand. The purple curve shows how data sharing and enhanced forecasting damp the BWE in the whole chain (source: imec ICON project AI4Foodlogistics proposal).
Figure 2: RML rules describe how source data are mapped to RDF graphs using a common ontology. RML mapping engines execute the rules, and SPARQL endpoints expose the RDF graphs. An additional SPARQL endpoint (the Federator) combines the exposed RDF graphs to feed the forecasting model that generates order predictions. Retailers stay in control of their own data while contributing to a retailer-independent forecasting model.
Figure 3: Proposed spatio-temporal graph-based deep learning model.
Figure 4: A time-evolving graph in which four nodes {p0, p1, p2, p3} correspond to four products. Given the observations and graph snapshots in the past M time steps, where at each step each node pi has some feature values, we want to infer the values of these features for the next T′ time steps. The graph can be directed (as above) or undirected (no arrows).
Figure 5: Forecasting module runtime considering several setups.
14 pages, 5554 KiB  
Proceeding Paper
Short-Term Forecasting of Non-Stationary Time Series
by Amir Aieb, Antonio Liotta, Alexander Jacob and Muhammad Azfar Yaqub
Eng. Proc. 2024, 68(1), 34; https://doi.org/10.3390/engproc2024068034 - 10 Jul 2024
Viewed by 887
Abstract
Forecasting climate events is crucial for mitigating and managing risks related to climate change; however, the problem of non-stationarity in time series (NTS) arises, making it difficult to capture and model the underlying trends. This task requires a complex procedure to address the challenge of creating a strong model that can effectively handle the non-uniform variability in various climate datasets. In this work, we use a daily standardized precipitation index dataset as an example of NTS, whereby the heterogeneous variability of daily precipitation poses complexities for traditional machine-learning models in predicting future events. To address these challenges, we introduce a novel approach aiming to adjust the non-uniform distribution and simplify the detection of time lags using autocorrelation. Our study employs a range of statistical techniques, including sampling-based seasonality, mathematical transformation, and normalization, to preprocess the data to increase the time lag window. Through the exploration of linear and sinusoidal transformations, we aim to assess their impact on increasing the accuracy of forecasting models. The proposed approach effectively captures more than one year of time delay across all the seasonal subsets. Furthermore, improved model accuracy is observed, notably with K-Nearest Neighbors (KNN) and Random Forest (RF). This study underscores RF’s consistently strong performance across all the transformations, while KNN only demonstrates optimal results when the data have been linearized. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
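Time-lag detection in the approach above rests on the sample autocorrelation function. A minimal generic sketch (the seasonal test series below is hypothetical, not the SPI data):

```python
import math

def autocorrelation(x, lag):
    """Sample autocorrelation of sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

# Hypothetical monthly series with a 12-step seasonal cycle
series = [math.sin(2.0 * math.pi * t / 12.0) for t in range(120)]
peak_lag = max(range(1, 37), key=lambda lag: autocorrelation(series, lag))
```

Transformations that make the autocorrelation decay more slowly (as the linearization step does here) widen the window of usable lags available to the forecasting models.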
Figure 1: Algorithm for transforming non-stationary time series data.
Figure 2: Statistical description of daily precipitation (A) and standardized precipitation index obtained from the gamma and Pearson_3 models (B), followed by the test of homogeneity for both the training and testing datasets (C,D).
Figure 3: The flowchart summarizes the workflow of the proposed approach for time series data forecasting.
Figure 4: Autocorrelation analysis illustrating the time lag detection using various data transformation methods.
Figure 5: Results of training K-Nearest Neighbors (KNN) and Random Forest (RF) models to forecast the Standardized Precipitation Index (SPI), under different data transformation methods.
Figure 6: Results of testing the K-Nearest Neighbors (KNN) and Random Forest (RF) models for the standardized precipitation index (SPI) forecasting on the original scale.
Figure 7: Cross-validation analysis for SPI forecasting via K-Nearest Neighbors (KNN) and Random Forest (RF) using the maximum time delay. Coefficient of determination (R-squared); mean absolute error (MAE).
7 pages, 369 KiB  
Proceeding Paper
Modelling Explosive Nonstationarity of Ground Motion Shows Potential for Landslide Early Warning
by Michael Manthey, Guoqi Qian and Antoinette Tordesillas
Eng. Proc. 2024, 68(1), 35; https://doi.org/10.3390/engproc2024068035 - 11 Jul 2024
Viewed by 419
Abstract
This work applies the rarely seen explosive version of autoregressive modelling to a novel practical context: geological failure monitoring. This approach is more general than standard ARMA or ARIMA methods in that it allows the underlying data process to be explosively nonstationary, which is often the case in real-world slope failure processes. We develop and test our methodology on a case study consisting of high-quality (in situ) line-of-sight radar displacement data from a slope that undergoes a failure event. Specifically, we first optimally estimate the characteristic roots of the autoregressive processes underpinning the displacement time series preceding the failure at each monitoring location. We then establish and utilise a pivotal quantity for the autoregressive parameter ensemble to perform simulation-based hypothesis tests for the explosiveness of the corresponding true characteristic roots. Concluding that a true characteristic root becomes explosive at some significance level implies that the underlying displacement process is explosively nonstationary, and, hence, local geological instability is suspected at this significance level. We found that the actual location of failure (LOF) was identified well in advance of the time of failure (TOF) by flagging those locations where explosive root(s) were identified by our approach. This statistical feedback model for ground motion dynamics presents an alternative or complement to the velocity threshold approach to early warning of impending failure. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
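The notion of an explosive characteristic root can be illustrated with the simplest AR(1) case: the least-squares estimate of φ in x_t = φ·x_{t−1} + ε_t exceeds 1 when the process is explosive. A generic sketch (not the authors' estimator or pivotal-quantity test; the simulated series are hypothetical):

```python
import random

def ar1_coefficient(x):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + noise.
    |phi| < 1 indicates stationarity; phi > 1 indicates an explosive root."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# Simulate one stationary and one explosive AR(1) path
rng = random.Random(0)
stationary, explosive = [0.0], [1.0]
for _ in range(500):
    stationary.append(0.6 * stationary[-1] + rng.gauss(0.0, 1.0))
    explosive.append(1.05 * explosive[-1] + rng.gauss(0.0, 1.0))
```

Flagging monitoring locations whose estimated root exceeds 1 with statistical significance is the intuition behind the LOF identification described above.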
Figure 1: Mine-A representative cumulative displacement profiles (every 10th location). Note that there are several outlying series with drastic changes in displacement that do not fall within the LOF. Otherwise, two distinct regimes emerge: a relatively stable grouping of time series and a subset that seems to diverge exponentially.
Figure 2: Spatial maps of estimated roots over Mine-A locations, coloured according to the largest modulus of the characteristic roots estimated at each location, using the first (a) 200, (b) 1000, and (c) 1310 of the 1315 prefailure training times. (a) A clear ovular region of points identified as near-explosive already emerges in the centre-right. (b) Due to the gradual colouration, a large portion of the slope may appear explosive; however, the surrounding regions are typically slightly below a value of one and, hence, still stationary, and more or less only those points in the eventual LOF actually sometimes exceed a value of 1. (c) The ovular region of points identified as explosive is now quite stark.
Figure 3: Spatial maps of stationary versus explosive hypothesis test outcomes over Mine-A locations, coloured according to whether the simulation-based hypothesis testing scheme accepts or rejects the null hypothesis of stationarity at a confidence level of 95% (location-wise), using the first (a) 200, (b) 1000, and (c) 1310 of the 1315 prefailure training times. (a) The previously seen ovular region of explosive points has yet to emerge at this high threshold of confidence (however, it does so clearly at lower thresholds). (b) The eventual LOF is clearly emerging. (c) The ovular region of points identified as explosive is now clearly identified.
8 pages, 679 KiB  
Proceeding Paper
Cellular Automata Framework for Dementia Classification Using Explainable AI
by Siva Manohar Reddy Kesu, Neelam Sinha and Hariharan Ramasangu
Eng. Proc. 2024, 68(1), 36; https://doi.org/10.3390/engproc2024068036 - 11 Jul 2024
Viewed by 453
Abstract
The clinical dementia rating scale has been used for analyzing dementia severity based on cognitive impairments. Many researchers have introduced various statistical methods and machine learning techniques for the classification of dementia severity. Feature importance combined with a deep learning architecture can give a better analysis of dementia severity. A cellular automata (CA) framework has been proposed for the classification of cognitive impairment, and LIME has been used for explaining the local interpretability. Feature vectors for healthy and unhealthy classes have been converted to redistributed CA images. These CA images have been classified using a deep learning architecture, and promising results have been achieved. The GRAD-CAM and LIME explainers have captured the feature importance of the cognitive impairment in CA images. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
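The "CA image" idea, evolving feature-seeded rows under a cellular automaton rule and stacking the generations into a 2-D image for the CNN, can be sketched with an elementary CA (rule 110 and the single-seed initial row are our illustrative choices, not the paper's redistribution scheme):

```python
import numpy as np

def evolve_ca(initial_row, rule=110, steps=32):
    """Evolve a binary row under an elementary (Wolfram) CA rule and
    stack the generations into a 2-D image."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)])
    grid = [np.asarray(initial_row, dtype=int)]
    for _ in range(steps - 1):
        r = grid[-1]
        left, right = np.roll(r, 1), np.roll(r, -1)
        idx = (left << 2) | (r << 1) | right   # neighbourhood pattern 0..7
        grid.append(rule_bits[idx])
    return np.stack(grid)
```

Seeding a 16-cell row with a single 1 and evolving for 8 steps yields an 8x16 binary image that a CNN could consume.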
Figure 1
<p>Block diagram of the proposed CA framework-based dementia classification.</p>
Full article ">Figure 2
<p>Distribution of features in CA grid.</p>
Full article ">Figure 3
<p>Redistributed evolutionary CA image from features with diffusion rate of 20%.</p>
Full article ">Figure 4
<p>Block diagram of CNN architecture for CA image classifications.</p>
Full article ">Figure 5
<p>Predicted class and GRAD-CAM explainer for a sample CA image. (<b>a</b>) Class predicted as unhealthy for a CA image by the CNN classifier. Feature 1 (orientation) and feature 2 (judgment) carry high importance, represented in yellow, whereas feature 0 (memory), feature 3 (community affairs), and feature 4 (hobbies) carry low importance, represented in blue. (<b>b</b>) GRAD-CAM explanation for a low-cognitive-impairment CA image; the color intensity is high for the orientation and judgment features.</p>
Full article ">Figure 6
<p>Feature importance from the LIME explainer for a low-cognitive-impairment CA image, based on color intensity. The intensity is high for the orientation, judgment, and community affairs features, in decreasing order.</p>
Full article ">
14 pages, 390 KiB  
Proceeding Paper
JET: Fast Estimation of Hierarchical Time Series Clustering
by Phillip Wenig, Mathias Höfgen and Thorsten Papenbrock
Eng. Proc. 2024, 68(1), 37; https://doi.org/10.3390/engproc2024068037 - 11 Jul 2024
Viewed by 565
Abstract
Clustering is an effective, unsupervised classification approach for time series analysis applications that suffer a natural lack of training data. One such application is the development of jet engines, which involves numerous test runs and failure detection processes. While effective data mining algorithms exist for the detection of anomalous and structurally conspicuous test recordings, these algorithms do not perform any semantic labeling. So, data analysts spend many hours connecting the large amounts of automatically extracted observations to their underlying root causes. The complexity, number, and variety of extracted time series make this task hard not only for humans, but also for existing time series clustering algorithms. These algorithms either require training data for supervised learning, cannot deal with varying time series lengths, or suffer from exceptionally long runtimes. In this paper, we propose JET, an unsupervised, highly efficient clustering algorithm for large numbers of variable-length time series. The main idea is to transform the input time series into a metric space, then apply a very fast conventional clustering algorithm to obtain an effective but rather coarse-grained pre-clustering of the data; this pre-clustering serves to subsequently estimate the more accurate but also more costly shape-based distances of the time series and, thus, enables JET to apply a highly effective hierarchical clustering algorithm to the entire input time series collection. Our experiments demonstrate that JET is highly accurate and much faster than its competitors. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
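The four-stage pipeline named in the abstract (feature embedding, pre-clustering, distance-matrix estimation, hierarchical clustering) can be sketched roughly as follows. This is our own reconstruction under simplifying assumptions: a tiny deterministic k-means stands in for the fast pre-clustering, the centroid distance serves as the cheap cross-cluster estimate, and any callable (e.g., a DTW implementation) can be passed as the exact shape-based distance.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def _kmeans(X, k, iters=20):
    """Tiny deterministic k-means (farthest-point init), standing in for
    the fast conventional pre-clustering step."""
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in C], axis=0)
        C.append(X[np.argmax(d)])
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.array([X[lab == c].mean(axis=0) for c in range(k)])
    return lab, C

def jet_like_clustering(X, n_pre, n_final, exact_dist):
    """Sketch of the JET idea (our reconstruction, not the authors' code):
    pre-cluster cheaply, compute the costly shape-based distance only
    within pre-clusters, estimate it across pre-clusters from centroid
    distances, then run hierarchical clustering on the estimated matrix."""
    pre, C = _kmeans(X, n_pre)
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if pre[i] == pre[j]:
                d = exact_dist(X[i], X[j])                        # costly, done rarely
            else:
                d = float(np.linalg.norm(C[pre[i]] - C[pre[j]]))  # cheap estimate
            D[i, j] = D[j, i] = d
    return fcluster(linkage(squareform(D), method="average"),
                    t=n_final, criterion="maxclust")
```

The quadratic loop is illustrative only; the point is that `exact_dist` is evaluated only for within-cluster pairs.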
Figure 1
<p>The four parts of JET: feature embedding, pre-clustering, distance matrix estimation, and hierarchical clustering.</p>
Full article ">Figure 2
<p>Birch’s homogeneity score increases with an increasing number of clusters on a fixed sample dataset.</p>
Full article ">Figure 3
<p>Distance calculations for the exact HC and approximating JET.</p>
Full article ">Figure 4
<p>Rand Index (RI) scores of all algorithms on the UCR Time Series Classification Archive datasets grouped by datasets of varying lengths (<b>left</b>) and equal lengths (<b>right</b>) and ordered by median RI (on varying lengths).</p>
Full article ">Figure 5
<p>(<b>Left</b>): Runtime in seconds of well-performing algorithms (mean RI <math display="inline"><semantics> <mrow> <mo>&gt;</mo> <mn>0.5</mn> </mrow> </semantics></math>) on the UCR varying-length datasets ordered by median RI. (<b>Right</b>): Average runtime in seconds of <span class="html-italic">JET</span> and the standard <span class="html-italic">HC</span> algorithm on the UCR datasets with varying lengths plotted against numbers of time series per dataset.</p>
Full article ">Figure 6
<p>Rand Index (RI) scores vs. runtime measurements in seconds (on log scale) for two real-world Rolls-Royce datasets.</p>
Full article ">
11 pages, 5561 KiB  
Proceeding Paper
A System for Efficient Detection of Forest Fires through Low Power Environmental Data Monitoring and AI
by İpek Üremek, Paul Leahy and Emanuel Popovici
Eng. Proc. 2024, 68(1), 38; https://doi.org/10.3390/engproc2024068038 - 11 Jul 2024
Viewed by 634
Abstract
This study introduces a system that merges AI with low-power IoT (Internet of Things) technology to enhance environmental monitoring, with a specific focus on accurately predicting forest fires through time series analysis. Utilizing affordable sensors and wireless communication technologies like LoRa (Long Range), environmental data have been gathered. One of the key features of this approach is the comparison of the real-time local environmental data with meteorological service environmental data to ensure accuracy. This comparison informs a feedback loop that improves the model’s predictive accuracy. The research also delves into detailed time series analysis, incorporating the Autoregressive Integrated Moving Average (ARIMA) model to identify the best windows of opportunity for communication and to provide future forecasting. Finally, a decision tree model serves as the last step, providing a comprehensive assessment of fire risk due to its straightforward application and clarity. Validation of the fire detection component remains a critical future task to confirm its effectiveness and reliability. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
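The ARIMA forecasts with 95% confidence bands shown in the entry's figures can be illustrated with a stripped-down AR(1) stand-in (our own sketch, not the authors' model; a real deployment would fit a full ARIMA(p,d,q)):

```python
import numpy as np

def ar1_forecast(series, horizon):
    """Toy AR(1) forecaster: fit x_t = c + phi*x_(t-1) by least squares,
    iterate forward, and widen a 95% band by the accumulated innovation
    variance (the textbook h-step forecast variance for an AR(1))."""
    x = np.asarray(series, dtype=float)
    A = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    (c, phi), *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    resid = x[1:] - (c + phi * x[:-1])
    sigma2 = resid.var(ddof=2)
    preds, bands, last = [], [], x[-1]
    for h in range(horizon):
        last = c + phi * last
        var = sigma2 * sum(phi ** (2 * j) for j in range(h + 1))
        preds.append(last)
        bands.append(1.96 * np.sqrt(var))
    return np.array(preds), np.array(bands)
```

On a noiseless halving series the fitted model is exact, so the band collapses and the forecast continues the geometric decay.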
Figure 1
<p>Number of wildfires recorded in the Amazon between January and August of each year, 2013–2019.</p>
Full article ">Figure 2
<p>Total area burned by fires larger than 30 hectares in European countries most at risk of wildfires.</p>
Full article ">Figure 3
<p>Basic structure of fire detection system.</p>
Full article ">Figure 4
<p>Basic structure of a decision tree.</p>
Full article ">Figure 5
<p>Network architecture.</p>
Full article ">Figure 6
<p>Validation steps.</p>
Full article ">Figure 7
<p>Real-time sensor data and Met Éireann’s weather data alignment graph.</p>
Full article ">Figure 8
<p>Temperature data forecasting by ARIMA is shown with a solid red line representing predicted values and a pink shaded area showing the 95% confidence interval.</p>
Full article ">Figure 9
<p>Pressure data forecasting by ARIMA is shown with a solid red line representing predicted values and a pink shaded area showing the 95% confidence interval.</p>
Full article ">Figure 10
<p>(<b>Top</b>): the Autocorrelation Function (ACF) graph for temperature, showing a decline after the initial peak. (<b>Bottom</b>): the Autocorrelation Function (ACF) graph for pressure shows a decline after the initial peak, while the Partial Autocorrelation Function (PACF) shows a significant spike at the first lag.</p>
Full article ">Figure 11
<p>The path between the UCC Atmospheric Monitoring Station and the UCC Electrical and Electronics Building [<a href="#B17-engproc-68-00038" class="html-bibr">17</a>].</p>
Full article ">
5 pages, 887 KiB  
Proceeding Paper
New Studies on Birth, Death, Temperature Time Series and Their Correlation
by Arzu Sardarli
Eng. Proc. 2024, 68(1), 39; https://doi.org/10.3390/engproc2024068039 - 12 Jul 2024
Viewed by 327
Abstract
This article presents the preliminary results of my daily birth time series studies in eight Canadian provinces for 2011–2019. An extensive review of the available literature shows that researchers usually analyse the monthly birth time series, perhaps because of the unavailability of daily birth data. Statistics Canada started releasing daily birth data records for Canadian provinces and territories in 2011. Using the Fourier analysis of daily birth time series, I revealed new cycles in birth time series that, to my knowledge, were not observed in previous works. This research was conducted at the Regina and Saskatoon Research Data Centres (RDC) of Statistics Canada. It was supported by the University of Regina, the First Nations University of Canada and Statistics Canada. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
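The Fourier analysis used to reveal cycles in the daily series can be sketched as follows (our own illustration; the function and the synthetic weekly cycle in the usage note are assumptions, not Statistics Canada data):

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period (in units of dt) of the strongest cycle in a
    series: remove the mean, take the real FFT, skip the DC bin, and
    invert the frequency of the largest spectral amplitude."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + int(np.argmax(spec[1:]))   # skip the zero-frequency bin
    return 1.0 / freqs[k]
```

On a synthetic daily series with an exact weekly cycle, the function returns a period of 7 days.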
Figure 1
<p>FFT of monthly birth time series for Nova Scotia (<b>above</b>) and Ontario (<b>below</b>), 2011–2019.</p>
Full article ">Figure 1 Cont.
<p>FFT of monthly birth time series for Nova Scotia (<b>above</b>) and Ontario (<b>below</b>), 2011–2019.</p>
Full article ">Figure 2
<p>FFT of daily birth time series for Nova Scotia (<b>above</b>) and Ontario (<b>below</b>), 2011–2019.</p>
Full article ">Figure 2 Cont.
<p>FFT of daily birth time series for Nova Scotia and (<b>above</b>) Ontario (<b>below</b>), 2011–2019.</p>
Full article ">
5 pages, 985 KiB  
Proceeding Paper
Crisis and Youth Inactivity: Central and Eastern Europe during the Financial Crisis of 2008 and the COVID-19 Outbreak of 2020
by Nataša Kurnoga, Tomislav Korotaj and James Ming Chen
Eng. Proc. 2024, 68(1), 40; https://doi.org/10.3390/engproc2024068040 - 12 Jul 2024
Viewed by 515
Abstract
This paper analyzes eleven Central and Eastern European countries after the financial crisis of 2008 and the COVID-19 pandemic of 2020. It investigates the heterogeneity in the labor market among the selected countries based on youth inactivity, secondary education attainment, and the income share of the bottom fifty percent of the population. A hierarchical cluster analysis with Ward’s method and k-means clustering generated diverse cluster solutions. A comparative analysis of the four-cluster solutions for 2008 and 2020 showed multiple changes in the cluster composition. The joint groupings of geographically and historically close countries, such as the Baltics, the former Czechoslovakia, and the former Yugoslav republics of Croatia and Slovenia, were identified for 2008. Lithuania emerged as a singleton in 2020. Youth inactivity, educational levels, and income inequality reveal the status of young people in Central and Eastern Europe during these crises. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
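Ward's method, as applied here, can be sketched with SciPy on invented, standardised indicator rows (placeholders for illustration only, not the paper's data):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical standardised rows (youth inactivity, secondary education
# attainment, bottom-50% income share) for six fictitious countries.
X = np.array([[0.10, 0.20, 0.10], [0.20, 0.10, 0.20], [0.15, 0.15, 0.12],
              [2.00, 1.80, 1.90], [2.10, 2.00, 2.20], [1.90, 1.90, 2.00]])
Z = linkage(X, method="ward")            # Ward's minimum-variance criterion
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the dendrogram at two clusters recovers the two obvious groups, mirroring how the paper's dendrograms are cut into four-cluster solutions.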
Figure 1
<p>Dendrogram for eleven Central and Eastern European countries, 2008.</p>
Full article ">Figure 2
<p>Dendrogram for eleven Central and Eastern European countries, 2020.</p>
Full article ">
10 pages, 1876 KiB  
Proceeding Paper
Application of the Optimised Pulse Width Modulation (PWM) Based Encoding-Decoding Algorithm for Forecasting with Spiking Neural Networks (SNNs)
by Sergio Lucas and Eva Portillo
Eng. Proc. 2024, 68(1), 41; https://doi.org/10.3390/engproc2024068041 - 12 Jul 2024
Viewed by 418
Abstract
Spiking Neural Networks (SNNs) are recognised for processing spatiotemporal information with ultra-low power consumption. However, applying a non-efficient encoding-decoding algorithm can counter the efficiency advantages of the SNNs. In this sense, this paper presents one-step-ahead forecasting centered on the application of an optimised encoding-decoding algorithm based on Pulse Width Modulation (PWM) for SNNs. The validation is carried out with a sine-wave dataset, three UCI datasets, and one available real-world dataset. The results show the practical disappearance of the computational and energy costs associated with the encoding and decoding phases (less than 2% of the total costs) and very satisfactory forecasting results (MAE lower than 0.0357) for all datasets. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
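A toy version of PWM-style spike encoding (our own simplification, not the authors' optimised algorithm): the normalised signal is compared against a triangular carrier, and a spike is emitted at every crossing, so spike timing tracks amplitude.

```python
import numpy as np

def pwm_encode(signal, carrier_freq=8, fs=100):
    """Emit a spike index at every crossing between the normalised signal
    and a triangular carrier, so inter-spike widths track amplitude."""
    s = np.asarray(signal, dtype=float)
    t = np.arange(len(s)) / fs
    carrier = 4 * np.abs((t * carrier_freq) % 1 - 0.5) - 1   # triangle in [-1, 1]
    x = np.interp(s, (s.min(), s.max()), (-1, 1))            # normalise signal
    above = (x > carrier).astype(int)
    return np.flatnonzero(np.diff(above) != 0)               # crossing indices
```

A slow sine against an 8 Hz carrier yields a strictly increasing train of crossing indices.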
Figure 1
<p>Example of a SNN structure.</p>
Full article ">Figure 2
<p>Encoding process and input-output pairs formation.</p>
Full article ">Figure 3
<p>Forecasting measures applying <math display="inline"><semantics> <msub> <mi>V</mi> <mrow> <mi>o</mi> <mi>r</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>V</mi> <mrow> <mi>o</mi> <mi>p</mi> <mi>t</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>Comparison among the original time-series, the target and the decoded SNN output with ARCTIC dataset.</p>
Full article ">Figure 5
<p>Computational and energy costs distribution applying <math display="inline"><semantics> <msub> <mi>V</mi> <mrow> <mi>o</mi> <mi>r</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>V</mi> <mrow> <mi>o</mi> <mi>p</mi> <mi>t</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">
8 pages, 325 KiB  
Proceeding Paper
New Algorithm for Detecting Weak Changes in the Mean in a Class of CHARN Models with Application to Welding Electrical Signals
by Youssef Salman, Anis Hoayek and Mireille Batton-Hubert
Eng. Proc. 2024, 68(1), 42; https://doi.org/10.3390/engproc2024068042 - 12 Jul 2024
Viewed by 372
Abstract
In this paper, we propose a new automatic algorithm for detecting weak changes in the mean of a class of piece-wise CHARN models. Through a simulation experiment, we demonstrate its efficacy and precision in detecting weak changes in the mean and accurately estimating their locations. Furthermore, we illustrate the robust performance of our algorithm through its application to welding electrical signals (WES). Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
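For contrast with the proposed CHARN-based detector, the classical single change-point estimate for a mean shift is the extremum of the centred cumulative sum (a textbook CUSUM baseline, shown here only as a sketch):

```python
import numpy as np

def cusum_change_point(x):
    """Locate a single mean shift: the change-point estimate is where the
    centred cumulative sum is most extreme."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s))) + 1   # first index of the new regime
```

On a series that jumps from 0 to 1 at index 50, the estimate is exactly 50.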
Figure 1
<p>Behavior of the power when facing a change.</p>
Full article ">Figure 2
<p>Change-points estimates in WES corresponding to two different thresholds.</p>
Full article ">Figure 2 Cont.
<p>Change-points estimates in WES corresponding to two different thresholds.</p>
Full article ">
9 pages, 255 KiB  
Proceeding Paper
Detecting Short-Notice Cancellation in Hotels with Machine Learning
by Eleazar C-Sánchez and Agustín J. Sánchez-Medina
Eng. Proc. 2024, 68(1), 43; https://doi.org/10.3390/engproc2024068043 - 15 Jul 2024
Viewed by 504
Abstract
Cancellations play a critical role in the lodging industry. Considering the time horizon, cancellations placed close to check-in have a significant impact on hoteliers, who must respond promptly for effective management. In recent years, the introduction of personal name records (PNR) has brought innovative approaches to this domain, but short-notice cancellation prediction is still underdeveloped. Using real PNR data with more than 10k reservations provided by a four-star hotel, this research aims to combine fuzzy clustering with decision tree techniques and random forest under R software version 4.3.3 to forecast cancellations placed close to the entry day, slightly improving on the performance of the individual techniques. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
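The random forest stage of the combination described above can be sketched as follows (the feature names and the synthetic cancellation rule are invented placeholders, not the hotel's data; the paper itself works in R, while this sketch uses Python/scikit-learn):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
lead = rng.integers(0, 60, 300)          # days between booking and check-in
stay = rng.integers(1, 10, 300)          # length of stay
party = rng.integers(1, 5, 300)          # party size
X = np.column_stack([lead, stay, party])
y = (lead < 7).astype(int)               # synthetic rule: short lead -> cancel

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

On this deterministic toy rule, the forest reproduces the labels almost perfectly on its training data.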
11 pages, 576 KiB  
Proceeding Paper
Shadows of Resilience: Exploring the Impact of the Shadow Economy on Economic Stability
by Charalampos Agiropoulos, James Ming Chen, Thomas Poufinas and George Galanos
Eng. Proc. 2024, 68(1), 44; https://doi.org/10.3390/engproc2024068044 - 15 Jul 2024
Viewed by 591
Abstract
This study analyzes the shadow economy within the European Union and its influence on the economic resilience of member countries. Data spanning almost two decades and covering a broad spectrum provide a unique opportunity to examine the impact of the shadow economy on economic stability across various economic cycles. Regularization techniques such as Lasso and Automatic Relevance Determination (ARD) combat possible collinearity and overfitting arising from the inclusion of irrelevant and redundant variables. The shadow economy interacts with key indicators of economic resilience, such as GDP, national debt, and population, across different phases of economic stability and turbulence. The preliminary findings suggest a complex and varied interaction between the shadow economy and economic resilience. This study provides a valuable foundation for policies aimed at stability and sustainable economic growth. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
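The two regularisers named in the abstract can be illustrated on synthetic collinear data (random placeholders, not the paper's EU panel): both are expected to drive redundant coefficients toward zero.

```python
import numpy as np
from sklearn.linear_model import ARDRegression, Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 0] + 0.01 * rng.normal(size=100)   # redundant, collinear column
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 shrinkage zeroes irrelevant terms
ard = ARDRegression().fit(X, y)      # per-coefficient relevance priors
```

The irrelevant columns (indices 2 and 3) receive near-zero Lasso coefficients, while the informative ones survive.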
Figure 1
<p>Heatmap of Pearson’s correlation for the selected variables. *** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">
8 pages, 619 KiB  
Proceeding Paper
Robust Multitaper Tests for Detecting Frequency Modulated Signals
by Benjamin Ott, Glen Takahara and Wesley S. Burr
Eng. Proc. 2024, 68(1), 45; https://doi.org/10.3390/engproc2024068045 - 15 Jul 2024
Viewed by 408
Abstract
In this paper, we propose a new test for the detection of polynomial modulated signals, as well as an aggregated test based on the new test. The test statistics are developed under the multitaper spectral framework, and are designed to be robust to the choice of the multitaper order K. The proposed tests are based on a modification of a previous test statistic developed by the authors, denoted by F4. We review the F4 test, and discuss some of its shortcomings, then propose our two tests. We illustrate performance via simulations, and apply our tests to SoHO GOLF optical solar time series. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
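The multitaper spectral framework underlying the proposed tests averages eigenspectra over K Slepian tapers. A plain spectrum estimator is sketched below (our own illustration; the paper's F4-style test statistics are built on top of quantities like these, and robustness to the choice of K is exactly its concern):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, nw=4, k=7):
    """Average the eigenspectra from the first k discrete prolate
    spheroidal (Slepian) tapers with time-bandwidth product nw."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    tapers = dpss(len(x), nw, k)                  # shape (k, N), orthonormal
    eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eig.mean(axis=0)
```

A pure sinusoid at frequency 0.1 produces a spectral peak at (or within the taper bandwidth of) that frequency.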
Figure 1
<p>Detection probability of <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>;</mo> <mn>0.1</mn> <mo>,</mo> <mi>K</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in gray (dashed), <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>;</mo> <mn>0.1</mn> <mo>,</mo> <mi>K</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in gray, and <math display="inline"><semantics> <mrow> <msubsup> <mi>F</mi> <mn>4</mn> <mo>′</mo> </msubsup> <mrow> <mo>(</mo> <mn>1</mn> <mo>;</mo> <mn>0.1</mn> <mo>,</mo> <mi>K</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in black. A total of 10,000 simulations conducted with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2000</mn> </mrow> </semantics></math> for each simulation at a significance level of <math display="inline"><semantics> <mfrac> <mn>1</mn> <mi>N</mi> </mfrac> </semantics></math>. The signal is given by Equation (<a href="#FD1-engproc-68-00045" class="html-disp-formula">1</a>) with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mi>ϕ</mi> <mrow> <mo>(</mo> <mi>τ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>(</mo> <mn>0.001</mn> <mo>/</mo> <mrow> <mo>(</mo> <mi>N</mi> <mo>/</mo> <mn>2</mn> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mrow> <mo>(</mo> <mi>t</mi> <mo>−</mo> <mi>N</mi> <mo>/</mo> <mn>2</mn> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>f</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi>Z</mi> <mi>t</mi> </msub> <mo>}</mo> </mrow> </semantics></math> is a white noise process with mean 0 and variance 1.</p>
Full article ">Figure 2
<p>Estimate of the complementary CDF of the Aggregate test for <math display="inline"><semantics> <mrow> <mi mathvariant="script">K</mi> <mo>=</mo> <mo>{</mo> <mn>5</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>80</mn> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mi>N</mi> </mfrac> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2000</mn> </mrow> </semantics></math>, with <math display="inline"><semantics> <mrow> <mi>Simulations</mi> <mo>=</mo> </mrow> </semantics></math> 10,000. The time series were created using independent Gaussian white noise replicates.</p>
Full article ">Figure 3
<p>With <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2000</mn> </mfrac> </mrow> </semantics></math>, 10,000 simulations of <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi>X</mi> <mi>t</mi> </msub> <mo>}</mo> </mrow> </semantics></math> (Equation (<a href="#FD1-engproc-68-00045" class="html-disp-formula">1</a>)) with amplitude 0.5, frequency 0.1, <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mrow> <mo>(</mo> <mi>τ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <msub> <mi>m</mi> <mi>P</mi> </msub> <mfrac> <mi>N</mi> <mn>2</mn> </mfrac> </mfrac> <mfenced separators="" open="(" close=")"> <mi>t</mi> <mo>−</mo> <mfrac> <mi>N</mi> <mn>2</mn> </mfrac> </mfenced> </mrow> </semantics></math>. <math display="inline"><semantics> <msub> <mi>Z</mi> <mi>t</mi> </msub> </semantics></math> was created using Gaussian white noise for (<b>a</b>) and <math display="inline"><semantics> <msub> <mi>Z</mi> <mi>t</mi> </msub> </semantics></math> for (<b>b</b>) was from an <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>R</mi> <mo>=</mo> <mo>(</mo> <mn>0.5</mn> <mo>,</mo> <mn>0.3</mn> <mo>,</mo> <mo>−</mo> <mn>0.1</mn> <mo>)</mo> <mo>,</mo> <mi>M</mi> <mi>A</mi> <mo>=</mo> <mo>(</mo> <mn>0.6</mn> <mo>)</mo> </mrow> </semantics></math>, with innovations from a zero mean Gumbel distribution.</p>
Full article ">Figure 4
<p>(<b>a</b>) SoHO GOLF significant frequencies for <math display="inline"><semantics> <msubsup> <mi>F</mi> <mn>4</mn> <mo>′</mo> </msubsup> </semantics></math> in black and <math display="inline"><semantics> <msub> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> <mn>3</mn> </msub> </semantics></math> in gray at <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mi>N</mi> </mfrac> </mrow> </semantics></math>. (<b>b</b>) Expanded range of tapers for <math display="inline"><semantics> <msubsup> <mi>F</mi> <mn>4</mn> <mo>′</mo> </msubsup> </semantics></math> with Aggregate test results in black lines overlaid. (<b>c</b>) Expanded range of tapers for <math display="inline"><semantics> <msub> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> <mn>3</mn> </msub> </semantics></math> with Aggregate test results in black lines overlaid.</p>
Full article ">
8 pages, 2121 KiB  
Proceeding Paper
Optimizing Biogas Power Plants through Machine-Learning-Aided Rotor Configuration
by Andreas Heller, Héctor Pomares and Peter Glösekötter
Eng. Proc. 2024, 68(1), 46; https://doi.org/10.3390/engproc2024068046 - 16 Jul 2024
Viewed by 474
Abstract
The increasing demand for sustainable energy sources has intensified the exploration of biogas power plants as a viable option. In this research, we present a novel approach that leverages machine learning techniques to optimize the performance of biogas power plants through the strategic placement and configuration of rotors within the fermentation vessel. Our study involves the simulation of a diverse range of biogas power plant scenarios, each characterized by varying rotor locations and rotating speeds, influencing the agitation levels of the biogas substrate. The simulation results, encompassing multiple performance metrics, serve as input data for an artificial neural network (ANN). This ANN is trained to learn the intricate relationships between rotor placement, rotor speed, agitation levels, and overall system efficiency. The trained model demonstrates predictive capabilities, enabling the estimation of plant efficiency based on specific rotor configurations. The proposed methodology provides a tool for both optimizing existing biogas power plants and guiding engineers in the design and setup of new facilities. Our model aims to offer valuable insights for engineers in the initial planning stages of new biogas power plants, enabling them to make informed decisions that contribute to sustainable and efficient energy generation. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
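The surrogate network described in this entry (10 inputs for rotor placement and speed, hidden layers of 128 and 64 ReLU units, one linear output for the Normalized Agitation Efficiency, trained with Adam) can be sketched with scikit-learn; the training data below are random placeholders, not CFD results:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 10))   # stand-ins for rotor positions/speeds
y = X @ rng.uniform(size=10)      # stand-in agitation-efficiency target

# 10 -> 128 -> 64 -> 1, ReLU hidden activations, linear output, Adam.
model = MLPRegressor(hidden_layer_sizes=(128, 64), activation="relu",
                     solver="adam", max_iter=500, random_state=0)
model.fit(X, y)
pred = model.predict(X[:5])       # estimated efficiency for five setups
```

The trained model maps any candidate rotor configuration to a scalar efficiency estimate, which is the role it plays in the paper's design loop.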
Figure 1
<p>Comparison of fluid velocity development throughout a simulation. The simulation setup and the points of measurement (magenta) are shown in (<b>a</b>,<b>c</b>). The associated velocity developments over time at the respective locations are shown in (<b>b</b>,<b>d</b>).</p>
Full article ">Figure 2
<p>(<b>a</b>) Strategic placement of rotors within single case setups of one batch. (<b>b</b>) Model of a single case setup with the rotor placed near the vessel’s wall.</p>
Full article ">Figure 3
<p>Fully connected neural network model architecture used in this work. The input layer consists of 10 neurons, followed by two hidden layers with 128 and 64 neurons, respectively, and a single output neuron, which gives the estimated Normalized Agitation Efficiency. The Adam optimizer is used, and the activation function of each layer is a Rectified Linear Unit (ReLU), except for the output layer, which has linear activation.</p>
Full article ">Figure 4
<p>Root mean square error of training and test sets over 50 training epochs.</p>
Full article ">
8 pages, 337 KiB  
Proceeding Paper
A Hybrid Computer-Intensive Approach Integrating Machine Learning and Statistical Methods for Fake News Detection
by Livio Fenga
Eng. Proc. 2024, 68(1), 47; https://doi.org/10.3390/engproc2024068047 - 16 Jul 2024
Viewed by 467
Abstract
In this paper, we address the challenge of early fake news detection within the framework of anomaly detection for time-dependent data. Our proposed method is computationally intensive, leveraging a resampling scheme inspired by maximum entropy principles. It has a hybrid nature, combining a sophisticated machine learning algorithm augmented by bootstrapped versions of binomial statistical tests. In the presented approach, the detection of fake news through the anomaly detection system entails identifying sudden deviations from the norm, indicative of significant, temporary shifts in the underlying data-generating process. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
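The binomial component of the hybrid detector can be sketched with a plain upper-tail test (our own simplification; the paper bootstraps these tests, and the base rate and threshold below are invented parameters):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    anomalous one-step-ahead misses in n steps under the normal regime."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def flag_anomaly(misses, window, base_rate=0.05, alpha=0.01):
    """Flag a burst (a fake-news candidate) when the recent miss count is
    binomially incompatible with the base rate."""
    return binom_tail(misses, window, base_rate) < alpha
```

Eight misses in a 20-step window at a 5% base rate is flagged; a single miss is not.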
Figure 1
<p>Binomial distribution.</p>
Full article ">Figure 2
<p>Coherent (acceptable) and non-coherent (poor) predictions.</p>
Full article ">Figure 3
<p>Officially recognized fake news.</p>
Full article ">Figure 4
<p>Not all anomalies are fake news. Quantitative results should always be discussed by human analysts.</p>
Full article ">
11 pages, 337 KiB  
Proceeding Paper
Multi-Objective Optimisation for the Selection of Clusterings across Time
by Sergej Korlakov, Gerhard Klassen, Luca T. Bauer and Stefan Conrad
Eng. Proc. 2024, 68(1), 48; https://doi.org/10.3390/engproc2024068048 - 17 Jul 2024
Viewed by 390
Abstract
Nowadays, time series data are ubiquitous, encompassing various domains like medicine, economics, energy, climate science and the Internet of Things. One crucial task in analysing these data is clustering, aiming to find patterns that indicate previously undiscovered relationships among features or specific groups of objects. In this work, we present a novel framework for the clustering of multiple multivariate time series over time that utilises multi-objective optimisation to determine the temporal clustering solution for each time point. To highlight the strength of our framework, we conduct a comparison with alternative solutions using multiple labelled real-world datasets. Our results reveal that our method not only provides better results but also enables a comparison between datasets with regard to their temporal dependencies. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figures:
Figure 1: Selection process from an exemplary Pareto front. (a) Selection using a weight of 0.5 for temporal quality. (b) Selection using a weight of 0.8 for temporal quality.
Figure 2: Example of a simple selection process from a Pareto front.
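Figure 1 of this paper illustrates selecting one clustering from a Pareto front under different weights for temporal quality. A minimal sketch of such a weighted selection, assuming both objectives are normalised to [0, 1] with higher values better (the scoring rule and the example front are assumptions, not the authors' exact procedure):

```python
def select_from_pareto(front, w_temporal):
    """Pick one solution from a Pareto front of (clustering_quality,
    temporal_quality) pairs via a weighted sum; both objectives are
    assumed normalised to [0, 1], higher is better."""
    w_cluster = 1.0 - w_temporal
    return max(front, key=lambda s: w_cluster * s[0] + w_temporal * s[1])

# hypothetical front trading clustering quality against temporal quality
front = [(0.90, 0.10), (0.75, 0.50), (0.40, 0.80), (0.20, 0.95)]
print(select_from_pareto(front, w_temporal=0.5))   # balanced weighting
print(select_from_pareto(front, w_temporal=0.8))   # favours temporal stability
```

Raising the temporal weight pulls the selection toward solutions that are more stable across time points, which is the trade-off the two panels of Figure 1 depict.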
10 pages, 1062 KiB  
Proceeding Paper
Continual Learning for Time Series Forecasting: A First Survey
by Quentin Besnard and Nicolas Ragot
Eng. Proc. 2024, 68(1), 49; https://doi.org/10.3390/engproc2024068049 - 17 Jul 2024
Viewed by 979
Abstract
Deep learning has brought significant advancements in the field of artificial intelligence, particularly in robotics, imaging, sound processing, etc. However, a common major challenge faced by all neural networks is their substantial demand for data during the learning process. The required data must be both plentiful and stationary to ensure the proper functioning of standard models. Nevertheless, complying with these constraints is often impossible for many real-life applications because of dynamic environments. Indeed, modifications can occur in the distribution of the data or even in the goals to pursue within these environments. This is known as data and concept drift. Research in the field of continual learning seeks to address these challenges by implementing evolving models capable of adaptation over time. This notably involves finding a compromise in the plasticity/stability dilemma while taking into account material and computational constraints. Exploratory efforts are evident in all applications of deep learning (graphs, reinforcement learning, etc.), but to date, there is still a limited amount of work on time series, specifically in the context of regression and forecasting. This paper aims to provide a first survey of continual learning applied to time series forecasting. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figures:
Figure 1: Comparison of fine-tuning, lifelong learning, and joint training for sequential task learning A (blue) → B (green) → C (red) (Figure 11 of [24]).
Figure 2: Comparison of training time and average MSE on test datasets over 20 experiments for algorithms in the data and target domain scenario (Figures 1 and 2 of [41]).
Figure 3: Performance of algorithms as the number of tasks increases by 2 at each step in the task and data domain incremental scenario (Figures 3 and 4 of [41]).
17 pages, 851 KiB  
Proceeding Paper
A Machine Learning-Based Approach to Analyze and Visualize Time-Series Sentencing Data
by Eugene Pinsky and Kandaswamy Piranavakumar
Eng. Proc. 2024, 68(1), 50; https://doi.org/10.3390/engproc2024068050 - 17 Jul 2024
Viewed by 371
Abstract
Analyzing time-series sentencing data presents many challenges. The data have many dimensions and change over time, making it difficult to identify patterns and discuss their similarities over time. This work proposes a machine learning approach that associates patterns with clusters, allowing sentencing data to be represented as trajectories in the appropriate (time, cluster) space. We propose to use the Hamming distance between trajectories to measure the similarity of sentencing data across districts. For any offense, we can define the average Hamming distance, which has a simple interpretation as the average number of time periods during which sentencing patterns differ. We introduce simple statistical measures on trajectories to show similarities and changes in sentencing behavior over time. We illustrate our approach by analyzing sentencing data for narcotics and retail theft. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figures:
Figure 1: The raw data before processing.
Figure 2: The cleaned data after processing.
Figure 3: Distribution of data.
Figure 4: Narcotics cluster visualization.
Figure 5: Retail theft cluster visualization.
Figure 6: Narcotics district-wise cluster visualization.
Figure 7: Retail theft district-wise cluster visualization.
Figure 8: The average trajectories for narcotics and retail theft.
Figure 9: Sub-trajectories for years 1–5.
Figure 10: Sub-trajectories for years 6–10.
Figure 11: Side-by-side comparison for district D_3.
Figure 12: Side-by-side comparison for district D_5.
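The average Hamming distance between cluster trajectories described in the abstract can be sketched as follows. Each trajectory is one cluster label per time point, matching the (time, cluster) space the authors describe; the district labels below are hypothetical:

```python
def hamming_distance(traj_a, traj_b):
    """Number of time points at which two cluster-label trajectories differ."""
    assert len(traj_a) == len(traj_b)
    return sum(a != b for a, b in zip(traj_a, traj_b))

def average_hamming(trajectories):
    """Average pairwise Hamming distance: interpretable as the average
    number of periods in which two districts fall in different clusters."""
    n = len(trajectories)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(hamming_distance(trajectories[i], trajectories[j])
               for i, j in pairs) / len(pairs)

# hypothetical (time, cluster) trajectories for three districts over 5 years
d1 = [0, 0, 1, 1, 2]
d2 = [0, 0, 1, 2, 2]   # differs from d1 in year 4 only
d3 = [1, 1, 1, 2, 2]   # differs from d1 in years 1, 2 and 4
print(average_hamming([d1, d2, d3]))
```

The result is a single number per offense, directly comparable across offenses or time windows, which is what enables the side-by-side district comparisons shown in the figures.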
12 pages, 1446 KiB  
Proceeding Paper
Assessing Global Wildfire Dynamics and Climate Resilience: A Focus on European Regions Using the Fire Weather Index
by Ayat-Allah Bouramdane
Eng. Proc. 2024, 68(1), 51; https://doi.org/10.3390/engproc2024068051 - 18 Jul 2024
Viewed by 609
Abstract
Wildfires pose significant threats to ecosystems, human safety, and socio-economic stability, necessitating a deep understanding of fire-prone landscapes for effective management. This study assesses the temporal and spatial patterns of the Fire Weather Index (FWI), a crucial indicator of landscape flammability, with a particular focus on European regions. Historical FWI data from the European Forest Fire Information System (EFFIS) under the Copernicus Emergency Management Service (CEMS) are analyzed using tools such as the Climate Data Store (CDS) API. The results reveal spatial patterns, highlighting regions with heightened wildfire risk and those with reduced fire danger. Southern and Southeastern Europe face elevated danger, driven by factors like high temperatures, low humidity, and reduced precipitation, while Northwestern and Northeastern Europe exhibit lower risk due to milder conditions. The study further delves into the implications of these patterns on agrivoltaic systems, the distinct climatic and environmental factors influencing elevated FWI levels across various regions, and how the findings of this research can guide tailored wildfire management strategies for European areas. The findings inform resilient strategies for policymakers, land managers, and communities, contributing valuable insights for proactive and sustainable wildfire mitigation. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figures:
Figure 1: The global analysis of the Fire Weather Index (FWI) anomaly plot for 2022 reveals distinct patterns indicating regions with increased wildfire risk (i.e., FWI > 50, highlighted in red) and those experiencing a relative reduction in fire danger (highlighted in blue). Elevated fire risk is observed across continents, including parts of North and South America, Africa, Asia, and regions with arid climates, suggesting heightened susceptibility to wildfires due to factors such as high temperatures and reduced humidity. Conversely, temperate zones and areas with consistent precipitation, like Northern Europe and coastal South America, exhibit a relative reduction in fire danger. These patterns underscore the complex interplay between climatic conditions and vegetation characteristics, emphasizing the need for tailored wildfire management strategies on a global scale to address the diverse challenges associated with fire risk and resilience. Source: own elaboration based on Section 3.
Figure 2: The analysis of the Fire Weather Index (FWI) anomaly over Europe (top-left panel) for the year 2022 unveils distinctive patterns indicating regions with heightened wildfire risk (highlighted in red) and those experiencing a relative reduction in fire danger (highlighted in blue). In Southern and Southeastern Europe (top-right panel), the notable red regions suggest an increased number of days with FWI surpassing the threshold of 50, attributed to warmer temperatures, lower humidity, and decreased precipitation during the summer months. Complex topography, such as mountainous terrain, amplifies fire danger in these areas. Conversely, notable blue regions in Northwestern (bottom-left panel) and Northeastern (bottom-right panel) Europe indicate a reduction in FWI, potentially linked to more moderate temperatures, higher humidity, or increased precipitation. Regions with ample vegetation moisture content and climate resilience, particularly those with consistent precipitation, show reduced fire danger. This nuanced analysis underscores the importance of tailoring wildfire management strategies to the specific climatic and geographical characteristics of different European regions, supporting targeted interventions for effective risk mitigation. Source: own elaboration based on Section 3.
Figure 3: The weekly time series plots comparing Fire Weather Index (FWI) anomalies in 2022 with the reference period mean have provided valuable insights into the temporal patterns of fire danger across various European regions: Europe, Southwestern Europe, Northwestern Europe, Southeastern Europe, and Northeastern Europe. The x-axis represents weeks, showcasing mean FWI values, while shaded regions indicate the reference period's FWI variability. A solid line denotes the reference period mean, and shaded areas above and below represent the 10th to 90th percentile range and the minimum to maximum range during the reference period. These analyses identified specific weeks and regions susceptible to heightened fire risk and those exhibiting a reduced risk. Regions highlighted in red on the plots indicated an excess of FWI values in 2022 compared to the reference period mean, signifying heightened fire danger during those specific weeks. On the other hand, regions depicted in blue exhibited FWI values below the reference period mean, suggesting a relative reduction in fire danger during those weeks. The dynamic perspective offered by these weekly analyses enhances our ability to pinpoint temporal variations in fire risk, supporting proactive measures and emphasizing the importance of region-specific wildfire management strategies. Source: own elaboration based on Section 3.
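The weekly anomaly construction described in the caption of Figure 3 (the 2022 value minus the reference-period mean, plus a 10th–90th percentile band from the reference period) can be sketched as follows; the array shapes and the synthetic data are assumptions for illustration, not the EFFIS/CDS pipeline itself:

```python
import numpy as np

def weekly_fwi_anomaly(fwi_target, fwi_reference):
    """fwi_target: (52,) weekly mean FWI for the year of interest.
    fwi_reference: (n_years, 52) weekly mean FWI over the reference period.
    Returns the anomaly (target minus reference mean) and the 10th-90th
    percentile band of the reference period, week by week."""
    ref_mean = fwi_reference.mean(axis=0)
    p10, p90 = np.percentile(fwi_reference, [10, 90], axis=0)
    return fwi_target - ref_mean, (p10, p90)

# synthetic reference climate (30 years) and a uniformly hotter target year
rng = np.random.default_rng(0)
reference = rng.normal(20.0, 3.0, size=(30, 52))
target = reference.mean(axis=0) + 5.0
anomaly, (p10, p90) = weekly_fwi_anomaly(target, reference)
print(np.all(anomaly > 0), np.all(p10 < p90))
```

Weeks where the anomaly sits above the 90th-percentile band are the ones a plot like Figure 3 would highlight in red as heightened fire danger.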
9 pages, 781 KiB  
Proceeding Paper
Assessing Asymptotic Tail Independence: A Simulation Study
by Marta Ferreira
Eng. Proc. 2024, 68(1), 52; https://doi.org/10.3390/engproc2024068052 - 18 Jul 2024
Viewed by 426
Abstract
The occurrence of extreme values in one variable can trigger the same in other variables, making it necessary to assess the risk of contagion. The usual dependence measures, based on the central part of the data, typically fail to assess extreme dependence. Within the scope of extreme value theory (EVT), tail dependence measures have been developed, such as the Ledford and Tawn coefficient that we discuss here. This is a measure of residual dependence that is particularly important when analyzing the tail, where data are scarce. We consider different estimation methodologies and compare them in a simulation study. We finish with an application to real data. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figures:
Each of Figures 1–8 shows estimated sample means (solid line) for each threshold u, along with 95% Wald confidence intervals (dashed lines) and the true η as a horizontal line, of η̂_u^(H) (top-left), η̂_u^(GPD) (top-right), η̂_u^(emp) (bottom-left) and η̂_u^(beta) (bottom-right):
Figure 1: model AMH (η = 1/3), samples of size n = 1000.
Figure 2: model Frank (η = 1/2), n = 1000.
Figure 3: model BNormal (η = 3/4), n = 1000.
Figure 4: model Logistic (η = 1), n = 1000.
Figure 5: model AMH (η = 1/3), n = 100.
Figure 6: model Frank (η = 1/2), n = 100.
Figure 7: model BNormal (η = 3/4), n = 100.
Figure 8: model Logistic (η = 1), n = 1000.
Figure 9: Estimated RMSE for each threshold u, in samples of size n = 1000, of η̂_u^(H) (dotted line), η̂_u^(GPD) (dot-dash line), η̂_u^(emp) (solid line) and η̂_u^(beta) (dashed line), for models AMH (top-left), Frank (top-right), BNormal (bottom-left) and Logistic (bottom-right).
Figure 10: As Figure 9, for samples of size n = 100.
Figure 11: Daily close log-returns of the CAC40 (left), DAX (middle) and PSI20 (right) indexes between January 2020 and February 2024.
Figure 12: Scatter plots of filtered daily close log-returns: CAC40–DAX (left) and CAC40–PSI20 (right).
Figure 13: CAC40–DAX: estimates given by η̂_u^(H) (top-left), η̂_u^(GPD) (top-right), η̂_u^(emp) (bottom-left) and η̂_u^(beta) (bottom-right) for thresholds u between 0.5 and 0.99, with the respective 95% Wald confidence intervals.
Figure 14: CAC40–PSI20: as Figure 13.
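One standard way to estimate the Ledford and Tawn coefficient η is a Hill-type estimator applied to the structure variable T = min(1/(1−U), 1/(1−V)) built from rank-transformed margins, whose survival function decays like t^(−1/η). The sketch below is this generic estimator, not any of the specific estimators (H, GPD, emp, beta) compared in the paper, and the choice of k is illustrative:

```python
import numpy as np

def ledford_tawn_eta_hill(x, y, k):
    """Hill-type estimate of the Ledford-Tawn coefficient eta: transform
    the margins to (0, 1) via ranks, form the structure variable
    T = min(1/(1-U), 1/(1-V)), and apply the Hill estimator to the
    k largest values of T (eta is the tail index of T)."""
    n = len(x)
    rank = lambda v: (np.argsort(np.argsort(v)) + 1) / (n + 1.0)
    u, v = rank(x), rank(y)
    t = np.minimum(1.0 / (1.0 - u), 1.0 / (1.0 - v))
    t_desc = np.sort(t)[::-1]
    return float(np.mean(np.log(t_desc[:k])) - np.log(t_desc[k]))

# independent margins: the true coefficient is eta = 1/2
rng = np.random.default_rng(2)
x, y = rng.normal(size=20000), rng.normal(size=20000)
eta_hat = ledford_tawn_eta_hill(x, y, k=500)
print(eta_hat)  # for independent data the true value is 1/2
```

Values of η near 1 indicate asymptotic tail dependence, while η = 1/2 corresponds to near-independence in the tail, which is the distinction the simulation study probes.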
15 pages, 3374 KiB  
Proceeding Paper
Exploring Regional Determinants of Tourism Success in the Eurozone: An Unsupervised Machine Learning Approach
by Charalampos Agiropoulos, James Ming Chen, George Galanos and Thomas Poufinas
Eng. Proc. 2024, 68(1), 53; https://doi.org/10.3390/engproc2024068053 - 19 Jul 2024
Viewed by 388
Abstract
This paper presents an initial analysis of the factors influencing tourism success at the NUTS 2 regional level across the Eurozone from 2010 to 2019. Utilizing an extensive dataset that includes economic, demographic, and tourism-specific indicators, we employ unsupervised machine learning techniques, primarily K-means clustering and Principal Component Analysis (PCA), to unearth underlying patterns and relationships. Our study reveals distinct clusters of regions characterized by varying degrees of economic prosperity, infrastructure development, and tourism activity. Through K-means clustering, we identified optimal groupings of regions that share similar characteristics in terms of GDP per capita, unemployment rates, tourist arrivals, and overnight stays, among other metrics. Subsequent PCA provided deeper insights into the most influential factors driving these clusters, offering a reduced-dimensional perspective that highlights the primary axes of variation. The findings underscore significant disparities in tourism success across the Eurozone, with economic robustness and strategic infrastructural investments emerging as key drivers. Regions with higher GDP per capita and lower unemployment rates tend to exhibit higher tourism metrics, suggesting that economic health is a substantial contributor to regional tourism appeal and capacity. This paper contributes to the literature by demonstrating how machine learning can be applied to regional tourism data to better understand and strategize for tourism development. The insights garnered from this study are poised to assist policy-makers and tourism planners in crafting targeted interventions aimed at enhancing tourism competitiveness in underperforming regions. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Histograms and distribution plots for all key variables.</p>
Figure 2
<p>Correlation heatmap of all key variables.</p>
Figure 3
<p>Elbow method for optimal cluster determination.</p>
Figure 4
<p>Indicative scatter plots for GDP per capita vs. tourist arrivals and population vs. competitiveness.</p>
Figure 5
<p>Indicative box plots for GDP per capita (<b>a</b>) and tourist arrivals (<b>b</b>) by cluster.</p>
Figure 6
<p>Cumulative variance explained using PCA components.</p>
Figure 7
<p>Plot of the first two principal components.</p>
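The K-means-plus-PCA pipeline described in the abstract can be sketched as follows. This is a minimal illustration on synthetic stand-ins for the regional indicators (GDP per capita, unemployment, tourist arrivals, overnight stays), not the authors' code or data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the regional indicators named in the abstract.
X = np.column_stack([
    rng.normal(30000, 8000, 200),   # GDP per capita
    rng.normal(8, 3, 200),          # unemployment rate (%)
    rng.lognormal(13, 1, 200),      # tourist arrivals
    rng.lognormal(14, 1, 200),      # overnight stays
])

Xs = StandardScaler().fit_transform(X)          # K-means is scale-sensitive
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

pca = PCA(n_components=2).fit(Xs)               # reduced-dimensional view
explained = float(pca.explained_variance_ratio_.sum())
print(labels.shape, round(explained, 2))
```

Standardizing before clustering matters here because the indicators live on very different scales; without it, GDP per capita would dominate the distance metric.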
7 pages, 1892 KiB  
Proceeding Paper
Simulating the Aerial Ballet: The Dance of Fire-Fighting Planes and Helicopters
by Juha Alander, Lauri Honkasilta and Kalle Saastamoinen
Eng. Proc. 2024, 68(1), 54; https://doi.org/10.3390/engproc2024068054 - 19 Jul 2024
Viewed by 463
Abstract
This study introduces a simulation model to analyze the efficacy of different aerial firefighting strategies in Finland, focusing on the comparative water production capacity and associated costs of firefighting aircraft versus helicopters of varying sizes. By utilizing publicly available data and direct inquiries, the model evaluates the impact of water collection distance on the volume of extinguishing water procured and its costs. The simulation reveals that firefighting aircraft offer a cost-effective solution, particularly when collecting water from distances of thirteen kilometers, where their cost per liter of water aligns with that of smaller helicopters operating closer to the fire zone. The study underscores the importance of precise data input into the calculator, highlighting the potential of aerial firefighting strategies in enhancing wildfire suppression efforts. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>View of the counter assembly section.</p>
Figure 2
<p>Water point one kilometer away for all air units. Duration of operation: 120 min.</p>
Figure 3
<p>Helicopters fetch water from 1 km away, Air Tractor from 6 km away. Duration of operation: 120 min.</p>
Figure 4
<p>Helicopters fetch water from 1 km away, Air Tractor from 13 km away. Duration of operation: 120 min.</p>
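The water-production trade-off the simulation explores can be reproduced in miniature. The sketch below uses assumed aircraft parameters (capacity, cruise speed, fill and drop times, hourly cost), not the paper's calculator:

```python
# Back-of-the-envelope model: litres dropped during one mission and the
# resulting cost per litre, as a function of distance to the water source.
def litres_delivered(capacity_l, speed_kmh, distance_km,
                     fill_min, drop_min, mission_min=120.0):
    """Total litres dropped during one mission of complete fill/drop cycles."""
    cycle_min = 60.0 * 2 * distance_km / speed_kmh + fill_min + drop_min
    return int(mission_min // cycle_min) * capacity_l

def cost_per_litre(litres, hourly_cost_eur, mission_min=120.0):
    return hourly_cost_eur * mission_min / 60.0 / litres

# Hypothetical fixed-wing scooper 13 km from water vs. a small helicopter 1 km away.
plane = litres_delivered(3000, 280, 13, fill_min=1.0, drop_min=1.0)
heli = litres_delivered(1000, 150, 1, fill_min=1.0, drop_min=1.0)
print(plane, heli)
```

Even with a much longer round trip, the larger capacity of the fixed-wing aircraft keeps its delivered volume competitive, which is the qualitative effect the paper quantifies.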
8 pages, 1400 KiB  
Proceeding Paper
Evaluation of Economic Interventions in Economic Blocks during an Economic and Sanitary Crisis
by Carmin Montante and Clemente Hernandez-Rodriguez
Eng. Proc. 2024, 68(1), 55; https://doi.org/10.3390/engproc2024068055 - 19 Jul 2024
Viewed by 389
Abstract
The purpose of this study is to evaluate the economic interventions that took place during the initial stages of the pandemic in 2020 in the US, Mexico, and Canada. These countries share a free trade agreement that indicates their willingness to cooperate economically and suggests they should adopt similar economic policies, given both their shared agreements and their proximity. However, the economic interventions adopted by two of the three countries were not considered by the other, which makes for an interesting comparison. Interrupted time series analysis is a quasi-experimental method that has recently been used to evaluate policy over a specific period. This study focuses on the economic interventions that were put into practice in neighboring countries that have formed a free trade alliance, the USMCA. A systematic analysis of interrupted time series is used to organize the article and provide further validity to the study. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>US 2019–2021 quarterly GDP in USD with economic interventions in 2020.</p>
Figure 2
<p>Mexico 2019–2021 quarterly GDP in MXN with economic interventions in 2020.</p>
Figure 3
<p>Canada 2019–2021 quarterly GDP in CAD with economic interventions in 2020.</p>
8 pages, 306 KiB  
Proceeding Paper
Modeling the Future of Hydroelectric Power: A Cross-Country Study
by Farooq Ahmad, Livio Finos and Mariangela Guidolin
Eng. Proc. 2024, 68(1), 56; https://doi.org/10.3390/engproc2024068056 - 19 Jul 2024
Viewed by 816
Abstract
This paper examines the role of hydropower in the context of the energy transition, using innovation diffusion models. The study analyzes time series data of hydropower generation from 1965 to 2022 by applying diffusion models and some other models, such as Prophet and ARIMA, for comparison purposes. The models are evaluated across diverse geographic regions, including America, Africa, Europe, Asia, and the Middle East, to determine their effectiveness in predicting hydropower generation trends. The analysis reveals that the GGM consistently outperforms other models in accuracy across all regions. In most cases, the GGM exhibits better performance compared to the Bass, ARIMA, and Prophet models, highlighting its potential as a robust forecasting tool for hydropower generation. This study emphasizes the critical role of accurate forecasting in energy planning and calls for further research to validate these findings and explore additional factors influencing hydropower generation evolution. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Hydroelectricity generation by all countries.</p>
Figure 2
<p>America and South Africa.</p>
Figure 3
<p>Europe.</p>
Figure 4
<p>Asia and Middle East.</p>
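As an illustration of fitting a diffusion curve to generation data, the sketch below fits a Bass model to a synthetic series with scipy; the paper's preferred GGM is a related but richer specification:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoption m * F(t) under the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1, 51, dtype=float)
rng = np.random.default_rng(2)
# Synthetic "generation" series from known parameters plus noise.
y = bass_cumulative(t, 1000.0, 0.01, 0.25) + rng.normal(0, 5, t.size)

(m, p, q), _ = curve_fit(bass_cumulative, t, y, p0=[800.0, 0.02, 0.2],
                         bounds=([1.0, 1e-4, 1e-4], [1e5, 1.0, 2.0]))
print(round(float(m)), round(float(p), 3), round(float(q), 3))
```

Here `m` is the market potential, `p` the innovation coefficient, and `q` the imitation coefficient; with a reasonable starting point and positivity bounds, the fit recovers the generating parameters.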
9 pages, 696 KiB  
Proceeding Paper
Catalyzing Supply Chain Evolution: A Comprehensive Examination of Artificial Intelligence Integration in Supply Chain Management
by Sarthak Pattnaik, Natasya Liew, Ali Ozcan Kures, Eugene Pinsky and Kathleen Park
Eng. Proc. 2024, 68(1), 57; https://doi.org/10.3390/engproc2024068057 - 22 Jul 2024
Cited by 1 | Viewed by 1061
Abstract
The integration of Artificial Intelligence (AI) into Supply-Chain Management (SCM) has revolutionized operations, offering avenues for enhanced efficiency and decision-making. AI has become pivotal in tackling various Supply-Chain Management challenges, notably enhancing demand forecasting precision and automating warehouse operations for improved efficiency and error reduction. However, a critical debate arises concerning the choice between less accurate explainable models and more accurate yet unexplainable models in Supply-Chain Management applications. This paper explores this debate within the context of various Supply-Chain Management challenges and proposes a methodology for developing models tailored to different Supply-Chain Management problems. Drawing from academic research and modelling, the paper discusses the applications of AI in demand forecasting, inventory optimization, warehouse automation, transportation management, supply chain planning, supplier management, quality control, risk management, and customer service. Additionally, it examines the trade-offs between model interpretability and accuracy, highlighting the need for a nuanced approach. The proposed methodology advocates for the development of explainable models for tasks where interpretability is crucial, such as risk management and supplier selection, while leveraging unexplainable models for tasks prioritizing accuracy, like demand forecasting and predictive maintenance. Through this approach, stakeholders gain insights into Supply-Chain Management processes, fostering better decision-making and accountability. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Random Under Sampling with Recursive Feature Selection.</p>
Figure 2
<p>SMOTE with Recursive Feature Selection.</p>
Figure 3
<p>Data Imbalance Issue in Supply Chain Dataset.</p>
Figure 4
<p>AUC curve without any oversampling methods.</p>
Figure 5
<p>Line Order Quantity Forecasting using ARIMA.</p>
Figure 6
<p>Line Order Quantity Forecasting using SARIMA.</p>
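The class-imbalance issue shown in the figures can be illustrated with naive random oversampling (SMOTE, used in the paper, interpolates new minority samples instead of copying existing ones); the data and model below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(0, 1, 1000) > 2.0).astype(int)  # rare positives

pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
resampled = rng.choice(pos, size=neg.size, replace=True)  # oversample minority
idx = np.concatenate([neg, resampled])                    # balanced training set

clf = LogisticRegression().fit(X[idx], y[idx])
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(round(auc, 2))
```

Balancing only the training set, while evaluating on the original distribution, mirrors how oversampling is normally used before computing an AUC curve.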
8 pages, 9556 KiB  
Proceeding Paper
Calibration-Free Current Measurement with Integrated Quantum Sensor
by Jens Pogorzelski, Ludwig Horsthemke, Jonas Homrighausen, Dennis Stiegekötter, Frederik Hoffmann, Ann-Sophie Bülter, Markus Gregor and Peter Glösekötter
Eng. Proc. 2024, 68(1), 58; https://doi.org/10.3390/engproc2024068058 - 22 Jul 2024
Viewed by 720
Abstract
This paper presents the application of a compact and fully integrated LED quantum sensor based on nitrogen-vacancy (NV) centers in diamond for current measurement in a busbar. The magnetic field measurements from the sensor are directly compared with measurements from a numerical simulation, eliminating the need for calibration. The sensor setup achieves an accuracy of 0.28% in the measurement range of 0–30 A DC. The integration of advanced quantum sensing technology with practical current measurement demonstrates the potential of this sensor for applications in electrical and distribution networks. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>(<b>a</b>) Diamond crystal structure formed by carbon atoms (gray), with a nitrogen atom (red) and adjacent vacancy forming a nitrogen vacancy (NV) center. NV centers are formed in all axes of the diamond lattice (indicated by purple-colored carbon atoms). The green arrow indicates an external magnetic field <math display="inline"><semantics> <mover accent="true"> <mi>B</mi> <mo>→</mo> </mover> </semantics></math>, whereas the blue arrow indicates the vectorial projection on the NV-axis <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mo>|</mo> <mo>|</mo> </mrow> </msub> </semantics></math>. (<b>b</b>) Simplified energy diagram of the NV center. A continuous sweep of <math display="inline"><semantics> <msub> <mi>f</mi> <mrow> <mi>M</mi> <mi>W</mi> </mrow> </msub> </semantics></math> leads to the ground state being flipped from <math display="inline"><semantics> <mrow> <msub> <mi>m</mi> <mi>s</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> to <math display="inline"><semantics> <mrow> <msub> <mi>m</mi> <mi>s</mi> </msub> <mo>=</mo> <mo>±</mo> <mn>1</mn> </mrow> </semantics></math> at different frequencies due to the different <math display="inline"><semantics> <msub> <mi>B</mi> <mo>‖</mo> </msub> </semantics></math> components. This spin can be detected optically, as pumping with green light is spin-preserving and decay via a singlet state with fluorescence in the infrared range is more likely. (<b>c</b>) An example spectrum containing eight fluorescence dips caused by different <math display="inline"><semantics> <msub> <mi>B</mi> <mo>‖</mo> </msub> </semantics></math> components for each NV center axis and, therefore, different Zeeman splittings <math display="inline"><semantics> <mrow> <mn>2</mn> <msub> <mi>γ</mi> <mi>e</mi> </msub> <msub> <mi>B</mi> <mo>‖</mo> </msub> </mrow> </semantics></math>. 
From the frequency deltas between these dips, <math display="inline"><semantics> <msub> <mi>B</mi> <mo>‖</mo> </msub> </semantics></math> can be calculated. Figure based on [<a href="#B16-engproc-68-00058" class="html-bibr">16</a>].</p>
Fig">
Figure 2
<p>(<b>a</b>) Sensor setup with LED-PCB, MW-PCB, filter foil, and PD-PCB, (<b>b</b>) photo of the integrated quantum sensor, and (<b>c</b>) sensor integrated into a 3D-printed clip. An offset magnet is integrated into the clip as well.</p>
Figure 3
<p>(<b>a</b>) Magnetic field distribution at <math display="inline"><semantics> <mrow> <mn>30</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">A</mi> </mrow> </semantics></math> without a permanent magnet. (<b>b</b>) Magnetic field distribution with a permanent magnet N45 (sintered NdFeB) and without current. (<b>c</b>) Magnetic field distribution at 30 A together with a permanent magnet, showing a superimposed magnetic field distribution.</p>
Figure 4
<p>(<b>a</b>) The resulting spectra measured with the sensor setup. The current is varied between 0 and <math display="inline"><semantics> <mrow> <mn>30</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">A</mi> </mrow> </semantics></math>. The resulting output of the TIA is shown in millivolts as a function of the frequency sweep. The resonance frequencies of the individual NV axes are labeled with <math display="inline"><semantics> <msub> <mi>f</mi> <mrow> <mi>N</mi> <mi>V</mi> <mi>i</mi> <mo>±</mo> </mrow> </msub> </semantics></math>. (<b>b</b>) The extracted resonance frequencies shift with increasing busbar current. Characteristic non-linearities can be recognized at higher fields.</p>
Figure 5
<p>(<b>a</b>) Measured and simulated absolute magnetic field <span class="html-italic">B</span>. (<b>b</b>) Measured and simulated magnetic field without offset.</p>
Figure 6
<p>(<b>a</b>) Spatial change in standard deviation between the measured and simulated magnetic fields. Marked in black is the position of the sensor, which was used for the simulation and which is based on the production data of the sensor and the clip. (<b>b</b>) Measured and simulated magnetic field without offset.</p>
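The field-reconstruction idea behind Figure 1c reduces to dividing a resonance splitting by twice the NV gyromagnetic ratio; the frequencies below are hypothetical examples, not measured values:

```python
# The Zeeman splitting between the two resonance dips of one NV axis is
# 2 * gamma_e * B_parallel, so B_parallel follows from the frequency delta.
GAMMA_E_MHZ_PER_MT = 28.024  # NV electron gyromagnetic ratio, ~28 MHz/mT

def b_parallel_mt(f_minus_mhz, f_plus_mhz):
    """Projection of B on one NV axis from its two resonance frequencies."""
    return (f_plus_mhz - f_minus_mhz) / (2 * GAMMA_E_MHZ_PER_MT)

# Hypothetical dips at 2810 MHz and 2930 MHz, symmetric around the
# 2870 MHz zero-field splitting:
print(round(b_parallel_mt(2810.0, 2930.0), 2))  # field in millitesla
```

Repeating this for all four NV axes gives the vector components that the paper compares against the simulated field of the busbar.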
14 pages, 1233 KiB  
Proceeding Paper
A Simple Computational Approach to Predict Long-Term Hourly Electric Consumption
by Eugene Pinsky, Etienne Meunier, Pierre Moreau and Tanvi Sharma
Eng. Proc. 2024, 68(1), 59; https://doi.org/10.3390/engproc2024068059 - 23 Jul 2024
Viewed by 417
Abstract
By exploiting the patterns in past data points, we could forecast long-term consumption with a computationally simple algorithm. Our approach is simple to interpret. It incorporates the seasonality of past consumption and can predict power consumption for any time scale. The algorithm can be easily implemented directly in SQL. It can run sub-second long-term predictions on large-scale data marts. The proposed method scored a Mean Absolute Percentage Error (MAPE) of just 5.88% when predicting hourly values for France’s electric consumption in 2017 based on hourly data from 2008 to 2011. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Weekday vs. weekend electricity consumption.</p>
Figure 2
<p>The schema of our pipeline.</p>
Figure 3
<p>Detrending RTE dataset using 1-degree polynomial.</p>
Figure 4
<p>Detrending PJM dataset using 1-degree polynomial.</p>
Figure 5
<p>Detrending PJM dataset using 2-degree polynomial.</p>
Figure 6
<p>Detrended RTE electricity consumption by months.</p>
Figure 7
<p>Detrended RTE electricity consumption by day numbers.</p>
Figure 8
<p>ACF tests on the RTE dataset at different lags.</p>
Figure 9
<p>Simplified representation of the model year computation with the RTE dataset.</p>
Figure 10
<p>Daily RTE consumption (<b>top</b>) vs. forecast (<b>bottom</b>) for years 2013–2014, using years 2008–2011 as the training set.</p>
Figure 11
<p>Hourly ISO consumption (<b>top</b>) vs. forecast (<b>bottom</b>) for January 2013, using years 2004–2009 as the training set.</p>
Figure 12
<p>Daily PJM consumption (<b>top</b>) vs. forecast (<b>bottom</b>) for year 2013, using years 1993–2010 as training set, and a 1-degree linear detrending.</p>
Figure 13
<p>Distribution of total energy production by source.</p>
Figure 14
<p>Overall distribution of energy production.</p>
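One way to read this approach (our interpretation, not the authors' SQL) is as a seasonal-profile lookup: predict each hour from the mean of historical hours sharing the same weekday/hour slot, then score with MAPE. A sketch on synthetic load data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx = pd.date_range("2008-01-01", "2011-12-31 23:00", freq="h")
# Synthetic hourly load with daily and weekly shape plus noise.
load = (50 + 10 * np.sin(2 * np.pi * idx.hour / 24)
        + 5 * (idx.dayofweek < 5) + rng.normal(0, 1, idx.size))
train = pd.Series(load, index=idx)

# Mean consumption per (weekday, hour) slot over the training years.
profile = train.groupby([train.index.dayofweek, train.index.hour]).mean()

test_idx = pd.date_range("2012-01-01", "2012-01-07 23:00", freq="h")
truth = (50 + 10 * np.sin(2 * np.pi * test_idx.hour / 24)
         + 5 * (test_idx.dayofweek < 5))
pred = np.array([profile[(d, h)]
                 for d, h in zip(test_idx.dayofweek, test_idx.hour)])

mape = float(np.mean(np.abs((truth - pred) / truth)) * 100)
print(round(mape, 2))
```

A group-by-and-average like this maps directly onto SQL (`GROUP BY dow, hour`), which is what makes the approach cheap enough for sub-second predictions on large data marts.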
11 pages, 1511 KiB  
Proceeding Paper
Promoting Electric Vehicle Growth through Infrastructure and Policy: A Forecasting Analysis
by Anuva Banwasi, Adele M. Sinai and Brennan Xavier McManus
Eng. Proc. 2024, 68(1), 60; https://doi.org/10.3390/engproc2024068060 - 18 Jul 2024
Viewed by 337
Abstract
This study examines electric vehicle (EV) adoption in the United States, specifically the interconnected relationship between EV-promoting policies, EV charging infrastructure, and registrations of EVs. Gasoline-powered vehicles make up a significant portion of the US’s carbon emissions, and increasing the use of EVs is a way to decrease this footprint. Over the past decade, there have been many incentives and policy-driven changes to propel electric vehicle adoption forward. The focus of this study is to identify whether there is a significant relationship between these three factors and the extent to which they are significant predictors of each other. To do so, we conduct several statistical tests to analyze the forecasting effect of changes in EV policies on EV infrastructure, and changes in infrastructure on EV registrations. We find that there are significant forecasting relationships between these factors. Furthermore, it is possible to accurately forecast changes in EV charging stations over time using the time-series data of previous EV charging stations and policies. There are many interconnected factors, but this strong forecasting relationship between EV incentive policies and the expansion of charging infrastructure provides valuable insights for policymakers, industry stakeholders, and researchers attempting to understand and promote EV adoption. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>(<b>a</b>) Number of EV charging stations in New York (NY) over time; (<b>b</b>) number of EV-related incentive policies enacted in New York (NY) over time.</p>
Figure 2
<p>Granger causality test between station counts and policy counts for New York (NY). The <span class="html-italic">p</span>-value is 0.0065, which is below 0.05, so we reject the null hypothesis. There is a significant relationship between charging infrastructure and EV-incentivizing policies.</p>
Figure 3
<p>VAR results for New York (NY). Forecast percent change in station counts over time vs. actual percent change in station counts over time.</p>
Figure 4
<p>VAR results for other states. Forecast vs. actual percent change in station counts over time for (<b>a</b>) Washington, (<b>b</b>) Washington DC, (<b>c</b>) Texas, and (<b>d</b>) New Jersey.</p>
Figure 5
<p><span class="html-italic">p</span>-values representing predictor ability for each of the policy types on new charging stations.</p>
Figure 6
<p>Forecast net change in registration counts over time vs. true net change in registration counts over time for New York (NY).</p>
9 pages, 464 KiB  
Proceeding Paper
Analyzing Patterns of Injury in Occupational Hand Trauma Focusing on Press Machines: A Registry-Based Study and Machine Learning Analysis
by Sarthak Pattnaik, Parita Danole, Sagar Mandiya, Ali Foroutan, Ghazal Mashhadiagha, Yousef Shafaei Khanghah, Khatereh Isazadehfar and Eugene Pinsky
Eng. Proc. 2024, 68(1), 61; https://doi.org/10.3390/engproc2024068061 - 29 Jul 2024
Viewed by 476
Abstract
Objectives: The aim of the project is to analyze the data of patients who have been admitted to the emergency room due to severe hand and palm injuries. Methods: We used data visualization and statistical analysis to observe trends in various factors pertaining to the patients, such as place of injury, machine causing the injury, date and time of the injury, amputation, fracture, etiology, distribution of the injured hand, etc. Results: There is a significant difference between age and gender groups across various injuries. Most of the injuries in the dataset are occupational injuries caused by press machines. Most injuries take place in the latter half of the week, on Wednesdays and Saturdays. Conclusion: There were 1676 patients who reported to the medical emergency center. Of these, only a handful sustained extremely severe injuries involving uncontrolled bleeding and hemi-amputation. We can also surmise the same from the data summarizing the number of fingers injured: most patients had either one or two fingers injured, and very few had more than two. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Distribution of the frequency of accidents against machines that have caused injuries.</p>
Figure 2
<p>Comparison of total fingers injured due to press machines against injuries caused by other machines.</p>
Figure 3
<p>Place of accidents for males and females.</p>
Figure 4
<p>Age of patients admitted to the hospital.</p>
Figure 5
<p>DASH category.</p>
9 pages, 936 KiB  
Proceeding Paper
Decomposing the Sri Lanka Yield Curve Using Principal Component Analysis to Examine the Term Structure of the Interest Rate
by K P N Sanjeewa Dayarathne and Uthayasanker Thayasivam
Eng. Proc. 2024, 68(1), 62; https://doi.org/10.3390/engproc2024068062 - 27 Aug 2024
Viewed by 448
Abstract
In this study, we delve into the dynamics of the Sri Lankan government bond market, building upon prior research that focused on the application of principal component analysis (PCA) in modelling sovereign yield curves. Our analysis encompasses data spanning from January 2010 to August 2022. The study applied several PCA variants, such as multivariate PCA, Randomized PCA, Incremental PCA, Sparse PCA, Functional PCA, and Kernel PCA, on smoothed data. Kernel PCA was found to explain the majority of the variation associated with the data. Findings reveal that the first principal component accounted for a substantial 97.69% of the variations in yield curve movements, the second for 1.88%, and the third for 0.42%. These results align with previous research, which generally posits that the initial three principal components tend to elucidate around 95% of the fluctuations within the term structure of yields. Our results question the empirical findings which state that the first principal component represents the longer tenor of the yield curve: in Sri Lanka, it instead represents the 3-year bond yields. This may be because of liquidity constraints in underdeveloped frontier markets, where longer-tenor yields do not react fast enough to reflect the movement of the yield curve. The second principal component represents the slope of the yield curve, that is, the yield difference between the 10-year T-Bond and the 3-month T-Bill. The third principal component represents the curvature of the yield curve, computed as 2 × the 3-year T-Bond yield minus the 3-month T-Bill and 10-year T-Bond yields. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Flow of analysis.</p>
Figure 2
<p>Correlation matrix of the yields.</p>
Figure 3
<p>1st, 2nd, and 3rd principal components with their respective yield proxies.</p>
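The level/slope/curvature decomposition can be illustrated by running PCA on a synthetic panel of yields; the numbers below are illustrative, not the Sri Lankan results:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n = 500
tenors = np.array([0.25, 1.0, 2.0, 3.0, 5.0, 10.0])  # years to maturity
level = np.cumsum(rng.normal(0, 0.05, n))            # common level factor
slope = np.cumsum(rng.normal(0, 0.01, n))            # slope factor
yields = (8.0 + level[:, None]
          + slope[:, None] * np.log(tenors)[None, :]
          + rng.normal(0, 0.02, (n, tenors.size)))

pca = PCA().fit(yields)
ratios = pca.explained_variance_ratio_
print([round(float(v), 3) for v in ratios[:3]])
```

Because the level factor moves all tenors together, the first component dominates the explained variance, mirroring the pattern reported in the abstract.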
17 pages, 2569 KiB  
Proceeding Paper
Oil Price Volatility and MENA Stock Markets: A Comparative Analysis of Oil Exporters and Importers
by Khalil Mhadhbi and Ines Guelbi
Eng. Proc. 2024, 68(1), 63; https://doi.org/10.3390/engproc2024068063 - 2 Sep 2024
Viewed by 775
Abstract
This paper explores the transmission of volatility from Brent oil price evolution to the stock returns of 7 MENA countries, encompassing three importers and four exporters, after excluding four initial countries using the ARCH test. Employing the GARCH-BEKK estimation method, we detect this transmission from January 2008 to September 2022. The results reveal significant volatility persistence across six stock markets with three importer countries and three exporters. These findings align with Shiller’s theory, indicating high volatility in financial markets. Tunisia’s stock market shows sensitivity to oil market developments, while the Omani market demonstrates volatility transfer from Brent oil prices. However, Morocco’s market exhibits resilience, with no significant transmission from international oil prices. Exporting countries, except the UAE, display significant and positive coefficients, indicating volatility transmission. The study suggests further research into underlying mechanisms and recommends policymakers and investors implement strategies to mitigate volatility effects. Advanced modeling and behavioral insights can enhance risk management strategies. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Figure 1
<p>Trends in the evolution of global Brent oil prices and stock market yields of MENA oil-importing countries.</p>
Figure 2
<p>Trends in the evolution of global Brent oil prices and stock market yields of MENA oil-exporting countries.</p>
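The paper estimates a bivariate BEKK-GARCH; the univariate GARCH(1,1) simulation below only illustrates what volatility persistence (alpha + beta close to one) looks like, with assumed parameters:

```python
import numpy as np

def simulate_garch(n, omega=0.05, alpha=0.08, beta=0.90, seed=7):
    """Simulate returns r_t = sqrt(h_t) * z_t with GARCH(1,1) variance h_t."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    h = np.full(n, omega / (1 - alpha - beta))  # start at unconditional variance
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r, h

r, h = simulate_garch(5000)
persistence = 0.08 + 0.90  # alpha + beta: shocks to variance decay slowly
print(r.shape, round(persistence, 2))
```

In the BEKK setting the same idea generalizes to matrices, with off-diagonal terms capturing the oil-to-stock-market volatility spillovers the paper measures.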
11 pages, 7567 KiB  
Proceeding Paper
Towards an Automatic Tool for Resilient Waterway Transport: The Case of the Italian Po River
by Maria Luisa Villani, Ebrahim Ehsanfar, Sohith Dhavaleswarapu, Alberto Agnetti, Luca Crose, Giancarlo Focherini and Sonia Giovinazzi
Eng. Proc. 2024, 68(1), 64; https://doi.org/10.3390/engproc2024068064 - 4 Sep 2024
Viewed by 344
Abstract
Improved navigability can enhance inland waterway transportation efficiency, contributing to synchro-modal logistics and promoting sustainable development in regions that can benefit from the presence of considerable waterways. Modern technological solutions, such as digital twins in corridor management systems, must integrate functions of navigability forecasts that provide timely and reliable information for safe trip planning. This information needs to account for the type of vessel and for the environmental and geomorphological characteristics of each navigation trait. This paper presents a case study, within the EU project CRISTAL, focusing on the Italian Po River, for which the navigability forecast requirements of a digital twin are illustrated. Preliminary results to deliver navigability risk information were obtained. In particular, the statistical correlation of water discharge and water depth, computed from historical data, suggested that efficient forecast models for navigability risk, given some water discharge forecasts, could be built. To this aim, the LSTM (long short-term memory) technique was used on the same data to provide models linking water discharge and water depth predictions. Future work involves further testing these models with updated real data and integrating outcomes with climatic and infrastructure management information to enhance the accuracy of the risk information. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
Show Figures

Figure 1. Flood and Drought Early Warning System of the Po River basin. The figure highlights the monitoring points and a type of chart provided by the tool by combining the source data.
Figure 2. Monitored critical sections in the main Po River, where different colors represent the different stretches.
Figure 3. Draft of a navigability risk forecast matrix. A navigability risk level (red = high risk, yellow = medium risk, green = low risk), based on the navigability forecast, will be provided for each stretch and for different vessel classes (according to the draught) for the next 10 days (gg = day).
Figure 4. Surveyed daily water depth collected at each critical section and river discharge data recorded at each monitoring section of the Po River, derived from the monitoring network. Data source: www.agenziapo.it.
Figure 5. The correlation between daily water depth and river discharge in the Po River. The red trendline highlights the relationship with a computed Pearson correlation coefficient.
Figure 6. Year-long trend of water discharge and water depth data at a station.
Figure 7. Water depth time series decomposition. Trend and seasonal components revealed through additive seasonal decomposition offer insights into temporal dynamics.
Figure 8. Monthly box plots (2021) illustrating the distribution and variability of river depth (a) and discharge (b) at the station.
Figure 9. Discharge forecast from the probabilistic processing of DEWS/FEWS Po early warning systems data.
Figure 10. Problem statement representation of the deep learning method.
Figure 11. Development process of a prediction model.
Figure 12. Water depth forecast vs. water depth observations for critical section 1 of stretch X. The blue line indicates the lowest water depth before the last 100 days.
Figure 13. Water depth forecast vs. water depth observations for critical section 2 of stretch Y.
Figure 14. Water discharge (blue) and water depth (brown) time series: (a) critical section 1 of stretch X; (b) critical section 2 of stretch Y.
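The additive seasonal decomposition shown for the water depth series follows the classical moving-average scheme. A minimal pure-Python sketch, assuming a fixed known period (period 4 and the synthetic series below are illustrative; a daily hydrological series would use an annual period):

```python
def decompose_additive(y, period):
    """Classical additive decomposition: y[t] = trend[t] + seasonal[t % period] + residual[t]."""
    n, half = len(y), period // 2
    trend = [None] * n
    for t in range(half, n - half):
        window = y[t - half:t + half + 1]
        if period % 2 == 0:
            # 2x(period)-centred moving average for even periods
            trend[t] = (0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]) / period
        else:
            trend[t] = sum(window) / period
    # Average the detrended values per position in the cycle, then centre them.
    buckets = [[] for _ in range(period)]
    for t in range(n):
        if trend[t] is not None:
            buckets[t % period].append(y[t] - trend[t])
    means = [sum(b) / len(b) for b in buckets]
    grand = sum(means) / period
    seasonal = [m - grand for m in means]
    residual = [y[t] - trend[t] - seasonal[t % period] if trend[t] is not None else None
                for t in range(n)]
    return trend, seasonal, residual

# Synthetic series: linear trend plus a repeating period-4 seasonal pattern.
season = [1.0, -1.0, 2.0, -2.0]
y = [5.0 + 0.1 * t + season[t % 4] for t in range(16)]
trend, seasonal, residual = decompose_additive(y, period=4)
# For this noise-free series the recovered seasonal equals `season`
# and the residuals are zero.
```

Libraries such as statsmodels implement the same scheme; the point here is only the mechanics behind the trend/seasonal split the figure describes.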
9 pages, 430 KiB  
Proceeding Paper
Chlorophyll-A Time Series Study on a Saline Mediterranean Lagoon: The Mar Menor Case
by Arnau García-i-Cucó, José Gellida-Bayarri, Beatriz Chafer-Dolz, Juan-Carlos Cano and José M. Cecilia
Eng. Proc. 2024, 68(1), 65; https://doi.org/10.3390/engproc2024068065 - 25 Sep 2024
Viewed by 288
Abstract
The Mar Menor, Europe’s largest saline lagoon, has experienced significant eutrophication. The concentration of chlorophyll-a (Chl-a) in the water is used as a critical indicator of this eutrophication process and can alert us to possible ecosystemic changes, such as a massive fish die-off. The main objective of this paper is to predict chlorophyll-a concentration using various time series models. Among them, multivariate models such as long short-term memory (LSTM) networks and, in particular, the autoregressive integrated moving average model with eXogenous variables (ARIMAX) demonstrated superior performance. These models incorporate multiple predictors, such as humidity, water temperature, conductivity and turbidity, thus capturing the complex interactions that affect Chl-a levels. Despite their effectiveness, these multivariate models introduce cascading errors due to the uncertainty inherent in the exogenous inputs. Consequently, univariate models, such as Prophet, triple exponential smoothing and ARIMA, are also studied for their relative robustness to error propagation. Full article
(This article belongs to the Proceedings of The 10th International Conference on Time Series and Forecasting)
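Of the univariate models mentioned, triple exponential smoothing (Holt-Winters) is the simplest to sketch. A minimal additive-form implementation in pure Python; the smoothing parameters and the synthetic series are illustrative, not the paper's configuration or data:

```python
def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=6):
    """Additive Holt-Winters: level + trend + seasonal, one smoothing pass over y."""
    # Initialise from the first two full seasons.
    s0 = sum(y[:period]) / period
    s1 = sum(y[period:2 * period]) / period
    level, trend = s0, (s1 - s0) / period
    seasonals = [y[i] - s0 for i in range(period)]

    fitted = []
    for t, obs in enumerate(y):
        s = seasonals[t % period]
        fitted.append(level + trend + s)  # one-step-ahead fit
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonals[t % period] = gamma * (obs - level) + (1 - gamma) * s

    forecast = [level + (h + 1) * trend + seasonals[(len(y) + h) % period]
                for h in range(horizon)]
    return fitted, forecast

# Synthetic series: constant level 10 plus a repeating period-4 seasonal pattern.
season = [1.0, -1.0, 2.0, -2.0]
y = [10.0 + season[t % 4] for t in range(16)]
fitted, forecast = holt_winters_additive(y, period=4)
# For this noise-free series the fit is exact and the forecast
# continues the pattern: [11.0, 9.0, 12.0, 8.0, 11.0, 9.0]
```

The appeal of such univariate models, as the abstract notes, is that the forecast depends only on the Chl-a history itself, so no uncertain exogenous forecast can propagate error into it.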
Show Figures

Figure 1. Air temperature analysis.
Figure 2. Water temperature analysis.
Figure 3. Chlorophyll-a analysis.
Figure 4. Feature correlation matrix. Stronger blue color shows a higher correlation between variables.
Figure 5. Rolling MAPE.
Figure 6. Rolling MAPE for univariate predictions.
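The rolling MAPE used to compare the models in Figures 5 and 6 is a mean absolute percentage error computed over a sliding window. A minimal sketch; the window size and the Chl-a-like series below are illustrative, not the paper's data:

```python
def rolling_mape(actual, predicted, window):
    """MAPE (%) over each trailing window of the paired series."""
    out = []
    for i in range(window, len(actual) + 1):
        pairs = zip(actual[i - window:i], predicted[i - window:i])
        out.append(100.0 * sum(abs((a - p) / a) for a, p in pairs) / window)
    return out

# Illustrative series: predictions consistently 10% below the observations.
actual = [2.0, 2.5, 3.0, 2.8, 2.6, 3.2, 3.5, 3.1]
predicted = [0.9 * a for a in actual]
mape = rolling_mape(actual, predicted, window=4)  # every window → 10.0 (%)
```

Plotting this series over time, as the figures do, shows how a model's accuracy drifts rather than collapsing it into a single score.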