
Next Issue: Volume 17, December
Previous Issue: Volume 17, October

Algorithms, Volume 17, Issue 11 (November 2024) – 63 articles

Cover Story: We present challenging model classes arising in the context of finding optimized object packings (OPs). Except for the smallest and simplest general OP model instances, it is not possible to find their exact (closed-form) solution, and most OP problem instances become increasingly difficult to handle, even numerically, as the number of packed objects increases. In our article, scalable irregular OP problem classes, aimed at packing given collections of general circles, spheres, ellipses, and ovals, are discussed, with numerical results reported for a selection of model instances. To illustrate, the figure shows an optimized configuration of spheres of different sizes packed into a container sphere.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
19 pages, 1117 KiB  
Article
Automatic Simplification of Lithuanian Administrative Texts
by Justina Mandravickaitė, Eglė Rimkienė, Danguolė Kotryna Kapkan, Danguolė Kalinauskaitė and Tomas Krilavičius
Algorithms 2024, 17(11), 533; https://doi.org/10.3390/a17110533 - 20 Nov 2024
Viewed by 170
Abstract
Text simplification reduces the complexity of text while preserving essential information, thus making it more accessible to a broad range of readers, including individuals with cognitive disorders, non-native speakers, children, and the general public. In this paper, we present experiments on text simplification for the Lithuanian language, aiming to simplify administrative texts to a Plain Language level. We fine-tuned mT5 and mBART models for this task and evaluated the effectiveness of ChatGPT as well. We assessed simplification results via both quantitative metrics and qualitative evaluation. Our findings indicated that mBART performed best, achieving the highest scores across all evaluation metrics, and the qualitative analysis further supported these findings. ChatGPT experiments showed that it responded quite well to a short and simple prompt to simplify the given text; however, it ignored most of the rules given in a more elaborate prompt. Finally, our analysis revealed that BERTScore and ROUGE aligned moderately well with human evaluations, while BLEU and readability scores indicated lower or even negative correlations. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
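As a rough illustration of the kind of automatic evaluation discussed in this abstract, the sketch below scores a simplified sentence against a reference with SacreBLEU and ROUGE and correlates metric scores with human ratings via Spearman's ρ. It assumes the sacrebleu, rouge_score, and scipy packages; the sentences and ratings are invented placeholders, not the paper's data.

```python
# Sketch: automatic evaluation of simplification outputs (placeholder data, not the paper's).
import sacrebleu
from rouge_score import rouge_scorer
from scipy.stats import spearmanr

references = ["The municipality must answer your request within ten days."]
system_outputs = ["The municipality has to reply to your request in ten days."]

# Corpus-level SacreBLEU: hypotheses vs. a list of reference streams.
bleu = sacrebleu.corpus_bleu(system_outputs, [references])
print("SacreBLEU:", round(bleu.score, 2))

# Sentence-level ROUGE-1/2/L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
rouge = scorer.score(references[0], system_outputs[0])
print({k: round(v.fmeasure, 3) for k, v in rouge.items()})

# Correlating metric scores with human ratings (toy numbers) via Spearman's rho.
metric_scores = [0.62, 0.40, 0.81, 0.55]
human_ratings = [4, 2, 5, 3]
rho, p = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
```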
Figures:
Figure 1: Comparison of evaluation metrics for mBART. (a) SacreBLEU score; (b) ROUGE-L score; (c) ROUGE-1 score; (d) ROUGE-2 score.
Figure 2: Comparison of evaluation metrics for mT5. (a) SacreBLEU score; (b) ROUGE-L score; (c) ROUGE-1 score; (d) ROUGE-2 score.
Figure 3: Comparison of Spearman correlations between mBART and mT5 evaluation metrics and human scores.
13 pages, 1085 KiB  
Article
Exponential Functions Permit Estimation of Anaerobic Work Capacity and Critical Power from Less than 2 Min All-Out Test
by Ming-Chang Tsai, Scott Thomas and Marc Klimstra
Algorithms 2024, 17(11), 532; https://doi.org/10.3390/a17110532 - 20 Nov 2024
Viewed by 157
Abstract
The Critical Power Model (CPM) is key for assessing athletes’ aerobic and anaerobic energy systems but typically involves lengthy, exhausting protocols. The 3 min all-out test (3MT) simplifies CPM assessment, yet its duration remains demanding. Exponential decay models, specifically mono- and bi-exponential functions, offer a more efficient alternative by accurately capturing the nonlinear energy dynamics in high-intensity efforts. This study explores shortening the 3MT using these functions to reduce athlete strain while preserving the accuracy of critical power (CP) and anaerobic work capacity (W′) estimates. Seventy-six competitive cyclists and triathletes completed a 3MT on a cycle ergometer, with CP and W′ calculated at shorter intervals. Results showed that a 90 s test using the bi-exponential model yielded CP and W′ values similar to those of the full 3MT. Meanwhile, the mono-exponential model required at least 135 s. Bland–Altman and linear regression analyses confirmed that a 120 s test with the mono-exponential model reliably estimated CP and W′ with minimal physical strain. These findings support a shortened, less-demanding 3MT as a valid alternative for CPM assessment. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
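The mono-exponential reading of an all-out power profile can be sketched as follows: power is assumed to decay from an initial peak P0 toward CP as P(t) = CP + (P0 − CP)·e^(−kt), in which case W′ (the area between the power curve and CP) equals (P0 − CP)/k. The parameterization and the synthetic data below are illustrative assumptions, not the authors' exact model.

```python
# Sketch: estimating CP and W' from a truncated all-out test with a mono-exponential model.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, cp, p0, k):
    """Assumed power profile: decays from P0 toward CP with rate k (1/s)."""
    return cp + (p0 - cp) * np.exp(-k * t)

# Synthetic 1 Hz power data for the first 120 s of an all-out effort (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(0, 120.0, 1.0)
true_cp, true_p0, true_k = 280.0, 750.0, 0.05
power = mono_exp(t, true_cp, true_p0, true_k) + rng.normal(0, 15, t.size)

params, _ = curve_fit(mono_exp, t, power, p0=[250.0, 700.0, 0.04])
cp_hat, p0_hat, k_hat = params

# For this model, W' = integral_0^inf (P(t) - CP) dt = (P0 - CP) / k.
w_prime_hat = (p0_hat - cp_hat) / k_hat
print(f"CP ≈ {cp_hat:.0f} W, W' ≈ {w_prime_hat / 1000:.1f} kJ")
```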
Figures:
Figure 1: Comparison of observed and 60 s modeled power of a 3MT. Observed power is represented by gray open circles, while modeled power (solid line) used the initial 60 s of data (black open circles). The dashed horizontal line represents the CP calculated from the 3MT for comparison.
Figure 2: Comparison of end powers for different time durations to CP_3MT for MONO (A) and BI (B) (* p < 0.05).
Figure 3: Comparison of W′ for different time durations to CP_3MT for MONO (A) and BI (B) (* p < 0.05).
Figure 4: Bland–Altman plots comparing CP_3MT with MONO CP_105 (A) and BI CP_90 (B). The dashed lines represent the 95% limits of agreement, and the solid horizontal line represents the mean bias between the models; a mean bias of zero suggests no systematic difference between the methods, while deviations from zero indicate a consistent overestimation or underestimation by one method compared to the other.
Figure 5: Bland–Altman plots comparing W′_3MT with MONO W′_135 (A) and BI W′_90 (B), with the same limits-of-agreement and mean-bias conventions as Figure 4.
Figure 6: Regression plot comparing W′ calculated from the 3MT with the exponential functions MONO CP_135 (A) and BI CP_90 (B). The dashed line represents the identity line (y = x), and the solid line represents the regression slope tested against 1 (MONO: p = 0.28, BI: p = 0.01).
Figure 7: Regression plot comparing CP calculated from the 3MT with the exponential functions MONO CP_105 (A) and BI CP_90 (B). The dashed line represents the identity line (y = x), and the solid line represents the regression slope tested against 1 (MONO: p = 0.53, BI: p = 0.50).
18 pages, 1508 KiB  
Article
Adversarial Validation in Image Classification Datasets by Means of Cumulative Spectral Gradient
by Diego Renza, Ernesto Moya-Albor and Adrian Chavarro
Algorithms 2024, 17(11), 531; https://doi.org/10.3390/a17110531 - 19 Nov 2024
Viewed by 257
Abstract
The main objective of a machine learning (ML) system is to obtain a trained model from input data in such a way that it allows predictions to be made on new i.i.d. (Independently and Identically Distributed) data with the lowest possible error. However, how can we assess whether the training and test data have a similar distribution? To answer this question, this paper presents a proposal to determine the degree of distribution shift of two datasets. To this end, a metric for evaluating complexity in datasets is used, which can be applied in multi-class problems, comparing each pair of classes of the two sets. The proposed methodology has been applied to three well-known datasets: MNIST, CIFAR-10 and CIFAR-100, together with corrupted versions of these. Through this methodology, it is possible to evaluate which types of modification have a greater impact on the generalization of the models without the need to train multiple models multiple times, also allowing us to determine which classes are more affected by corruption. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
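For readers unfamiliar with adversarial validation, the sketch below shows the classic classifier-based variant: label original samples 0 and corrupted samples 1 and check how separable they are, where an AUC near 0.5 indicates similar distributions. This is only a stand-in for intuition; the paper replaces the classifier with the Cumulative Spectral Gradient (CSG) complexity metric and compares class pairs. The feature arrays are random placeholders.

```python
# Sketch: classifier-based adversarial validation as an intuition for distribution shift.
# (The paper itself uses the CSG complexity metric instead of a classifier.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, size=(500, 32))        # stand-in for original features
corrupted = rng.normal(0.3, 1.2, size=(500, 32))       # stand-in for a corrupted version

X = np.vstack([original, corrupted])
y = np.r_[np.zeros(len(original)), np.ones(len(corrupted))]   # 0 = original, 1 = corrupted

auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, y, cv=5, scoring="roc_auc").mean()
print(f"Adversarial-validation AUC: {auc:.2f} (≈0.5 means no detectable shift)")
```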
Figures:
Figure 1: Outline of the proposed adversarial validation methodology.
Figure 2: Examples of the 15 corruptions included in the MNIST-C dataset.
Figure 3: Examples of the 19 corruptions included in the CIFAR-10-C and CIFAR-100-C datasets.
Figure 4: Flowchart diagram of the adversarial validation methodology. The blue lines represent the original datasets, and the red line represents the corrupted datasets.
Figure 5: Class-level adversarial validation on the MNIST dataset and its corrupted version (MNIST-C). (a) CSG metric ordered by the average value of all classes (dataset-level adversarial validation); lower values relate to data that deviate from the original distribution. (b) Variability of the CSG metric between classes for each corruption type (corruption-level adversarial validation).
Figure 6: Class-level adversarial validation on the CIFAR-10 dataset and its corrupted version (CIFAR-10-C). (a) CSG metric ordered by the average value of all classes (dataset-level adversarial validation); lower values relate to data that deviate from the original distribution. (b) Variability of the CSG metric between classes for each corruption type (corruption-level adversarial validation).
Figure 7: Class-level adversarial validation on the CIFAR-100 dataset and its corrupted version (CIFAR-100-C): CSG metric ordered by the average value of all classes (dataset-level adversarial validation); lower values relate to data that deviate from the original distribution.
Figure 8: Class-level adversarial validation on the CIFAR-100 dataset and its corrupted version (CIFAR-100-C): variability of the CSG metric between classes for each corruption type (corruption-level adversarial validation).
Figure 9: Example of a CIFAR image with the four types of modification that have the greatest impact on the data distribution (b–d) and the four types that have the least impact (e–h).
20 pages, 3221 KiB  
Article
A VIKOR-Based Sequential Three-Way Classification Ranking Method
by Wentao Xu, Jin Qian, Yueyang Wu, Shaowei Yan, Yongting Ni and Guangjin Yang
Algorithms 2024, 17(11), 530; https://doi.org/10.3390/a17110530 - 19 Nov 2024
Viewed by 236
Abstract
VIKOR uses the ideas of overall utility maximization and individual regret minimization to produce a compromise result for multi-attribute decision-making problems with conflicting attributes. Many researchers have proposed improvements and extensions to make it more suitable for ranking optimization in their respective research fields; however, these improvements and extensions only rank the alternatives without classifying them. To address this, this paper introduces the sequential three-way decisions method and combines it with the VIKOR method to design a three-way VIKOR method that can handle both ranking and classification. Using the final negative ideal solution (NIS) and the final positive ideal solution (PIS) of all alternatives, the individual regret value and group utility value of each alternative were calculated, and different three-way VIKOR models were obtained from four different combinations of individual regret value and group utility value. In the ranking process, the characteristics of the VIKOR method are incorporated, and the subjective preference of decision makers is considered through the individual regret, group utility, and decision index values. In the classification process, the alternatives are divided into the corresponding decision domains by sequential three-way decisions, and the risk of direct acceptance or rejection is avoided by placing uncertain alternatives into the boundary region to delay the decision. Alternatives are ranked according to the ordering rules within the same decision domain, and the final ranking is obtained according to the ordering rules across different decision domains. Finally, the effectiveness and correctness of the proposed method are verified with a project investment example, and the results are compared and evaluated. The experimental results show that the proposed method correlates significantly with the results of other methods, is effective and feasible, and is simpler and more effective in dealing with some problems. Errors caused by misclassification are reduced by the sequential three-way decisions. Full article
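For reference, the core VIKOR quantities mentioned above can be computed as in the sketch below: the group utility S_i, the individual regret R_i, and the compromise index Q_i with decision mechanism coefficient v. The decision matrix and weights are made-up numbers, and the paper's additional sequential three-way classification step is not shown.

```python
# Sketch: standard VIKOR S (group utility), R (individual regret), Q (compromise index).
import numpy as np

# Rows = alternatives, columns = benefit-type criteria (toy values).
F = np.array([[7.0, 0.60, 120.0],
              [9.0, 0.45, 150.0],
              [6.0, 0.80, 100.0],
              [8.0, 0.70, 130.0]])
w = np.array([0.4, 0.35, 0.25])     # criterion weights
v = 0.5                             # decision mechanism coefficient

f_star, f_minus = F.max(axis=0), F.min(axis=0)      # PIS and NIS per criterion
norm = w * (f_star - F) / (f_star - f_minus)        # weighted normalized distances to the PIS

S = norm.sum(axis=1)                # group utility (smaller is better)
R = norm.max(axis=1)                # individual regret (smaller is better)
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

for i, (s, r, q) in enumerate(zip(S, R, Q), start=1):
    print(f"A{i}: S={s:.3f}  R={r:.3f}  Q={q:.3f}")
print("Ranking by Q (best first):", (np.argsort(Q) + 1).tolist())
```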
Figures:
Figure 1: Connections between decision regions (→ denotes ≻).
Figure 2: Ranking rules for different decision regions.
Figure 3: Visualization of Example 5.
Figure 4: Visualization of the comparison between the results of Example 5 and those of other methods.
Figure 5: Visualization of Example 6.
Figure 6: Visualization of the comparison between the results of Example 6 and those of other methods.
Figure 7: Influence of the decision mechanism coefficient v on the decision index value Q.
Figure 8: Heatmap of the Spearman correlation coefficient.
28 pages, 6900 KiB  
Article
A New Approach to Recognize Faces Amidst Challenges: Fusion Between the Opposite Frequencies of the Multi-Resolution Features
by Regina Lionnie, Julpri Andika and Mudrik Alaydrus
Algorithms 2024, 17(11), 529; https://doi.org/10.3390/a17110529 - 17 Nov 2024
Viewed by 528
Abstract
This paper proposes a new approach to pixel-level fusion that pairs opposite frequencies from the discrete wavelet transform with the Gaussian or Difference of Gaussian. The low-frequency sub-band of the discrete wavelet transform was fused with the Difference of Gaussian, while the high-frequency sub-bands were fused with the Gaussian. The fused sub-bands were reconstructed using an inverse discrete wavelet transform into one enhanced image, and these enhanced images were used to improve recognition performance in the face recognition system. The proposed method was tested on benchmark face datasets such as The Database of Faces (AT&T), the Extended Yale B Face Dataset, the BeautyREC Face Dataset, and the FEI Face Dataset. The results showed that our proposed method was robust and accurate against challenges such as lighting conditions, facial expressions, head pose, 180-degree rotation of the face profile, dark images, acquisition with a time gap, and conditions where the person wears accessories such as glasses. The proposed method is comparable to state-of-the-art methods and achieves high recognition performance (more than 99% accuracy). Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
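A minimal sketch of the opposite-frequency fusion idea follows, assuming a mean fusion rule and a simple resize of the Gaussian/DoG images to each sub-band's size; the paper's exact kernels, σ values, decomposition levels, and fusion rules may differ.

```python
# Sketch: fuse DWT sub-bands with Gaussian / Difference-of-Gaussian images (mean rule).
import numpy as np
import pywt
from scipy import ndimage

def opposite_frequency_fusion(img, wavelet="haar", sigma1=1.0, sigma2=2.0):
    g1 = ndimage.gaussian_filter(img, sigma1)
    g2 = ndimage.gaussian_filter(img, sigma2)
    dog = g1 - g2                                   # Difference of Gaussian (band-pass)

    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)      # one-level 2D DWT

    def resize_to(x, shape):                        # match sub-band size by zooming
        return ndimage.zoom(x, (shape[0] / x.shape[0], shape[1] / x.shape[1]), order=1)

    # Opposite-frequency pairing: low-frequency sub-band <-> DoG, high-frequency <-> Gaussian.
    cA_f = 0.5 * (cA + resize_to(dog, cA.shape))
    cH_f = 0.5 * (cH + resize_to(g1, cH.shape))
    cV_f = 0.5 * (cV + resize_to(g1, cV.shape))
    cD_f = 0.5 * (cD + resize_to(g1, cD.shape))

    return pywt.idwt2((cA_f, (cH_f, cV_f, cD_f)), wavelet)   # enhanced reconstruction

fused = opposite_frequency_fusion(np.random.rand(112, 92))   # random stand-in image
print(fused.shape)
```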
Figures:
Figure 1: Examples of images inside each dataset: (a) AT&T [40], (b) BeautyREC [41], (c) EYB [42,43], (d) EYB-Dark [42,43], (e) FEI [44], (f) FEI-FE [44].
Figure 2: The flowchart of our proposed method.
Figure 3: The MRA-DWT sub-bands (from left to right): approximation, horizontal, vertical, and diagonal sub-bands with Haar and one level of decomposition.
Figure 4: The illustration of the scaling function (left) and wavelet function (right) from the Haar wavelet.
Figure 5: Results from Gaussian filtering and the Difference of Gaussian (from left to right): original image, Gaussian-filtered image with σ1, Gaussian-filtered image with σ2, Difference of Gaussian.
Figure 6: Example of results from the proposed fusion (from top to bottom): AL, HG, VG, DG with image fusion DWT/IDWT-IF using the mean-mean rule.
Figure 7: The comparison of processing times for the AT&T Face Dataset; Exp. 5; Exp. 6 using db2 in DWT/IDWT-IF with levels of decomposition: one (Exp. 6a), three (Exp. 6b), five (Exp. 6c), and seven (Exp. 6d).
Figure 8: Accuracy results (%) for the AT&T Face Dataset (proposed method) using different wavelet families in MRA-DWT/IDWT with one level of decomposition: (a) Experiment 5; (b) Experiment 6.
Figure 9: Accuracy results (%) for the AT&T Face Dataset from Experiment 6 (proposed method) using the db2 wavelet in DWT/IDWT-IF and bior3.3 in MRA-DWT/IDWT with variations in the level of decomposition.
Figure 10: Accuracy results (%) for the AT&T Face Dataset from Experiment 6 (proposed method) using various wavelet families in DWT/IDWT-IF with five levels of decomposition and bior3.3 in MRA-DWT/IDWT.
Figure 11: Accuracy results (%) for the EYB Face Dataset for Experiments 2, 4, 5, and 6.
Figure 12: Accuracy results (%) for the EYB-Dark Face Dataset for Experiments 2, 4, 5, and 6.
Figure 13: Accuracy results (%) for the EYB-Dark Face Dataset for Experiment 6 using the fusion rules mean-mean, min-max, and max-min.
Figure 14: Fusion results of DWT/IDWT-IF with d2 and five levels of decomposition (from left to right), top: original image, min-max rule, max-min rule, mean-mean rule; bottom: the same fusion results scaled to the pixel value range.
Figure 15: Accuracy results (%) for the EYB-Dark Face Dataset for Experiment 6 with the mean-mean fusion rule using different wavelet families for MRA-DWT/IDWT.
Figure 16: Accuracy results (%) for the BeautyREC Dataset from Exp. 5 and 6, using either 1820 images or all (3000) images.
Figure 17: Accuracy results (%) for the BeautyREC Dataset: Exp. 5, LP-IF with MRA-DWT/IDWT (a) haar, (b) db2, (c) sym2, (d) bior2.6, (e) bior3.3; Exp. 6, DWT/IDWT-IF with MRA-DWT/IDWT (a) haar, (b) db2, (c) sym2, (d) bior2.6, (e) bior3.3; Exp. 6, DWT/IDWT-IF with haar for MRA-DWT/IDWT and the db2 wavelet with total levels of decomposition of (f) one, (g) three, (h) seven; Exp. 6, DWT/IDWT-IF with haar for MRA-DWT/IDWT and five levels of decomposition using wavelets (i) haar, (j) sym2, (k) bior2.6; Exp. 6, DWT/IDWT-IF using fusion rule (l) min-max, (m) max-min. All results came from SVM with the cubic kernel.
Figure 18: Example of high variation for one person inside the BeautyREC Face Dataset.
Figure 19: Accuracy results (%) for the FEI Face Database from Exp. 5 and 6.
Figure 20: Accuracy results (%) for the FEI-FE Face Database from Exp. 5 and 6.
19 pages, 5212 KiB  
Article
Assessment of Solar Energy Generation Toward Net-Zero Energy Buildings
by Rayan Khalil, Guilherme Vieira Hollweg, Akhtar Hussain, Wencong Su and Van-Hai Bui
Algorithms 2024, 17(11), 528; https://doi.org/10.3390/a17110528 - 16 Nov 2024
Viewed by 314
Abstract
With the continuous rise in the energy consumption of buildings, the study and integration of net-zero energy buildings (NZEBs) are essential for mitigating the harmful effects associated with this trend. However, developing an energy management system for such buildings is challenging due to uncertainties surrounding NZEBs. This paper introduces an optimization framework comprising two major stages: (i) renewable energy prediction and (ii) multi-objective optimization. A prediction model is developed to accurately forecast photovoltaic (PV) system output, while a multi-objective optimization model is designed to identify the most efficient ways to produce cooling, heating, and electricity at minimal operational costs. These two stages not only help mitigate uncertainties in NZEBs but also reduce dependence on imported power from the utility grid. Finally, to facilitate the deployment of the proposed framework, a graphical user interface (GUI) has been developed, providing a user-friendly environment for building operators to determine optimal scheduling and oversee the entire system. Full article
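To make the scheduling stage concrete, here is a deliberately simplified single-objective sketch: given an hourly PV forecast, load, and grid price, a linear program chooses the grid import that meets demand at minimum cost. The real framework is multi-objective and also schedules cooling and heating; all numbers below are invented.

```python
# Sketch: minimal cost-minimizing electricity dispatch given a PV forecast (toy data).
import numpy as np
from scipy.optimize import linprog

hours = 24
rng = np.random.default_rng(1)
pv = np.clip(np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, hours)), 0, None) * 40  # kW forecast
load = 25 + 10 * rng.random(hours)                                                 # kW demand
price = np.where((np.arange(hours) >= 17) & (np.arange(hours) <= 21), 0.30, 0.15)  # $/kWh

# Decision variable: grid import g_t >= 0; constraint g_t >= load_t - pv_t.
res = linprog(c=price,
              A_ub=-np.eye(hours), b_ub=-(load - pv),
              bounds=[(0, None)] * hours, method="highs")

grid = res.x
print(f"Daily grid cost: ${price @ grid:.2f}, total import: {grid.sum():.1f} kWh")
```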
Figures:
Figure 1: Energy building systems.
Figure 2: Correlation matrix illustrating the relationships between beam irradiance, diffuse irradiance, ambient temperature, wind speed, and PV system output.
Figure 3: Correlation plots between beam irradiance, diffuse irradiance, and PV system output: (a) PV output and beam irradiance; (b) PV output and diffuse irradiance.
Figure 4: Correlation plots between ambient temperature, wind speed, and PV system output: (a) PV output and ambient temperature; (b) PV output and wind speed.
Figure 5: Input data for the one-day simulation.
Figure 6: Input data for the three-day simulation.
Figure 7: Training and validation loss for one-day prediction.
Figure 8: Frequency of errors for one-day prediction.
Figure 9: Actual vs. predicted output over 24 h.
Figure 10: Training and validation loss for three-day prediction.
Figure 11: Frequency of errors for three-day prediction.
Figure 12: Actual vs. predicted output over 72 h.
Figure 13: Optimization model cooling output over 24 h.
Figure 14: Optimization model heating output over 24 h.
Figure 15: Optimization model electricity output over 24 h.
Figure 16: Optimization model cooling output over 72 h.
Figure 17: Optimization model heating output over 72 h.
Figure 18: Optimization model electricity output over 72 h.
Figure 19: Developed GUI.
22 pages, 3297 KiB  
Article
Sleep Apnea Classification Using the Mean Euler–Poincaré Characteristic and AI Techniques
by Moises Ramos-Martinez, Felipe D. J. Sorcia-Vázquez, Gerardo Ortiz-Torres, Mario Martínez García, Mayra G. Mena-Enriquez, Estela Sarmiento-Bustos, Juan Carlos Mixteco-Sánchez, Erasmo Misael Rentería-Vargas, Jesús E. Valdez-Resendiz and Jesse Yoe Rumbo-Morales
Algorithms 2024, 17(11), 527; https://doi.org/10.3390/a17110527 - 15 Nov 2024
Viewed by 357
Abstract
Sleep apnea is a sleep disorder that disrupts breathing during sleep. This study aims to classify sleep apnea using a machine learning approach and a Euler–Poincaré characteristic (EPC) model derived from electrocardiogram (ECG) signals. An ensemble K-nearest neighbors classifier and a feedforward neural network were implemented using the EPC model as inputs. ECG signals were preprocessed with a polynomial-based scheme to reduce noise, and the processed signals were transformed into a non-Gaussian physiological random field (NGPRF) for EPC model extraction from excursion sets. The classifiers were then applied to the EPC model inputs. Using the Apnea-ECG dataset, the proposed method achieved an accuracy of 98.5%, sensitivity of 94.5%, and specificity of 100%. Combining machine learning methods and geometrical features can effectively diagnose sleep apnea from single-lead ECG signals. The EPC model enhances clinical decision-making for evaluating this disease. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine (2nd Edition))
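The geometric feature at the heart of the method can be approximated as in the sketch below: threshold a 2D field at several levels u and record the Euler number (connected components minus holes) of each excursion set, giving an EPC curve. The random field is synthetic, and scikit-image is assumed for the Euler number; the paper builds its field from preprocessed ECG waves via a specific NGPRF construction not reproduced here.

```python
# Sketch: Euler-Poincaré characteristic (EPC) curve of excursion sets of a 2D field.
import numpy as np
from scipy import ndimage
from skimage.measure import euler_number

rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.normal(size=(200, 200)), sigma=4)  # smooth synthetic field
field = (field - field.mean()) / field.std()

levels = np.linspace(-2, 2, 21)
epc = [euler_number(field >= u, connectivity=2) for u in levels]  # excursion set {field >= u}

for u, chi in zip(levels[::5], epc[::5]):
    print(f"u = {u:+.1f}  ->  EPC = {chi}")
```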
Figures:
Figure 1: Methodology to classify sleep apnea. The first stage involves the ECG data; the second stage involves pre-processing, where the ECG is cleaned; the third stage involves random field conversion; the fourth stage focuses on geometrical properties, where the excursion set is obtained; the fifth stage is feature extraction; and the last stage involves selecting a classifier model (EKNN, SVM, FNN) to differentiate between an apnea patient and a non-apnea patient.
Figure 2: Polynomial-order results for the ECG sections P1(C) and P2(C). (a) Histogram of the polynomials of P-waves (blue) and T-waves (red) using FIT (Equation (10)). (b) Histogram of the polynomials of P-waves (blue) and T-waves (red) using the IAE criterion.
Figure 3: Random field of the P-waves, Q-peaks, and R-peaks over a small number of cycles (100).
Figure 4: P-waves and R-peaks from the binary image (excursion set) at level λ = 0.2 for an apnea patient. The binary representation of the P-wave varies depending on the level u.
Figure 5: P-waves and R-peaks from the binary image (excursion set) at level λ = 0.2 for a non-apnea patient. Normally, the P-wave follows a straight line at each level u.
Figure 6: EPC values from the sleep apnea patients. Patients 1 to 20 show OSA events at different times with severe cases; in the majority of these cases, the duration of the OSA event is about 1 h.
Figure 7: EPC values from sleep apnea patients with borderline cases (patients 21–25), where activity lasted less than an hour, and non-apnea patients (patients 26–35), who served as control cases.
Figure 8: The ensemble KNN classifier example; each dataset is composed of 65 random features. In this setup, 30 KNN classifiers (learners) are activated, and the final prediction is determined by a majority vote.
Figure 9: Feedforward multi-layer neural network architecture proposed for classifying sleep apnea: an input layer using a Swish activation function, two hidden layers (one with a ReLU function and the other with a tanh function), and an output layer equipped with a softmax function.
Figure 10: Confusion matrix for the training phase using 15th- and 21st-order polynomials; both cases had the same results.
Figure 11: Results of the confusion matrix for the validated data using a 15th-order polynomial.
Figure 12: Results for the test phase using a 21st-order polynomial.
Figure 13: Confusion matrix for the training phase using a 15th-order polynomial and FNN.
Figure 14: Results of the confusion matrix for the test set using a 15th-order polynomial and FNN.
Figure A1: Confusion matrix for the training set using the balanced data. Class 0 represents the apnea patients, and class 1 represents the non-apnea patients.
Figure A2: Confusion matrix for the test set using the balanced data. Class 0 represents the apnea patients, and class 1 represents the non-apnea patients.
22 pages, 2446 KiB  
Review
A Comprehensive Review of Autonomous Driving Algorithms: Tackling Adverse Weather Conditions, Unpredictable Traffic Violations, Blind Spot Monitoring, and Emergency Maneuvers
by Cong Xu and Ravi Sankar
Algorithms 2024, 17(11), 526; https://doi.org/10.3390/a17110526 - 15 Nov 2024
Viewed by 365
Abstract
With the rapid development of autonomous driving technology, ensuring the safety and reliability of vehicles under various complex and adverse conditions has become increasingly important. Although autonomous driving algorithms perform well in regular driving scenarios, they still face significant challenges when dealing with adverse weather conditions, unpredictable traffic rule violations (such as jaywalking and aggressive lane changes), inadequate blind spot monitoring, and emergency handling. This review aims to comprehensively analyze these critical issues, systematically review current research progress and solutions, and propose further optimization suggestions. By deeply analyzing the logic of autonomous driving algorithms in these complex situations, we hope to provide strong support for enhancing the safety and reliability of autonomous driving technology. Additionally, we will comprehensively analyze the limitations of existing driving technologies and compare Advanced Driver Assistance Systems (ADASs) with Full Self-Driving (FSD) to gain a thorough understanding of the current state and future development directions of autonomous driving technology. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
Figures:
Figure 1: Perception capabilities of autonomous driving systems in adverse weather conditions [36].
Figure 2: Integrated sensor systems in advanced driver assistance technologies [36].
Figure 3: Performance comparison of AI models on object detection.
Figure 4: Algorithms for managing complex traffic scenarios and violations.
31 pages, 7153 KiB  
Article
You Only Look Once Version 5 and Deep Simple Online and Real-Time Tracking Algorithms for Real-Time Customer Behavior Tracking and Retail Optimization
by Mohamed Shili, Osama Sohaib and Salah Hammedi
Algorithms 2024, 17(11), 525; https://doi.org/10.3390/a17110525 - 15 Nov 2024
Viewed by 345
Abstract
The rapid progress of computer vision and machine learning has opened up novel means for improving the purchasing experience in brick-and-mortar stores. This paper examines the use of the YOLOv5 (You Only Look Once) and DeepSORT (Deep Simple Online and Real-Time Tracking) algorithms for the real-time detection and analysis of purchasing tendencies in brick-and-mortar retail environments. By leveraging these algorithms, stores can track customer behavior, identify popular products, and monitor high-traffic areas, enabling businesses to adapt quickly to customer preferences and optimize store layout and inventory management. The methodology integrates YOLOv5 for accurate and rapid object detection with DeepSORT for the effective tracking of customer movements and interactions with products. Information collected from in-store cameras and sensors is processed to detect patterns in customer behavior, such as repeatedly inspected products, time spent in specific areas, and product handling. The results indicate a modest improvement in customer engagement, with conversion rates increasing by approximately 3 percentage points, and a decline in inventory waste levels, from 88% to 75%, after system implementation. This study provides essential insights into the further integration of algorithmic technology in physical retail locations and demonstrates the revolutionary potential of real-time behavior tracking in the retail industry. This research establishes the foundation for future developments in functional strategies and customer experience optimization by offering a solid framework for creating intelligent retail systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
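A minimal sketch of the detection-plus-tracking loop follows, assuming the public ultralytics/yolov5 hub model, the third-party deep-sort-realtime package, and OpenCV; the video path is a placeholder, and the paper's own pipeline, camera setup, and analytics layer are not shown.

```python
# Sketch: per-frame YOLOv5 detection fed into DeepSORT tracking.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # pretrained COCO detector
tracker = DeepSort(max_age=30)

cap = cv2.VideoCapture("store_camera.mp4")                # placeholder video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    det = model(frame[..., ::-1])                         # BGR -> RGB
    detections = []
    for x1, y1, x2, y2, conf, cls in det.xyxy[0].tolist():
        if int(cls) == 0:                                  # class 0 = person in COCO
            detections.append(([x1, y1, x2 - x1, y2 - y1], conf, "person"))
    for track in tracker.update_tracks(detections, frame=frame):
        if track.is_confirmed():
            l, t, r, b = track.to_ltrb()
            print(f"customer id={track.track_id} at ({l:.0f},{t:.0f})-({r:.0f},{b:.0f})")
cap.release()
```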
Figures:
Figure 1: Architecture of YOLOv5.
Figure 2: The architecture of DeepSORT.
Figure 3: The proposed architecture for this system.
Figure 4: The data flow diagram for this system.
Figure 5: Flowchart of the real-time retail tendency detection system.
Figure 6: Recommendations generated by the proposed system.
Figure 7: Product detection.
Figure 8: Confusion matrix for evaluating YOLOv5 detections.
Figure 9: Using the DeepSORT algorithm in a store.
Figure 10: Graph of the model accuracy.
Figure 11: Graph of the precision across the datasets.
Figure 12: Graph of the recall.
Figure 13: Graph of the F1-score calculation.
Figure 14: Overview of latency and computing cost.
Figure 15: Graph of accuracy and standard deviation over multiple executions.
Figure 16: YOLOv5 object detection performance.
Figure 17: DeepSORT tracking performance.
Figure 18: Conversion rates before and after implementation of YOLOv5 + DeepSORT.
Figure 19: Inventory waste levels before and after system integration.
Figure 20: Comparison of YOLOv5 + DeepSORT vs. traditional methods.
Figure 21: Confusion metrics for different models.
Figure 22: Performance comparison of YOLOv5 + DeepSORT vs. other methods.
17 pages, 681 KiB  
Article
Subsampling Algorithms for Irregularly Spaced Autoregressive Models
by Jiaqi Liu, Ziyang Wang, HaiYing Wang and Nalini Ravishanker
Algorithms 2024, 17(11), 524; https://doi.org/10.3390/a17110524 - 15 Nov 2024
Viewed by 304
Abstract
With the exponential growth of data across diverse fields, applying conventional statistical methods directly to large-scale datasets has become computationally infeasible. To overcome this challenge, subsampling algorithms are widely used to perform statistical analyses on smaller, more manageable subsets of the data. The effectiveness of these methods depends on their ability to identify and select data points that improve the estimation efficiency according to some optimality criteria. While much of the existing research has focused on subsampling techniques for independent data, there is considerable potential for developing methods tailored to dependent data, particularly in time-dependent contexts. In this study, we extend subsampling techniques to irregularly spaced time series data which are modeled by irregularly spaced autoregressive models. We present frameworks for various subsampling approaches, including optimal subsampling under A-optimality, information-based optimal subdata selection, and sequential thinning on streaming data. These methods use A-optimality or D-optimality criteria to assess the usefulness of each data point and prioritize the inclusion of the most informative ones. We then assess the performance of these subsampling methods using numerical simulations, providing insights into their suitability and effectiveness for handling irregularly spaced long time series. Numerical results show that our algorithms have promising performance. Their estimation efficiency can be ten times as high as that of the uniform sampling estimator. They also significantly reduce the computational time and can be up to forty times faster than the full-data estimator. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
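The general shape of these algorithms can be illustrated on a plain linear-regression surrogate: compute a pilot fit, assign each observation a sampling probability proportional to an influence-style score, draw a weighted subsample, and solve a weighted least-squares problem. This is only a schematic analogue; the paper derives its probabilities from the irregularly spaced autoregressive likelihood under A-/D-optimality.

```python
# Sketch: influence-weighted subsampling for a linear-model surrogate (schematic only).
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 100_000, 5, 1_000
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta + rng.normal(scale=1.0, size=n)

# Pilot estimate from a small uniform subsample.
pilot_idx = rng.choice(n, 2_000, replace=False)
beta_pilot, *_ = np.linalg.lstsq(X[pilot_idx], y[pilot_idx], rcond=None)

# Sampling probabilities proportional to |residual| * ||x|| (an A-optimality-style score).
scores = np.abs(y - X @ beta_pilot) * np.linalg.norm(X, axis=1)
probs = scores / scores.sum()

idx = rng.choice(n, r, replace=True, p=probs)
w = 1.0 / (r * probs[idx])                       # inverse-probability weights
Xw, yw = X[idx] * np.sqrt(w)[:, None], y[idx] * np.sqrt(w)
beta_sub, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print("subsample estimate:", np.round(beta_sub, 3))
```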
Figures:
Figure 1: Comparison of MSEs for estimating θ between different subsampling algorithms across varying subsample sizes r.
Figure 2: Comparison of MSE_ϕ for estimating ϕ between different subsampling algorithms across varying subsample sizes r.
Figure 3: Comparison of MSE_σ for estimating σ between different subsampling algorithms across varying subsample sizes r.
21 pages, 388 KiB  
Article
The Nelder–Mead Simplex Algorithm Is Sixty Years Old: New Convergence Results and Open Questions
by Aurél Galántai
Algorithms 2024, 17(11), 523; https://doi.org/10.3390/a17110523 - 14 Nov 2024
Viewed by 336
Abstract
We investigate and compare two versions of the Nelder–Mead simplex algorithm for function minimization. Two types of convergence are studied: the convergence of function values at the simplex vertices and the convergence of the simplex sequence. For the first type of convergence, we generalize the main result of Lagarias, Reeds, Wright and Wright (1998). For the second type of convergence, we also improve recent results which indicate that Lagarias et al.'s version of the Nelder–Mead algorithm has better convergence properties than the original Nelder–Mead method. This paper concludes with some open questions. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
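For readers who want to see the simplex method in action, the standard SciPy implementation can be called as follows on the Rosenbrock function; the tolerances shown are arbitrary choices, and SciPy's variant is not necessarily either of the two versions analyzed in the paper.

```python
# Sketch: minimizing the Rosenbrock function with SciPy's Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10_000})
print(result.x, result.fun, result.nit)   # converges to (1, 1) with f = 0
```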
5 pages, 188 KiB  
Editorial
Metaheuristic Algorithms in Optimal Design of Engineering Problems
by Łukasz Knypiński, Ramesh Devarapalli and Marcin Kamiński
Algorithms 2024, 17(11), 522; https://doi.org/10.3390/a17110522 - 14 Nov 2024
Viewed by 345
Abstract
Metaheuristic optimization algorithms (MOAs) are widely used to optimize the design process of engineering problems [...] Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
16 pages, 8588 KiB  
Article
Quotient Network-A Network Similar to ResNet but Learning Quotients
by Peng Hui, Jiamuyang Zhao, Changxin Li and Qingzhen Zhu
Algorithms 2024, 17(11), 521; https://doi.org/10.3390/a17110521 - 13 Nov 2024
Viewed by 285
Abstract
The emergence of ResNet provides a powerful tool for training extremely deep networks. The core idea behind it is to change the learning goals of the network. It no longer learns new features from scratch but learns the difference between the target and existing features. However, the difference between the two kinds of features does not have an independent and clear meaning, and the amount of learning is based on the absolute rather than the relative difference, which is sensitive to the size of existing features. We propose a new network that perfectly solves these two problems while still having the advantages of ResNet. Specifically, it chooses to learn the quotient of the target features with the existing features, so we call it the quotient network. In order to enable this network to learn successfully and achieve higher performance, we propose some design rules for this network so that it can be trained efficiently and achieve better performance than ResNet. Experiments on the CIFAR10, CIFAR100, and SVHN datasets prove that this network can stably achieve considerable improvements over ResNet by simply making tiny corresponding changes to the original ResNet network without adding new parameters. Full article
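The core idea translates into a small change to a standard residual block, sketched below in PyTorch: instead of adding the learned branch to the input (x + F(x)), the block multiplies it (x · F(x)), so the branch learns a quotient of target to existing features. The layer sizes and the positivity-oriented activation here are illustrative guesses, not the paper's exact design rules.

```python
# Sketch: a residual block vs. a "quotient" block (multiplicative instead of additive skip).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))
    def forward(self, x):
        return torch.relu(x + self.body(x))          # learn the difference (target - existing)

class QuotientBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))
    def forward(self, x):
        # Learn the quotient (target / existing); softplus keeps the factor positive,
        # so the update is relative to the size of the existing features.
        return x * torch.nn.functional.softplus(self.body(x))

x = torch.randn(2, 16, 32, 32)
print(ResidualBlock(16)(x).shape, QuotientBlock(16)(x).shape)
```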
Figures:
Figure 1: The residual module (left) and the quotient module (right).
Figure 2: Convolution processing before stacking quotient modules.
Figure 3: The residual module (left) and the quotient module (right) when changing the number of channels.
Figure 4: The middle feature maps when the input image is a frog; the left is for the quotient network and the right for ResNet. From top to bottom: the first, second, and third stacked modules.
Figure A1: The middle feature maps when the input image is a bird (same layout as Figure 4).
Figure A2: The middle feature maps when the input image is a plane (same layout as Figure 4).
Figure A3: The middle feature maps when the input image is a dog (same layout as Figure 4).
Figure A4: The middle feature maps when the input image is a ship (same layout as Figure 4).
Figure A5: The middle feature maps when the input image is a horse (same layout as Figure 4).
25 pages, 3540 KiB  
Article
Minimum-Energy Scheduling of Flexible Job-Shop Through Optimization and Comprehensive Heuristic
by Oludolapo Akanni Olanrewaju, Fabio Luiz Peres Krykhtine and Felix Mora-Camino
Algorithms 2024, 17(11), 520; https://doi.org/10.3390/a17110520 - 12 Nov 2024
Viewed by 389
Abstract
This study considers a flexible job-shop scheduling problem where energy cost savings are the primary objective and where the classical objective of the minimization of the make-span is replaced by the satisfaction of due times for each job. An original two-level mixed-integer formulation of this optimization problem is proposed, where the processed flows of material and their timing are explicitly considered. Its exact solution is discussed, and, considering its computational complexity, a comprehensive heuristic, balancing energy performance and due time constraint satisfaction, is developed to provide acceptable solutions in polynomial time to the minimum-energy flexible job-shop scheduling problem, even when considering its dynamic environment. The proposed approach is illustrated through a small-scale example. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
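As a toy illustration of the energy-versus-due-time trade-off that such a heuristic balances (not the paper's algorithm), the sketch below assigns each operation, in job order, to the eligible machine with the lowest energy cost among those that can still finish before the job's due time, falling back to the earliest-finishing machine otherwise. All data are invented.

```python
# Sketch: toy greedy assignment trading energy cost against due-time feasibility.
# Each operation lists eligible machines as (machine, processing_time, energy_cost).
jobs = {
    "J1": {"due": 18, "ops": [[("M1", 4, 5.0), ("M2", 3, 8.0)],
                              [("M2", 6, 6.0), ("M3", 4, 9.0)]]},
    "J2": {"due": 15, "ops": [[("M1", 5, 4.0), ("M3", 3, 7.0)],
                              [("M2", 4, 5.0), ("M3", 5, 6.5)]]},
}

machine_free = {"M1": 0, "M2": 0, "M3": 0}     # next free time per machine
total_energy = 0.0
for job, data in jobs.items():
    t = 0                                      # completion time of the previous operation
    for options in data["ops"]:
        def finish(opt):                       # earliest finish if this option is chosen
            m, dur, _ = opt
            return max(machine_free[m], t) + dur
        feasible = [o for o in options if finish(o) <= data["due"]]
        # Cheapest energy among feasible options, else the option that finishes earliest.
        m, dur, cost = min(feasible, key=lambda o: o[2]) if feasible else min(options, key=finish)
        t = finish((m, dur, cost))
        machine_free[m] = t
        total_energy += cost
        print(f"{job}: operation on {m}, finishes at t={t}, energy={cost}")
print("Total energy cost:", total_energy)
```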
Figures:
Figure 1: Graphical representation of a production plan.
Figure 2: Transfers between machines in a flexible job shop.
Figure 3: Delays and energy costs for machine assignment to O_ik.
Figure 4: Flowchart of the proposed heuristic.
Figure 5: Current machine assignment after 7 iterations of the inner loop.
Figure 6: Example of a local decision space.
Figure 7: Small flexible job shop (job 1 in black; job 2 in orange).
Figure 8: Gantt diagrams of the different heuristics.
Figure 9: The considered job shop.
Figure 10: Gantt diagram of the HET heuristic solution.
Figure 11: Gantt diagram of the TTE solution.
Figure 12: Gantt diagram of the ETT solution.
26 pages, 862 KiB  
Article
Can the Plantar Pressure and Temperature Data Trend Show the Presence of Diabetes? A Comparative Study of a Variety of Machine Learning Techniques
by Eduardo A. Gerlein, Francisco Calderón, Martha Zequera-Díaz and Roozbeh Naemi
Algorithms 2024, 17(11), 519; https://doi.org/10.3390/a17110519 - 12 Nov 2024
Viewed by 480
Abstract
This study aimed to explore the potential of predicting diabetes by analyzing trends in plantar thermal and plantar pressure data, either individually or in combination, using various machine learning techniques. A total of twenty-six participants, comprising thirteen individuals diagnosed with diabetes and thirteen healthy individuals, walked along a 20 m path. In-shoe plantar pressure data were collected and the plantar temperature was measured both immediately before and after the walk. Each participant completed the trial three times, and the average data between the trials were calculated. The research was divided into three experiments: the first evaluated the correlations between the plantar pressure and temperature data; the second focused on predicting diabetes using each data type independently; and the third combined both data types and assessed the effect of such on the predictive accuracy. For the experiments, 20 regression models and 16 classification algorithms were employed, and the performance was evaluated using a five-fold cross-validation strategy. The outcomes of the initial set of experiments indicated that the machine learning models did not find significant correlations between the thermal data and the pressure estimates. This was consistent with the findings from the prior correlation analysis, which showed weak relationships between these two data modalities. However, a shift in focus towards predicting diabetes by aggregating the temperature and pressure data led to encouraging results, demonstrating the effectiveness of this approach in accurately predicting the presence of diabetes. The analysis revealed that, while several classifiers demonstrated reasonable metrics when using standalone variables, the integration of thermal and pressure data significantly improved the predictive accuracy. Specifically, when only plantar pressure data were used, the Logistic Regression model achieved the highest accuracy at 68.75%. Predictions based solely on temperature data showed the Naive Bayes model in the lead, with an accuracy of 87.5%. Notably, the highest accuracy of 93.75% was observed when both the temperature and pressure data were combined, with the Extra Trees Classifier performing the best. These results suggest that combining temperature and pressure data enhances the model’s predictive accuracy. This indicates the importance of multimodal data integration and its potential in diabetes prediction. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
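The final modeling step can be sketched as follows, assuming the pressure and temperature features have already been extracted into per-participant arrays; the arrays below are random placeholders for the 26 participants, and scikit-learn is assumed.

```python
# Sketch: 5-fold evaluation of pressure-only, temperature-only, and combined feature sets.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 26                                         # 13 participants with diabetes, 13 without
pressure = rng.normal(size=(n, 14))            # placeholder plantar-pressure features
temperature = rng.normal(size=(n, 14))         # placeholder plantar-temperature features
y = np.r_[np.ones(13), np.zeros(13)]

for name, X in [("pressure", pressure),
                ("temperature", temperature),
                ("combined", np.hstack([pressure, temperature]))]:
    acc = cross_val_score(ExtraTreesClassifier(n_estimators=200, random_state=0),
                          X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:<11s} mean CV accuracy: {acc:.3f}")
```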
Figures:
Figure 1: A 25 m walkway; the designated area avoided "quick" twisting movements.
Figure 2: Thermal image capture setup.
Figure 3: Regions of interest on the feet marked for thermal and pressure measurements: hallux, 1st metatarsus, 3rd metatarsus, 5th metatarsus, midfoot (proximal to the 5th metatarsus apophysis), medial arch on the proximal 1st metatarsus, and heel.
Figure 4: Correlation index matrix representing the relationship between temperature and pressure features across different individuals. The matrix shows correlation coefficients for pressure and temperature data between the left and right feet of each individual, as well as across different individuals.
Figure 5: Performance comparison between the Extra Trees Classifier and Random Forest Classifier. (a,c) display the feature-importance plots, with (a) highlighting the Extra Trees Classifier's focus on thermal data and (c) illustrating the Random Forest Classifier's balanced consideration of both temperature and pressure features. (b,d) depict the decision boundaries for the Extra Trees Classifier and Random Forest Classifier, respectively, showing how the models classify diabetic (1) and non-diabetic (0) cases based on these features. Random Forest's mixed use of temperature and pressure data underscores its more comprehensive approach to predicting diabetes.
21 pages, 623 KiB  
Article
Attribute Relevance Score: A Novel Measure for Identifying Attribute Importance
by Pablo Neirz, Hector Allende and Carolina Saavedra
Algorithms 2024, 17(11), 518; https://doi.org/10.3390/a17110518 - 9 Nov 2024
Viewed by 505
Abstract
This study introduces a novel measure for evaluating attribute relevance, specifically designed to accurately identify attributes that are intrinsically related to a phenomenon, while being sensitive to the asymmetry of those relationships and noise conditions. Traditional variable selection techniques, such as filter and [...] Read more.
This study introduces a novel measure for evaluating attribute relevance, specifically designed to accurately identify attributes that are intrinsically related to a phenomenon, while being sensitive to the asymmetry of those relationships and noise conditions. Traditional variable selection techniques, such as filter and wrapper methods, often fall short in capturing these complexities. Our methodology, grounded in decision trees but extendable to other machine learning models, was rigorously evaluated across various data scenarios. The results demonstrate that our measure effectively distinguishes relevant from irrelevant attributes and highlights how relevance is influenced by noise, providing a more nuanced understanding compared to established methods such as Pearson, Spearman, Kendall, MIC, MAS, MEV, GMIC, and Phik. This research underscores the importance of phenomenon-centric explainability, reproducibility, and robust attribute relevance evaluation in the development of predictive models. By enhancing both the interpretability and contextual accuracy of models, our approach not only supports more informed decision making but also contributes to a deeper understanding of the underlying mechanisms in diverse application domains, such as biomedical research, financial modeling, astronomy, and others. Full article
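To illustrate the kind of comparison reported in this abstract, the sketch below contrasts classical correlation measures with a simple tree-based relevance score on a noisy nonlinear relationship. The tree-based score shown is a generic stand-in, not the paper's Attribute Relevance Score, whose exact definition is not reproduced here.

```python
# Illustrative comparison of a tree-based relevance score with classical
# correlation measures on a noisy nonlinear relationship. This is NOT the
# paper's ARS; it is a generic cross-validated tree score used as a stand-in.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
x_rel = rng.uniform(-3, 3, n)                      # informative attribute
x_irr = rng.uniform(-3, 3, n)                      # irrelevant attribute
y = np.sin(x_rel) + 0.3 * rng.normal(size=n)       # nonlinear signal + noise

def tree_relevance(x, y):
    """Cross-validated R^2 of a shallow tree predicting y from x alone."""
    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    score = cross_val_score(tree, x.reshape(-1, 1), y, cv=5, scoring="r2").mean()
    return max(score, 0.0)                         # negative R^2 means "no relevance"

for name, x in [("informative", x_rel), ("irrelevant", x_irr)]:
    print(name,
          f"Pearson={abs(pearsonr(x, y)[0]):.2f}",
          f"Spearman={abs(spearmanr(x, y)[0]):.2f}",
          f"Kendall={abs(kendalltau(x, y)[0]):.2f}",
          f"tree R^2={tree_relevance(x, y):.2f}")
```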
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
Show Figures

Figure 1. Scatter plots illustrating various bivariate relationships generated. Each subplot represents a different synthetic dataset: (A–G) are datasets generated from multivariate normal distributions with varying correlations; (H–N) are datasets generated from rotated normal distributions, illustrating different linear relationships; (O–U) represent other complex, nonlinear patterns.
Figure 2. Performance of ARS with different base models on Equation (15) without the noise term η on the benchmark dataset. The informative variables (x1, x2, and x3) and noninformative variables (x4, x5, and x6) are evaluated across three base models: decision trees, linear regression, and k-nearest neighbors.
Figure 3. Performance of various dependency measures on informative variables v1^(j), v2^(j), and v3^(j) across different noise levels (j). This figure illustrates the mean and standard deviation of each measure, highlighting the stability of ARS compared to other metrics.
Figure 4. Performance of various dependency measures on noninformative variables v4^(j), v5^(j), and v6^(j) across different noise levels (j). This figure shows the mean and standard deviation for each measure, demonstrating the high variability of traditional metrics compared to the consistently low scores of ARS. Absolute values are considered for readability.
13 pages, 1056 KiB  
Article
A Framework for Evaluating Dynamic Directed Brain Connectivity Estimation Methods Using Synthetic EEG Signal Generation
by Zoran Šverko, Saša Vlahinić and Peter Rogelj
Algorithms 2024, 17(11), 517; https://doi.org/10.3390/a17110517 - 9 Nov 2024
Viewed by 417
Abstract
This study presents a method for generating synthetic electroencephalography (EEG) signals to test dynamic directed brain connectivity estimation methods. Current methods for evaluating dynamic brain connectivity estimation techniques face challenges due to the lack of ground truth in real EEG signals. To [...] Read more.
This study presents a method for generating synthetic electroencephalography (EEG) signals to test dynamic directed brain connectivity estimation methods. Current methods for evaluating dynamic brain connectivity estimation techniques face challenges due to the lack of ground truth in real EEG signals. To address this, we propose a framework for generating synthetic EEG signals with predefined dynamic connectivity changes. Our approach allows for evaluating and optimizing dynamic connectivity estimation methods, particularly Granger causality (GC). We demonstrate the framework’s utility by identifying optimal window sizes and regression orders for GC analysis. The findings could guide the development of more accurate dynamic connectivity techniques. Full article
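A minimal sketch of sliding-window Granger causality on a synthetic signal pair with a predefined connectivity change is shown below; the window size, regression order, and coupling model are illustrative assumptions, not the paper's generation framework.

```python
# Minimal sketch of sliding-window bivariate Granger causality (x -> y),
# estimated by comparing residual variances of restricted vs. full AR models.
import numpy as np

def granger_xy(x, y, p):
    """GC from x to y with regression order p: ln(var_restricted / var_full)."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    res_r = Y - lags_y @ np.linalg.lstsq(lags_y, Y, rcond=None)[0]
    full = np.hstack([lags_y, lags_x])
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Synthetic pair where x drives y only in the second half of the record,
# a stand-in for a predefined dynamic connectivity change.
rng = np.random.default_rng(0)
x = rng.normal(size=4000)
y = rng.normal(size=4000)
y[2000:] += 0.8 * x[1999:-1]             # directed x -> y coupling with lag 1

win, step, order = 400, 500, 9           # window size / hop / regression order (assumed)
for start in range(0, len(x) - win, step):
    gc = granger_xy(x[start:start + win], y[start:start + win], order)
    print(f"window @ {start:4d}: GC(x->y) = {gc:.3f}")
```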
(This article belongs to the Special Issue Artificial Intelligence and Signal Processing: Circuits and Systems)
Show Figures

Figure 1. Connectivity matrix GC of order 19 for subject S001R01 [25], from the baseline eyes-open experiment [29].
Figure 2. Synthetically generated signal and reference GC values (in three intervals) [29].
Figure 3. Dynamic GC values estimated using a sliding window analysis with a window size of 2 s (a) and 400 ms (b); RFC stands for reference functional connectivity, which is computed for the whole interval of the generated signals [29].
Figure 4. ε_RMS values for window sizes ranging from the minimal value to 3000 ms with a step of 25 ms, and for different regression orders ranging from 5 to 35 with a step of 1.
Figure 5. Dynamic GC values estimated using sliding window analysis with a window size of 875 ms, a regression order of 9, and a minimal ε_RMS value of 0.13.
Figure 6. Optimum window size in terms of the minimal ε_RMS with respect to selected regression order M_GC (blue asterisks) and corresponding minimal ε_RMS (red triangles).
Figure 7. TT values for window sizes ranging from the minimal to 3000 ms with a step of 25 ms, and for different regression orders M_GC from 5 to 35.
Figure 8. Dynamic GC values estimated using sliding window analysis with a window size of 400 ms, a regression order of 12, and a minimum TT of 0.021 s.
Figure 9. Optimum window size in terms of the minimal TT with respect to selected regression order M_GC (blue asterisks) and corresponding minimal TT (red triangles).
Figure 10. Product of TT and ε_RMS for window sizes ranging from minimal to 3000 ms with a step of 25 ms, and for different regression orders from 5 to 35.
Figure 11. Dynamic GC estimated using sliding window analysis with a window size of 400 ms, a regression order of 12, and a minimal product of TT and ε_RMS equaling 0.02.
Figure 12. Optimum window size in terms of the minimization of the product TT·ε_RMS with respect to selected regression order M_GC (blue asterisks) and corresponding minimal TT·ε_RMS (red triangles).
16 pages, 2594 KiB  
Article
Topological Reinforcement Adaptive Algorithm (TOREADA) Application to the Alerting of Convulsive Seizures and Validation with Monte Carlo Numerical Simulations
by Stiliyan Kalitzin
Algorithms 2024, 17(11), 516; https://doi.org/10.3390/a17110516 - 8 Nov 2024
Viewed by 400
Abstract
The detection of adverse events—for example, convulsive epileptic seizures—can be critical for patients suffering from a variety of pathological syndromes. Algorithms using remote sensing modalities, such as a video camera input, can be effective for real-time alerting, but the broad variability of environments [...] Read more.
The detection of adverse events—for example, convulsive epileptic seizures—can be critical for patients suffering from a variety of pathological syndromes. Algorithms using remote sensing modalities, such as a video camera input, can be effective for real-time alerting, but the broad variability of environments and numerous nonstationary factors may limit their precision. In this work, we address the issue of adaptive reinforcement that can provide flexible applications in alerting devices. The general concept of our approach is the topological reinforced adaptive algorithm (TOREADA). Three essential steps—embedding, assessment, and envelope—act iteratively during the operation of the system, thus providing continuous, on-the-fly, reinforced learning. We apply this concept in the case of detecting convulsive epileptic seizures, where three parameters define the decision manifold. Monte Carlo-type simulations validate the effectiveness and robustness of the approach. We show that the adaptive procedure finds the correct detection parameters, providing optimal accuracy from a large variety of initial states. With respect to the separation quality between simulated seizure and normal epochs, the detection reinforcement algorithm is robust within the broad margins of signal-generation scenarios. We conclude that our technique is applicable to a large variety of event detection systems. Full article
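As a rough illustration of the threshold-plus-detection-depth decision described for convulsive seizure alerting, the sketch below raises an alarm when a synthetic epimarker stays above a threshold for a chosen number of samples; the signal model and parameter values are placeholders, not the reinforced settings obtained by TOREADA.

```python
# Minimal sketch of the threshold-plus-duration decision: raise an alarm when
# the (synthetic) epimarker stays above a threshold for at least the chosen
# detection depth. Threshold and depth values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.2, 3600)                 # "normal" background epimarker
signal[1800:1900] += rng.normal(1.5, 0.3, 100)      # simulated convulsive epoch

threshold, depth = 0.8, 20                          # detection threshold and depth (samples)
run = 0
for t, flag in enumerate(signal > threshold):
    run = run + 1 if flag else 0
    if run == depth:
        print(f"seizure alarm raised at sample {t}")
        break
```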
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Graphical abstract
Figure 1. “Category” representation of the TOREADA approach to reinforcement.
Figure 2. Illustration of the quantities used in the seizure-detection decision algorithm. The vertical axis represents the output of the optical flow and filtering preprocessing. The horizontal axis is the running time. The dashed blue line is the selected threshold and the blue arrow on the top is the selected observation elapsed time, or the detection depth. The gray box represents the preconvulsive seizure onset (most typically the tonic phase) and the yellow area is the observed convulsive, clonic phase.
Figure 3. General scheme for Monte Carlo type of simulating epimarker. The output signal, the synthetic epimarker, is generated according to a probabilistic model that determines the distribution of values at each time point depending on whether the system is in a predefined ictal (seizure) state or in a “normal” state.
Figure 4. Real (top frame) and synthetic (bottom frame) epimarker. In both plots, the epimarker is the blue trace and the vertical axes indicate its values. Note that the negative values on the top frame are due to a conveniently introduced offset as a compensation for white noise background. The horizontal axes on both plots represent the real and simulation time (sample steps), correspondingly. The vertical red line on the top frame represents the beginning of the seizure as detected in real time by the detector. On the bottom frame, the two red lines are at the beginning and the end of the model seizure event; the red star is the detected onset by the algorithm.
Figure 5. Results from simulations of the reinforcement procedure starting from various initial detector settings. The left plot is the sensitivity presented in pseudo-color code from the color bar. The vertical axis is the trial number; the corresponding parameters are given in Table 1. The horizontal axis is the epoch number (each epoch of 3600 simulation steps; see Section 2.5) from the start of the simulation experiment. The right plot has the same notations but shows the specificity evolution.
Figure 6. Statistical distributions of the sensitivity (left plot) and the specificity (right plot) derived from the last 50 epochs of the simulations of the reinforcement procedure starting from various initial detector settings, using the same data as shown in Figure 5. The vertical axes are the corresponding quantities and the horizontal axes represent the number of initial parameter sets, as given in Table 1. The boxes are the 25th–75th percentiles, the red lines are the median values, the whiskers denote the 5th–95th percentile values, and the red crosses are the outliers.
Figure 7. Traces (black lines) of the detector parameter adaptation from simulations of the reinforcement procedure starting from various initial detector settings given in Table 1. The horizontal axis is the epoch number (each epoch of 3600 simulation steps; see Section 2.5) from the start of the simulation experiment. The labels on the vertical axes denote the corresponding parameter.
Figure 8. The notations are the same as those of Figure 5. Each row of images represents a session with the different initial detection parameters given in Table 2.
Figure 9. Statistical distributions of sensitivity (the left column of the plots) and specificity (the right column of the plots) for the same data as shown in Figure 8. Vertical axes represent the quantities; the labels on the horizontal axes indicate the various confusion factors as given in the sixth column of Table 3. The boxplot features are the same as those of Figure 6.
Figure 10. The same notations as in Figure 6. The vertical axes here represent the simulated maximal seizure duration in simulation steps. The three lines of images correspond to the three different initial detection parameters from Table 2.
Figure 11. Statistical distributions of sensitivity (the left column of the plots) and specificity (the right column of the plots) for the same data as shown in Figure 10. Vertical axes represent the quantities; the labels on the horizontal axes indicate the various maximal seizure lengths. The boxplot features are the same as those of Figure 6.
28 pages, 5224 KiB  
Article
Unsupervised Image Segmentation on 2D Echocardiogram
by Gabriel Farias Cacao, Dongping Du and Nandini Nair
Algorithms 2024, 17(11), 515; https://doi.org/10.3390/a17110515 - 7 Nov 2024
Viewed by 428
Abstract
Echocardiography is a widely used, non-invasive imaging technique for diagnosing and monitoring heart conditions. However, accurate segmentation of cardiac structures, particularly the left ventricle, remains a complex task due to the inherent variability and noise in echocardiographic images. Current supervised models have achieved [...] Read more.
Echocardiography is a widely used, non-invasive imaging technique for diagnosing and monitoring heart conditions. However, accurate segmentation of cardiac structures, particularly the left ventricle, remains a complex task due to the inherent variability and noise in echocardiographic images. Current supervised models have achieved state-of-the-art results but are highly dependent on large, annotated datasets, which are costly and time-consuming to obtain and depend on the quality of the annotated data. These limitations motivate the need for unsupervised methods that can generalize across different image conditions without relying on annotated data. In this study, we propose an unsupervised approach for segmenting 2D echocardiographic images. By combining customized objective functions with convolutional neural networks (CNNs), our method effectively segments cardiac structures, addressing the challenges posed by low-resolution and gray-scale images. Our approach leverages techniques traditionally used outside of medical imaging, optimizing feature extraction through CNNs in a data-driven manner and with a new and smaller network design. Another key contribution of this work is the introduction of a post-processing algorithm that refines the segmentation to isolate the left ventricle in both diastolic and systolic positions, enabling the calculation of the ejection fraction (EF). This calculation serves as a benchmark for evaluating the performance of our unsupervised method. Our results demonstrate the potential of unsupervised learning to improve echocardiogram analysis by overcoming the limitations of supervised approaches, particularly in settings where labeled data are scarce or unavailable. Full article
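Since the ejection fraction serves as the benchmark in this work, the sketch below shows one simple way to derive an EF value from binary left-ventricle masks at end-diastole and end-systole; using mask pixel counts as a volume proxy is an assumption made for illustration and may differ from the paper's EF computation.

```python
# Minimal sketch of computing an ejection fraction (EF) from binary LV masks at
# end-diastole and end-systole. Mask pixel counts are used as a crude volume proxy.
import numpy as np

def ef_from_masks(mask_ed: np.ndarray, mask_es: np.ndarray) -> float:
    edv = float(mask_ed.sum())           # end-diastolic "volume" proxy (pixels)
    esv = float(mask_es.sum())           # end-systolic "volume" proxy (pixels)
    return 100.0 * (edv - esv) / edv

# Placeholder masks: a larger and a smaller disc standing in for the LV chamber.
yy, xx = np.mgrid[0:112, 0:112]
mask_ed = (xx - 56) ** 2 + (yy - 56) ** 2 < 30 ** 2
mask_es = (xx - 56) ** 2 + (yy - 56) ** 2 < 22 ** 2
print(f"EF ~ {ef_from_masks(mask_ed, mask_es):.1f}%")
```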
Show Figures

Figure 1. Diagram of the neural network architecture used for unsupervised segmentation. The network consists of an encoder–decoder structure with skip connections, downsampling and upsampling layers, and convolutional operations. The input batch of echocardiogram images is progressively encoded, with the middle bottleneck capturing deep features, followed by decoding to produce both the reconstructed image and semantic segmentation masks. The network is optimized using reconstruction loss, similarity and contour regularization loss.
Figure 2. Score distribution for training, validation, and test datasets.
Figure 3. Training and validation loss over epochs, highlighting the smallest losses for both training and validation at epoch 32. The model was saved at this point, as it achieved the best validation loss of 0.0602.
Figure 4. Reduction in the number of segmentation masks as training progresses, illustrating the refinement in segmentation and the need for additional processing to extract chamber volumes.
Figure 5. Watershed segmentation pipeline. Top row, from left to right: (1) original image, (2) segmented image after the CNN model, (3) pre-processing for the watershed by isolating the background and heart walls. Bottom row, from left to right: (4) after dilation and closing to reduce internal noise in the structures, (5) distance transform applied, (6) final segmentation using the 3D watershed algorithm.
Figure 6. Comparison between the network output and the 3D Watershed segmentation. (a) Network output: the network accurately segments the heart wall but does not isolate the left ventricle chamber, leading to a segmentation that includes both the heart wall and the chamber. (b) Watershed 3D: after applying the Watershed algorithm, the left ventricle chamber is successfully isolated from the surrounding heart wall, providing a clearer and more anatomically accurate segmentation.
Figure 7. Comparison of segmentation results for Systolic (top row) and Diastolic (bottom row) frames: (a) original frame, (b) ground truth mask, (c) our model mask, (d) EchoNet mask.
Figure 8. Good performance: original vs. segmented images for 2 frames of 2 different videos. Good performance images show well-defined boundaries and accurate segmentation.
Figure 9. Bad performance: original vs. segmented images for 2 frames of 2 different videos. Poor performance examples exhibit incorrect boundary detection or incomplete segmentation.
Figure A1. (a) Example of videos not selected after quality assessment. Lower contrast between the chambers and the heart walls, with more noise inside the chambers; (b) example of videos selected after quality assessment. Better sharpness, well-defined edges, and clear chamber visibility.
Figure A2. Normalized training and validation loss over epochs for models trained with different frame sizes.
Figure A3. Grid search validation losses for different weight configurations. The selected configuration (black dashed line) balances loss value and segmentation quality.
Figure A4. Example of segmentation results from models trained with different combinations of loss components. The first result is from a model using reconstruction and similarity losses (rec = 1, cont = 0, sim = 1), showing clearer shapes but losing boundary information. The second result comes from a model using reconstruction and contour losses (rec = 1, cont = 1, sim = 0), which preserves the boundaries better but struggles with internal details and accuracy in the shapes. The third result is from a model using contour and similarity losses (rec = 0, cont = 1, sim = 1), which leads to excessive internal class noise (within the segmented pixels). These results highlight the importance of a balanced configuration that incorporates all three components to maintain both boundary clarity and internal structure.
Figure A5. Validation loss for different ablation configurations. The lower loss values resulted in overfitting and poor segmentation quality, while higher loss configurations provided better image segmentation performance.
Figure A6. Top row: (1) original image, (2) segmentation using the proposed model, (3) result after applying Watershed algorithm. Bottom row: (4) segmentation using pre-trained DeepLabV3 (ResNet-101), (5) Mask R-CNN (ResNet-50 FPN), and (6) U-Net (ResNet-34 encoder with ImageNet weights). The pre-trained models were used for comparison to highlight the strengths and weaknesses of different architectures on gray-scale echocardiographic data.
Figure A7. Top row: (1) original echocardiogram, (2) segmentation using W-Net. Bottom row: (3) original echocardiogram, (4) segmentation using W-Net. While the W-Net is able to differentiate some background areas, it struggles with boundary detection and noise handling, as well as accurately segmenting the heart chambers.
Figure A8. Additional comparison of segmentation results for systolic (top row) and diastolic (bottom row) frames. In this sample, a noticeable chamber deformity and imaging artifacts are present, which pose challenges to accurate segmentation. (a) Original frame, (b) ground truth mask, (c) our model mask, (d) EchoNet mask. The comparison illustrates how each segmentation method handles the deformity and artifacts. Our model shows resilience to these distortions, maintaining clear boundaries, while EchoNet struggles to account for the abnormalities.
Figure A9. Segmentation algorithm pipeline applied to a sample of the private dataset: (a) Original image, (b) initial segmentation (our model), (c) final segmentation (after Watershed).
22 pages, 436 KiB  
Article
Data-Driven Formation Control for Multi-Vehicle Systems Induced by Leader Motion
by Gianfranco Parlangeli
Algorithms 2024, 17(11), 514; https://doi.org/10.3390/a17110514 - 7 Nov 2024
Viewed by 304
Abstract
In this paper, a leader motion mechanism is studied for the finite time achievement of any desired formation of a multi-agent system. The approach adopted in this paper exploits a recent technique based on leader motion to the formation control problem of second-order [...] Read more.
In this paper, a leader motion mechanism is studied for the finite time achievement of any desired formation of a multi-agent system. The approach adopted in this paper applies a recent leader-motion-based technique to the formation control problem of second-order systems, with particular attention to networks of mobile devices and teams of vehicles. After a thorough description of the problem framework, the leader motion mechanism is designed to accomplish the prescribed formation attainment in finite time. Both asymptotic and transient behavior are thoroughly analyzed to derive the appropriate analytical conditions for the controller design. The overall algorithm is then finalized by two procedures that rely on local data only, and the leader motion mechanism is performed based on data collected by the leader during a preliminary experimental stage. A final section of simulation results closes the paper, confirming the effectiveness of the proposed strategy for formation control of a multi-agent system. Full article
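For intuition, the sketch below simulates a generic double-integrator formation protocol on a small fixed graph; it is not the paper's protocol (12) nor its leader-motion mechanism, and the graph, gains, and desired formation are placeholders.

```python
# Minimal sketch of a second-order (double-integrator) formation protocol:
# each agent accelerates toward neighbours' positions offset by the desired
# formation, plus velocity alignment. Generic illustration only.
import numpy as np

A = np.array([[0, 1, 0, 1],              # adjacency of a 4-agent cycle (placeholder)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
desired = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # unit square

rng = np.random.default_rng(0)
p = rng.uniform(-2, 2, (4, 2))           # positions
v = np.zeros((4, 2))                     # velocities
dt, k_v = 0.01, 2.0                      # step size and velocity-coupling gain

for _ in range(5000):
    u = np.zeros_like(p)
    for i in range(4):
        for j in range(4):
            if A[i, j]:
                u[i] -= (p[i] - p[j]) - (desired[i] - desired[j])
                u[i] -= k_v * (v[i] - v[j])
    v += dt * u
    p += dt * v

print("relative positions:\n", np.round(p - p[0], 3))   # ~ desired - desired[0]
```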
(This article belongs to the Special Issue Intelligent Algorithms for Networked Robotic Systems)
Show Figures

Graphical abstract
Figure 1. Graph topology of the multi-agent system considered in the simulations.
Figure 2. Evolution of multi-agent system trajectories under protocol (12) and no leader action. Solid lines are used to plot the positions over time, while dotted lines are adopted for agents’ velocity plots.
Figure 3. Evolution of multi-agent system trajectories under protocol (12) and leader motion action: consensus value along x direction. Solid lines are used to plot the positions over time, while dotted lines are adopted for agents’ velocity plots.
Figure 4. Evolution of multi-agent system trajectories under protocol (12) and leader motion action: consensus value along y direction. Solid lines are used to plot the positions over time, while dotted lines are adopted for agents’ velocity plots.
Figure 5. Multi-agent system trajectories under protocol (12) and leader action. Desired formation: hexagon.
Figure 6. Evolution of multi-agent system trajectories under protocol (12) and leader action. Absolute positions of agents in the hexagon configuration along the x direction.
Figure 7. Evolution of multi-agent system trajectories under protocol (12) and leader action. Absolute positions of agents in the hexagon configuration along the y direction.
Figure 8. Multi-agent system trajectories under protocol (12) and leader action. Desired formation: rectangle.
Figure 9. Multi-agent system trajectories under protocol (12) and leader action. Desired formation: zig-zag shape.
Figure 10. Multi-agent system trajectories under protocol (12) and leader action. Desired formation: star with five vertices.
16 pages, 2633 KiB  
Article
Bus Network Adjustment Pre-Evaluation Based on Biometric Recognition and Travel Spatio-Temporal Deduction
by Qingbo Wei, Nanfeng Zhang, Yuan Gao, Cheng Chen, Li Wang and Jingfeng Yang
Algorithms 2024, 17(11), 513; https://doi.org/10.3390/a17110513 - 7 Nov 2024
Viewed by 329
Abstract
A critical component of bus network adjustment is the accurate prediction of potential risks, such as the likelihood of complaints from passengers. Traditional simulation methods, however, face limitations in identifying passengers and understanding how their travel patterns may change. To address this issue, [...] Read more.
A critical component of bus network adjustment is the accurate prediction of potential risks, such as the likelihood of complaints from passengers. Traditional simulation methods, however, face limitations in identifying passengers and understanding how their travel patterns may change. To address this issue, a pre-evaluation method has been developed, leveraging the spatial distribution of bus networks and the spatio-temporal behavior of passengers. The method includes stages of travel demand analysis, accessible path set calculation, passenger assignment, and evaluation of key indicators. First, we derive each passenger’s actual origin and destination (OD) stops from bus card (or passenger code) payment data and biometric recognition data, with the OD as one of the main input parameters. Second, a digital bus network model is constructed to represent the logical and spatial relationships between routes and stops. Upon inputting bus line adjustment parameters, these relationships allow for the precise and automatic identification of the affected areas, as well as the calculation of accessible paths of each OD pair. Third, the factors influencing passengers’ path selection are analyzed, and a predictive model is built to estimate post-adjustment path choices. A genetic algorithm is employed to optimize the model’s weights. Finally, various metrics, such as changes in travel routes and ride times, are analyzed by integrating passenger profiles. The proposed method was tested on the case of the Guangzhou 543 route adjustment. Results show that the accuracy of the number of predicted trips after adjustment is 89.6%, and the predicted flow of each associated bus line is also consistent with the actual situation. The main reason for the error is that the path selection has a certain level of irrationality, which stems from the fact that the proportion of passengers who choose the minimum cost path for direct travel is about 65%, while the proportion of one-transfer passengers is only about 50%. Overall, the proposed algorithm can quantitatively analyze the impact of rigid travel groups, occasional travel groups, elderly groups, and other groups that are prone to making complaints in response to bus line adjustment. Full article
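As a toy illustration of assigning passengers of one OD pair to candidate paths by generalized cost, the sketch below uses a logit-style split; the cost weights, scale parameter, and candidate paths are assumed values for illustration, not the paper's calibrated choice model or its genetic-algorithm-optimized weights.

```python
# Minimal sketch: split passengers of one OD pair across candidate bus paths
# using a generalized cost (in-vehicle time, waiting time, transfer penalty)
# and a logit-style share. All weights and paths are illustrative assumptions.
import numpy as np

# Each candidate path: (in-vehicle minutes, expected waiting minutes, transfers)
paths = {
    "direct line A":       (32.0, 5.0, 0),
    "direct line B":       (38.0, 3.0, 0),
    "transfer via line C": (26.0, 8.0, 1),
}
w_ivt, w_wait, w_transfer = 1.0, 1.5, 10.0   # assumed weights (minute-equivalents)
theta = 0.15                                 # assumed logit scale parameter

costs = np.array([w_ivt * t + w_wait * w + w_transfer * k
                  for t, w, k in paths.values()])
shares = np.exp(-theta * costs)
shares /= shares.sum()

for name, cost, share in zip(paths, costs, shares):
    print(f"{name:22s} generalized cost {cost:5.1f}  passenger share {share:.2f}")
```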
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1. Algorithm process.
Figure 2. Bus network model.
Figure 3. Process of passenger travel demand analysis.
Figure 4. Bus route model.
Figure 5. Composition of bus passenger travel in Guangzhou.
Figure 6. The proportion of the most popular bus routes in the same OD travel (Guangzhou).
Figure 7. Path selection distribution.
Figure 8. The process of passenger distribution and path comparison.
Figure 9. Route map of bus line 543 and associated lines in Guangzhou.
Figure 10. Analysis of the accuracy of sectional passenger volume (Line 238).
14 pages, 2862 KiB  
Article
Optimizing Parameters for Enhanced Iterative Image Reconstruction Using Extended Power Divergence
by Takeshi Kojima, Yusaku Yamaguchi, Omar M. Abou Al-Ola and Tetsuya Yoshinaga
Algorithms 2024, 17(11), 512; https://doi.org/10.3390/a17110512 - 7 Nov 2024
Viewed by 339
Abstract
In this paper, we propose a method for optimizing the parameter values in iterative reconstruction algorithms that include adjustable parameters in order to optimize the reconstruction performance. Specifically, we focus on the power divergence-based expectation-maximization algorithm, which includes two power indices as adjustable [...] Read more.
In this paper, we propose a method for optimizing the parameter values in iterative reconstruction algorithms that include adjustable parameters in order to optimize the reconstruction performance. Specifically, we focus on the power divergence-based expectation-maximization algorithm, which includes two power indices as adjustable parameters. Through numerical and physical experiments, we demonstrate that optimizing the evaluation function based on the extended power-divergence and weighted extended power-divergence measures yields high-quality image reconstruction. Notably, the optimal parameter values derived from the proposed method produce reconstruction results comparable to those obtained using the true image, even when using distance functions based on differences between forward projection data and measured projection data, as verified by numerical experiments. These results suggest that the proposed method effectively improves reconstruction quality without the need for machine-learning techniques in parameter selection. Our findings also indicate that this approach is useful for enhancing the performance of iterative reconstruction algorithms, especially in medical imaging, where high-accuracy reconstruction under noisy conditions is required. Full article
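For context, the sketch below implements the classical MLEM update on a toy system — the baseline that the power-divergence EM (PDEM) algorithm described here generalizes with adjustable power indices; the system matrix and projection data are placeholders.

```python
# Minimal sketch of the classical MLEM update x <- x * A^T(y / Ax) / A^T 1
# on a toy system; data and matrix are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
J, I = 40, 16                            # projection bins, image pixels
A = rng.uniform(0.0, 1.0, (J, I))        # toy system (forward-projection) matrix
x_true = rng.uniform(0.5, 2.0, I)
y = rng.poisson(A @ x_true)              # noisy measured projections

x = np.ones(I)                           # uniform initial image
sens = A.T @ np.ones(J)                  # sensitivity image A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens            # multiplicative MLEM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```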
Show Figures

Figure 1. (a) Disc and (b) modified Shepp–Logan phantom images.
Figure 2. Images reconstructed from X-ray CT scanner projections using (a) FBP procedure and (b) MLEM method.
Figure 3. Parameters γ(t) and α(t) (green iterative points, left axis) and the evaluation function g(λ(t); N) (brown iterative points, right axis) at each iteration t in the optimization process of the evaluation function for phantoms (a) disc and (b) Shepp–Logan. The distance function is the Euclidean distance between the reconstructed image and the true image.
Figure 4. Evolution of the evaluation function g(λ(0); n) and g(λ(T); n) with the number of iterations n for phantoms (a) disc and (b) Shepp–Logan (shown as blue and red iterative points, respectively). The distance function is the Euclidean distance between the reconstructed image and the true image.
Figure 5. Images reconstructed using MLEM and PDEM (top) and corresponding subtraction images (bottom) defined by the distance function for phantoms (a) disc and (b) Shepp–Logan. The distance function is the Euclidean distance between the reconstructed image and the true image.
Figure 6. The density profile along the column direction (ℓ = 1, 2, …, 128), fixed at the 51st row (L = 6,400), for images reconstructed using MLEM and PDEM for phantoms (a) disc and (b) Shepp–Logan. The black, blue, and red lines represent the true values, MLEM, and PDEM, respectively. The distance function is the Euclidean distance between the reconstructed image and the true image.
Figure 7. The parameters γ(t) and α(t) (green iterative points, left axis) and the evaluation function g(λ(t); N) (brown iterative points, right axis) during each iteration t in the optimization process for a disc phantom. The distance function is the (a) Euclidean distance, (b) KL-divergence, (c) EPD, and (d) WEPD between forward and measured projections.
Figure 8. Changes in the evaluation function g(λ(0); n) and g(λ(T); n) with the number of iterations n (shown as blue and red iterative points, respectively) for a disc phantom. The distance function is the (a) Euclidean distance, (b) KL-divergence, (c) EPD, and (d) WEPD between forward and measured projections.
Figure 9. Images reconstructed using PDEM (with parameters λ(T)) (top) and the corresponding subtraction images (bottom) for a disc phantom. The distance function between forward and measured projections is (a) Euclidean distance, (b) KL-divergence, (c) EPD, and (d) WEPD.
Figure 10. (a) The parameters γ(t) and α(t) (green iterative points, left axis) and the evaluation function g(λ(t); N) (brown iterative points, right axis) at each iteration t during the optimization process of the evaluation function using projections from an X-ray CT scanner, and (b) the change in the evaluation function g(λ(0); n) and g(λ(T); n) (shown as blue and red iterative points, respectively) over iteration count n. The distance function is the WEPD between forward and measured projections.
Figure 11. Images reconstructed using PDEM (parameters λ(T)) from projections obtained with an X-ray CT scanner. The distance functions between forward and measured projections are (a) Euclidean distance, (b) KL-divergence, (c) EPD, and (d) WEPD.
Figure 12. Density profile along the column direction (ℓ = 1, 2, …, 674), fixed at the 224th row (L = 150,302), in images reconstructed using MLEM and PDEM from projections obtained with an X-ray CT scanner. The blue and red lines represent MLEM and PDEM, respectively. The distance functions between forward and measured projections are (a) Euclidean distance, (b) KL-divergence, (c) EPD, and (d) WEPD.
13 pages, 696 KiB  
Article
PIPET: A Pipeline to Generate PET Phantom Datasets for Reconstruction Based on Convolutional Neural Network Training
by Alejandro Sanz-Sanchez, Francisco B. García, Pablo Mesas-Lafarga, Joan Prats-Climent and María José Rodríguez-Álvarez
Algorithms 2024, 17(11), 511; https://doi.org/10.3390/a17110511 - 7 Nov 2024
Viewed by 428
Abstract
There has been a strong interest in using neural networks to solve several tasks in PET medical imaging. One of the main problems faced when using neural networks is the quality, quantity, and availability of data to train the algorithms. In order to [...] Read more.
There has been a strong interest in using neural networks to solve several tasks in PET medical imaging. One of the main problems faced when using neural networks is the quality, quantity, and availability of data to train the algorithms. In order to address this issue, we have developed a pipeline that enables the generation of voxelized synthetic PET phantoms, simulates the acquisition of a PET scan, and reconstructs the image from the simulated data. In order to achieve these results, several pieces of software are used in the different steps of the pipeline. This pipeline solves the problem of generating diverse PET datasets and images of high quality for different types of phantoms and configurations. The data obtained from this pipeline can be used to train convolutional neural networks for PET reconstruction. Full article
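A minimal sketch of the first pipeline stage — generating a voxelized activity phantom — is given below; the geometry, activity values, and raw output format are illustrative assumptions, not PIPET's actual interfaces or file formats.

```python
# Minimal sketch: generate a voxelized cylindrical phantom with hot-rod inserts,
# the kind of synthetic activity map a pipeline like PIPET could hand to the
# Monte Carlo simulation step. Geometry and activity values are placeholders.
import numpy as np

nx, ny, nz = 128, 128, 64
zz, yy, xx = np.mgrid[0:nz, 0:ny, 0:nx]
cx, cy = (nx - 1) / 2, (ny - 1) / 2

phantom = np.zeros((nz, ny, nx), dtype=np.float32)
body = (xx - cx) ** 2 + (yy - cy) ** 2 < 50 ** 2
phantom[body] = 1.0                      # warm background cylinder

for dx, dy in [(-25, 0), (25, 0), (0, -25), (0, 25)]:   # four hot rods
    rod = (xx - cx - dx) ** 2 + (yy - cy - dy) ** 2 < 6 ** 2
    phantom[rod] = 4.0                   # 4:1 hot-rod-to-background ratio

phantom.tofile("phantom_128x128x64.raw") # raw voxel dump (output format is an assumption)
print("total simulated activity (a.u.):", float(phantom.sum()))
```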
Show Figures

Figure 1. PIPET diagram with the definition of the inputs and outputs for each software used in the pipeline.
Figure 2. Phantoms: (a) NEMA, (b) Jaszczak, (c) Derenzo, and (d) Shepp–Logan.
Figure 3. Scanner simulated in GATE.
Figure 4. Top row: Voxelized likewise phantoms. Bottom row: Reconstructed likewise phantoms.
Figure 5. Line profiles for the phantom samples in Figure 4.
28 pages, 4502 KiB  
Article
Improved Bacterial Foraging Optimization Algorithm with Machine Learning-Driven Short-Term Electricity Load Forecasting: A Case Study in Peninsular Malaysia
by Farah Anishah Zaini, Mohamad Fani Sulaima, Intan Azmira Wan Abdul Razak, Mohammad Lutfi Othman and Hazlie Mokhlis
Algorithms 2024, 17(11), 510; https://doi.org/10.3390/a17110510 - 6 Nov 2024
Viewed by 430
Abstract
Accurate electricity demand forecasting is crucial for ensuring the sustainability and reliability of power systems. Least square support vector machines (LSSVM) are well suited to handle complex non-linear power load series. However, the less optimal regularization parameter and the Gaussian kernel function in [...] Read more.
Accurate electricity demand forecasting is crucial for ensuring the sustainability and reliability of power systems. Least square support vector machines (LSSVM) are well suited to handle complex non-linear power load series. However, the less optimal regularization parameter and the Gaussian kernel function in the LSSVM model have contributed to flawed forecasting accuracy and random generalization ability. Thus, these parameters of LSSVM need to be chosen appropriately using intelligent optimization algorithms. This study proposes a new hybrid model based on the LSSVM optimized by the improved bacterial foraging optimization algorithm (IBFOA) for forecasting the short-term daily electricity load in Peninsular Malaysia. The IBFOA based on the sine cosine equation addresses the limitations of fixed chemotaxis constants in the original bacterial foraging optimization algorithm (BFOA), enhancing its exploration and exploitation capabilities. Finally, the load forecasting model based on LSSVM-IBFOA is constructed using mean absolute percentage error (MAPE) as the objective function. The comparative analysis demonstrates the superiority of the proposed model, which achieves the highest determination coefficient (R2) of 0.9880 and significantly reduces the average MAPE value by 28.36%, 27.72%, and 5.47% compared to the deep neural network (DNN), LSSVM, and LSSVM-BFOA, respectively. Additionally, IBFOA exhibits faster convergence times compared to BFOA, highlighting the practicality of LSSVM-IBFOA for short-term load forecasting. Full article
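The sketch below shows LSSVM regression with a Gaussian kernel scored by MAPE on a toy load-like series; the regularization parameter and kernel width are fixed here for illustration, whereas the paper tunes them with the improved BFOA, and the data are synthetic placeholders.

```python
# Minimal sketch: LSSVM regression with a Gaussian (RBF) kernel scored by MAPE.
# gamma (regularization) and sigma (kernel width) are fixed placeholders here.
import numpy as np

def rbf(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit_predict(Xtr, ytr, Xte, gamma, sigma):
    n = len(ytr)
    A = np.zeros((n + 1, n + 1))            # bordered LSSVM dual system:
    A[0, 1:] = 1.0                          # [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(Xtr, Xtr, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], ytr)))
    b, alpha = sol[0], sol[1:]
    return rbf(Xte, Xtr, sigma) @ alpha + b

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(0)
t = np.arange(24 * 8, dtype=float)          # eight synthetic "days" of hourly load
load = 1000 + 200 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 20, t.size)
X = np.column_stack([load[:-24], t[24:] % 24])   # lag-24 load and hour of day
y = load[24:]
Xtr, ytr, Xte, yte = X[:-24], y[:-24], X[-24:], y[-24:]

pred = lssvm_fit_predict(Xtr, ytr, Xte, gamma=10.0, sigma=200.0)
print(f"MAPE on the held-out day: {mape(yte, pred):.2f}%")
```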
Show Figures

Figure 1. Summary of different types of LF with respective time horizons, domains, inputs, and outputs.
Figure 2. Framework for electricity load forecasting.
Figure 3. Structures of LSSVM.
Figure 4. The flowchart of LSSVM-IBFOA.
Figure 5. Monthly average electricity load profile in 24 h (2019–2021).
Figure 6. Typical weekly average load profile in December 2021.
Figure 7. Visualization of forecasting result for the DNN, LSSVM, LSSVM-BFOA, and LSSVM-IBFOA for (a) Monday; (b) Tuesday–Thursday; (c) Friday; (d) Saturday, and (e) Sunday.
Figure 8. Illustrations of plots for MAPE and MAE.
Figure 9. Convergence curve of BFOA and IBFOA for (a) Monday; (b) Tuesday–Thursday; (c) Friday; (d) Saturday; and (e) Sunday.
14 pages, 6820 KiB  
Article
Local Search Heuristic for the Two-Echelon Capacitated Vehicle Routing Problem in Educational Decision Support Systems
by José Pedro Gomes da Cruz, Matthias Winkenbach and Hugo Tsugunobu Yoshida Yoshizaki
Algorithms 2024, 17(11), 509; https://doi.org/10.3390/a17110509 - 6 Nov 2024
Viewed by 386
Abstract
This study focuses on developing a heuristic for Decision Support Systems (DSS) in e-commerce logistics education, specifically addressing the Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP). The 2E-CVRP involves using Urban Transshipment Points (UTPs) to optimize deliveries. To tackle the complexity of the 2E-CVRP, [...] Read more.
This study focuses on developing a heuristic for Decision Support Systems (DSS) in e-commerce logistics education, specifically addressing the Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP). The 2E-CVRP involves using Urban Transshipment Points (UTPs) to optimize deliveries. To tackle the complexity of the 2E-CVRP, DSS can employ fast and effective techniques for visual problem-solving. Therefore, the objective of this work is to develop a local search heuristic to solve the 2E-CVRP quickly and efficiently for implementation in DSS. The efficiency of the heuristic is assessed through benchmarks from the literature and applied to real-world problems from a Brazilian e-commerce retailer, contributing to advancements in the 2E-CVRP approach and promoting operational efficiency in e-commerce logistics education. The heuristic yielded promising results, solving problems almost instantly, for instances in the literature on average in 1.06 s, with average gaps of 6.3% in relation to the best-known solutions and, for real problems with hundreds of customers, in 1.4 s, with gaps of 8.3%, demonstrating its effectiveness in achieving the study’s objectives. Full article
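As a small example of the intra-route improvement moves a local search heuristic applies, the sketch below runs 2-opt on a single route with random coordinates; capacities and the two-echelon structure are omitted, so this is an illustration, not the paper's full heuristic.

```python
# Minimal sketch of a 2-opt local search on a single closed route, the kind of
# intra-route move a 2E-CVRP heuristic applies within each echelon.
import numpy as np

def route_length(route, D):
    return sum(D[route[i], route[i + 1]] for i in range(len(route) - 1))

def two_opt(route, D):
    best = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, D) < route_length(best, D) - 1e-9:
                    best, improved = cand, True
    return best

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (12, 2))       # depot/UTP at index 0 plus 11 customers
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
route = [0] + list(range(1, 12)) + [0]   # naive initial route, closed at the depot
opt = two_opt(route, D)
print(f"initial {route_length(route, D):.1f} -> improved {route_length(opt, D):.1f}")
```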
(This article belongs to the Special Issue New Insights in Algorithms for Logistics Problems and Management)
Show Figures

Figure 1. Illustrative example of a route in the 2E-CVRP. Adapted from [12].
Figure 2. Delivery density in the regions evaluated in the case study.
18 pages, 9219 KiB  
Article
Automated Evaluation Method for Risk Behaviors of Quay Crane Operators at Ports Using Virtual Reality
by Mengjie He, Yujie Zhang, Yi Liu, Yang Shen and Chao Mi
Algorithms 2024, 17(11), 508; https://doi.org/10.3390/a17110508 - 5 Nov 2024
Viewed by 450
Abstract
Currently, the operational risk assessment of quay crane operators at ports relies on manual evaluations based on experience, but this method lacks objectivity and fairness. As port throughput continues to grow, the port accident rate has also increased, making it crucial to scientifically [...] Read more.
Currently, the operational risk assessment of quay crane operators at ports relies on manual evaluations based on experience, but this method lacks objectivity and fairness. As port throughput continues to grow, the port accident rate has also increased, making it crucial to scientifically evaluate the risk behaviors of operators and improve their safety awareness. This paper proposes an automated evaluation method based on a Deep Q-Network (DQN) to assess the risk behaviors of quay crane operators in virtual scenarios. A risk simulation module has been added to the existing automated quay crane remote operation simulation system to simulate potential risks during operations. Based on the collected data, a DQN-based benchmark model reflecting the operational behaviors and decision-making processes of skilled operators has been developed. This model enables a quantitative evaluation of operators’ behaviors, ensuring the objectivity and accuracy of the assessment process. The experimental results show that, compared with traditional manual scoring methods, the proposed method is more stable and objective, effectively reducing subjective biases and providing a reliable alternative to conventional manual evaluations. Additionally, this method enhances operators’ safety awareness and their ability to handle risks, helping them identify and avoid risks during actual operations, thereby ensuring both operational safety and efficiency. Full article
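To make the DQN ingredients concrete, the sketch below builds a small Q-network and performs one temporal-difference update on a fake batch of transitions; the state and action dimensions, network sizes, and replay batch are assumptions, not the simulator's actual interface or the paper's benchmark model.

```python
# Minimal sketch of the DQN pieces: a small Q-network, a target copy, and one
# temporal-difference update on a fake batch of (s, a, r, s') transitions.
import copy
import torch
import torch.nn as nn

state_dim, n_actions = 8, 5                  # assumed state features / discrete actions
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = copy.deepcopy(q_net)            # frozen target network
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# Fake replay batch standing in for logged skilled-operator transitions.
s = torch.randn(32, state_dim)
a = torch.randint(0, n_actions, (32, 1))
r = torch.randn(32, 1)
s_next = torch.randn(32, state_dim)

q_sa = q_net(s).gather(1, a)                               # Q(s, a)
with torch.no_grad():
    td_target = r + gamma * target_net(s_next).max(dim=1, keepdim=True).values
loss = nn.functional.mse_loss(q_sa, td_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("TD loss after one update step:", float(loss))
```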
(This article belongs to the Special Issue Algorithms for Virtual and Augmented Environments)
Show Figures

Figure 1. Automated quay crane remote operation simulator.
Figure 2. Diagrams corresponding to the parameters in Table 1. (a) Schematic diagram of the control parameters; (b) schematic diagram of the constant and control parameters; (c,d) schematic diagrams of the passive measurement parameters.
Figure 3. DQN driver operation model network structure.
Figure 4. DQN model training flowchart.
Figure 5. Score distribution chart for Driver 1. (a) Box plot for Driver 1 in the experimental and control groups, showing the central tendency and dispersion of the driver’s scores; (b) line chart for Driver 1 in the experimental and control groups, illustrating the stability of the driver’s scores through the fluctuation range of the line.
Figure 6. Score distribution chart for Driver 6. (a) Box plot for Driver 6 in the experimental and control groups, showing the central tendency and dispersion of the driver’s scores; (b) line chart for Driver 6 in the experimental and control groups, illustrating the stability of the driver’s scores through the fluctuation range of the line.
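The abstract describes a DQN-based benchmark model of skilled operators without giving its details; the following minimal sketch shows only the generic DQN ingredients such a model relies on, namely a replay buffer, a target network, and a temporal-difference update, using a linear Q-function in NumPy. The state features, action count, and rewards are invented placeholders, not the paper's quay-crane simulator data.

```python
# Minimal, hypothetical sketch of the DQN update rule (experience replay +
# target network). The linear Q-function and random transitions stand in for
# the paper's operator behavior data and are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 6, 4          # e.g., a few trolley/hoist commands (assumed)
GAMMA, LR = 0.95, 1e-2

W = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))   # online Q-network (linear)
W_target = W.copy()                                        # target network

def q_values(w, s):
    return w @ s

def dqn_update(batch):
    """One gradient step on the squared TD error for a batch of transitions."""
    global W
    for s, a, r, s_next, done in batch:
        target = r if done else r + GAMMA * np.max(q_values(W_target, s_next))
        td_error = target - q_values(W, s)[a]
        W[a] += LR * td_error * s          # gradient of 0.5 * td_error**2 w.r.t. W[a]

# Toy replay buffer of random (state, action, reward, next_state, done) tuples.
replay = [(rng.normal(size=N_FEATURES), rng.integers(N_ACTIONS),
           rng.normal(), rng.normal(size=N_FEATURES), False) for _ in range(256)]

for step in range(500):
    batch_idx = rng.choice(len(replay), size=32, replace=False)
    dqn_update([replay[i] for i in batch_idx])
    if step % 100 == 0:
        W_target = W.copy()                # periodic target-network sync
```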
20 pages, 3504 KiB  
Article
On the Estimation of Logistic Models with Banking Data Using Particle Swarm Optimization
by Moch. Fandi Ansori, Kuntjoro Adji Sidarto, Novriana Sumarti and Iman Gunadi
Algorithms 2024, 17(11), 507; https://doi.org/10.3390/a17110507 - 5 Nov 2024
Viewed by 402
Abstract
This paper presents numerical works on estimating some logistic models using particle swarm optimization (PSO). The considered models are the Verhulst model, Pearl and Reed generalization model, von Bertalanffy model, Richards model, Gompertz model, hyper-Gompertz model, Blumberg model, Turner et al. model, and [...] Read more.
This paper presents numerical works on estimating some logistic models using particle swarm optimization (PSO). The considered models are the Verhulst model, Pearl and Reed generalization model, von Bertalanffy model, Richards model, Gompertz model, hyper-Gompertz model, Blumberg model, Turner et al. model, and Tsoularis model. We employ data on commercial and rural banking assets in Indonesia due to their tendency to correspond with logistic growth. Most banking asset forecasting relies on statistical methods that concentrate solely on short-term forecasting. Deterministic models are seldom employed in banking asset forecasting, despite their capacity to predict data behavior over an extended time. Consequently, this paper employs logistic model forecasting. To improve the speed of the algorithm execution, we use the Cauchy criterion as one of the stopping criteria. To choose the best of the nine models, we analyze several criteria, such as the mean absolute percentage error, the root mean squared error, and the value of the carrying capacity, to determine which models can be discarded. Consequently, we obtain the best-fitted model for each commercial and rural bank. We evaluate the performance of PSO against another metaheuristic algorithm known as spiral optimization for benchmarking purposes. We assess the robustness of the algorithm employing the Taguchi method. Ultimately, we present a novel logistic model that generalizes the existing models. We estimate its parameters and compare the result with the best-obtained model. Full article
(This article belongs to the Special Issue New Insights in Algorithms for Logistics Problems and Management)
Show Figures

Figure 1. Total assets of (a) commercial banks and (b) rural banks in Indonesia in the period January 2007–January 2020. The monthly fluctuation of total assets of (c) commercial banks and (d) rural banks.
Figure 2. The number of commercial and rural banks in Indonesia over the years.
Figure 3. MAPE and RMSE of the obtained models for (a) commercial banks and (b) rural banks.
Figure 4. Plot of the Pearl–Reed generalization model versus the data of (a) commercial banks and (c) rural banks, and the Richards model versus the data of (b) commercial banks and (d) rural banks.
Figure 5. The carrying capacity of the obtained models.
Figure 6. The SN ratio for PSO’s parameters.
Figure 7. The result of data fitting and prediction of Indonesian (a) commercial and (b) rural banking data.
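As a rough illustration of the estimation procedure described above, the sketch below fits a Verhulst logistic curve to a synthetic series with a standard particle swarm and stops when successive global-best values barely change, in the spirit of the Cauchy criterion mentioned in the abstract. The data, parameter bounds, and PSO coefficients are assumptions for demonstration only, not the Indonesian banking series used in the paper.

```python
# Hypothetical sketch: PSO fitting a Verhulst logistic curve
# x(t) = K / (1 + ((K - x0)/x0) * exp(-r t)) to a synthetic data series,
# with a Cauchy-style stopping rule on successive global-best values.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(36)
data = 100.0 / (1.0 + 9.0 * np.exp(-0.2 * t)) + rng.normal(scale=1.0, size=t.size)

def verhulst(params, t):
    K, r, x0 = params
    return K / (1.0 + ((K - x0) / x0) * np.exp(-r * t))

def rmse(params):
    return np.sqrt(np.mean((verhulst(params, t) - data) ** 2))

# Standard PSO with inertia / cognitive / social coefficients (assumed values).
n_particles, w, c1, c2 = 30, 0.72, 1.5, 1.5
lo, hi = np.array([50.0, 0.01, 1.0]), np.array([200.0, 1.0, 50.0])
pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

prev_best = np.inf
for it in range(2000):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([rmse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
    best = pbest_val.min()
    if abs(prev_best - best) < 1e-8:    # Cauchy-style check on successive bests
        break
    prev_best = best

print("K, r, x0 =", gbest, " RMSE =", rmse(gbest))
```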
20 pages, 2003 KiB  
Article
Enhanced Curvature-Based Fabric Defect Detection: An Experimental Study with Gabor Transform and Deep Learning
by Mehmet Erdogan and Mustafa Dogan
Algorithms 2024, 17(11), 506; https://doi.org/10.3390/a17110506 - 5 Nov 2024
Viewed by 465
Abstract
Quality control at every stage of production in the textile industry is essential for maintaining competitiveness in the global market. Manual fabric defect inspections are often characterized by low precision and high time costs, in contrast to intelligent anomaly detection systems implemented in [...] Read more.
Quality control at every stage of production in the textile industry is essential for maintaining competitiveness in the global market. Manual fabric defect inspections are often characterized by low precision and high time costs, in contrast to intelligent anomaly detection systems implemented in the early stages of fabric production. To achieve successful automated fabric defect identification, significant challenges must be addressed, including accurate detection, classification, and decision-making processes. Traditionally, fabric defect classification has relied on inefficient and labor-intensive human visual inspection, particularly as the variety of fabric defects continues to increase. Despite the global chip crisis and its adverse effects on supply chains, electronic hardware costs for quality control systems have become more affordable. This presents a notable advantage, as vision systems can now be easily developed with the use of high-resolution, advanced cameras. In this study, we propose a discrete curvature algorithm, integrated with the Gabor transform, which demonstrates significant success in near real-time defect classification. The primary contribution of this work is the development of a modified curvature algorithm that achieves high classification performance without the need for training. This method is particularly efficient due to its low data storage requirements and minimal processing time, making it ideal for real-time applications. Furthermore, we implemented and evaluated several other methods from the literature, including Gabor and Convolutional Neural Networks (CNNs), within a unified coding framework. Each defect type was analyzed individually, with results indicating that the proposed algorithm exhibits comparable success and robust performance relative to deep learning-based approaches. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
Show Figures

Figure 1. (a) Experimental setup. (b) Close-up view of the experimental setup.
Figure 2. Defect dataset types: hole, none, needle break, lycra, may (left to right).
Figure 3. Fabric with a simple defect.
Figure 4. Fabric image with the Canny edge detector.
Figure 5. Contour pieces.
Figure 6. (A) Standard curve with points. (B) Calculation of discrete curvature.
Figure 7. Curvature radius values.
Figure 8. Exterior angle values.
Figure 9. Curvature radius vs. exterior angle.
Figure 10. (A) Sample curve, (B) standard curvature, (C) optimized curvature, (D) efficient curvature.
Figure 11. Defect detection with the modified discrete curvature function.
Figure 12. MDCA: sample fabric (left side) and detected defects (right side).
Figure 13. Comparison of the CA and MDCA algorithms: (a) original, (b) CA, (c) MDCA.
Figure 14. Fabric image captured on the experimental setup.
Figure 15. Gabor filter on fabric with a defect.
Figure 16. Gabor filter on a local fabric image.
Figure 17. (a) Model accuracy, (b) model loss.
Figure 18. Fabric image dataset with defects.
Figure 19. Fabric defect detection and labeling with CNN.
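The modified curvature algorithm of the paper is not reproduced here; the sketch below shows only the textbook quantities its figure captions refer to, the exterior (turning) angle and the curvature radius at each contour point, computed for a toy polyline, with an illustrative threshold for flagging sharp deviations. The contour and the thresholds are assumptions, not values from the article.

```python
# Hypothetical sketch of discrete curvature on a contour polyline: for each
# interior point, compute the exterior (turning) angle and the circumradius of
# the triangle formed with its neighbours. Large turning angles / small radii
# flag candidate defect points. Thresholds below are illustrative only.
import math

def discrete_curvature(points):
    """Return (exterior_angle, circumradius) for each interior polyline point."""
    out = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a = math.hypot(x1 - x0, y1 - y0)   # side lengths of the local triangle
        b = math.hypot(x2 - x1, y2 - y1)
        c = math.hypot(x2 - x0, y2 - y0)
        # Exterior angle = pi - interior angle at the middle point (law of cosines).
        cos_int = max(-1.0, min(1.0, (a * a + b * b - c * c) / (2 * a * b)))
        exterior = math.pi - math.acos(cos_int)
        # Circumradius R = (a*b*c) / (4 * area); near-collinear points give huge R.
        area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
        radius = float("inf") if area < 1e-12 else (a * b * c) / (4.0 * area)
        out.append((exterior, radius))
    return out

# Toy contour: mostly straight with one sharp kink a defect detector would flag.
contour = [(0, 0), (1, 0), (2, 0), (2.5, 1.5), (3, 0), (4, 0)]
for (angle, radius), p in zip(discrete_curvature(contour), contour[1:-1]):
    flagged = angle > math.radians(30) and radius < 2.0
    print(p, f"exterior={math.degrees(angle):.1f} deg", f"R={radius:.2f}",
          "DEFECT?" if flagged else "")
```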
23 pages, 7960 KiB  
Article
Novelty in Intelligent Controlled Oscillations in Smart Structures
by Amalia Moutsopoulou, Markos Petousis, Georgios E. Stavroulakis, Anastasios Pouliezos and Nectarios Vidakis
Algorithms 2024, 17(11), 505; https://doi.org/10.3390/a17110505 - 4 Nov 2024
Viewed by 415
Abstract
Structural control techniques can be used to protect engineering structures. By computing instantaneous control forces based on the input from the observed reactions and adhering to a strong control strategy, intelligent control in structural engineering can be achieved. In this study, we employed [...] Read more.
Structural control techniques can be used to protect engineering structures. By computing instantaneous control forces based on the input from the observed reactions and adhering to a strong control strategy, intelligent control in structural engineering can be achieved. In this study, we employed intelligent piezoelectric patches to reduce vibrations in structures. The actuators and sensors were implemented using piezoelectric patches. We reduced structural oscillations by employing sophisticated intelligent control methods. Examples of such control methods include H-infinity and H2. An advantage of this study is that the results are presented for both static and dynamic loading, as well as for the frequency domain. Oscillation suppression must be achieved over the entire frequency range. In this study, advanced programming was used to solve this problem and complete oscillation suppression was achieved. This study explored in detail the methods and control strategies that can be used to address the problem of oscillations. These techniques have been thoroughly described and analyzed, offering valuable insights into their effective applications. The ability to reduce oscillations has significant implications for applications that extend to various structures and systems such as airplanes, metal bridges, and large metallic structures. Full article
Show Figures

Figure 1. Piezoelectric patch attached to a beam.
Figure 2. One pair of actuator patches.
Figure 3. An intelligent beam that incorporates piezoelectric actuators and sensors.
Figure 4. The actuators positioned over the entire smart structure.
Figure 5. Smart structure.
Figure 6. Beam with noise output, error, disturbance input, and controller.
Figure 7. Block diagram of the disturbance and errors.
Figure 8. Noise and errors in block diagram form.
Figure 9. Disturbance and control voltages presented in block diagram form.
Figure 10. Noise and control voltages presented in block diagram form.
Figure 11. A block diagram showing the beam scenario’s weights.
Figure 12. Two-port diagram for the beam problem.
Figure 13. Nominal performance in Simulink, where Wd, Wn, Wu, and We are the weights of the system for the disturbance (d), noise (n), control vector (u), and error (e); x is the state vector and y is the output (the displacement, the rotation, and the control vector u); K is the controller (H-infinity or H2). The results show the diagrams for the open loop (without control) and for the H-infinity and H2 controllers.
Figure 14. Comparison of the nodes’ rotations in the smart structure with and without control.
Figure 15. Comparison of the smart structure node displacements with and without control.
Figure 16. Results of displacements with and without control when applying static loading at the beam’s free end.
Figure 17. Results of rotations with and without control when applying static loading at the beam’s free end.
Figure 18. Control voltages for each smart structure node. The numbers 1, 2, 3, and 4 correspond to the four piezoelectric actuators (voltages) present in the smart beam.
Figure 19. The smart structure’s free-end displacement with (H-infinity) and without control (open loop, OL).
Figure 20. Displacement of the 3rd node of the smart structure with (H-infinity) and without control (open loop, OL).
Figure 21. Control voltages for all nodes of the smart structure under sinusoidal disturbances. The numbers 1, 2, 3, and 4 correspond to the four piezoelectric actuators (voltages) present in the smart beam.
Figure 22. The Bode diagram of the smart structures. The Y axis is the magnitude (dB).
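The paper's H-infinity and H2 syntheses for the piezoelectric beam are not reproduced here; as a minimal sketch of the underlying numerical step, the code below computes an H2/LQR-type state-feedback gain for a toy single-mode vibration model by solving the continuous-time algebraic Riccati equation with SciPy. The system matrices and weights are placeholders, not the smart-beam model of the article.

```python
# Hypothetical sketch: state-feedback gain from the continuous-time algebraic
# Riccati equation, the core numerical step behind H2/LQR-type vibration
# controllers. The 2-state model (one lightly damped mode) is a toy stand-in.
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [displacement, velocity] of one vibration mode: x' = A x + B u
omega, zeta = 10.0, 0.02
A = np.array([[0.0, 1.0],
              [-omega**2, -2.0 * zeta * omega]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([100.0, 1.0])       # weight on displacement vs. velocity (assumed)
R = np.array([[0.1]])           # weight on actuator effort, i.e., control voltage

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal gain: u = -K x

closed_loop = A - B @ K
print("Gain K:", K)
print("Open-loop poles:  ", np.linalg.eigvals(A))
print("Closed-loop poles:", np.linalg.eigvals(closed_loop))
```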
24 pages, 2294 KiB  
Article
Fast Algorithm for Cyber-Attack Estimation and Attack Path Extraction Using Attack Graphs with AND/OR Nodes
by Eugene Levner and Dmitry Tsadikovich
Algorithms 2024, 17(11), 504; https://doi.org/10.3390/a17110504 - 4 Nov 2024
Viewed by 503
Abstract
This paper studies the security issues for cyber–physical systems, aimed at countering potential malicious cyber-attacks. The main focus is on solving the problem of extracting the most vulnerable attack path in a known attack graph, where an attack path is a sequence of [...] Read more.
This paper studies the security issues for cyber–physical systems, aimed at countering potential malicious cyber-attacks. The main focus is on solving the problem of extracting the most vulnerable attack path in a known attack graph, where an attack path is a sequence of steps that an attacker can take to compromise the underlying network. Determining an attacker’s possible attack path is critical to cyber defenders as it helps identify threats, harden the network, and thwart attacker’s intentions. We formulate this problem as a path-finding optimization problem with logical constraints represented by AND and OR nodes. We propose a new Dijkstra-type algorithm that combines elements from Dijkstra’s shortest path algorithm and the critical path method. Although the path extraction problem is generally NP-hard, for the studied special case, the proposed algorithm determines the optimal attack path in polynomial time, O(nm), where n is the number of nodes and m is the number of edges in the attack graph. To our knowledge this is the first exact polynomial algorithm that can solve the path extraction problem for different attack graphs, both cycle-containing and cycle-free. Computational experiments with real and synthetic data have shown that the proposed algorithm consistently and quickly finds optimal solutions to the problem. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1. The flow chart of the proposed algorithm.
Figure 2. (a) Example 1, adapted from [41]. (b) Extracted minimum-length attack path.
Figure 3. (a) Acyclic attack graph equipped with the node times. (b) Extracted minimum-length attack path for Example 2.
Figure 4. (a) Attack graph with cycles. (b) Extracted minimum-length attack path.
Figure 5. (a) The unweighted attack graph with cycles. (b) Extracted minimum-length attack path.
Figure 6. Attack graph with cycles without an attack path.
Figure 7. The extended attack graph with the start node (adapted from [20]).
Figure 8. Extracted minimum-length attack path for the attack graph in Figure 7.
Figure 9. A scheme of defenders’ response to a malicious attack.
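The paper's exact Dijkstra-type algorithm is not reproduced here; the sketch below illustrates one simplified reading of its core idea: labels propagate as in Dijkstra's algorithm, except that an OR node is reached via its cheapest finished predecessor while an AND node waits for all of its predecessors and takes the latest one, as in the critical path method. The example graph and weights are hypothetical.

```python
# Simplified, hypothetical sketch of label propagation on an AND/OR attack
# graph: OR nodes take the minimum over finished predecessors (Dijkstra-like),
# AND nodes require all predecessors and take the maximum (critical-path-like).
# The toy graph below is illustrative only, not an instance from the paper.
import heapq

def and_or_shortest(nodes, edges, start):
    """nodes: {name: 'AND' | 'OR'}; edges: {(u, v): weight}; returns earliest labels."""
    preds = {v: [] for v in nodes}
    for (u, v), w in edges.items():
        preds[v].append((u, w))

    label = {start: 0.0}
    done = set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        # Try to label every successor of u whose rule may now be satisfied.
        for (x, v), w in edges.items():
            if x != u or v in done:
                continue
            arrivals = [label[p] + wp for p, wp in preds[v] if p in done]
            if nodes[v] == 'OR' and arrivals:
                cand = min(arrivals)                       # cheapest finished step
            elif nodes[v] == 'AND' and len(arrivals) == len(preds[v]):
                cand = max(arrivals)                       # wait for all predecessors
            else:
                continue
            if cand < label.get(v, float('inf')):
                label[v] = cand
                heapq.heappush(heap, (cand, v))
    return label

nodes = {'s': 'OR', 'a': 'OR', 'b': 'OR', 'c': 'AND', 'goal': 'OR'}
edges = {('s', 'a'): 2, ('s', 'b'): 5, ('a', 'c'): 4, ('b', 'c'): 1, ('c', 'goal'): 3}
print(and_or_shortest(nodes, edges, 's'))   # goal reached at label 9 in this toy case
```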