
Next Issue: Volume 15, October
Previous Issue: Volume 15, August

Algorithms, Volume 15, Issue 9 (September 2022) – 38 articles

Cover Story (view full-size image): In configuration design, the task is to compose a system out of a set of predefined, modular building blocks. Product configuration systems implement reasoning techniques to model and explore the resulting solution spaces. Among others, the formulation of constraint satisfaction problems (CSPs) is state of the art and underlies many proprietary configuration engine software packages. Configuration design tasks can also be implemented in computer-aided design (CAD) systems, as these contain various techniques for knowledge-based product modeling, but the literature reports little in the way of modeling examples or training materials. This article aims to bridge this gap and presents a step-by-step implementation guide for CSP-based CAD configurators, using Autodesk Inventor as the example. View this paper
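The CSP formulation mentioned here is easy to state in miniature. Below is a toy configurator sketch in Python that enumerates assignments and keeps those satisfying all compatibility constraints; the components and the rule are invented for illustration and are not taken from the article.

    # Toy constraint satisfaction problem in the spirit of a product
    # configurator; domains and constraints are illustrative assumptions.
    from itertools import product

    motors = ["M100", "M200"]
    frames = ["small", "large"]

    def compatible(motor, frame):
        # invented rule: the stronger motor only fits the large frame
        return not (motor == "M200" and frame == "small")

    solutions = [(m, f) for m, f in product(motors, frames) if compatible(m, f)]
    print(solutions)  # [('M100', 'small'), ('M100', 'large'), ('M200', 'large')]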
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF being the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 845 KiB  
Article
Non-Interactive Decision Trees and Applications with Multi-Bit TFHE
by Jestine Paul, Benjamin Hong Meng Tan, Bharadwaj Veeravalli and Khin Mi Mi Aung
Algorithms 2022, 15(9), 333; https://doi.org/10.3390/a15090333 - 18 Sep 2022
Cited by 5 | Viewed by 2241
Abstract
Machine learning classification algorithms, such as decision trees and random forests, are commonly used in many applications. Clients who want to classify their data send them to a server that performs inference using a trained model. The client must trust the server and provide the data in plaintext. Moreover, if the classification is done by a third-party cloud service, the model owner also needs to trust the cloud service. In this paper, we propose a protocol for privately evaluating decision trees. The protocol uses a novel private comparison function based on the fully homomorphic encryption over the torus (TFHE) scheme and a programmable bootstrapping technique. Our comparison function for 32-bit and 64-bit integers is 26% faster than the naive TFHE implementation. The protocol is designed to be non-interactive and is less complex than existing interactive protocols. Our experimental results show that our technique scales linearly with the depth of the decision tree and efficiently evaluates large decision trees on real datasets. Compared with the state of the art, ours is the only non-interactive protocol to evaluate a decision tree with high precision on encrypted parameters. The final download bandwidth is also 50% lower than the state of the art.
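The path-cost style of evaluation referenced in the abstract (and in Figures 9 and 10 below) can be sketched in plain Python: each node comparison yields a bit, each edge is charged that bit or its complement, and only the true root-to-leaf path accumulates zero cost. The toy tree and all names are assumptions; in the actual protocol every value is TFHE-encrypted and the comparisons use the authors' private comparison function.

    # Tree as nested dicts: internal nodes hold a feature index and threshold,
    # leaves hold a class label. Plaintext sketch only -- not the protocol.
    def path_costs(node, x, cost=0, out=None):
        """Collect (path cost, label) per leaf; the true leaf ends at cost 0."""
        if out is None:
            out = []
        if "leaf" in node:
            out.append((cost, node["leaf"]))
            return out
        b = 1 if x[node["feat"]] > node["thr"] else 0       # comparison bit
        path_costs(node["left"], x, cost + b, out)          # left edge costs b
        path_costs(node["right"], x, cost + (1 - b), out)   # right edge costs 1 - b
        return out

    tree = {"feat": 0, "thr": 5,
            "left":  {"feat": 1, "thr": 3,
                      "left": {"leaf": "A"}, "right": {"leaf": "B"}},
            "right": {"feat": 1, "thr": 7,
                      "left": {"leaf": "C"}, "right": {"leaf": "D"}}}

    print(min(path_costs(tree, [6, 2]))[1])   # the zero-cost path selects "C"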
Figures:
Figure 1: Private machine learning as a service.
Figure 2: A 32-bit comparator circuit built by cascading 4-bit IC 7485 comparators.
Figure 3: The 2-to-1 MUX and its truth table.
Figure 4: Decision tree with three internal nodes containing threshold values τ_i and four leaf nodes containing classification results v_i.
Figure 5: Encrypted nibble comparator: HE-Nibble-Eq and HE-Nibble-Gt.
Figure 6: Parallel-Prefix-Sum algorithm data flow.
Figure 7: Multiplexer with encrypted inputs and output.
Figure 8: Encrypted 32-bit comparator circuit built by cascading 4-bit encrypted comparators.
Figure 9: Decision tree evaluation with threshold comparison results and edge costs.
Figure 10: Private decision tree evaluation using the proposed private comparison function.
Figure 11: Decision tree with one-hot encoded labels.
Figure 12: Evaluation time of the proposed comparison method, TFHE [3], and XCMP [6], for comparing two encrypted values at various bit lengths.
Figure 13: Time taken for path cost calculation.
15 pages, 999 KiB  
Article
Tree-Based Classifier Ensembles for PE Malware Analysis: A Performance Revisit
by Maya Hilda Lestari Louk and Bayu Adhi Tama
Algorithms 2022, 15(9), 332; https://doi.org/10.3390/a15090332 - 17 Sep 2022
Cited by 17 | Viewed by 3729
Abstract
Given the escalating number and variety of malware, combating it is becoming increasingly strenuous. Machine learning techniques are often used in the literature to automatically discover the models and patterns behind such challenges and to create solutions that can keep up with the rapid pace at which malware evolves. This article compares various tree-based ensemble learning methods that have been proposed for the analysis of PE malware. A tree-based ensemble is an unconventional learning paradigm that constructs and combines a collection of base learners (e.g., decision trees), as opposed to the conventional learning paradigm, which aims to construct individual learners from training data. Several tree-based ensemble techniques, such as random forest, XGBoost, CatBoost, GBM, and LightGBM, are considered and appraised using different performance measures, such as accuracy, MCC, precision, recall, AUC, and F1. In addition, the experiment includes several public datasets, such as BODMAS, Kaggle, and CIC-MalMem-2022, to demonstrate the generalizability of the classifiers in a variety of contexts. Based on the test findings, all tree-based ensembles performed well, and the performance differences between algorithms are not statistically significant, particularly when their respective hyperparameters are appropriately configured. The tree-based ensemble techniques also outperformed other, similar PE malware detectors published in recent years.
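A minimal sketch of the benchmark loop such a comparison implies, using two scikit-learn ensembles as stand-ins (XGBoost, CatBoost, and LightGBM require their own packages) and synthetic data rather than the BODMAS, Kaggle, or CIC-MalMem-2022 sets:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef
    from sklearn.model_selection import train_test_split

    # synthetic stand-in for PE malware feature vectors
    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                      ("GBM", GradientBoostingClassifier(random_state=0))]:
        y_pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(name,
              "acc=%.3f" % accuracy_score(y_te, y_pred),
              "F1=%.3f" % f1_score(y_te, y_pred),
              "MCC=%.3f" % matthews_corrcoef(y_te, y_pred))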
Figures:
Figure 1: Performance comparison methodology of tree-based ensembles for PE malware detection.
Figure 2: Feature correlation analysis of each malware dataset: (a) BODMAS, (b) Kaggle, and (c) CIC-MalMem-2022.
Figure 3: Two-dimensional t-SNE visualization of instance pairs for each malware dataset: (a) BODMAS, (b) Kaggle, and (c) CIC-MalMem-2022.
Figure 4: Performance comparison of various tree-based ensemble models on different datasets: (a) BODMAS, (b) Kaggle, and (c) CIC-MalMem-2022.
13 pages, 3774 KiB  
Article
Irregular Workpiece Template-Matching Algorithm Using Contour Phase
by Shaohui Su, Jiadong Wang and Dongyang Zhang
Algorithms 2022, 15(9), 331; https://doi.org/10.3390/a15090331 - 16 Sep 2022
Cited by 2 | Viewed by 2038
Abstract
The current template-matching algorithm can match the target workpiece but cannot give the position and orientation of an irregular workpiece. To address this problem, this paper proposes a template-matching algorithm for irregular workpieces based on the contour phase difference. One first obtains the profile curve of the irregular workpiece by measuring its radius in an ordered fashion, then calculates the similarity between the template workpiece profile and the target one by phase shifting, and finally computes the rotation between the two from the number of phase movements. The experimental results showed that in this way one could not only accurately match the shaped workpiece in the template base, but also accurately calculate the rotation angle of the irregular workpiece relative to the template, with the maximum error kept within 1%.
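The matching idea lends itself to a compact numpy sketch: sample each contour as a radius-versus-angle profile, try every circular phase shift, and read the rotation angle off the best one. The sinusoidal profile below is a stand-in for a measured workpiece contour.

    import numpy as np

    n = 360                                  # one radius sample per degree
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    template = 10 + 2 * np.cos(3 * theta)    # some non-circular profile
    target = np.roll(template, 25)           # same part, rotated by 25 degrees

    # try every phase shift; the best one recovers the rotation
    errors = [np.sum((np.roll(target, -k) - template) ** 2) for k in range(n)]
    print("rotation =", int(np.argmin(errors)), "degrees")   # -> 25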
Figures:
Figure 1: Template shaped workpieces. A–D are different workpiece types; B-mirror and D-mirror are the mirror images of B and D.
Figure 2: Graphed radius of a round workpiece: (a) round piece, (b) curve graph.
Figure 3: Graphed radius of an irregular workpiece: (a) irregular piece, (b) curve graph.
Figure 4: Periodogram comparison of the template workpiece and the target one.
Figure 5: The template coincides with the target workpiece profile after the phase move.
Figure 6: Sub-template workpiece database.
Figure 7: Sub-template workpiece contour curve.
Figure 8: Target workpiece phase-difference match: (a) −18° rotation of the KB-Left; (b) 22° rotation of the KB-Right; (c) 180° rotation of the ZD631; (d) 90° rotation of the ZD632.
Figure 9: B and B-Mirror phase-difference matching results: (a) −70° rotation of B; (b) −180° rotation of B-Mirror.
8 pages, 1976 KiB  
Review
Polymer Models of Chromatin Imaging Data in Single Cells
by Mattia Conte, Andrea M. Chiariello, Alex Abraham, Simona Bianco, Andrea Esposito, Mario Nicodemi, Tommaso Matteuzzi and Francesca Vercellone
Algorithms 2022, 15(9), 330; https://doi.org/10.3390/a15090330 - 16 Sep 2022
Cited by 4 | Viewed by 2081
Abstract
Recent super-resolution imaging technologies enable tracing chromatin conformation with nanometer-scale precision at the single-cell level. They revealed, for example, that human chromosomes fold into complex three-dimensional structures within the cell nucleus that are essential to establish biological activities, such as gene regulation. Yet, to decode from imaging data the molecular mechanisms that shape the structure of the genome, quantitative methods are required. In this review, we consider polymer physics models of chromosome folding that we benchmark against multiplexed FISH data available for human loci in IMR90 fibroblast cells. By combining polymer theory, numerical simulations and machine learning strategies, the predictions of the models are validated at the single-cell level, showing that chromosome structure is controlled by the interplay of distinct physical processes, such as active loop extrusion and thermodynamic phase separation.
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
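The RMSD best-match step referenced below (Figure 3) reduces to a nearest-neighbor search over distance maps; a sketch with random symmetric matrices standing in for real FISH and simulation data:

    import numpy as np

    rng = np.random.default_rng(0)
    sym = lambda m: (m + m.T) / 2
    imaged = sym(rng.random((50, 50)))                        # mock single-cell distance map
    models = [sym(rng.random((50, 50))) for _ in range(100)]  # mock model ensemble

    def rmsd(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    best = min(range(len(models)), key=lambda i: rmsd(imaged, models[i]))
    print("best-matching model conformation:", best)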
Figures:
Figure 1: The median spatial distance matrix of the IMR90 locus (Chr21: 28–30 Mb) from multiplexed FISH microscopy [15] compared against the corresponding in silico matrices of the considered polymer models [28]. Both loop-extrusion (LE) and phase-separation-based models (SBS and LE + SBS) recapitulate the complex experimental pattern of contacts well. Adapted from [28].
Figure 2: The genomic boundary probability function of the IMR90 locus is consistently recapitulated by the different models. Adapted from [28].
Figure 3: Examples of experiment-model best-matching 3D structures, along with their corresponding single-molecule distance maps, as identified by the RMSD method. Adapted from [28].
Figure 4: Statistical significance of the RMSD analysis: (a) the RMSD distribution of the experiment-model best matches (for each type of model) is statistically distinguishable from a control made of random pairs of imaged conformations; (b) fewer than 5% of the entries of the best-match distributions fall within the first 10% of the control. Adapted from [28].
20 pages, 3066 KiB  
Article
Classification of Program Texts Represented as Markov Chains with Biology-Inspired Algorithms-Enhanced Extreme Learning Machines
by Liliya A. Demidova and Artyom V. Gorchakov
Algorithms 2022, 15(9), 329; https://doi.org/10.3390/a15090329 - 15 Sep 2022
Cited by 5 | Viewed by 2416
Abstract
The massive nature of modern university programming courses increases the burden on academic workers. The Digital Teaching Assistant (DTA) system addresses this issue by automating the generation and checking of unique programming exercises, and provides means for analyzing programs received from students by the end of the semester. In this paper, we propose a machine learning-based approach to the classification of student programs represented as Markov chains. The proposed approach enables real-time analysis of student submissions in the DTA system. We compare the performance of different multi-class classification algorithms, such as the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), and the extreme learning machine (ELM). ELM is a single-hidden-layer feedforward network (SLFN) learning scheme that drastically speeds up the SLFN training process. This is achieved by randomly initializing the weights of connections between input and hidden neurons, and explicitly computing the weights of connections between hidden and output neurons. The experimental results show that ELM is the most computationally efficient of the considered algorithms. In addition, we apply biology-inspired algorithms to fine-tune the ELM input weights in order to further improve the generalization capabilities of this algorithm. The obtained results show that ELMs fine-tuned with biology-inspired algorithms achieve the best accuracy on test data in most of the considered problems.
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
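The ELM training scheme summarized above fits in a few lines of numpy: the input weights are drawn at random and frozen, and the output weights follow in closed form from a pseudo-inverse. The toy features here stand in for the Markov chain representations of student programs.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((300, 16))                 # feature vectors
    y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy two-class labels
    T = np.eye(2)[y]                          # one-hot targets

    L = 64                                    # hidden neurons
    W = rng.normal(size=(16, L))              # random input weights (never trained)
    b = rng.normal(size=L)
    H = np.tanh(X @ W + b)                    # hidden-layer output
    beta = np.linalg.pinv(H) @ T              # output weights in closed form

    pred = np.argmax(H @ beta, axis=1)
    print("train accuracy: %.3f" % (pred == y).mean())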
Figures:
Figure 1: Examples of formulations of unique programming tasks of various types (see Table 1), automatically generated [6] by the DTA system [9,10]: (a) implement a decision tree; (b) implement an iterative function; (c) implement bit-field conversion.
Figure 2: AST of a program that solves one of the automatically generated tasks of type 6 (see Table 1 and Figure 1a) using a pattern-matching-based approach, after applying AST transformations during program text vectorization in DTA.
Figure 3: Markov chain state transition graphs for programs that employ different approaches to solve automatically generated tasks of type 6 (see Table 1 and Figure 1a): (a) solution using conditional operators; (b) solution using pattern matching; (c) solution using dictionaries.
Figure 4: The structure of an extreme learning machine.
Figure 5: ELM (see Algorithms 3 and 4) hyperparameter selection for classification of program texts solving automatically generated tasks of various types (see Table 1): (a) type 6; (b) type 7; (c) type 8; (d) type 9; (e) type 10; (f) type 11.
Figure 6: Convergence of the considered biology-inspired algorithms, DE and ETFSS, optimizing the macro F1 score (10) on the testing part of the dataset of program texts (see Table 1): (a) type 8; (b) type 9; (c) type 10; (d) type 11.
Figure 7: DTA [9,10] web-based UI: (a) displaying an error message; (b) displaying real-time program text analysis results obtained from an ELM fine-tuned with biology-inspired algorithms.
21 pages, 2743 KiB  
Concept Paper
A Model Architecture for Public Transport Networks Using a Combination of a Recurrent Neural Network Encoder Library and an Attention Mechanism
by Thilo Reich, David Hulbert and Marcin Budka
Algorithms 2022, 15(9), 328; https://doi.org/10.3390/a15090328 - 14 Sep 2022
Cited by 1 | Viewed by 1785
Abstract
This study presents a working concept of a model architecture that leverages the state of an entire transport network to make estimated arrival time (ETA) and next-step location predictions. To this end, an attention mechanism is combined with a dynamically changing recurrent neural network (RNN)-based encoder library: the states of other vehicles in the network are incorporated by encoding their positions with gated recurrent units (GRUs) specific to each bus line. By muting specific parts of the input information, its impact on prediction accuracy can be estimated on a subset of the available data. The results of the experimental investigation show that the full model with access to all the network data performed better in some scenarios. However, a model limited to vehicles of the same line ahead of the target was the best-performing model, suggesting that incorporating additional data can harm prediction accuracy if it does not add useful information. This could be caused by poor data quality but also by a lack of interaction between the included lines and the target line. The technical aspects of this study are challenging and resulted in a very inefficient training procedure. We highlight several areas where improvements to the presented method are required to make it a viable alternative to current methods. The findings of this study should be considered a possible and promising avenue for further research into this novel architecture. As such, it is a stepping stone for future research to improve public transport predictions, provided network operators supply high-quality datasets.
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
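A numpy sketch of the attention step over per-vehicle encoder embeddings (cf. Figure 7 below): the target vehicle's state acts as a query against the embedding matrix of all encoded vehicles. The dimensions and random values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    e, n_vehicles = 32, 10
    E = rng.normal(size=(n_vehicles, e))   # one GRU embedding per vehicle
    q = rng.normal(size=e)                 # query derived from the target vehicle

    scores = E @ q / np.sqrt(e)                      # scaled dot-product scores
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over vehicles
    context = weights @ E                            # attention-weighted summary
    print(context.shape)                             # (32,), fed to the prediction head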
Figures:
Figure 1: Map of Reading showing both the eastbound and westbound journey patterns. The blow-out area shows the city centre, where the line negotiates a one-way system and therefore runs on two separate routes depending on the direction of travel.
Figure 2: Map of the Reading city centre showing the route of the eastbound line 17 in blue, with a square indicating the area of interest where additional lines to include in the model were selected.
Figure 3: Ratio of data contribution for both directions and their interacting lines. Only lines contributing more than 3% of data points in the area of interest (Figure 2) are shown. Left: ratio of data points to the target line originating from each included line for the westbound direction. Right: the same for the eastbound direction.
Figure 4: Interacting line trajectories; the blue line is the area of interaction from Figure 2, and the red trajectories are the line of interest, in this case line 17 eastbound.
Figure 5: The architecture of a single GRU cell, where H^(t−1) = previous hidden state; R = reset gate = σ(x^t)·W_x + H^(t−1)·W_hr + b_r; Z = update gate = σ(x^t)·W_xz + H^(t−1)·W_hz + b_z; H = hidden state = tanh(x^t·W_xh) + (R·H^(t−1))·W_h; ŷ = output = H^t + W_hq + b_q.
Figure 6: Schema of attention without a fully connected layer or sigmoid, illustrating the attention mechanism itself.
Figure 7: (a) GRU embedding method, where each vehicle input is iteratively fed into the corresponding line model to generate the embedding matrix with embedding dimension e. (b) Attention decoder for a single embedding from (a), showing the generation of keys, values, and scores, the weighting, and the generation of the final output.
Figure 8: Model muted to the target vehicle itself, removing all external information, compared to the network-based model; performance is clearly reduced when the model has no access to external data. (A) Training loss. (B) Validation loss. (C) Generalisation error (testing error minus training error). (D) Estimated validation error in km. (E) Estimated validation loss in minutes. (Subfigure (D) has a truncated y axis for illustration purposes.)
Figure 9: Per-journey data composition of 1000 randomly selected journeys.
Figure 10: Model muted to vehicles of the same line regardless of their location relative to the target vehicle, compared to the network-based model; performance is reduced when the model only has indiscriminate access to information from its own line. Panels (A)–(E) as in Figure 8.
Figure 11: Model muted to the headway, meaning only vehicles ahead of the target were included, showing a clear performance increase in both the training and testing datasets for both directions. Panels (A)–(E) as in Figure 8.
Figure 12: Training and validation of the eastbound direction for all muted models compared to the model with access to the entire network data; for this direction the best validation loss was achieved by the model incorporating the entire network data. Panels (A)–(E) as in Figure 8.
Figure 13: Training and validation of the westbound direction for all muted models compared to the model with access to the entire network data; for this direction the best validation loss was achieved by the model incorporating the entire network data. Panels (A)–(E) as in Figure 8.
Figure 14: Boxplots of the time per epoch for both directions.
Figure 15: Comparison of the westbound headway-muted model trained on a partial and a full dataset. Panels (A)–(E) as in Figure 8.
Figure 16: Comparison of the eastbound headway-muted model trained on a partial and a full dataset. Panels (A)–(E) as in Figure 8.
16 pages, 6029 KiB  
Article
Infrared Image Deblurring Based on Lp-Pseudo-Norm and High-Order Overlapping Group Sparsity Regularization
by Zhen Ye, Xiaoming Ou, Juhua Huang and Yingpin Chen
Algorithms 2022, 15(9), 327; https://doi.org/10.3390/a15090327 - 14 Sep 2022
Cited by 2 | Viewed by 1698
Abstract
A traditional total variation (TV) model for infrared image deblurring amid salt-and-pepper noise produces a severe staircase effect. A TV model with low-order overlapping group sparsity (LOGS) suppresses this effect; however, it considers only the prior information of the low-order gradient of the image. This study proposes an image-deblurring model (Lp_HOGS) based on the LOGS model to mine the high-order prior information of an infrared (IR) image amid salt-and-pepper noise. An Lp-pseudo-norm is used to model the salt-and-pepper noise and obtain a more accurate noise model. Simultaneously, a second-order total variation regularization term with overlapping group sparsity is introduced into the proposed model to further mine the high-order prior information of the image and preserve additional image details. The proposed model is solved with the alternating direction method of multipliers, obtaining the optimal solution of the overall model by solving several simple decoupled subproblems. Experimental results show that the model has better subjective and objective performance than Lp_LOGS and other advanced models, especially when eliminating motion blur.
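The overlapping group sparsity term can be sketched directly from its definition: every entry of a gradient field contributes the l2 norm of the K × K group around it, so adjacent groups overlap. The group size and test array below are assumptions for illustration.

    import numpy as np

    def ogs_penalty(g, K=3):
        pad = K // 2
        gp = np.pad(g, pad)
        total = 0.0
        for i in range(g.shape[0]):
            for j in range(g.shape[1]):
                total += np.linalg.norm(gp[i:i + K, j:j + K])  # one group per entry
        return total

    grad = np.zeros((8, 8))
    grad[4, 4] = 1.0                # a single spike
    print(ogs_penalty(grad))        # 9.0: each of the 9 covering groups counts it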
Figures:
Figure 1: Contour maps of the different norms: (a) the L1 norm, (b) the L2 norm, and (c) the Lp-pseudo-norm.
Figure 2: The passerby image with different degrees of degradation: (a) the original image, (b) 7 × 7 Gaussian blur, and (c) 7 × 7 Gaussian blur + 30% salt-and-pepper noise.
Figure 3: A comparison of deblurring effects on the passerby image: (a) the original image, (b) deblurred using Lp_LOGS, and (c) deblurred using Lp_HOGS; (d–f) local enlargements of the red boxes in (a–c), respectively.
Figure 4: The station image with different degrees of degradation: (a) the original image, (b) 7 × 7 box blur, and (c) 7 × 7 box blur + 40% salt-and-pepper noise.
Figure 5: A comparison of the two models' deblurring effects on the station image; panels as in Figure 3.
Figure 6: The truck image with different degrees of degradation: (a) the original image, (b) 7 × 7 motion blur, and (c) 7 × 7 motion blur + 50% salt-and-pepper noise.
Figure 7: A comparison of the two models' deblurring effects on the truck image; panels as in Figure 3.
22 pages, 2300 KiB  
Article
Autonomous Intersection Management by Using Reinforcement Learning
by P. Karthikeyan, Wei-Lun Chen and Pao-Ann Hsiung
Algorithms 2022, 15(9), 326; https://doi.org/10.3390/a15090326 - 13 Sep 2022
Cited by 5 | Viewed by 2819
Abstract
Developing a safer and more effective intersection-control system is essential given the trends of rising populations and vehicle numbers. Additionally, as vehicle communication and self-driving technologies evolve, we may create a more intelligent control system to reduce traffic accidents. We propose deep reinforcement learning-inspired autonomous intersection management (DRLAIM) to improve traffic-environment efficiency and safety. The three primary models used in this methodology are the priority assignment model, intersection-control model learning, and safe brake control. The brake-safe control module ensures that each vehicle travels safely, and we train the system to acquire an effective model by using reinforcement learning. We have simulated our proposed method by using the Simulation of Urban MObility (SUMO) tool. Experimental results show that our approach outperforms the traditional method.
(This article belongs to the Special Issue Neural Network for Traffic Forecasting)
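As a generic illustration of the reinforcement learning machinery behind such controllers (not the authors' DRLAIM network), a one-step tabular Q-update with placeholder states, actions, and rewards:

    import numpy as np

    n_states, n_actions = 16, 4    # e.g., discretized queue states x signal actions
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.95       # learning rate and discount factor

    def q_update(s, a, r, s_next):
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    q_update(s=0, a=2, r=-1.0, s_next=3)   # one transition: penalty for waiting vehicles
    print(Q[0, 2])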
Figures:
Figure 1: Combination of traffic phases at the intersection.
Figure 2: Proposed system architecture.
Figure 3: The architecture of the queue.
Figure 4: An example of interaction between two vehicles, where vehicle i has priority over vehicle j: (a) trajectories with a possible collision; (b) trajectories with no collision.
Figure 5: An example of selecting different actions: (a) the agent lets vehicles in a specific lane pass; (b) the agent lets all vehicles that have sent requests pass.
Figure 6: Flow of the intersection control policy.
Figure 7: Intersection model used in our experiments.
Figure 8: Total reward while training with action time step 0.2 s and (a) 25, (b) 50, (c) 100, and (d) 200 epochs.
Figure 9: Total reward while training with action time step 0.5 s and (a) 25, (b) 50, (c) 100, and (d) 200 epochs.
Figure 10: Total reward while training with action time step 1 s and (a) 25, (b) 50, (c) 100, and (d) 200 epochs.
Figure 11: Performance results while training with three action time steps and 25 epochs: (a) AQL, (b) AWT, (c) throughput.
Figure 12: As Figure 11, with 50 epochs.
Figure 13: As Figure 11, with 100 epochs.
Figure 14: As Figure 11, with 200 epochs.
15 pages, 600 KiB  
Article
Fed-DeepONet: Stochastic Gradient-Based Federated Training of Deep Operator Networks
by Christian Moya and Guang Lin
Algorithms 2022, 15(9), 325; https://doi.org/10.3390/a15090325 - 12 Sep 2022
Cited by 6 | Viewed by 3049
Abstract
The Deep Operator Network (DeepONet) framework is a class of neural network architecture that one trains to learn nonlinear operators, i.e., mappings between infinite-dimensional spaces. Traditionally, DeepONets are trained using a centralized strategy that requires transferring the training data to a centralized location. Such a strategy, however, limits our ability to ensure data privacy or to use high-performance distributed/parallel computing platforms. To alleviate such limitations, in this paper, we study the federated training of DeepONets for the first time. That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To achieve Fed-DeepONets, we propose an efficient stochastic gradient-based algorithm that enables the distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONets, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different configurations of the number of clients and fractions of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums.
(This article belongs to the Special Issue Gradient Methods for Optimization)
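The federated averaging loop at the heart of Fed-DeepONet can be sketched with flat vectors standing in for DeepONet parameters; the local "gradient" below is a placeholder for the DeepONet loss gradient, and C, α, R, and K mirror the notation used in Figure 2.

    import numpy as np

    rng = np.random.default_rng(0)
    C, alpha, R, K, lr = 20, 0.75, 20, 200, 1e-3  # clients, availability, rounds, local steps, step size
    theta = rng.normal(size=100)                  # global (flattened) parameters
    clients = [rng.normal(size=100) for _ in range(C)]

    def local_grad(w, c):
        return w - clients[c]      # placeholder for a stochastic loss gradient

    for _ in range(R):                             # global synchronization rounds
        chosen = rng.choice(C, size=int(alpha * C), replace=False)
        updates = []
        for c in chosen:
            w = theta.copy()
            for _ in range(K):                     # local first-order updates
                w -= lr * local_grad(w, c)
            updates.append(w)
        theta = np.mean(updates, axis=0)           # server averages the local models

    print(np.linalg.norm(theta - np.mean(clients, axis=0)))  # drifts toward the client average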
Figures:
Figure 1: The DeepONet's architecture. The Branch (resp. Trunk) network processes the input function u_m (resp. a query location y ∈ Y within the output function domain). The DeepONet's output uses a dot product (represented via a crossed node) to fuse the trainable coefficients of the Branch's output (b ∈ R^q) with the trainable basis functions of the Trunk's output (φ ∈ R^q).
Figure 2: A comparison of the federated training convergence between Fed-DeepONet (Algorithm 1) and Adaptive Fed-DeepONet (Algorithm 2), in an idealistic federated scenario with C = 20 clients, a fraction α = 0.75 of available clients, R = 20 global synchronization rounds, and K = 200 local updates.
Figure 3: Fed-DeepONet G_θ* prediction of three solution trajectories driven by three inputs generated using a Gaussian random field but not included in the training dataset. The computed L2-relative errors are: (left) 0.442%, (middle) 1.252%, and (right) 1.561%.
Figure 4: Fed-DeepONet G_θ* prediction of three solution trajectories driven by three out-of-distribution inputs: (left) u(t) = t, (middle) u(t) = sin(πt), and (right) u(t) = t sin(2πt). The computed L2-relative errors are: (left) 1.813%, (middle) 0.748%, and (right) 2.296%.
Figure 5: Fed-DeepONet (Algorithms 1 and 2) performance change on the in-distribution (left) and out-of-distribution (right) test trajectories when the data heterogeneity is varied via n_shards/C.
Figure 6: Fed-DeepONet (Algorithms 1 and 2) performance change on the in-distribution (left) and out-of-distribution (right) test trajectories when the input functional data heterogeneity is increased via the control parameter Δ.
Figure 7: As Figure 3, with the parameter k sampled uniformly within the interval [0.5, 1.5]. The computed L2-relative errors are: (left) 0.442%, (middle) 1.252%, and (right) 1.561%.
Figure 8: As Figure 4, with the parameter k sampled uniformly within the interval [0.5, 1.5]. The computed L2-relative errors are: (left) 1.726%, (middle) 0.419%, and (right) 10.012%.
11 pages, 650 KiB  
Article
Accounting for Round-Off Errors When Using Gradient Minimization Methods
by Dmitry Lukyanenko, Valentin Shinkarev and Anatoly Yagola
Algorithms 2022, 15(9), 324; https://doi.org/10.3390/a15090324 - 9 Sep 2022
Cited by 3 | Viewed by 2272
Abstract
This paper discusses a method for taking rounding errors into account when constructing a stopping criterion for the iterative process in gradient minimization methods. The main aim of this work was to develop methods for improving the quality of solutions for real applied minimization problems, which require significant amounts of computation and, as a result, can be sensitive to the accumulation of rounding errors. However, this paper demonstrates that the developed approach can also be useful in solving computationally small problems. The main ideas of this work are demonstrated using one possible implementation of the conjugate gradient method for solving an overdetermined system of linear algebraic equations with a dense matrix.
(This article belongs to the Special Issue Gradient Methods for Optimization)
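A sketch of the setting the paper works in: conjugate gradient applied to the normal equations of an overdetermined system, stopped early once the residual reaches a level that accumulated rounding could explain. The tolerance rule below is a simple stand-in for the criterion developed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 50))           # dense overdetermined system
    x_model = rng.normal(size=50)
    b = A @ x_model

    x = np.zeros(50)
    r = A.T @ (b - A @ x)
    p = r.copy()
    tol = 1e3 * np.finfo(float).eps * np.linalg.norm(A.T @ b)  # stand-in stop rule
    for s in range(1000):
        Ap = A.T @ (A @ p)
        a = (r @ r) / (p @ Ap)
        x += a * p
        r_new = r - a * Ap
        if np.linalg.norm(r_new) <= tol:     # residual at round-off level: stop
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new

    print("stopped at iteration", s, "error", np.linalg.norm(x - x_model))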
Figures:
Figure 1: Example 1: (a) graph of σ_s²Δ²/‖r^(s)‖² versus the iteration number s; (b) graph of ‖x^(s) − x^model‖ versus the iteration number s.
Figure 2: Example 1: graph of x_n^res versus the component index n. The panels correspond to two different arbitrary matrices A.
Figure 3: Example 2: (a) graph of σ_s²Δ²/‖r^(s)‖² versus the iteration number s; (b) graph of ‖x^(s) − x^model‖ versus the iteration number s.
Figure 4: Example 2: graph of x_n^res versus the component index n for two different arbitrary matrices A. Solutions for the different stopping criteria are visually indistinguishable (the graphs overlap).
Figure 5: (a) Graph of x_n^res versus the component index n; (b) graph of σ_s²Δ²/‖r^(s)‖² versus the iteration number s.
Figure 6: (a) Graph of x_n^res versus the component index n; (b) graph of σ_s²Δ²/‖r^(s)‖² versus the iteration number s.
2 pages, 171 KiB  
Editorial
Special Issue: Stochastic Algorithms and Their Applications
by Stéphanie Allassonnière
Algorithms 2022, 15(9), 323; https://doi.org/10.3390/a15090323 - 9 Sep 2022
Viewed by 1339
Abstract
Stochastic algorithms are at the core of machine learning and artificial intelligence [...]
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
11 pages, 355 KiB  
Article
Projection onto the Set of Rank-Constrained Structured Matrices for Reduced-Order Controller Design
by Masaaki Nagahara, Yu Iwai and Noboru Sebe
Algorithms 2022, 15(9), 322; https://doi.org/10.3390/a15090322 - 9 Sep 2022
Cited by 1 | Viewed by 2129
Abstract
In this paper, we propose an efficient numerical computation method for reduced-order controller design for linear time-invariant systems. The design problem is described by linear matrix inequalities (LMIs) with a rank constraint on a structured matrix, due to which the problem is non-convex. Instead of the heuristic method that approximates the matrix rank by the nuclear norm, we propose a numerical projection onto the rank-constrained set based on the alternating direction method of multipliers (ADMM). The controller is then obtained by alternating projection between the rank-constrained set and the LMI set. We show the effectiveness of the proposed method compared with existing heuristic methods by using 95 benchmark models from the COMPLeib library.
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
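The key non-convex ingredient, projection onto the set of matrices of rank at most r, is a truncated SVD by the Eckart–Young theorem. A sketch of that building block alone, without the structure constraint or the ADMM splitting the paper adds:

    import numpy as np

    def project_rank(M, r):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s[r:] = 0.0                   # keep only the r largest singular values
        return (U * s) @ Vt

    M = np.random.default_rng(0).normal(size=(6, 6))
    print(np.linalg.matrix_rank(project_rank(M, 2)))   # -> 2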
20 pages, 2794 KiB  
Article
A Practical Staff Scheduling Strategy Considering Various Types of Employment in the Construction Industry
by Chan Hee Park and Young Dae Ko
Algorithms 2022, 15(9), 321; https://doi.org/10.3390/a15090321 - 9 Sep 2022
Cited by 2 | Viewed by 2786
Abstract
The Korean government implemented a 52-hour workweek policy for employees' welfare. Consequently, companies face reduced workforce availability with the same number of employees; that is, labor-dependent companies suffer from workforce shortages. To handle the shortage, they hire more irregular employees, who are paid relatively less. However, the problem of 'no-shows' arises from the stochastic character of irregular employees' absences. Therefore, this study proposes a staff scheduling strategy that accounts for irregular employee absence and the new labor policy by using linear programming. By deriving a deterministic staff schedule through system parameters taken from the features and rules of an actual company in the numerical experiment, the practicality and applicability of the developed mathematical model are proven. Furthermore, sensitivity analysis and simulation of the stochastic characteristics of absences provide various proactive cases. These cases show how changes in the average rate of irregular employees' absences influence total labor costs and staff schedules, and give the expected number of absentees if the model is applied in practice. This finding can help decision-makers prepare precautionary measures, such as assigning extra employees in case of an irregular employee's absence.
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
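As a rough illustration of the modeling idea, the following sketch solves a toy staffing LP with scipy. All numbers (wages, demands, the attendance rate) are hypothetical, and the paper's full model with employment types, shifts, and the 52-h rule is not reproduced.

```python
# Hypothetical staffing LP: two shifts, regular and irregular employees;
# irregular staffing is inflated to cover an expected absence (no-show) rate.
from scipy.optimize import linprog

cost = [100, 100, 60, 60]        # daily wages: [reg_s1, reg_s2, irr_s1, irr_s2]
attendance = 0.9                 # expected fraction of irregulars who show up
demand = [8, 5]                  # required headcount per shift
# Coverage: reg_s + attendance * irr_s >= demand[s], written as -A x <= -b.
A_ub = [[-1, 0, -attendance, 0],
        [0, -1, 0, -attendance]]
b_ub = [-demand[0], -demand[1]]
bounds = [(0, 6), (0, 6), (0, None), (0, None)]  # at most 6 regulars per shift
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x.round(2), res.fun)   # staffing plan and total labor cost
```

An integral head count would require an integer program; the LP relaxation keeps the sketch short.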
Show Figures
Figure 1. Contribution of the construction industry to economic growth.
Figure 2. Solution process of the simplex method.
Figure 3. A staff schedule before applying the mathematical model.
Figure 4. Optimal staff schedule during daytime.
Figure 5. Optimal schedule of regular employees during overtime.
Figure 6. Result of the simulation for evaluating a staff schedule.
Figure 7. The difference between the expected and assigned numbers of irregular employees depending on the attendance rate (the percentage of irregular employees’ absences).
27 pages, 1604 KiB  
Article
Adaptive Piecewise Poly-Sinc Methods for Ordinary Differential Equations
by Omar Khalil, Hany El-Sharkawy, Maha Youssef and Gerd Baumann
Algorithms 2022, 15(9), 320; https://doi.org/10.3390/a15090320 - 8 Sep 2022
Cited by 2 | Viewed by 2183
Abstract
We propose a new method of adaptive piecewise approximation based on Sinc points for ordinary differential equations. The adaptive method is a piecewise collocation method that utilizes Poly-Sinc interpolation to reach a preset level of accuracy for the approximation. Our work extends the adaptive piecewise Poly-Sinc method for function approximation, for which we derived an a priori error estimate and showed its exponential convergence in the number of iterations. In this work, we show the exponential convergence in the number of iterations of the a priori error estimate obtained from the piecewise collocation method, provided that a good estimate of the exact solution of the ordinary differential equation at the Sinc points exists. We use a statistical approach for partition refinement. The adaptive greedy piecewise Poly-Sinc algorithm is validated on regular and stiff ordinary differential equations. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
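A minimal sketch of the building block behind the method, under the assumption of a single interval: Lagrange interpolation at Sinc points, which cluster near the endpoints. The step size h = π/√N is one common choice; the adaptive partition refinement and the error statistic are omitted.

```python
# A sketch of one-interval Poly-Sinc approximation, not the adaptive algorithm.
import numpy as np

def sinc_points(a, b, N, h=None):
    """2N+1 Sinc points on (a, b): the conformal image of the grid k*h."""
    h = np.pi / np.sqrt(N) if h is None else h
    k = np.arange(-N, N + 1)
    t = np.exp(k * h)
    return (a + b * t) / (1 + t)

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at the points x."""
    vals = np.zeros_like(x, dtype=float)
    for i, xi in enumerate(xs):
        li = np.ones_like(x, dtype=float)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        vals += ys[i] * li
    return vals

xs = sinc_points(0.0, 1.0, N=6)
f = lambda x: np.sqrt(x) * np.exp(-x)     # derivative singular at x = 0
xx = np.linspace(0.0, 1.0, 401)
err = np.max(np.abs(lagrange_eval(xs, f(xs), xx) - f(xx)))
print(f"max error with {len(xs)} Sinc points: {err:.2e}")
```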
Show Figures
Figures 1, 5, 11, 13, 16, and 18. Approximating polynomials y_c^(7)(x), y_c^(10)(x), y_c^(8)(x), y_c^(9)(x), y_c^(15)(x), and y_c^(16)(x) on proper subsets of the Sinc point set S, each with a plot of the statistic ω_i (mean values ω̄_i ≈ 0.46–0.65).
Figure 3. The approximating polynomial y_c^(3)(x) on a proper subset of the Sinc point set S.
Figures 2, 4, 6, 7, 9, 12, 14, 17, and 19. Fitting the upper bound (24) with the set R, and visualization of the residual, the absolute local approximation error, and the mean value.
Figures 8 and 10. Approximating polynomials obtained by multiplying Equation (25) by x (y_c^(9)(x), y_c^(7)(x)) and from y(x) − y_c(x) (y_c^(8)(x), y_c^(6)(x)), with plots of the statistic ω_i (mean values ≈ 0.49–0.66).
Figure 15. The set R for m = 2N + 1 = 5 and m = 2N + 1 = 7 Sinc points.
22 pages, 939 KiB  
Article
Federated Optimization of ℓ0-norm Regularized Sparse Learning
by Qianqian Tong, Guannan Liang, Jiahao Ding, Tan Zhu, Miao Pan and Jinbo Bi
Algorithms 2022, 15(9), 319; https://doi.org/10.3390/a15090319 - 6 Sep 2022
Viewed by 2543
Abstract
Regularized sparse learning with the ℓ0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex-constrained sparse learning due to their capability of recovering true support and their scalability with large datasets. The current theoretical analysis of IHT assumes the use of centralized IID data. In realistic large-scale scenarios, however, data are distributed, seldom IID, and private to edge computing devices at the local level. Consequently, it is necessary to study the properties of IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server for aggregation infrequently, without sharing local data. In this paper, we propose the first group of federated IHT methods: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT), both with theoretical guarantees. We prove that both algorithms have a linear convergence rate and a guarantee for recovering the optimal sparse estimator, comparable to classic IHT methods, but with decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, in reducing objective values with fewer communication rounds and lower bandwidth requirements. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
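The following sketch illustrates the federated hard-thresholding idea on sparse linear regression, simplified relative to Fed-HT/FedIter-HT: clients take local gradient steps, and the server averages the models and hard-thresholds the result. Problem sizes and step sizes are illustrative.

```python
# Sparse linear regression: d dimensions, k-sparse truth, four clients.
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(1)
d, k, n_clients = 50, 5, 4
w_true = hard_threshold(rng.standard_normal(d), k)
clients = []
for _ in range(n_clients):
    A = rng.standard_normal((40, d))
    clients.append((A, A @ w_true + 0.01 * rng.standard_normal(40)))

w = np.zeros(d)
for _ in range(30):                             # communication rounds
    local_models = []
    for A, y in clients:
        w_loc = w.copy()
        for _ in range(5):                      # local gradient steps
            w_loc -= 0.1 * A.T @ (A @ w_loc - y) / len(y)
        local_models.append(w_loc)
    w = hard_threshold(np.mean(local_models, axis=0), k)  # aggregate, threshold
print(np.linalg.norm(w - w_true))               # small: support recovered
```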
Show Figures
Figure 1. Comparison of the proposed algorithms for different K values in terms of objective function value vs. communication rounds (a,b).
Figure 2. Objective function value vs. communication rounds for regression (a,b) and classification (c,d), for Fed-HT (a,c) and FedIter-HT (b,d) with varying degrees of non-IID data.
Figure 3. Comparison of different algorithms in terms of objective function value vs. communication rounds for regression (a) and classification (b). The Distributed-IHT baseline communicates every local update (so the number of rounds equals the number of iterations) and may be the best scenario for reducing the objective value.
Figure 4. Visualization of 10 K-means clusters for E2006-tfidf using t-SNE.
Figure 5. Visualization of 10 K-means clusters for RCV1 using t-SNE.
Figure 6. Comparison of the algorithms on different datasets in terms of objective function value vs. communication rounds; f* is a lower bound of f(x). FedIter-HT performs consistently better across all datasets, confirming the theoretical results.
28 pages, 4194 KiB  
Article
Joining Constraint Satisfaction Problems and Configurable CAD Product Models: A Step-by-Step Implementation Guide
by Paul Christoph Gembarski
Algorithms 2022, 15(9), 318; https://doi.org/10.3390/a15090318 - 6 Sep 2022
Cited by 5 | Viewed by 2201
Abstract
In configuration design, the task is to compose a system out of a set of predefined, modular building blocks assembled by defined interfaces. Product configuration systems, with or without integration of geometric models, implement reasoning techniques to model and explore the resulting solution spaces. Among others, the formulation of constraint satisfaction problems (CSP) is state of the art and the informational background in many proprietary configuration engine software packages. Configuration design tasks can also be implemented in modern computer-aided design (CAD) systems, as these contain different techniques for knowledge-based product modeling, but the literature reports little about detailed application examples, best practices, or training materials. This article aims to bridge this gap and presents a step-by-step implementation guide for CSP-based CAD configurators for combinatorial designs, using the example of Autodesk Inventor. Full article
(This article belongs to the Special Issue Combinatorial Designs: Theory and Applications)
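A minimal sketch of the CSP machinery such configurators build on: backtracking search over finite domains with binary inequality constraints, shown on the map-coloring problem referenced in the article's Figure 1. Real configurators add constraint propagation and CAD parameter updates.

```python
# Toy CSP solver: backtracking search with "neighbors differ" constraints.
def solve(assignment, variables, domains, neighbors):
    """Backtracking search: extend the assignment one variable at a time."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = solve({**assignment, var: value},
                           variables, domains, neighbors)
            if result:
                return result
    return None                       # dead end: backtrack

variables = ["L1", "L2", "L3", "L4"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"L1": ["L2", "L3"], "L2": ["L1", "L3", "L4"],
             "L3": ["L1", "L2", "L4"], "L4": ["L2", "L3"]}
print(solve({}, variables, domains, neighbors))
```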
Show Figures
Figure 1. Map coloring problem (initial value for constraint propagation highlighted in L1).
Figure 2. Configurable toy sorting box with corresponding brick variants.
Figure 3. Use case diagram for customer interaction.
Figure 4. Constraint network for the sorting box configuration problem.
Figure 5. Activity diagram for constraint handling.
Figure 6. Spreadsheet definition for the slot (left) and brick (right) domains.
Figure 7. Spreadsheet constraint definition.
Figure 8. Declaration of public classes.
Figure 9. User form for sorting box configuration.
Figure 10. Activity diagrams for inequality (a) and equality (b) constraint handling.
Figure 11. Activity diagram for handling the obligatory assignment of the fourth template slot.
Figure 12. Activity diagram for relaxing constraints and updating.
Figure 13. Activity diagram for the CAD model update.
Figure 14. Configured sorting box.
25 pages, 981 KiB  
Article
Improved Slime Mold Algorithm with Dynamic Quantum Rotation Gate and Opposition-Based Learning for Global Optimization and Engineering Design Problems
by Yunyang Zhang, Shiyu Du and Quan Zhang
Algorithms 2022, 15(9), 317; https://doi.org/10.3390/a15090317 - 4 Sep 2022
Cited by 5 | Viewed by 2357
Abstract
The slime mold algorithm (SMA) is a swarm-based metaheuristic algorithm inspired by the natural oscillatory patterns of slime molds. Compared with other algorithms, the SMA is competitive but still suffers from an imbalance between exploitation and exploration and from a tendency to fall into local optima. To overcome these drawbacks, an improved SMA with a dynamic quantum rotation gate and opposition-based learning (DQOBLSMA) is proposed in this paper. Specifically, for the first time, two mechanisms are used simultaneously to improve the robustness of the original SMA: the dynamic quantum rotation gate and opposition-based learning. Compared to the original quantum rotation gate, the dynamic quantum rotation gate introduces an adaptive, fitness-based parameter control strategy to achieve a balance between exploitation and exploration. The opposition-based learning strategy enhances population diversity and avoids falling into local optima. Twenty-three benchmark test functions verify the superiority of the DQOBLSMA. Three typical engineering design problems demonstrate the ability of the DQOBLSMA to solve practical problems. Experimental results show that the proposed algorithm outperforms other comparative algorithms in convergence speed, convergence accuracy, and reliability. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
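The opposition-based learning step is easy to isolate. A minimal sketch under the assumption of box bounds [lb, ub] and a minimization objective: for each candidate x, form the opposite point lb + ub − x and keep the fitter of the two. The dynamic quantum rotation gate and the full SMA update are not reproduced.

```python
# Opposition-based learning step for a population of candidate solutions.
import numpy as np

def opposition_step(pop, lb, ub, fitness):
    """Keep, per candidate, the better of x and its opposite lb + ub - x."""
    opp = lb + ub - pop
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opp)
    keep_pop = (f_pop <= f_opp)[:, None]      # minimization
    return np.where(keep_pop, pop, opp)

sphere = lambda x: float(np.sum(x ** 2))      # toy objective
rng = np.random.default_rng(2)
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(10, 3))
pop = opposition_step(pop, lb, ub, sphere)
print(min(sphere(x) for x in pop))
```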
Show Figures
Figure 1. The process of updating the state of a quantum bit.
Figure 2. Convergence figures on test functions F1–F23.
Figure 3. Welded beam design problem.
Figure 4. Tension/compression spring design problem.
Figure 5. Pressure vessel design problem.
17 pages, 1342 KiB  
Article
Sustainable Risk Identification Using Formal Ontologies
by Avi Shaked and Oded Margalit
Algorithms 2022, 15(9), 316; https://doi.org/10.3390/a15090316 - 2 Sep 2022
Cited by 6 | Viewed by 2354
Abstract
The cyber threat landscape is highly dynamic, posing a significant risk to the operations of systems and organisations. An organisation should, therefore, continuously monitor for new threats and properly contextualise them to identify and manage the resulting risks. Risk identification is typically performed manually, relying on the integration of information from various systems as well as subject matter expert knowledge. This manual risk identification hinders the systematic consideration of new, emerging threats. This paper describes a novel method to promote automated cyber risk identification: OnToRisk. This artificial intelligence method integrates information from various sources using formal ontology definitions, and then relies on these definitions to robustly frame cybersecurity threats and provide risk-related insights. We describe a successful case study implementation of the method to frame the threat from a newly disclosed vulnerability and identify its induced organisational risk. The case study is representative of common and widespread real-life challenges, and, therefore, showcases the feasibility of using OnToRisk to sustainably identify new risks. Further applications may contribute to establishing OnToRisk as a comprehensive, disciplined mechanism for risk identification. Full article
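A library-free sketch of the underlying idea: facts as (subject, relation, object) triples plus a single inference rule that propagates a vulnerability through the components an application uses. The names (App3, App4, ClientIDsList, the Log4Shell CVE) follow the article's figures; OnToRisk itself relies on formal OWL ontologies and a reasoner rather than this toy rule engine.

```python
# Toy knowledge base: facts as triples, plus one risk-propagation rule.
facts = {
    ("App3", "uses", "log4j-2.14"),
    ("App4", "uses", "log4j-2.14"),
    ("CVE-2021-44228", "affects", "log4j-2.14"),
    ("App3", "processes", "ClientIDsList"),
}

def infer_risks(facts):
    """Rule: app uses component and CVE affects component => app at risk."""
    inferred = set()
    for app, rel, comp in facts:
        if rel != "uses":
            continue
        for cve, rel2, comp2 in facts:
            if rel2 == "affects" and comp2 == comp:
                inferred.add((app, "atRiskFrom", cve))
    return inferred

for triple in sorted(infer_risks(facts)):
    print(triple)
```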
Show Figures
Figure 1. The vulnerability-induced risk case represented as a formal graph of meta-level concepts and relations, generated by applying the CoModIDE plugin for Protégé [36] to the formal ontology. The graph shows classes as nodes, object properties between classes (from domain to range) as solid annotated arrows, and subtype (subclass) relations as dashed arrows.
Figure 2. The Log4shell ontology-based assertions in Protégé. Manually stated (explicit) assertions appear in bold font, while automatically inferred assertions appear in regular font.
Figure 3. The reasoner explanation for asserting the risksVia properties: (a) for App3, as a result of risksInfo with respect to ClientIDsList; (b) for App4, as a result of risksFunction with respect to OpenAccount.
10 pages, 879 KiB  
Article
High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms
by Moshe Sipper
Algorithms 2022, 15(9), 315; https://doi.org/10.3390/a15090315 - 2 Sep 2022
Cited by 13 | Viewed by 4071
Abstract
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. However, just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms, we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, especially for some algorithms. By defining a single hp_score value, which combines an algorithm’s accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study shall serve ML practitioners at large. Full article
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
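The basic measurement behind such a study is the gap between default and tuned scores. A minimal sketch with scikit-learn, for one algorithm and one dataset (the paper's hp_score aggregates this over 26 algorithms, 250 datasets, and 6 metrics):

```python
# Default vs. tuned hyperparameters for one classifier on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5).mean()
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100, 300], "max_depth": [3, 6, None],
     "max_features": ["sqrt", "log2", None]},
    n_iter=10, cv=5, random_state=0)
tuned_score = search.fit(X, y).best_score_
print(f"default: {default_score:.4f}  tuned: {tuned_score:.4f}")
```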
Show Figures
Figure 1. Characteristics of the 144 classification datasets and 106 regression datasets used in our study.
22 pages, 1304 KiB  
Article
CVE2ATT&CK: BERT-Based Mapping of CVEs to MITRE ATT&CK Techniques
by Octavian Grigorescu, Andreea Nica, Mihai Dascalu and Razvan Rughinis
Algorithms 2022, 15(9), 314; https://doi.org/10.3390/a15090314 - 31 Aug 2022
Cited by 26 | Viewed by 9670
Abstract
Since cyber-attacks are ever-increasing in number, intensity, and variety, a strong need for a global, standardized cyber-security knowledge database has emerged as a means to prevent and fight cybercrime. Attempts already exist in this regard. The Common Vulnerabilities and Exposures (CVE) list documents numerous reported software and hardware vulnerabilities, thus building a community-based dictionary of existing threats. The MITRE ATT&CK Framework describes adversary behavior and offers mitigation strategies for each reported attack pattern. While extremely powerful on their own, the tremendous extra benefit gained when linking these tools cannot be overlooked. This paper introduces a dataset of 1813 CVEs annotated with all corresponding MITRE ATT&CK techniques and proposes models to automatically link a CVE to one or more techniques based on the text description from the CVE metadata. We establish a strong baseline that considers classical machine learning models and state-of-the-art pre-trained BERT-based language models while counteracting the highly imbalanced training set with data augmentation strategies based on the TextAttack framework. We obtain promising results, as the best model achieved an F1-score of 47.84%. In addition, we perform a qualitative analysis that uses LIME explanations to point out limitations and potential inconsistencies in CVE descriptions. Our model plays a critical role in finding kill chain scenarios inside complex infrastructures and enables the prioritization of CVE patching by the threat level. We publicly release our code together with the dataset of annotated CVEs. Full article
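A minimal sketch of a classical multi-label baseline for this task, with a toy corpus (the texts and label sets below are invented for illustration): TF-IDF features with one-vs-rest logistic regression. The paper's stronger models are BERT-based.

```python
# Toy multi-label mapping from vulnerability descriptions to techniques.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "remote attacker can execute arbitrary code via a crafted request",
    "improper authentication allows login with default credentials",
    "buffer overflow allows denial of service via a long input",
    "attacker with a valid account can escalate privileges",
]
labels = [["Exploitation for Client Execution"], ["Valid Accounts"],
          ["Endpoint Denial of Service"], ["Valid Accounts"]]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one column per technique
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(docs, Y)
probs = clf.predict_proba(["attacker logs in using default credentials"])[0]
print(dict(zip(mlb.classes_, probs.round(2))))  # per-technique probabilities
```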
Show Figures
Figure 1. The distribution of CVEs among techniques.
Figure 2. The distribution of CVEs among the 31 considered techniques after applying the threshold.
Figure 3. The distribution of CVEs among the 31 considered techniques after data augmentation.
Figure 4. Architecture of the CNN with Word2Vec embeddings.
Figure 5. BERT-based architecture with multiple output layers.
Figure 6. The design of the multi-labeling BERT-based architecture.
Figure 7. Comparison of training loss for the multi-output BERT architecture: (a) without data augmentation; (b) with data augmentation.
Figure 8. Comparison of the F1-score per technique between SciBERT models trained on the initial and augmented datasets.
Figure 9. Comparison of the F1-score over the CVE distribution for the SciBERT model: (a) without augmentation; (b) with augmentation.
Figure 10. Comparison of word mappings for each technique corresponding to CVE #2 from Table 4: (a) mapping Valid Accounts; (b) mapping Exploitation for Client Execution.
Figure 11. Comparison of word mappings for each technique corresponding to CVE #4 from Table 4: (a) mapping the User Execution technique; (b) mapping Browser Session Hijacking.
Figure 12. Comparison of word mappings for each technique corresponding to CVE #5 from Table 4: (a) mapping Exploitation for Client Execution; (b) mapping Endpoint Denial of Service.
22 pages, 1059 KiB  
Review
Artificial Intelligence for Cell Segmentation, Event Detection, and Tracking for Label-Free Microscopy Imaging
by Lucia Maddalena, Laura Antonelli, Alexandra Albu, Aroj Hada and Mario Rosario Guarracino
Algorithms 2022, 15(9), 313; https://doi.org/10.3390/a15090313 - 31 Aug 2022
Cited by 14 | Viewed by 5256
Abstract
Background: Time-lapse microscopy imaging is a key approach in an increasing number of biological and biomedical studies for observing the dynamic behavior of cells over time, which helps quantify important data such as the number of cells and their sizes, shapes, and dynamic interactions across time. Label-free imaging is an essential strategy for such studies as it ensures that native cell behavior remains uninfluenced by the recording process. Computer vision and machine/deep learning approaches have made significant progress in this area. Methods: In this review, we present an overview of methods, software, data, and evaluation metrics for the automatic analysis of label-free microscopy imaging. We aim to provide the interested reader with a unique source of information, with links for further detailed information. Results: We review the most recent methods for cell segmentation, event detection, and tracking. Moreover, we provide lists of publicly available software and datasets. Finally, we summarize the metrics most frequently adopted for evaluating the methods under examination. Conclusions: We provide hints on open challenges and future research directions. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
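As a concrete reference point for the segmentation tasks the review covers, the following sketch runs a classical, non-learning pipeline on a synthetic image: Otsu thresholding for the semantic mask, then connected-component labeling for instances. Label-free modalities such as phase contrast typically require the stronger methods surveyed in the paper.

```python
# Classical baseline: Otsu threshold + connected components on synthetic data.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.1, 0.05, (128, 128))         # noisy background
yy, xx = np.ogrid[:128, :128]
for cy, cx in [(32, 40), (80, 90), (100, 30)]:  # three synthetic "cells"
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 100] += 0.8

# Otsu's threshold: maximize the between-class variance over candidate splits.
hist, edges = np.histogram(img, bins=128)
centers = (edges[:-1] + edges[1:]) / 2
best_t, best_var = centers[0], -1.0
for i in range(1, 128):
    w0, w1 = hist[:i].sum(), hist[i:].sum()
    if w0 == 0 or w1 == 0:
        continue
    m0 = (hist[:i] * centers[:i]).sum() / w0
    m1 = (hist[i:] * centers[i:]).sum() / w1
    var = w0 * w1 * (m0 - m1) ** 2
    if var > best_var:
        best_var, best_t = var, centers[i]

mask = img > best_t                  # semantic segmentation: cell vs background
labels, n = ndimage.label(mask)      # instance segmentation: one label per cell
print(f"threshold {best_t:.2f}, {n} cell instances")
```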
Show Figures
Figure 1. Example of differences in appearance between fluorescence and phase contrast microscopy images (culture of human lymphocyte cells [9]): (a) fluorescence image of the nuclear envelope; (b) fluorescence image of interior nuclei (DNA); (c) phase contrast image of whole cells.
Figure 2. Results of different image processing tasks: (a) cell detection; (b) cell semantic segmentation; (c) cell instance segmentation.
Figure 3. Example data from the EVICAN dataset [88]: (a) original image (ID 92_ACHN); (b) image with annotated cells (red) and nuclei (blue); (c) image where non-annotated areas have been blurred.
Figure 4. Example data from the Usiigaci dataset [63]: (a) original image (20180101ef002xy01t01.tif); (b) corresponding indexed mask, where each color indicates a different cell in all sequence images.
17 pages, 4178 KiB  
Article
Images Segmentation Based on Cutting the Graph into Communities
by Sergey V. Belim and Svetlana Yu. Belim
Algorithms 2022, 15(9), 312; https://doi.org/10.3390/a15090312 - 30 Aug 2022
Cited by 2 | Viewed by 2246
Abstract
This article considers the problem of image segmentation based on representing the image as an undirected weighted graph. Image segmentation is then equivalent to partitioning the graph into communities, with an image segment corresponding to each community. A region-growing algorithm searches for communities on the graph, and the average edge weight within a community measures the quality of the separation. The correlation radius determines the number of next-nearest neighbors connected by edges. Edge weights are an exponential function of the differences between the color and geometric coordinates of pixels. Computer experiments determine the parameters of the algorithm. Full article
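A minimal sketch of the construction the abstract describes, with illustrative parameter values: edge weights decay exponentially with the color and geometric distances between pixels within the correlation radius R, and a region grows greedily while edges stay above a weight threshold.

```python
# Exponential edge weights and greedy region growing on a grayscale image.
import numpy as np

def edge_weight(p, q, img, alpha=10.0, beta=0.1):
    """exp(-alpha * color difference - beta * geometric distance)."""
    dc = abs(float(img[p]) - float(img[q]))   # grayscale for simplicity
    dg = np.hypot(p[0] - q[0], p[1] - q[1])
    return np.exp(-alpha * dc - beta * dg)

def grow_region(img, seed, R=1, w_min=0.5):
    """Greedy region growing over neighbors within the correlation radius R."""
    region, frontier = {seed}, [seed]
    while frontier:
        y, x = frontier.pop()
        for dy in range(-R, R + 1):
            for dx in range(-R, R + 1):
                q = (y + dy, x + dx)
                if (q in region or not (0 <= q[0] < img.shape[0])
                        or not (0 <= q[1] < img.shape[1])):
                    continue
                if edge_weight((y, x), q, img) >= w_min:
                    region.add(q)
                    frontier.append(q)
    return region

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0                          # one bright square segment
print(len(grow_region(img, seed=(10, 10))))    # -> 100 pixels
```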
Show Figures
Figure 1. Dependence of the average edge weight in the community on the number of included nodes, at different R.
Figure 2. Example of algorithm operation at R = 1 and different values of the parameter h: (a) initial image; (b) ground truth; (c) h = 0.005; (d) h = 0.01.
Figure 3. Examples of segmentation at h = 0.007 and different values of R: (a) R = 10; (b) R = 20; (c) R = 25; (d) R = 30.
Figure 4. Examples of segment highlighting in different images: (a) source image; (b) result of the algorithm.
Figure 5. Schematic model of the road image: (a) direct image of the road; (b) image of the road with displacement.
Figure 6. Examples of highlighting a road segment (b) and its boundaries (c); (a) is the original image.
Figure 7. Examples of highlighting road boundaries: (a) original image; (b) road contours; (c) road boundaries.
Figure 8. Examples of road edge approximation for the road segment (a) and the original image (b). Red lines show the border of the road.
Figure 9. The optical system parameters: (a) imaging the road in the lens; (b) parameters in the image.
Figure 10. The road image converted at y_0 ≠ M/2. The black line is the original image of the road; the blue line is the image of the road after conversion ((a) rising, (b) slope).
Figure 11. Segmentation examples: (a) original images; (b) ground truth; (c) method of [54]; (d) method of [55]; (e) our algorithm.
19 pages, 1284 KiB  
Article
Accelerating the Sinkhorn Algorithm for Sparse Multi-Marginal Optimal Transport via Fast Fourier Transforms
by Fatima Antarou Ba and Michael Quellmalz
Algorithms 2022, 15(9), 311; https://doi.org/10.3390/a15090311 - 30 Aug 2022
Cited by 6 | Viewed by 2877
Abstract
We consider the numerical solution of the discrete multi-marginal optimal transport (MOT) problem by means of the Sinkhorn algorithm. In general, the Sinkhorn algorithm suffers from the curse of dimensionality with respect to the number of marginals. If the MOT cost function decouples according to a tree or circle, its complexity is linear in the number of marginal measures. In this case, we speed up the convolution with the radial kernel required in the Sinkhorn algorithm via non-uniform fast Fourier methods. Each step of the proposed accelerated Sinkhorn algorithm with a tree-structured cost function has a complexity of O(KN) instead of the classical O(KN²) for straightforward matrix–vector operations, where K is the number of marginals and each marginal measure is supported on, at most, N points. In the case of a circle-structured cost function, the complexity improves from O(KN³) to O(KN²). This is confirmed through numerical experiments. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
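For orientation, the following is the classical two-marginal Sinkhorn iteration that the paper generalizes to tree- and circle-structured multi-marginal problems. The dense products K @ v below are exactly what the NFFT-based fast summation replaces; sizes and the regularization parameter are illustrative.

```python
# Classical two-marginal Sinkhorn iteration on a 1D grid.
import numpy as np

n, eta = 60, 0.05
x = np.linspace(0.0, 1.0, n)
mu = np.exp(-((x - 0.3) ** 2) / 0.01); mu /= mu.sum()   # first marginal
nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()   # second marginal
C = (x[:, None] - x[None, :]) ** 2                      # quadratic cost
K = np.exp(-C / eta)                                    # Gibbs kernel

u = np.ones(n)
for _ in range(500):                  # alternating marginal scalings
    v = nu / (K.T @ u)
    u = mu / (K @ v)
plan = u[:, None] * K * v[None, :]    # entropically regularized transport plan
print(np.abs(plan.sum(axis=1) - mu).max(), np.abs(plan.sum(axis=0) - nu).max())
```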
Show Figures
Figure 1. Regularized kernel κ_R for η = 1/4, periodicity length τ = 1, boundary interval ε_B = 0.2, and smoothness p = 1.
Figure 2. Computation time in seconds of MOT_η with a tree-structured cost function, regularization parameter η = 0.1. Left: fixed N = 10^4. Right: fixed K = 10.
Figure 3. Computation time in seconds of MOT_η with a circle-structured cost function, regularization parameter η = 0.1. Left: fixed N = 700. Right: fixed K = 3.
Figure 4. MOT_η with a tree-structured cost function, where d = 2, K = 10, N = 10,000, and p = 3. Left: approximation error of S^(10) between the Sinkhorn and NFFT-Sinkhorn algorithms, depending on the number of Fourier coefficients M of the NFFT. Right: computation time in seconds.
Figure 5. MOT_η with a circle-structured cost function, where d = 2, K = 3, N = 1000, and p = 3. Left: approximation error of S^(10) between the Sinkhorn and NFFT-Sinkhorn algorithms depending on the number of Fourier coefficients M. Right: computation time in seconds.
Figure 6. The tree graph of the barycenter problem, with the leaves L marked in blue.
Figure 7. Given measures {1, 4, 6, 7}, entropy regularization parameter η = 5 · 10^−3, r = 150, w̃ = (1/4)(1, 1, 1, 1); NFFT-Sinkhorn parameters M = 156, p = 3. The test images are adapted with permission (MIT License) from https://github.com/PythonOT/POT, see [54]; 2016, Rémi Flamary. (a) Sinkhorn; (b) NFFT-Sinkhorn.
Figure 8. Joint measures Π_{1,k}, k = 2, …, 5, representing the movement of particles from initial position x ∈ [0, 1] (x-axis) to position x_{t_k} ∈ [0, 1] (y-axis) at time t_k, where σ(x) = 1 − x. First row: Sinkhorn algorithm. Second row: NFFT-Sinkhorn algorithm.
Figure 9. Joint measures as in Figure 8, but with the function σ(x) = min(2x, 1 − 2x).
30 pages, 863 KiB  
Article
Implicit A-Stable Peer Triplets for ODE Constrained Optimal Control Problems
by Jens Lang and Bernhard A. Schmitt
Algorithms 2022, 15(9), 310; https://doi.org/10.3390/a15090310 - 29 Aug 2022
Cited by 2 | Viewed by 1697
Abstract
This paper is concerned with the construction and convergence analysis of novel implicit Peer triplets of two-step nature with four stages for nonlinear ODE-constrained optimal control problems. We combine the superconvergence property of a standard Peer method for inner grid points with carefully designed starting and end methods to achieve order four for the state variables and order three for the adjoint variables in a first-discretize-then-optimize approach, together with A-stability. The notion of triplets emphasizes that these three different Peer methods have to satisfy additional matching conditions. Four such Peer triplets of practical interest are constructed. In addition, as a benchmark method, the well-known backward differentiation formula BDF4, which is only A(73.35°)-stable, is extended to a special Peer triplet to supply an adjoint-consistent method of higher order and BDF type with equidistant nodes. Within the class of Peer triplets, we found a diagonally implicit A(84°)-stable method with nodes symmetric in [0, 1] about a common center that performs equally well. Numerical tests with four well-established optimal control problems confirm the theoretical findings, also concerning A-stability. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
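The first-discretize-then-optimize approach itself can be shown on a scalar toy problem. The sketch below uses explicit Euler rather than the paper's implicit Peer triplets, so it is an illustration of the discrete state/adjoint structure only: minimize (h/2)·Σ(y_k² + u_k²) subject to y_{k+1} = y_k + h(−y_k + u_k), y_0 = 1, with the gradient assembled from a backward sweep of the discrete adjoint.

```python
# Toy problem: y' = -y + u, y(0) = 1, quadratic cost; explicit Euler sketch.
import numpy as np

N, T = 200, 2.0
h = T / N

def forward(u):
    """Explicit Euler sweep of the state equation."""
    y = np.empty(N + 1)
    y[0] = 1.0
    for k in range(N):
        y[k + 1] = y[k] + h * (-y[k] + u[k])
    return y

def gradient(u, y):
    """Backward sweep of the discrete adjoint; returns dJ/du."""
    p = np.zeros(N + 1)                       # terminal condition p_N = 0
    for k in range(N - 1, 0, -1):
        p[k] = (1 - h) * p[k + 1] + h * y[k]
    return h * (u + p[1:])                    # dJ/du_k = h (u_k + p_{k+1})

u = np.zeros(N)
for _ in range(400):                          # plain gradient descent
    y = forward(u)
    u -= 2.0 * gradient(u, y)
y = forward(u)
print(f"J = {0.5 * h * np.sum(y[:N]**2 + u**2):.5f}")
```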
Show Figures
Figure 1. Rayleigh problem: convergence of the maximal state errors ‖(w^T ⊗ I)Y_n − y(t_{n+1})‖_∞ (left) and adjoint errors ‖(v^T ⊗ I)P_n − p(t_n)‖_∞ (right), n = 0, …, N.
Figure 2. Van der Pol problem: convergence of the maximal state errors (left) and adjoint errors (right), n = 0, …, N.
Figure 3. Controlled motion problem: optimal path (y_1*, y_2*) through the total energy field E = (1/2)y_2^2 + (1/4)y_1^4 − (1/2)y_1^2, visualized by isolines and exhibiting a saddle point structure (top); convergence of the maximal state errors (bottom left) and adjoint errors (bottom right), n = 0, …, N.
Figure 4. Wave problem: convergence of the maximal state errors (left) and adjoint errors (right), n = 0, …, N.
38 pages, 1397 KiB  
Systematic Review
Integrated Industrial Reference Architecture for Smart Healthcare in Internet of Things: A Systematic Investigation
by Aswani Devi Aguru, Erukala Suresh Babu, Soumya Ranjan Nayak, Abhisek Sethy and Amit Verma
Algorithms 2022, 15(9), 309; https://doi.org/10.3390/a15090309 - 29 Aug 2022
Cited by 19 | Viewed by 4205
Abstract
Internet of Things (IoT) is one of the efflorescing technologies of recent years, with splendid real-time applications in the fields of healthcare, agriculture, transportation, industry, and environmental monitoring. In addition to the dominant applications and services of IoT, many challenges exist. As there is a lack of standardization for IoT technologies, architecture has emerged as the foremost challenge. The salient issues in designing an IoT architecture encompass connectivity, data handling, heterogeneity, privacy, scalability, and security. The standard IoT architectures are the ETSI IoT Standard, the ITU-T IoT Reference Model, the IoT-A Reference Model, Intel’s IoT Architecture, the Three-Layer Architecture, Middleware-Based Architecture, Service-Oriented Architecture, Five-Layer Architecture, and the IWF Architecture. In this paper, we review these architectures and conclude that the IWF Architecture is most suitable for the effortless development of IoT applications because of its immediacy and depth of insight in dealing with IoT data. We carried out this review with respect to smart healthcare, as it is among the major industries that have been leaders and forerunners in IoT technologies. Motivated by this, we designed the novel Smart Healthcare Reference Architecture (SHRA) based on the IWF Architecture. Finally, we present the significance of smart healthcare during the COVID-19 pandemic. We have synthesized our findings in a systematic way to address the research questions on IoT challenges. To the best of our knowledge, our paper is the first to provide an exhaustive investigation of IoT architectural challenges with a use case in a smart healthcare system. Full article
(This article belongs to the Special Issue Deep Learning for Internet of Things)
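Since the review singles out the IWF model, a quick orientation may help: it organizes an IoT system into seven levels. The sketch below lists those levels as given in the commonly published IoT World Forum reference model (an assumption for illustration, not taken from this paper), with a purely hypothetical trace of a sensor reading moving up the stack:

# Hypothetical sketch: the seven levels of the IWF (IoT World Forum)
# reference model; the level names follow the commonly published model,
# and the traced data path is purely illustrative.
IWF_LEVELS = {
    1: "Physical Devices and Controllers",
    2: "Connectivity",
    3: "Edge (Fog) Computing",
    4: "Data Accumulation",
    5: "Data Abstraction",
    6: "Application",
    7: "Collaboration and Processes",
}

def trace_upstream(reading: dict) -> None:
    # Print the path a reading would take from level 1 up to level 7.
    for level in sorted(IWF_LEVELS):
        print(f"Level {level} ({IWF_LEVELS[level]}): handling {reading['sensor']}")

trace_upstream({"sensor": "heart-rate monitor", "value": 72})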
Figures:
Figure 1: Flow diagram of our systematic investigation using the PRISMA approach.
Figure 2: Number of papers published in major IoT application domains.
Figure 3: Road-map of the paper.
Figure 4: Flow of our systematic investigation.
Figure 5: ETSI Reference Architecture.
Figure 6: ITU-T Reference Architecture.
Figure 7: IoT-A Architecture.
Figure 8: IoT architecture by Intel.
Figure 9: (a) Three-Layer Architecture. (b) Middleware-Based Architecture. (c) Service-Oriented-Architecture-Based Model. (d) Five-Layer Architecture.
Figure 10: Social Internet of Things Representative Architecture for the IoT server.
Figure 11: Multi-IoT Architecture.
Figure 12: IWF 7-layered reference architecture.
Figure 13: Protocol categorization of IWF layers.
Figure 14: Smart Healthcare Reference Architecture (SHRA).
Figure 15: IoT connectivity architecture.
Figure 16: IoT data handling architecture.
Figure 17: Possible attacks and respective security solutions of IWF layers.
Figure 18: A generic IoT authentication mechanism.
15 pages, 2158 KiB  
Article
Early Prediction of Chronic Kidney Disease: A Comprehensive Performance Analysis of Deep Learning Models
by Chaity Mondol, F. M. Javed Mehedi Shamrat, Md. Robiul Hasan, Saidul Alam, Pronab Ghosh, Zarrin Tasnim, Kawsar Ahmed, Francis M. Bui and Sobhy M. Ibrahim
Algorithms 2022, 15(9), 308; https://doi.org/10.3390/a15090308 - 29 Aug 2022
Cited by 18 | Viewed by 4308
Abstract
Chronic kidney disease (CKD) is one of the most life-threatening disorders. To improve survivability, early discovery and good management are encouraged. In this paper, CKD was diagnosed using multiple optimized neural networks against traditional neural networks on the UCI machine learning dataset, to identify the most efficient model for the task. The study works on the binary classification of CKD from 24 attributes. For classification, optimized CNN (OCNN), ANN (OANN), and LSTM (OLSTM) models were used as well as traditional CNN, ANN, and LSTM models. With various performance matrixes, error measures, loss values, AUC values, and compilation time, the implemented models are compared to identify the most competent model for the classification of CKD. It is observed that, overall, the optimized models have better performance compared to the traditional models. The highest validation accuracy among the tradition models were achieved from CNN with 92.71%, whereas OCNN, OANN, and OLSTM have higher accuracies of 98.75%, 96.25%, and 98.5%, respectively. Additionally, OCNN has the highest AUC score of 0.99 and the lowest compilation time for classification with 0.00447 s, making it the most efficient model for the diagnosis of CKD. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Medicine)
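To make the compared model family concrete, the following is a minimal sketch of an optimized 1D-CNN binary classifier over the 24 preprocessed attributes; the layer sizes, dropout rate, and learning rate are illustrative assumptions, not the authors' exact OCNN configuration:

# Minimal sketch of an optimized 1D-CNN for binary CKD classification;
# all hyperparameters below are assumptions for illustration.
import tensorflow as tf

def build_ocnn(n_features: int = 24) -> tf.keras.Model:
    # Expects inputs of shape (batch, n_features, 1).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Dropout(0.2),  # regularisation: one plausible "optimization"
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # CKD / not-CKD
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

build_ocnn().summary()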
Figures:
Figure 1: The proposed system architecture for CKD prediction.
Figure 2: Highly correlated features of the CKD dataset.
Figure 3: The architecture of the optimized convolutional neural network (OCNN), a fine-tuned network.
Figure 4: The architecture of the optimized artificial neural network (OANN), a fine-tuned network.
Figure 5: The architecture of the optimized long short-term memory (OLSTM), a fine-tuned network.
Figure 6: Runtime analysis among the models.
12 pages, 2349 KiB  
Article
Traffic Demand Estimations Considering Route Trajectory Reconstruction in Congested Networks
by Wenyun Tang, Jiahui Chen, Chao Sun, Hanbing Wang and Gen Li
Algorithms 2022, 15(9), 307; https://doi.org/10.3390/a15090307 - 28 Aug 2022
Viewed by 1787
Abstract
Traffic parameter characteristics in congested road networks are explored based on traffic flow theory, and observed variables are transformed to a uniform format. The Gaussian mixture model is used to reconstruct route trajectories from travel-route data containing only the origin and destination information. Using a bi-level optimization framework, a Bayesian traffic demand estimation model was built on route trajectory reconstruction in congested networks. Numerical examples demonstrate that traffic demand estimation errors are within ±12 when the congested network is not considered, whereas demand estimates that account for traffic congestion are close to the real values. With the Gaussian mixture model’s trajectory reconstruction, the mean root mean square error of the traffic demands stabilizes at approximately 1.3. Traffic demand estimation accuracy decreases with an increase in observed data usage, and the designed iterative algorithm converges with an accuracy of 0.06. The evolution rules of urban traffic demands and road flows in congested networks are uncovered, and a theoretical basis is provided for determining traffic management and control strategies to alleviate urban traffic congestion. Full article
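As a rough illustration of the trajectory reconstruction step, the sketch below fits a Gaussian mixture to synthetic per-trip feature vectors and reads off soft route assignments; the feature choice (travel time, path length) and all numbers are assumptions for demonstration, not the paper's setup:

# Minimal sketch (not the authors' code): Gaussian mixture used to assign
# probable routes to OD-only trip records, each trip summarized here by an
# assumed (travel time, path length) feature vector.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-trip features for two latent routes between one OD pair.
route_a = rng.normal([10.0, 4.0], 0.5, size=(100, 2))
route_b = rng.normal([14.0, 5.5], 0.5, size=(100, 2))
trips = np.vstack([route_a, route_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(trips)
route_probs = gmm.predict_proba(trips)  # soft assignment of each trip to a route
print(route_probs[:3].round(3))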
Figures:
Figure 1: Performance function of link travel time. (a) Traditional assumption. (b) Actual situation.
Figure 2: Topology, link characteristics, and OD demands of the Nguyen–Dupuis network.
Figure 3: Probability distribution of Bayesian learning-based OD demands under different time intervals. (a) OD pair 1-2. (b) OD pair 1-3.
Figure 4: Comparison of traffic demand estimation errors when traffic congestion is and is not considered. (a) Traffic demand errors in the 30th time window. (b) Probability distribution of traffic demand errors in all time periods.
Figure 5: Root mean square errors of traffic demands with different observed data.
Figure 6: Traffic demands between different OD pairs under fluctuations of observed data on links 16 and 28.
Figure 7: Evolution processes of root mean square errors of traffic demands.
15 pages, 3492 KiB  
Article
MIMO Radar Imaging Method with Non-Orthogonal Waveforms Based on Deep Learning
by Hongbing Li and Qunfei Zhang
Algorithms 2022, 15(9), 306; https://doi.org/10.3390/a15090306 - 28 Aug 2022
Cited by 1 | Viewed by 1667
Abstract
Transmitting orthogonal waveforms is the basis for exploiting the full advantages of MIMO radar imaging technology, but the commonly used same-frequency waveforms cannot meet the orthogonality requirement, resulting in serious coupling noise in traditional imaging methods and degraded imaging quality. In order to effectively suppress the mutual coupling interference caused by non-orthogonal waveforms, a new non-orthogonal waveform MIMO radar imaging method based on deep learning is proposed in this paper: exploiting the powerful nonlinear fitting ability of deep learning, the mapping relationship between the non-orthogonal waveform MIMO radar echo and the ideal target image is learned automatically by constructing a deep imaging network and training it on a large amount of simulated data. The learned imaging network can effectively suppress the coupling interference caused by non-ideally orthogonal waveforms and improve the imaging quality of MIMO radar. Finally, the effectiveness of the proposed method is verified by experiments with point scattering model data and electromagnetic scattering calculation data. Full article
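To see why non-orthogonality matters, the short sketch below cross-correlates two same-band LFM chirps; the nonzero cross-channel peak is exactly the coupling that leaks through a matched filter. The waveform parameters are arbitrary illustrative values, and this baseline is not the authors' method:

# Illustrative sketch: two up/down LFM chirps over the same band are only
# approximately orthogonal, so the cross-channel matched-filter output
# (coupling) does not vanish. All parameter values are assumptions.
import numpy as np

fs, T, B = 1e6, 1e-4, 2e5                     # sample rate, pulse length, bandwidth
t = np.arange(0, T, 1 / fs)
up = np.exp(1j * np.pi * (B / T) * t**2)      # up-chirp
down = np.exp(-1j * np.pi * (B / T) * t**2)   # down-chirp

auto = np.abs(np.correlate(up, up, mode="full")).max()
cross = np.abs(np.correlate(up, down, mode="full")).max()
print(f"peak auto-correlation:  {auto:.1f}")
print(f"peak cross-correlation: {cross:.1f}  (nonzero -> coupling interference)")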
Figures:
Figure 1: Non-orthogonal waveform MIMO radar imaging framework based on a deep network.
Figure 2: Network structure of MIMO-LI-Net.
Figure 3: Ideal target image.
Figure 4: Imaging results of point target simulation data via the comparison methods: (a) matched filtering method; (b) 1D-SR method; (c) MB-SL0 method; (d) CNN method.
Figure 5: Imaging result of point target simulation data via MIMO-LI-Net.
Figure 6: Target CAD model.
Figure 7: Imaging results of electromagnetic calculation data via the comparison methods: (a) matched filtering method; (b) 1D-SR method; (c) MB-SL0 method; (d) CNN method.
Figure 8: Imaging result of electromagnetic calculation data via MIMO-LI-Net.
18 pages, 4981 KiB  
Article
A Systematic Approach for Developing a Robust Artwork Recognition Framework Using Smartphone Cameras
by Zenonas Theodosiou, Marios Thoma, Harris Partaourides and Andreas Lanitis
Algorithms 2022, 15(9), 305; https://doi.org/10.3390/a15090305 - 27 Aug 2022
Cited by 2 | Viewed by 1878
Abstract
The provision of information encourages people to visit cultural sites more often. Exploiting the great potential of smartphone cameras and egocentric vision, we describe the development of a robust artwork recognition algorithm to assist users when visiting an art space. The algorithm recognizes artworks under any physical museum conditions and from any camera point of view, making it suitable for different use scenarios towards an enhanced visiting experience. The algorithm was developed following a multiphase approach, including requirements gathering, experimentation in a virtual environment, development of the algorithm under real environment conditions, implementation of a demonstration smartphone app for artwork recognition and provision of assistive information, and its evaluation. During the algorithm development process, a convolutional neural network (CNN) model was trained for automatic artwork recognition using data collected in an art gallery, followed by extensive evaluations of the parameters that may affect recognition accuracy, while the optimized algorithm was also evaluated through a dedicated app by a group of volunteers, with promising results. The overall algorithm design and evaluation adopted for this work can also be applied in numerous applications, especially in cases where algorithm performance under varying conditions and end-user satisfaction are critical factors. Full article
Figures:
Figure 1: The systematic approach we followed for robust artwork recognition.
Figure 2: The setup of the virtual gallery.
Figure 3: Samples showing examples of different conditions simulated in the virtual environment. Row 1 shows the simulation of changes in camera orientation, row 2 shows changes in distance, row 3 shows the introduction of occlusions in the form of virtual visitors, and row 4 shows changes in wall texture.
Figure 4: Typical examples of simulated training images for a particular painting.
Figure 5: The artworks used in the experimental setup (artworks marked with an * are 3D objects).
Figure 6: Example of the ambient light intensity variations in the hall.
Figure 7: Example of figures of virtual visitors superimposed on images showing paintings, in order to simulate occlusions (the human silhouettes used originated from a freely available dataset [30]).
Figure 8: Confusion matrix for the predictions on data without occlusions and brightness intensity modifications.
Figure 9: Misclassifications between artworks 5 and 7: (a) artwork 5; (b) artwork 7; (c,d) taking a photo of artwork 5 under certain angles of view.
Figure 10: Screenshots taken from an Android device running the app prototype.
Figure 11: The process used for artwork identification: while the user is pointing at an artwork, the embedded CNN continuously analyzes individual frames from the phone's camera stream. The app considers a sliding window of 5 CNN predictions and picks the artwork with the highest mean probability in the window as the correct artwork.
Figure 12: Results of the app evaluation using the UEQ questionnaire.
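The identification rule described in the caption of Figure 11 is simple enough to sketch: keep the last five per-frame probability vectors and report the artwork with the highest mean probability. In the sketch below, cnn_predict is a hypothetical stand-in for the embedded model's per-frame inference, and the toy classifier at the end only demonstrates the plumbing:

# Minimal sketch of the sliding-window decision rule from Figure 11's
# caption; cnn_predict is a hypothetical per-frame classifier returning a
# probability vector over artworks.
from collections import deque
import numpy as np

WINDOW = 5  # window length as stated in the caption

def identify(frames, cnn_predict):
    window = deque(maxlen=WINDOW)          # holds the 5 most recent predictions
    for frame in frames:
        window.append(cnn_predict(frame))  # probability vector over artworks
        if len(window) == WINDOW:
            mean_probs = np.mean(window, axis=0)
            yield int(np.argmax(mean_probs))  # index of the recognized artwork

# Toy usage with a fake 3-artwork classifier:
fake = lambda f: np.random.default_rng(f).dirichlet(np.ones(3))
print(list(identify(range(8), fake)))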
26 pages, 20610 KiB  
Article
Computational Analysis of PDE-Based Shape Analysis Models by Exploring the Damped Wave Equation
by Alexander Köhler and Michael Breuß
Algorithms 2022, 15(9), 304; https://doi.org/10.3390/a15090304 - 27 Aug 2022
Viewed by 1695
Abstract
The computation of correspondences between shapes is a principal task in shape analysis. In this work, we consider correspondences constructed by a numerical solution of partial differential equations (PDEs). The underlying model of interest is the classic wave equation, since this may give the most accurate shape matching. As has been observed in previous works, numerical time discretisation has a substantial influence on matching quality. Therefore, it is of interest to understand the underlying mechanisms and to investigate, at the same time, whether there is an analytical model that could best describe the most suitable method for shape matching. To this end, we study here the damped wave equation, which mainly serves as a tool to understand and model properties of time discretisation. By means of a detailed study of possible parameters, we illustrate that the method that gives the most reasonable feature descriptors benefits from a damping mechanism, which can be introduced numerically or within the PDE. This sheds light on basic mechanisms of the underlying computational and analytic models: our investigation suggests that an ideal model could be composed of a transport mechanism and a diffusive component that helps to counter grid effects. Full article
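For orientation, a common form of the damped wave equation, consistent with the damping parameter $k$ referred to in the figure captions below, is given here as an assumption about the model class (not necessarily the paper's exact formulation):

% Damped wave equation on a shape \mathcal{M}; an orienting assumption
% matching the damping parameter k used in the captions below.
\begin{equation*}
  \partial_{tt} u(x,t) + k\,\partial_t u(x,t) = \Delta_{\mathcal{M}}\, u(x,t),
  \qquad k \ge 0,
\end{equation*}

where $\Delta_{\mathcal{M}}$ is the Laplace-Beltrami operator on $\mathcal{M}$; $k = 0$ recovers the undamped wave equation, while large $k$ pushes the dynamics toward a diffusion-like regime, matching the diffusive component conjectured in the abstract.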
Figures:
Figure 1: Almost isometric transformation $T$ between two centaur shapes $\mathcal{M}$ (shape 0) and $\tilde{\mathcal{M}}$ (shape 2) of the TOSCA dataset [2], together with the feature descriptor (function plots in the boxes) at the points $x_i$, $i \in \{13, 221, 2686, 3269\}$, located at the left thumb, the back and the left front hoof. The plotted feature descriptors illustrate the underlying idea and are not based on real calculations.
Figure 2: The one-dimensional analytical solution of the damped wave equation on the spatial interval $[-\pi, \pi]$ and the time interval $[0, 10]$. The feature descriptor $f_{x=0}$ (solid black line) is the function $u(x,t)$ restricted to $x = 0$; see Section 5 for details on the solution and how it is obtained.
Figure 3: (Left) The 2D Riemannian manifold $\mathcal{M}$ of the centaur's head. (Middle) The discrete approximation $\mathcal{M}_d$, displayed via the point cloud $\mathcal{P}$ and the triangles $\mathcal{T}$. (Right) The setting for the calculation of the LBO around the point $x_i$: the yellow area indicates the cell volume $\Omega_i$, and $\alpha_{ij}$ and $\beta_{ij}$ are the two angles opposite the edge $(x_i, x_j)$ (green line).
Figure 4: Feature descriptor at different points of the centaur shape, obtained from the solution of the wave equation using the IE solver with $\delta$-peak initial conditions. Shape 0, shape 2 and the points are those shown in Figure 1. (Rows) Different solvers. (Columns) Different points.
Figure 5: Comparison with the analytical damped feature descriptor at $x = 0$ (red circles: CN; blue triangles: SO$l_0$; solid black line: analytical damped feature descriptor). (Top left) Damping parameter $k = \tau/2$. (Top right) $k = \tau/8$. (Bottom left) Difference between the analytical damped feature descriptors obtained with CN and SO$l_0$; it does not exceed $\pm 1 \cdot 10^{-2}$, so only CN is used as representative of the second-order methods. (Bottom right) Damped feature descriptor ($k = \tau$) obtained with CN compared with the IE feature descriptor. Time increment $\tau = 0.1$, lattice parameter $h = 0.05$.
Figure 6: (Top row) Cosine initial condition. (Bottom row) Solution of the academic setting at iterations close to 100, 250 and 500, at which the SO$l_0$ (green) solution has a maximum amplitude, compared with the CN method (blue); a phase drift and a small damping are visible. Time increment $\tau = 0.5$.
Figure 7: Delta peak as initial condition (first row); the columns show the solutions of the IE, CN and SO$l_0$ schemes (left to right) after 10, 20 and 50 iterations (rows 2-4). All schemes show oscillations (the Gibbs phenomenon), more visible in the second-order schemes than in the first-order method. Time increment $\tau = 0.01$.
Figure 8: Gaussian distribution $\exp(-x^2/0.01)$ as initial condition (first row); the columns show the solutions of the IE, CN and SO$l_0$ schemes (left to right) after 10, 20 and 50 iterations (rows 2-4). Time increment $\tau = 0.01$.
Figure 9: Hit rate at a geodesic error of 0.25 over the damping parameter $k$ (red: IE; blue: CN; green: SO$l_0$). Solid lines denote a varying damping parameter $k$, dashed lines a fixed $k = 0$.
Figure 10: Hit rate for the centaur shapes over different damping parameters $k$. Solid lines indicate varying damping parameters, dashed lines a fixed $k = 0$ (red: IE; blue: CN; green: SO$l_0$).
Figure 11: Hit rate at a geodesic error of 0.25 for the centaur shapes, obtained with IE (red, top), CN (blue, middle) and SO$l_0$ (green, bottom). Solid lines denote a varying damping parameter $k$, dashed lines a fixed $k = 0$; the symbols distinguish the widths $\sigma$ of the Gaussian initial condition.
Figure 12: Resized version of Figure 11, restricted to the IE method between $k = 10^{-3}$ and $k = 10^{-2}$ with a rescaled hit-rate axis, making the small changes caused by varying the width of the Gaussian initial condition distinguishable.
Figure 13: Hit rate of the centaur shape over different widths $\sigma$ of the Gaussian initial condition at fixed damping $k = 0.1$ (red: IE; blue: CN; green: SO$l_0$). The red dashed line shows the hit rate of the wave equation solved with the IE method and $\delta$-peak initial conditions.
Figure 14: Comparison of the feature descriptors of the centaur shape for the damped wave equation ($k = 0.1$) with the $\delta$-peak initial condition. Shape 0, shape 2 and the points are those shown in Figure 1. (Rows) Different solvers. (Columns) Different points.
Figure 15: Comparison of the feature descriptors of the centaur shape for the damped wave equation ($k = 0.1$) with the Gaussian ($\sigma = 10$) initial condition. Shape 0, shape 2 and the points are those shown in Figure 1. (Rows) Different solvers. (Columns) Different points.
Figure 16: Hit rate at a geodesic error of 0.25 over the damping parameter $k$ for the damped wave equation with $\delta$-peaked initial condition, solved with the IE method (red, top), the CN scheme (blue, middle) and the SO$l_0$ method (green, bottom). Dashed lines indicate the $k = 0$ scenario; triangles and squares denote 6% and 12% noise, respectively.
Figure 17: Hit rate at a geodesic error of 0.25 for the damped wave equation ($k = 5 \cdot 10^{-2}$) with the Gaussian initial condition, plotted over different values of $\sigma$. The red dashed line shows the hit rate of the undamped wave equation with $\delta$-peak initial conditions solved by the IE method; triangles and squares denote 6% and 12% noise, respectively.