Search Results (74)

Search Parameters:
Keywords = known priori information

15 pages, 3195 KiB  
Article
Improved Bayes-Based Reliability Prediction of Small-Sample Hall Current Sensors
by Ting Chen, Zhengyu Liu, Ling Ju, Yongling Lu and Shike Wei
Machines 2024, 12(9), 618; https://doi.org/10.3390/machines12090618 - 4 Sep 2024
Viewed by 506
Abstract
As a type of magnetic sensor known for high reliability and a long lifespan, Hall current sensors have attracted attention in fields such as electromagnetic compatibility; however, there is still a lack of sufficient failure data for reliability prediction. Therefore, a small-sample reliability prediction method based on an improved Bayes method is proposed. First, pseudo-failure lifespan data are acquired through accelerated degradation testing of Hall current sensors subjected to temperature and humidity stressors, and the lifetimes are fitted with a Weibull distribution. Then, data expanded using a BP neural network model are used as the a priori information, and the parameter estimates of the Weibull distribution are obtained by the Bootstrap method and Gibbs sampling. Finally, the Peck acceleration model is applied to predict the reliability of Hall current sensors under normal temperature–humidity stress, and the utility of the enhanced Bayes technique is confirmed by comparison with a Wiener stochastic process model.
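A minimal numerical sketch of the bootstrap stage, assuming invented pseudo-failure lifetimes and scipy's two-parameter Weibull fit (neither is from the paper): it resamples the lifetimes and collects shape/scale estimates of the kind the improved Bayes step would treat as prior information.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pseudo-failure lifetimes (hours) from an accelerated
# degradation test -- invented numbers, not the paper's data.
lifetimes = np.array([1520., 1710., 1830., 1990., 2150., 2310., 2480., 2660.])

# Bootstrap the two-parameter Weibull fit.
boot_shape, boot_scale = [], []
for _ in range(2000):
    sample = rng.choice(lifetimes, size=lifetimes.size, replace=True)
    shape, _, scale = stats.weibull_min.fit(sample, floc=0)  # fix location at 0
    boot_shape.append(shape)
    boot_scale.append(scale)

# Summary statistics of the kind a Bayes step could treat as prior information.
print(f"shape: mean={np.mean(boot_shape):.2f}, sd={np.std(boot_shape):.2f}")
print(f"scale: mean={np.mean(boot_scale):.0f} h, sd={np.std(boot_scale):.0f} h")
```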
Figures:
Figure 1. Experimental flowchart.
Figure 2. Picture and block diagram of the acceleration test platform. (a) Constant temperature and humidity chamber. (b) Hall current sensors. (c) DC stable power supply and Hall current sensor zero-voltage detection device. (d) Block diagram of the acceleration test platform.
Figure 3. Zero-point voltage degradation curves under 65 °C/70% RH stress.
Figure 4. Variation of zero-point voltage under different stresses.
Figure 5. Comparison between original and expanded data.
Figure 6. Scatterplot of the α distribution test.
Figure 7. Scatterplot of the β distribution test.
Figure 8. Posterior distribution density of α.
Figure 9. Posterior distribution density of β.
Figure 10. Comparison of reliability curves for the two methods.
16 pages, 2616 KiB  
Article
Wandering Drunkards Walk after Fibonacci Rabbits: How the Presence of Shared Market Opinions Modifies the Outcome of Uncertainty
by Nicolas Maloumian
Entropy 2024, 26(8), 686; https://doi.org/10.3390/e26080686 - 13 Aug 2024
Viewed by 799
Abstract
Shared market opinions and beliefs by market participants generate a set of constraints that mediate information through a not-so-unstable system of expected target prices. Price trajectories, within these sets of constraints, confirm or disprove the likelihood of participant expectations and cannot, de facto, be considered permutable, as the literature has shown, since their inner structure is dynamically affected by their own progress, suggesting per se the presence of both heat and cycles. This study describes and discusses how trajectories are built using different alphabets and suggests that prices follow an ergodic course within structurally similar tessellation classes. The courses of price moves are self-similar due to their a priori structure, and they do not need to be complete in order to create the conditions, under resembling circumstances, for the appearance of the well-known and commonly used Fibonacci ratios between price trajectories. To date, financial models and engineering are mostly based on the mathematics of randomness. While these theoretical findings still await empirical validation, such a potential infrastructure of ratios would suggest the possibility of a superstructure, in other words, the emergence of exploitable patterns.
(This article belongs to the Special Issue Complexity in Financial Networks)
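As a toy aside on why Fibonacci counts arise in letter chains of this kind (a classic combinatorial fact, not the paper's construction): counting n-letter chains over {'a', 'H'} with no two adjacent 'H's yields the Fibonacci sequence, and successive count ratios approach the golden ratio.

```python
from itertools import product

def chains(n: int) -> int:
    """Count n-letter chains over {'a', 'H'} with no two adjacent 'H's."""
    return sum("HH" not in "".join(c) for c in product("aH", repeat=n))

counts = [chains(n) for n in range(1, 13)]
print(counts)                                     # 2, 3, 5, 8, 13, ... (Fibonacci)
print(f"ratio -> {counts[-1] / counts[-2]:.4f}")  # approaches 1.618... (phi)
```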
Figures:
Figure 1. Different possible combinations after six trades; 'a' marks any price change, 'H' marks no change.
Figure 2. Different possible combinations posting a six-letter chain.
Figure 3. Different ways to see the same rise in price from a bottom to a top ('a'), with 'b' an l-composition (l for left) of 'a' and 'c' an r-composition (r for right) of 'a'; on both 'b' and 'c', each horizontal line is an 'H'.
Figure 4. Tiles of an eight-letter chain composed of '1's (one tick up or down) and '0's (no change): (A) the tile set for upward moves; (B) the tile set for downward moves.
Figure 5. Probability structure of Fibonacci n-letter chains, or classes; probability levels on the y-axis.
Figure 6. How the tile sets T{9}, T{8} and T{7} are related; black squares indicate no price change ('H' or '0'), all other squares a one-unit price change ('1').
8 pages, 226 KiB  
Article
Multimodel Approaches Are Not the Best Way to Understand Multifactorial Systems
by Benjamin M. Bolker
Entropy 2024, 26(6), 506; https://doi.org/10.3390/e26060506 - 11 Jun 2024
Cited by 2 | Viewed by 1026
Abstract
Information-theoretic (IT) and multi-model averaging (MMA) statistical approaches are widely used but suboptimal tools for pursuing a multifactorial approach (also known as the method of multiple working hypotheses) in ecology. (1) Conceptually, IT encourages ecologists to perform tests on sets of artificially simplified models. (2) MMA improves on IT model selection by implementing a simple form of shrinkage estimation (a way to make accurate predictions from a model with many parameters relative to the amount of data, by “shrinking” parameter estimates toward zero). However, other shrinkage estimators such as penalized regression or Bayesian hierarchical models with regularizing priors are more computationally efficient and better supported theoretically. (3) In general, the procedures for extracting confidence intervals from MMA are overconfident, providing overly narrow intervals. If researchers want to use limited data sets to accurately estimate the strength of multiple competing ecological processes along with reliable confidence intervals, the current best approach is to use full (maximal) statistical models (possibly with Bayesian priors) after making principled, a priori decisions about model complexity.
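A hedged sketch of the shrinkage idea the abstract recommends, using invented data and sklearn's ridge penalty (one of the penalized-regression alternatives named above): the penalty pulls the many weakly supported coefficients toward zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n, p = 40, 10                     # few observations relative to parameters
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = [1.5, -1.0]            # only two genuine effects
y = X @ beta + rng.normal(size=n)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=5.0).fit(X, y)   # the penalty shrinks estimates toward zero

print("sum |coef|, OLS  :", np.abs(ols.coef_).sum().round(2))
print("sum |coef|, ridge:", np.abs(ridge.coef_).sum().round(2))
```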
20 pages, 1657 KiB  
Article
Exploring Simplicity Bias in 1D Dynamical Systems
by Kamal Dingle, Mohammad Alaskandarani, Boumediene Hamzi and Ard A. Louis
Entropy 2024, 26(5), 426; https://doi.org/10.3390/e26050426 - 16 May 2024
Cited by 1 | Viewed by 1224
Abstract
Arguments inspired by algorithmic information theory predict an inverse relation between the probability and complexity of output patterns in a wide range of input–output maps. This phenomenon is known as simplicity bias. By viewing the parameters of dynamical systems as inputs, and the resulting (digitised) trajectories as outputs, we study simplicity bias in the logistic map, Gauss map, sine map, Bernoulli map, and tent map. We find that the logistic map, Gauss map, and sine map all exhibit simplicity bias upon sampling of map initial values and parameter values, but the Bernoulli map and tent map do not. The simplicity bias upper bound on the output pattern probability is used to make a priori predictions regarding the probability of output patterns. In some cases, the predictions are surprisingly accurate, given that almost no details of the underlying dynamical systems are assumed. More generally, we argue that studying probability–complexity relationships may be a useful tool when studying patterns in dynamical systems.
(This article belongs to the Section Complexity)
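The digitisation described in the figures is easy to reproduce. Below is a small sketch using the caption's parameters (μ = 3.8, x0 = 0.1, threshold 0.5); the zlib-based complexity proxy is our stand-in assumption, not the paper's complexity measure K̃.

```python
import zlib

def digitised_logistic(mu: float, x0: float, n: int = 25) -> str:
    """Iterate x <- mu * x * (1 - x); record 1 if x >= 0.5, else 0."""
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        bits.append("1" if x >= 0.5 else "0")
    return "".join(bits)

pattern = digitised_logistic(mu=3.8, x0=0.1)
print(pattern)  # should reproduce the caption's 25-bit pattern 0101011...

# Crude complexity proxy (our stand-in, not the paper's measure): size of the
# zlib-compressed bit string.
print("compressed size:", len(zlib.compress(pattern.encode())))
```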
Figures:
Figure 1. A real-valued (orange) and digitised (blue) trajectory of the logistic map with μ = 3.8 and x0 = 0.1; writing 1 if x_k ≥ 0.5 and 0 otherwise yields the pattern x = 0101011011111011010110111 (n = 25 bits).
Figure 2. Bifurcation diagram of the logistic map for (a) μ ∈ (0, 4.0] and (b) μ ∈ (2.9, 4.0]; the digitisation threshold 0.5 is highlighted in red.
Figure 3. Simplicity bias in the digitised logistic map for x0 ∈ (0, 1) and μ sampled in different intervals; each blue point is a 25-bit digitised trajectory x, and the black line is the upper bound of Equation (3). (a) Clear simplicity bias for μ ∈ (0.0, 4.0], with P(x) closely following the bound except for low-frequency, high-complexity outputs; (b) still present for μ ∈ [3.0, 4.0]; (c) much weaker for μ ∈ [3.57, 4.0], where sampling is constrained to μ-regions more likely to show chaos; (d) P(x) roughly uniform for μ = 4.0, with almost no bias and hence no possibility of simplicity bias.
Figure 4. Distribution P(K̃(x) = r) of output complexity values with x0 ∈ (0.0, 1.0): (a) μ ∈ (0.0, 4.0], some bias toward low complexity, mean 3.4 bits; (b) μ ∈ [3.0, 4.0], close to uniform, mean 10.3 bits; (c) μ ∈ [3.57, 4.0], leaning toward higher complexity, mean 14.1 bits; (d) μ = 4.0, mean 16.4 bits; (e) purely random 25-bit strings, mean 16.2 bits. (d,e) are very similar, while (a–c) differ distinctly; comparing P(K) is an efficient check of how simplicity-biased a map is.
Figure 5. Simplicity bias in (a) the logistic map with μ ∈ [0.0, 3.5699], the non-chaotic period-doubling regime (fitted upper-bound slope −0.17); (b) the Gauss map (slope −0.13); (c) the sine map (slope −0.17).
Figure A1. Simplicity bias for different numbers of iterations: (a) n = 5, some bias but not pronounced; (b) n = 25, very clear; (c) n = 50, still clear but a low-frequency 'tail' emerges; (d) n = 100, the tail dominates and the bias is less clear.
Figure A2. Same as Figure 3, with semi-transparent data points.
Figure A3. Same as Figure 5, with semi-transparent data points.
18 pages, 8692 KiB  
Article
Object Detection and Tracking with YOLO and the Sliding Innovation Filter
by Alexander Moksyakov, Yuandi Wu, Stephen Andrew Gadsden, John Yawney and Mohammad AlShabi
Sensors 2024, 24(7), 2107; https://doi.org/10.3390/s24072107 - 26 Mar 2024
Cited by 3 | Viewed by 2086
Abstract
Object detection and tracking are pivotal tasks in machine learning, particularly within the domain of computer vision technologies. Despite significant advancements in object detection frameworks, challenges persist in real-world tracking scenarios, including object interactions, occlusions, and background interference. Many algorithms have been proposed to carry out such tasks; however, most struggle to perform well in the face of disturbances and uncertain environments. This research proposes a novel approach that integrates the You Only Look Once (YOLO) architecture for object detection with a robust filter for target tracking, addressing issues of disturbances and uncertainties. The YOLO architecture, known for its real-time object detection capabilities, is employed for initial object detection and centroid location. In combination with the detection framework, the sliding innovation filter, a novel robust filter, is implemented and postulated to improve tracking reliability in the face of disturbances. Specifically, the sliding innovation filter estimates the optimal centroid location in each frame and updates the object's trajectory. Target tracking traditionally relies on estimation-theoretic techniques such as the Kalman filter; the sliding innovation filter is introduced as a robust alternative particularly suitable for scenarios where a priori information about system dynamics and noise is limited. Experimental simulations in a surveillance scenario demonstrate that the sliding innovation filter-based tracking approach outperforms existing Kalman-based methods, especially in the presence of disturbances. Overall, this research contributes a practical and effective approach to object detection and tracking, addressing challenges in real-world, dynamic environments. The comparative analysis with traditional filters provides practical insights, laying the groundwork for future work aimed at advancing multi-object detection and tracking capabilities in diverse applications.
(This article belongs to the Section Intelligent Sensors)
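A generic sliding innovation filter step, following published SIF formulations (prediction as in a Kalman filter; a gain built from a saturated innovation and a boundary-layer width δ). The constant-velocity model and all numbers below are invented for illustration, not taken from the paper.

```python
import numpy as np

def sif_step(x, z, A, C, delta):
    """One predict/update cycle of a linear sliding innovation filter (SIF)."""
    x_pred = A @ x                                   # prediction (as in a KF)
    innov = z - C @ x_pred                           # innovation
    sat = np.minimum(np.abs(innov) / delta, 1.0)     # saturated innovation term
    K = np.linalg.pinv(C) @ np.diag(sat)             # SIF gain
    return x_pred + K @ innov                        # update

# Invented constant-velocity model for one centroid coordinate (pixels).
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
C = np.array([[1.0, 0.0]])                # only position is measured
delta = np.array([5.0])                   # sliding boundary layer width

x = np.array([0.0, 0.0])
for z in [10.0, 22.0, 29.0, 41.0, 52.0]:  # noisy centroid measurements
    x = sif_step(x, np.array([z]), A, C, delta)
print("state (pos, vel):", x.round(2))
```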
Figures:
Figure 1. The sliding innovation filter and its sliding boundary layer [37].
Figure 2. Flow diagram of the proposed architecture and methodology.
Figure 3. Still frame from the atrium dataset footage used in the experimentation.
Figure 4. Object detection and tracking with YOLO and estimation theory for the dataset under study; the object of interest is the human within the bounding box.
Figure 5. Still frame atrium image with Kalman filter tracking.
Figure 6. Still frame atrium image with extended Kalman filter tracking.
Figure 7. RMSE plots of the KF (left) and EKF (right).
Figure 8. Still frame atrium image with sliding innovation filter tracking.
Figure 9. Still frame atrium image with extended sliding innovation filter tracking.
Figure 10. RMSE plots of the SIF (left) and ESIF (right).
Figure 11. KF-based tracking performance under 'break 3' disturbance conditions.
Figure 12. SIF-based tracking performance under 'break 3' disturbance conditions.
Figure 13. RMSE plots of the KF (left) and SIF (right) with the first break.
Figure 14. RMSE plots of the KF (left) and SIF (right) with the second break.
Figure 15. RMSE plots of the KF (left) and SIF (right) with the third break.
18 pages, 644 KiB  
Article
Applying Learning and Self-Adaptation to Dynamic Scheduling
by Bernhard Werth, Johannes Karder, Michael Heckmann, Stefan Wagner and Michael Affenzeller
Appl. Sci. 2024, 14(1), 49; https://doi.org/10.3390/app14010049 - 20 Dec 2023
Cited by 1 | Viewed by 1907
Abstract
Real-world production scheduling scenarios are often not discrete, separable, iterative tasks but rather dynamic processes where both external (e.g., new orders, delivery shortages) and internal (e.g., machine breakdown, timing uncertainties, human interaction) influencing factors gradually or abruptly impact the production system. Solutions to these problems are often very specific to the application case or rely on simple problem formulations with known and stable parameters. This work presents a dynamic scheduling scenario for a production setup where little information about the system is known a priori. Instead of fully specifying all relevant problem data, the timing and batching behavior of machines are learned by a machine learning ensemble during operation. We demonstrate how a meta-heuristic optimization algorithm can utilize these models to tackle this dynamic optimization problem, compare the dynamic performance of a set of established construction heuristics and meta-heuristics and showcase how models and optimizers interact. The results obtained through an empirical study indicate that the interaction between optimization algorithm and machine learning models, as well as the real-time performance of the overall optimization system, can impact the performance of the production system. Especially in high-load situations, the dynamic algorithms that utilize solutions from previous problem epochs outperform the restarting construction heuristics by up to ~24%.
(This article belongs to the Special Issue Evolutionary Algorithms and Their Real-World Applications)
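One of the construction heuristics compared above, shortest-job-first, fits in a few lines. In the paper's setting the durations would come from the learned machine learning ensemble rather than being known; the sketch below simply invents them.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    est_duration: float               # in the paper, predicted by the ML ensemble
    name: str = field(compare=False)

def shortest_job_first(pending: list[Job]) -> list[Job]:
    """Dispatch order under the SJF construction heuristic."""
    heap = list(pending)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

jobs = [Job(4.2, "B"), Job(1.1, "C"), Job(7.5, "A"), Job(2.3, "D")]
print([j.name for j in shortest_job_first(jobs)])   # ['C', 'D', 'B', 'A']
```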
Figures:
Figure 1. Asynchronous components of the experimental setup.
Figure 2. Scenario: production pipeline.
Figure 3. The First-In-First-Out (FIFO) algorithm.
Figure 4. Shortest-Job-First (SJF) processing eight jobs.
Figure 5. Strict offspring selection.
Figure 6. Gantt chart and quality curves of a single run (open-ended local search, OELS).
Figure 7. Gantt chart and quality curves of a single run (shortest-job-first, SJF).
Figure 8. Typical algorithm performance for a problem where tasks arrive faster than they can be processed.
Figure 9. Algorithm performance for a problem instance with very high speedup.
Figure 10. "Well"-performing models for machines with setup times.
Figure 11. "Mediocre"-performing models for machines with setup times.
Figure 12. Overall performance of different optimizers measured by finished products.
Figure 13. Overall performance of different optimizers measured by finished subtasks.
10 pages, 651 KiB  
Article
Exploring Rehabilitation Provider Experiences of Providing Health Services for People Living with Long COVID in Alberta
by Sidney Horlick, Jacqueline A. Krysa, Katelyn Brehon, Kiran Pohar Manhas, Katharina Kovacs Burns, Kristine Russell, Elizabeth Papathanassoglou, Douglas P. Gross and Chester Ho
Int. J. Environ. Res. Public Health 2023, 20(24), 7176; https://doi.org/10.3390/ijerph20247176 - 13 Dec 2023
Cited by 2 | Viewed by 1879
Abstract
Background: COVID-19 infection can result in persistent symptoms, known as long COVID. Understanding the provider experience of service provision for people with long COVID symptoms is crucial for improving care quality and addressing potential challenges. Currently, there is limited knowledge about the provider experience of long COVID service delivery. Aim: To explore the provider experience of delivering health services to people living with long COVID at select primary, rehabilitation, and specialty care sites. Design and setting: This study employed qualitative description methodology. Semi-structured interviews were conducted with frontline providers at primary care, rehabilitation, and specialty care sites across Alberta. Participants were interviewed between June and September 2022. Method: Interviews were conducted virtually over Zoom, audio-recorded, and transcribed with consent. Iterative inductive qualitative content analysis of transcripts was employed. Relationships between emergent themes were examined for causality or reciprocity, then clustered into content areas and further abstracted into a priori categories through their joint interpretive meaning. Participants: A total of 15 participants across Alberta, representing diverse health care disciplines, were interviewed. Results: Main themes include the importance of education for long COVID recognition; the role of symptom acknowledgement in patient-centred long COVID service delivery; the need to develop recovery expectations; and opportunities to improve navigation and wayfinding to long COVID services. Conclusions: Provider experience of delivering long COVID care can be used to inform patient-centred service delivery for persons with long COVID symptoms.
(This article belongs to the Section Health Care Sciences & Services)
Figures:
Figure 1. Conceptual framework: interplay of long COVID education, acknowledgement, expectations and system navigation. The complex interplay between the facets of the model is portrayed; moving one component of the model sets the others in motion.
16 pages, 5133 KiB  
Article
A New Linear Model for the Calculation of Routing Metrics in 802.11s Using ns-3 and RStudio
by Juan Ochoa-Aldeán and Carlos Silva-Cárdenas
Computers 2023, 12(9), 172; https://doi.org/10.3390/computers12090172 - 28 Aug 2023
Viewed by 1188
Abstract
Wireless mesh networks (WMNs) offer a pragmatic, cost-effective solution for provisioning ubiquitous broadband internet access and diverse telecommunication systems. The conceptual underpinning of mesh networks finds application not only in IEEE networks but also in 3GPP networks such as LTE and in the low-power wide area network (LPWAN) tailored to the burgeoning Internet of Things (IoT) landscape. IEEE 802.11s is the de facto standard for WMNs; it defines the hybrid wireless mesh protocol (HWMP) as a layer-2 routing protocol and the airtime link metric (ALM). In this intricate landscape, artificial intelligence (AI) plays a prominent role in industry, particularly within the technology and telecommunication realms. This study presents a novel methodology for the computation of routing metrics, specifically the ALM. The methodology employs the ns-3 network simulator and RStudio as a statistical computing environment for data analysis. The former enables the creation of scripts that generate a variety of WMN scenarios whose data are gathered and stored in databases. RStudio then takes this information and supports two linear predictions: the first uses linear models (lm) and the second generalized linear models (glm). To conclude the process, statistical tests are applied to the original model as well as to the newly suggested ones. This work contributes in two substantial ways: first, through a methodological tool for the metric calculation of the HWMP protocol of the IEEE 802.11s standard, using lm and glm for the selection and validation of the model regressors; the ANOVA and stepwise tools of RStudio are used at this stage. The second contribution is a linear predictor that improves the WMN's performance as an a priori mechanism before the use of the ns-3 simulator; the ANCOVA tool of RStudio is employed here.
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
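The lm/glm workflow translates directly to any statistics stack. Here is a sketch in Python/statsmodels rather than the authors' RStudio scripts, with invented stand-in regressors (snr, hops) in place of the ns-3 outputs:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "snr": rng.uniform(5, 30, n),        # invented stand-ins for ns-3 regressors
    "hops": rng.integers(1, 5, n),
})
df["alm"] = 2.0 + 0.8 * df["hops"] - 0.05 * df["snr"] + rng.normal(0, 0.3, n)

lmesh = smf.ols("alm ~ snr + hops", data=df).fit()    # lm analogue
glmesh = smf.glm("alm ~ snr + hops", data=df).fit()   # glm analogue (Gaussian)

print(lmesh.params.round(3))
print("AIC, lm vs. glm:", round(lmesh.aic, 1), round(glmesh.aic, 1))
```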
Figures:
Figure 1. Multi-radio wireless mesh network [2].
Figure 2. Methodology for computing the performance of the wireless mesh network (WMN) using the new linear model.
Figure 3. Methodology for predicting network performance.
Figure 4. WMN 802.11s–HWMP in ns-3.
Figure 5. STARGAZER output for lmesh1 and lmesh2.
Figure 6. ANOVA of lmesh1 vs. lmesh2.
Figure 7. Stepwise summary.
Figure 8. Adjusted residuals for lmesh1.
Figure 9. lmesh3 summary.
Figure 10. Adjusted residuals for lmesh3.
Figure 11. glmesh1 summary.
Figure 12. glmesh1 residuals.
Figure 13. glmesh2 summary.
Figure 14. PDF vs. M_STEP, mesh = 3 × 3.
Figure 15. PDF vs. M_STEP, mesh = 4 × 4.
Figure 16. PDF vs. M_STEP, mesh = 5 × 5.
Figure 17. PDF vs. M_STEP, mesh = 5 × 5.
18 pages, 965 KiB  
Article
Causality Analysis with Different Probabilistic Distributions Using Transfer Entropy
by Michał J. Falkowski and Paweł D. Domański
Appl. Sci. 2023, 13(10), 5849; https://doi.org/10.3390/app13105849 - 9 May 2023
Cited by 3 | Viewed by 1281
Abstract
This paper presents the results of an analysis of causality detection in a multi-loop control system. The investigation focuses on the application of the Transfer Entropy method, which is not commonly used in the exact construction of information and material flow pathways in the field of automation. Calculations are performed on simulated multi-loop control system data obtained from a system with a structure known a priori. The model allows its parameters to be changed freely and noise with different properties to be applied. In addition, a method for determining the entropy transfer between process variables is investigated. The fitting of different variants of probability distribution functions to the data is crucial for an effective evaluation of the Transfer Entropy approach. The obtained results allow suggestions to be formulated as to which probability function the transfer entropy should be based upon. Moreover, we propose a design for a causality analysis approach that can reliably recover information relationships.
(This article belongs to the Special Issue Recent Advances in Nonlinear Vibration and Control)
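A crude histogram-based transfer entropy estimator conveys the quantity being computed; the binning, lag-1 conditioning, and test signals below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Crude histogram estimate of TE(X -> Y) with lag 1 (illustrative only)."""
    y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
    disc = lambda s: np.digitize(s, np.histogram_bin_edges(s, bins)[1:-1])
    yn, yc, xc = disc(y_next), disc(y_now), disc(x_now)
    te = 0.0
    for j in np.unique(yn):
        for k in np.unique(yc):
            for l in np.unique(xc):
                p_jkl = np.mean((yn == j) & (yc == k) & (xc == l))
                if p_jkl == 0.0:
                    continue
                p_kl = np.mean((yc == k) & (xc == l))
                p_jk = np.mean((yn == j) & (yc == k))
                p_k = np.mean(yc == k)
                te += p_jkl * np.log2(p_jkl * p_k / (p_kl * p_jk))
    return te

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)  # y is driven by the past of x
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")  # should be the larger
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")
```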
Figures:
Figure 1. Simulated multi-loop PID-based control layout.
Figure 2. The industrial implementation of feedforward disturbance decoupling.
Figure 3. Control error time series for the simulation data with Gaussian noise.
Figure 4. Control error time series for the data with Gaussian noise and Cauchy disturbance.
Figure 5. Causality diagram of the considered simulated benchmark example.
Figures 6–8. Histograms with PDF fitting for the dataset with Gaussian noise (control errors ε1–ε5).
Figures 9–11. Histograms with PDF fitting for the dataset with Gaussian noise and Cauchy disturbance (control errors ε1–ε5).
Figure 12. Causality diagram for the Gaussian-noise dataset using the Transfer Entropy approach based on the α-stable distribution.
Figure 13. Causality diagram for the dataset with both Gaussian noise and Cauchy disturbance using the Transfer Entropy approach based on the Gaussian distribution.
Figure 14. Causality diagram for the dataset with both Gaussian noise and Cauchy disturbance using the Transfer Entropy approach based on the α-stable distribution.
24 pages, 2095 KiB  
Article
Determination of Bayesian Cramér–Rao Bounds for Estimating Uncertainties in the Bio-Optical Properties of the Water Column, the Seabed Depth and Composition in a Coastal Environment
by Mireille Guillaume, Audrey Minghelli, Malik Chami and Manchun Lei
Remote Sens. 2023, 15(9), 2242; https://doi.org/10.3390/rs15092242 - 23 Apr 2023
Viewed by 2001
Abstract
The monitoring of coastal areas using remote sensing techniques is an important issue for determining the bio-optical properties of the water column and the seabed composition. New hyperspectral satellite sensors (e.g., PRISMA, DESIS or EnMAP) have been developed to periodically observe ecosystems. The uncertainties in the retrieved geophysical products remain a key issue for releasing reliable data useful to end-users. In this study, an analytical approach based on information theory is proposed to investigate the Cramér–Rao lower bounds (CRB) on the uncertainties in the ocean color parameters. In practice, a priori knowledge of the estimated parameters is used during the inversion process, since their range of variation is supposed to be known. Here, a Bayesian approach is attempted to handle such a priori knowledge. A Bayesian CRB (BCRB) is derived using the Lee et al. semianalytical radiative transfer model dedicated to shallow waters. Both the environmental noise and the bio-optical parameters are supposed to be random vectors that follow a Gaussian distribution. The calculation of the CRB and BCRB is carried out for two hyperspectral images acquired above the French Mediterranean coast. The images were obtained from the recently launched hyperspectral sensors DESIS (DLR Earth Sensing Imaging Spectrometer, German Aerospace Center) and PRISMA (PRecursore IperSpettrale della Missione Applicativa, Italian Space Agency). The comparison between the usual CRB approach, the proposed BCRB approach and the experimental errors obtained for the retrieved bathymetry shows the better ability of the BCRB to determine minimum error bounds.
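The Bayesian bound's structure can be sketched on a linearized toy model: for a Gaussian prior, the Bayesian information matrix adds the prior precision to the data's Fisher information, J_B = J_D + Σ_p⁻¹, so the BCRB is tighter than the CRB. Everything below (the linear forward model G, noise and prior covariances) is invented for illustration; the paper works with the nonlinear Lee et al. model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented linearized forward model r = G @ theta + noise (the paper's Lee et
# al. model is nonlinear; this toy only shows the bound's structure).
n_bands, n_params = 60, 3
G = rng.normal(size=(n_bands, n_params))
Sigma_n = 0.05 * np.eye(n_bands)           # environmental noise covariance
Sigma_p = np.diag([4.0, 1.0, 0.25])        # Gaussian prior covariance

J_data = G.T @ np.linalg.inv(Sigma_n) @ G  # Fisher information from the data
crb = np.diag(np.linalg.inv(J_data))                           # classical CRB
bcrb = np.diag(np.linalg.inv(J_data + np.linalg.inv(Sigma_p))) # Bayesian CRB

print("RMS CRB :", np.sqrt(crb).round(4))
print("RMS BCRB:", np.sqrt(bcrb).round(4))  # the prior tightens the bound
```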
Figures:
Figure 1. RGB images acquired by (a) the DESIS sensor (13 June 2021) and (b) the PRISMA sensor (14 August 2021).
Figure 2. Litto3D data: (a) bathymetry between 0 m and 30 m, with the profile location marked as a red vertical line in column 250; (b) depth profile along that line, from the beach down to −30 m.
Figure 3. Spectral signatures from the DESIS and PRISMA images for the same two pixels, corresponding to a sand seabed area and a mixed (70%) Posidonia seabed area.
Figure 4. Covariance matrices estimated on a homogeneous area of the DESIS and PRISMA images.
Figure 5. Reflectance spectra of benthic habitat used for the inversion of the PRISMA image.
Figures 6 and 7. PRISMA data: root mean square of the Cramér–Rao and Bayesian Cramér–Rao bounds versus estimated bathymetry, and empirical RMSE versus true bathymetry, for the inversion domains Dp(3–30) and Dp(3–20).
Figure 8. Reflectance spectra of benthic habitat used for the inversion of the DESIS image.
Figures 9 and 10. DESIS data: the same bounds and empirical RMSE for the domains Dd(3–30) and Dd(3–20).
Figure 11. Comparison of the BCRB obtained for the PRISMA and DESIS sensors, using six retrieved parameters and simulated data: (a) H; (b) CHL, CDOM, SPM; (c) a1, a2.
16 pages, 4106 KiB  
Article
CroReLU: Cross-Crossing Space-Based Visual Activation Function for Lung Cancer Pathology Image Recognition
by Yunpeng Liu, Haoran Wang, Kaiwen Song, Mingyang Sun, Yanbin Shao, Songfeng Xue, Liyuan Li, Yuguang Li, Hongqiao Cai, Yan Jiao, Nao Sun, Mingyang Liu and Tianyu Zhang
Cancers 2022, 14(21), 5181; https://doi.org/10.3390/cancers14215181 - 22 Oct 2022
Cited by 7 | Viewed by 2010
Abstract
Lung cancer is one of the most common malignant tumors in human beings. It is highly fatal, as its early symptoms are not obvious. In clinical medicine, physicians rely on the information provided by pathology tests as an important reference for the final diagnosis of many diseases; pathology diagnosis is therefore known as the gold standard for disease diagnosis. However, the complexity of the information contained in pathology images and the growth in the number of patients far outpace the number of pathologists, especially in the treatment of lung cancer in less developed countries. To address this problem, we propose a plug-and-play visual activation function (AF), CroReLU, based on a priori pathological knowledge, which makes it possible to use deep learning models for precision medicine. To the best of our knowledge, this work is the first to optimize deep learning models for pathology image diagnosis from the perspective of AFs. By adopting a unique crossover window design for the activation layer of the neural network, CroReLU is equipped with the ability to model spatial information and capture histological morphological features of lung cancer such as papillary, micropapillary, and tubular alveoli. To test the effectiveness of this design, 776 lung cancer pathology images were collected as experimental data. When CroReLU was inserted into the SeNet network (SeNet_CroReLU), the diagnostic accuracy reached 98.33%, significantly better than that of common neural network models. The generalization ability of the proposed method was validated on the LC25000 dataset, which has a completely different data distribution and recognition task, in view of practical clinical needs. The experimental results show that CroReLU can recognize inter- and intra-class differences in cancer pathology images, and that its recognition accuracy exceeds that of existing work relying on complex network-layer designs.
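A loose PyTorch sketch of a spatially conditioned activation, inspired by the cross-shaped-window idea: this is not the paper's exact CroReLU definition; the depthwise cross-masked convolution and the funnel-style max are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossSpatialReLU(nn.Module):
    """Spatially conditioned activation: max(x, T(x)), where T is a depthwise
    convolution whose kernel is masked to a cross-shaped window. This is our
    reading of the idea, not the paper's exact CroReLU definition."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.channels, self.k = channels, k
        self.weight = nn.Parameter(0.1 * torch.randn(channels, 1, k, k))
        self.bn = nn.BatchNorm2d(channels)
        mask = torch.zeros(1, 1, k, k)
        mask[..., k // 2, :] = 1.0   # horizontal arm of the cross
        mask[..., :, k // 2] = 1.0   # vertical arm of the cross
        self.register_buffer("mask", mask)

    def forward(self, x):
        w = self.weight * self.mask  # keep only the cross-shaped window
        t = F.conv2d(x, w, padding=self.k // 2, groups=self.channels)
        return torch.maximum(x, self.bn(t))

x = torch.randn(1, 16, 56, 56)
print(CrossSpatialReLU(16)(x).shape)  # torch.Size([1, 16, 56, 56])
```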
Figures:
Figure 1. Lung adenocarcinoma data: representative pathological images of (a) microinvasive lung adenocarcinoma, (b) invasive lung adenocarcinoma and (c) normal lung tissue; data enhancement operations: (d) random sample rotation, (e) random flip and (f) random region masking (images resized from 2048 × 1536 to 224 × 224).
Figure 2. Overall experimental data preparation and workflow.
Figure 3. Visual activation function integrating spatial information: (a) receptive field area of a lung cancer pathology feature map; (b) ReLU with a zero condition; (c) CroReLU with a spatial parametric condition.
Figure 4. Convolutional network modules combined with CroReLU: (a) Conv-ReLU (Conv-BN-AF); (b) CroReLU-Conv (BN-AF-Conv); (c) overall architecture of SENet50_CroReLU.
Figure 5. Confusion matrices obtained on the private dataset: (a) SENet50; (b) SENet50_CroReLU; (c) MobileNet; (d) MobileNet_CroReLU.
Figure 6. Impact of the CroReLU share on accuracy for three deep learning models.
Figure 7. LC25000 dataset; from left to right: benign lung (Lung_n), lung adenocarcinoma (Lung_aca), lung squamous carcinoma (Lung_scc), benign colon (Colon_n) and colon adenocarcinoma (Colon_aca); image size 768 × 768.
Figure 8. SENet50_CroReLU classification results on LC25000: (a) confusion matrix; (b) ROC curves.
17 pages, 4480 KiB  
Article
High-Accuracy Height-Independent 3D VLP Based on Received Signal Strength Ratio
by Yihuai Xu, Xin Hu, Yimao Sun, Yanbing Yang, Lei Zhang, Xiong Deng and Liangyin Chen
Sensors 2022, 22(19), 7165; https://doi.org/10.3390/s22197165 - 21 Sep 2022
Cited by 3 | Viewed by 1770
Abstract
Visible light positioning (VLP) has attracted intensive attention from both academic and industrial communities thanks to its high accuracy, immunity to electromagnetic interference, and low deployment cost. In general, the receiver in a VLP system determines its own position by exploring the received signal strength (RSS) from the transmitter according to a pre-built RSS attenuation model. In such model-based methods, the LED's emission power and the receiver's height are usually required to be known and constant parameters to obtain reasonable positioning accuracy. However, the LED's emission power is normally time-varying, because the LED's optical output power is prone to change with the LED's temperature, and the receiver's height is random in realistic application scenarios. To this end, we propose a height-independent three-dimensional (3D) VLP scheme based on the RSS ratio (RSSR) rather than the RSS alone. Unlike existing RSS-based VLP methods, our method independently finds the horizontal coordinate, i.e., the two-dimensional (2D) position, without a priori height information for the receiver, and also avoids the negative effect caused by fluctuation of the LED's emission power. Moreover, we can further infer the height of the receiver to achieve 3D positioning by iterating the 2D results back into the positioning equations. To verify the proposed scheme, we conduct theoretical analysis with mathematical proofs and experiments with real data, which confirm that the proposed scheme can achieve high positioning accuracy without knowledge of the receiver's height or the LED's emission power. We also implement a VLP prototype with five LED transmitters, and experimental results show that the proposed scheme achieves very low average errors of 2.73 cm in 2D and 7.20 cm in 3D.
(This article belongs to the Collection Visible Light Communication (VLC))
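The power-cancelling trick can be sketched numerically: ratios of Lambertian RSS values are independent of the LED emission power, so a least-squares fit on the ratios recovers position without knowing it. The LED layout, Lambertian order and generic solver below are invented assumptions; the paper's closed-form height-independent 2D solution differs from this simple fit.

```python
import numpy as np
from scipy.optimize import least_squares

# Invented ceiling layout of five LEDs (x, y, z) in meters and Lambertian order.
leds = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [4, 4, 3], [2, 2, 3]], float)
m = 1.0

def rss(rx):
    """Lambertian received power up to an unknown common emission factor."""
    dz = leds[:, 2] - rx[2]
    d = np.linalg.norm(leds - rx, axis=1)
    return dz ** (m + 1) / d ** (m + 3)

true_rx = np.array([1.2, 2.7, 0.8])
meas = 37.0 * rss(true_rx)          # the factor 37.0 (emission power) is unknown

def residuals(rx):
    r = rss(rx)
    # Ratios to the first LED cancel the emission power entirely.
    return r[1:] / r[0] - meas[1:] / meas[0]

sol = least_squares(residuals, x0=[2.0, 2.0, 0.5],
                    bounds=([0, 0, 0], [4, 4, 2.5]))
print("estimated (x, y, height):", sol.x.round(3))  # ~ [1.2, 2.7, 0.8]
```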
Figures:
Figure 1. The basic model of VLP.
Figure 2. Main parts of the proposed VLP system: (a) low-cost MCU-based transmitter; (b) PD-based receiver.
Figure 3. The experimental setup: (a) layout; (b) hardware design.
Figure 4. Sample points of each surface: (a) the original; (b) the division used by the ML method.
Figure 5. Overall horizontal positioning result; error bars denote S.D.
Figure 6. 2D positioning distribution at different heights: (a) 0 cm; (b) 20 cm; (c) 40 cm.
Figure 7. CDF of horizontal positioning error.
Figure 8. Horizontal positioning error at different a priori heights.
Figure 9. Overall 3D positioning result; error bars denote S.D.
Figure 10. 3D positioning distribution at different heights: (a) the proposed method; (b) the SLLS-based method; (c) the ML-based method.
Figure 11. CDF of 3D positioning error.
14 pages, 1275 KiB  
Article
Efficient Clustering for Continuous Occupancy Mapping Using a Mixture of Gaussian Processes
by Soohwan Kim and Jonghyuk Kim
Sensors 2022, 22(18), 6832; https://doi.org/10.3390/s22186832 - 9 Sep 2022
Cited by 1 | Viewed by 1667
Abstract
This paper proposes a novel method for occupancy map building using a mixture of Gaussian processes. Gaussian processes have proven to be highly flexible and accurate for a robotic occupancy mapping problem, yet the high computational complexity has been a critical barrier for large-scale applications. We consider clustering the data into small, manageable subsets and applying a mixture of Gaussian processes. One of the problems in clustering is that the number of groups is not known a priori, thus requiring inputs from experts. We propose two efficient clustering methods utilizing (1) a Dirichlet process and (2) geometrical information in the context of occupancy mapping. We will show that the Dirichlet process-based clustering can significantly speed up the training step of the Gaussian process and if geometrical features, such as line features, are available, they can further improve the clustering accuracy. We will provide simulation results, analyze the performance and demonstrate the benefits of the proposed methods.
(This article belongs to the Special Issue Sensors for Occupancy and Indoor Positioning Services)
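scikit-learn's truncated variational Dirichlet process mixture gives the flavor of clustering laser hit points without fixing the number of groups a priori. The wall-segment data below are synthetic, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
# Synthetic laser hit points along three wall segments.
walls = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (1, 3))]
pts = np.vstack([np.linspace(a, b, 60) + rng.normal(0, 0.02, (60, 2))
                 for a, b in walls])

# Truncated variational Dirichlet process mixture: the effective number of
# clusters is inferred from the data, up to the truncation level n_components.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(pts)

print("effective clusters:", np.unique(dpgmm.predict(pts)).size)
```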
Show Figures

Figure 1

Figure 1
Figure 1. (a) Simulation data (robots: red circles; laser beams: black lines; laser hit points: blue points). (b) Single laser scan (laser hit points are grouped into clusters, with a different color per cluster).
Figure 2. Flow chart of our mapping method using a mixture of Gaussian processes. The inputs and outputs are conceptually visualized to show how data are clustered, how local maps are inferred from the clustered data, and how local maps are merged into a global map.
Figure 3. (a) Dirichlet process (DP)-based clustering with 8 groups. (b) Line tracking (LT)-based clustering with 18 groups.
Figure 4. Occupancy maps and map uncertainties built with individual Gaussian process experts for training data partitioned with a Dirichlet process mixture model or line tracking. (Left) Training data subsets; (middle) occupancy maps color-coded by occupancy (red/blue for occupied/empty); (right) map uncertainties color-coded by uncertainty (red/blue for high/low). (a) Cluster 4, DP-clustering; (b) Cluster 5, DP-clustering; (c) Cluster 3, LT-clustering; (d) Cluster 13, LT-clustering.
Figure 5. Comparison of occupancy maps between different approaches. (a) Ground truth: the simulation environment, where red and blue denote occupied and empty areas, respectively. (b) An occupancy grid map is discrete and sparse due to its independent-cell assumption, while a single Gaussian process generates (c) a continuous occupancy map with (d) its uncertainty from the same dataset, but suffers from high computational complexity. Clustering methods such as a Dirichlet process (e,f) and line tracking (g,h) reduce the computational complexity. However, a Dirichlet process considers only the distribution of points and may mis-cluster them, while line tracking follows the connectivity of points and generates a better occupancy map with its uncertainty.
Figure 6. Covariance matrices between range observations (hit points, laser beams with returns and with no returns) constructed with integral kernels. Each element of the covariance matrix can be read as the pair-wise similarity between two observations, with darker color showing higher similarity. (a) Before clustering: nearby observations have high similarity, but so do some distant ones; the repeated patterns are due to the laser beams spanning horizontally, and the last several rows and columns come from the second robot. After clustering with (b) a Dirichlet process (DP) and (c) line tracking (LT), the observations are grouped into diagonal blocks, which verifies that the clustering results are acceptable.
Figure 7. Receiver Operating Characteristic (ROC) curves of occupancy maps built by the three methods.
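">
The captions above outline the full pipeline: partition the range observations (by a Dirichlet process or by line tracking), fit one Gaussian process expert per cluster, and fuse the local occupancy estimates into a global map weighted by their uncertainties. Below is a minimal NumPy sketch of that mixture-of-experts fusion; the toy data, the nearest-wall split standing in for the clustering step, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: per-cluster GP experts fused by inverse-variance weighting.
import numpy as np

def rbf(A, B, length=0.5, sigma=1.0):
    """Squared-exponential kernel between point sets of shape (n,2) and (m,2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_expert(X, y, Xq, noise=1e-2):
    """GP regression: predictive mean and variance at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = rbf(Xq, Xq).diagonal() - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 1e-9)

# Toy data: hit points on two walls (+1 = occupied), free-space samples (-1).
rng = np.random.default_rng(0)
wall1 = np.c_[np.linspace(0, 2, 15), np.full(15, 2.0)]   # horizontal wall
wall2 = np.c_[np.full(15, 3.0), np.linspace(0, 2, 15)]   # vertical wall
free = rng.uniform([0, 0], [3, 2], size=(30, 2))
X = np.vstack([wall1, wall2, free])
y = np.r_[np.ones(30), -np.ones(30)]

# Trivial stand-in for DP/line-tracking clustering: split by nearest wall.
labels = (np.linalg.norm(X - [3.0, 1.0], axis=1)
          < np.linalg.norm(X - [1.0, 2.0], axis=1)).astype(int)

# Fuse the experts on a query grid by precision (inverse-variance) weighting.
gx, gy = np.meshgrid(np.linspace(0, 3, 40), np.linspace(0, 2, 30))
Xq = np.c_[gx.ravel(), gy.ravel()]
num = np.zeros(len(Xq)); den = np.zeros(len(Xq))
for c in (0, 1):
    m, v = gp_expert(X[labels == c], y[labels == c], Xq)
    num += m / v
    den += 1.0 / v
occupancy = num / den      # >0 leans occupied, <0 leans empty
uncertainty = 1.0 / den    # fused predictive variance, as in the map panels
```

Because each expert is trained only on its cluster, the cubic cost of GP inference applies per cluster rather than to the whole dataset, which is the complexity reduction the Figure 5 captions refer to.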
19 pages, 604 KiB  
Article
Towards Federated Learning with Byzantine-Robust Client Weighting
by Amit Portnoy, Yoav Tirosh and Danny Hendler
Appl. Sci. 2022, 12(17), 8847; https://doi.org/10.3390/app12178847 - 2 Sep 2022
Cited by 4 | Viewed by 1790
Abstract
Federated learning (FL) is a distributed machine learning paradigm in which data are distributed among clients who collaboratively train a model in a computation process coordinated by a central server. By assigning a weight to each client based on the proportion of data instances it possesses, the server can greatly accelerate convergence to an accurate joint model. Some previous works have studied FL in a Byzantine setting, in which a fraction of the clients may send arbitrary or even malicious information regarding their model. However, these works either ignore the issue of data unbalancedness altogether or assume that client weights are known a priori to the server, whereas in practice the weights are likely to be reported by the clients themselves and therefore cannot be relied upon. We address this issue for the first time by proposing a practical weight-truncation-based preprocessing method and demonstrating empirically that it strikes a good balance between model quality and Byzantine robustness. We also establish analytically that our method can be applied to a randomly selected sample of client weights. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
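The core idea the abstract describes, clipping self-reported client weights before weighted aggregation so that no client can buy influence by over-reporting its data size, fits in a few lines. The sketch below is illustrative only: the median cap stands in for the paper's calibrated truncation threshold, and the toy updates and names are assumptions, not the authors' code.

```python
# Minimal sketch: weight truncation as a preprocessing step for FedAvg.
import numpy as np

def truncate_weights(reported, cap=None):
    """Clip self-reported client weights at a cap (here the median,
    an illustrative stand-in for a calibrated truncation point)."""
    cap = np.median(reported) if cap is None else cap
    return np.minimum(reported, cap)

def weighted_fedavg(updates, weights):
    """Weighted average of client model updates (one row per client)."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * updates).sum(axis=0) / w.sum()

rng = np.random.default_rng(1)
honest = rng.normal(1.0, 0.05, size=(9, 4))   # 9 honest updates near 1.0
byzantine = -np.ones((1, 4))                  # model-negation-style update
updates = np.vstack([honest, byzantine])

# Honest clients report realistic data sizes; the attacker claims a huge one.
reported = np.r_[rng.integers(50, 500, size=9), 10**6]
naive = weighted_fedavg(updates, reported)                     # hijacked, ~ -1
robust = weighted_fedavg(updates, truncate_weights(reported))  # ~ honest mean
print(naive.round(2), robust.round(2))
```

In the paper this preprocessing is paired with standard robust aggregation rules; the sketch shows only the clipping step and why unverified self-reported weights cannot be trusted on their own.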
Show Figures

Figure 1. Example plot of data generated by executing Algorithm 3 on the unbalanced vector $\boldsymbol{N}$ with $\alpha^{*} = 50\%$ (this vector corresponds to the partition used in our experiments; see Section 3.1 for details).
Figure 2. Histogram of the sample partitions of the MNIST (left) and Shakespeare (right) datasets.
Figure 3. Accuracy by round without any attackers for the Shakespeare experiments. Curves correspond to preprocessing procedures, and columns correspond to different aggregation methods.
Figure 4. Accuracy by round under Byzantine attacks for the Shakespeare experiments. Curves correspond to preprocessing procedures, and columns correspond to different aggregation methods. In the two rows of the experiment, the Byzantine clients perform a model negation attack with 1% and 10% attackers, respectively.
Figure A1. Accuracy by round without any attackers for the MNIST experiments. Curves correspond to preprocessing procedures, and columns correspond to different aggregation methods.
Figure A2. Accuracy by round under Byzantine attacks for the MNIST experiments. In the first two rows, Byzantine clients perform a label shifting attack with 1% and 10% attackers, respectively. In the last two rows, we repeat the experiment with a model negation attack.
14 pages, 416 KiB  
Article
The Rating Scale Paradox: Semantics Instability versus Information Loss
by Jacopo Giacomelli
Standards 2022, 2(3), 352-365; https://doi.org/10.3390/standards2030024 - 1 Aug 2022
Cited by 1 | Viewed by 1951
Abstract
Rating systems are applied to a wide variety of contexts as a tool to map a large amount of information to a symbol, or notch, chosen from a finite, ordered set. Such a set is commonly known as the rating scale, and its elements represent the different degrees of quality that a given rating system aims to express. This work investigates a simple yet nontrivial paradox in constructing that scale. When the considered quality parameter is continuous, a bijection must exist between a specific partition of its domain and the rating scale. The number of notches and their meanings are commonly defined a priori, based on the convenience of the rating system's users. Regarding the partition, however, the number of subsets and their widths should be chosen a posteriori, to minimize the unavoidable information loss due to discretization. Considering the typical case of a creditworthiness rating system based on a logistic regression model, we discuss to what extent this contrast may impact a realistic framework and how a proper rating scale definition can handle it. Indeed, we show that it is not strictly necessary to choose between a priori methods, which privilege the meaning of the rating scale, and a posteriori methods, which minimize information loss: the two approaches can be mixed through a hybrid criterion tunable to the needs of the rating model's users. Full article
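The tension the abstract describes can be made concrete in a few lines: an a priori master scale fixes PD boundaries for their meaning, an a posteriori scale fits boundaries to the observed PD distribution (quantile bins are a common information-preserving choice), and a hybrid interpolates between the two. The sketch below is illustrative only; the boundary values, the quantile rule, and the linear blending parameter t are assumptions, not the paper's calibrated criteria.

```python
# Minimal sketch: a priori vs a posteriori vs hybrid rating-scale partitions.
import numpy as np

rng = np.random.default_rng(42)
# Toy portfolio: PDs obtained from a logistic model's scores.
scores = rng.normal(-2.5, 1.0, size=5000)
pd_vals = 1.0 / (1.0 + np.exp(-scores))

n_notches = 7
# A priori: fixed boundaries chosen for their meaning (e.g., a "1% PD" notch).
a_priori = np.array([0.002, 0.005, 0.01, 0.03, 0.08, 0.20])
# A posteriori: boundaries at PD quantiles, equalizing notch populations.
a_posteriori = np.quantile(pd_vals, np.linspace(0, 1, n_notches + 1)[1:-1])
# Hybrid: interpolate boundaries; t tunes meaning vs information loss.
t = 0.5
hybrid = (1 - t) * a_priori + t * a_posteriori

for name, bounds in [("a priori", a_priori),
                     ("a posteriori", a_posteriori),
                     ("hybrid", hybrid)]:
    notch = np.digitize(pd_vals, bounds)          # notch index per obligor
    counts = np.bincount(notch, minlength=n_notches)
    print(name, counts)
```

Printing the per-notch counts makes the trade-off visible: the a priori scale leaves some notches nearly empty, while quantile bins equalize populations at the cost of the boundaries' meaning; the hybrid sits in between, tunable via t.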
Show Figures

Figure 1. Schematics of the rating and decisional maps.
Figure 2. PD distribution across the considered scenarios. The blue dotted line plots the PD expected value. The orange and grey areas represent the ±1 standard deviation interval and the 0.5–99.5 percentile interval, respectively.
Figure 3. Master scale's partition across the considered scenarios. Each panel depicts the effects of choosing a different optimality criterion among the ones defined in Section 2.2.
Figure 4. Average PD per notch across the considered scenarios. Each panel depicts the effects of choosing a different optimization criterion among the ones defined in Section 2.2.
Figure 5. Exposure obtained across the considered scenarios by applying the decisional system described in (21) and each of the criteria introduced in Sections 2.2.1–2.2.4.
Figure 6. Loss per unit of exposure obtained across the considered scenarios by applying the decisional system described in (21) and each of the criteria introduced in Sections 2.2.1–2.2.4.
Figure 7. Smooth transition of exposure and loss-to-exposure ratio when passing from the $\bar{\mathbf{s}}^{\star}_{\mathrm{Fix}}$ calibration to the $\bar{\mathbf{s}}^{\star}_{\mathrm{HR}}$ calibration.