Entropy, Volume 21, Issue 1 (January 2019) – 99 articles

Cover Story: A key aspect of the brain activity underlying consciousness and cognition appears to be complex dynamics—meaning that brain activity shows a balance between integration and segregation. There have now been many attempts to build useful measures of complexity, most notably within Integrated Information Theory. In this paper, we describe, using a uniform notation, six different candidate measures of integrated information, then we set out the intuitions behind each, and explore how they behave in a variety of network models. The most striking finding is that the measures all behave very differently: no two measures show consistent agreement across all our analyses. By rigorously comparing these measures, we are able to identify those that better reflect the underlying intuitions of integrated information, which we believe will be of help as these measures continue to be developed and refined.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
10 pages, 281 KiB  
Article
Simple Stopping Criteria for Information Theoretic Feature Selection
by Shujian Yu and José C. Príncipe
Entropy 2019, 21(1), 99; https://doi.org/10.3390/e21010099 - 21 Jan 2019
Cited by 8 | Viewed by 4853
Abstract
Feature selection aims to select the smallest feature subset that yields the minimum generalization error. In the rich feature selection literature, information theory-based approaches seek a subset of features such that the mutual information between the selected features and the class labels is maximized. Despite the simplicity of this objective, several open optimization problems remain. These include, for example, automatically determining the optimal subset size (i.e., the number of features), or a stopping criterion when a greedy search strategy is adopted. In this paper, we suggest two stopping criteria based simply on monitoring the conditional mutual information (CMI) among groups of variables. Using the recently developed multivariate matrix-based Rényi’s α-entropy functional, which can be estimated directly from data samples, we show that the CMI among groups of variables can be computed easily without any decomposition or approximation, making our criteria easy to implement and to integrate seamlessly into any existing information theoretic feature selection method that uses a greedy search strategy. Full article
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
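The greedy-search stopping idea described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's method: it uses a simple histogram plug-in estimator of mutual information instead of the matrix-based Rényi α-entropy functional, and a label-permutation test as the stopping rule; all function and parameter names here are ours.

```python
import numpy as np

def mi_binned(x, y, bins=8):
    """Plug-in estimate (in nats) of I(x; y) for a continuous feature x and
    an integer class label y, via histogram discretization of x."""
    xb = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    joint = np.zeros((bins, int(y.max()) + 1))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

def greedy_select(X, y, n_perm=100, alpha=0.05, seed=0):
    """Greedy forward selection that stops when the best remaining feature's
    MI with the labels is not significantly above a permutation null."""
    rng = np.random.default_rng(seed)
    remaining, selected = list(range(X.shape[1])), []
    while remaining:
        scores = [mi_binned(X[:, j], y) for j in remaining]
        best = int(np.argmax(scores))
        # Null distribution: MI of the best candidate against shuffled labels
        null = [mi_binned(X[:, remaining[best]], rng.permutation(y))
                for _ in range(n_perm)]
        p = (1 + sum(n >= scores[best] for n in null)) / (1 + n_perm)
        if p > alpha:  # candidate carries no significant information: stop
            break
        selected.append(remaining.pop(best))
    return selected
```

On synthetic data where only one feature determines the label, the procedure picks that feature first and typically stops shortly after, which is the behavior the paper's criteria aim for.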
Show Figures

Figure 1
<p>(<b>a</b>) shows the values of mutual information (MI) <math display="inline"><semantics> <mrow> <mi mathvariant="bold">I</mi> <mo>(</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>;</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math> and conditional mutual information (CMI) <math display="inline"><semantics> <mrow> <mi mathvariant="bold">I</mi> <mo>(</mo> <mrow> <mo>{</mo> <mi>S</mi> <mo>−</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>}</mo> </mrow> <mo>;</mo> <mi>y</mi> <mo>|</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>)</mo> </mrow> </semantics></math> with respect to different numbers of selected features, i.e., the size of <math display="inline"><semantics> <msup> <mi>S</mi> <mo>′</mo> </msup> </semantics></math>. <math display="inline"><semantics> <mrow> <mi mathvariant="bold">I</mi> <mo>(</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>;</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math> is monotonically increasing, whereas <math display="inline"><semantics> <mrow> <mi mathvariant="bold">I</mi> <mo>(</mo> <mrow> <mo>{</mo> <mi>S</mi> <mo>−</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>}</mo> </mrow> <mo>;</mo> <mi>y</mi> <mo>|</mo> <msup> <mi>S</mi> <mo>′</mo> </msup> <mo>)</mo> </mrow> </semantics></math> is monotonically decreasing. (<b>b</b>) shows the termination points produced by different stopping criteria, namely CMI-heuristic (black solid line), CMI-permutation (black dashed line), <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>MI</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <msup> <mi>χ</mi> <mn>2</mn> </msup> </semantics></math> (green solid line), and MI-permutation (blue solid line). The red curve with the shaded area indicates the average bootstrap classification accuracy with <math display="inline"><semantics> <mrow> <mn>95</mn> <mo>%</mo> </mrow> </semantics></math> confidence interval.
In this example, the bootstrap classification accuracy reaches its statistical maximum value with 11 features and CMI-heuristic performs the best.</p>
16 pages, 2336 KiB  
Article
Cooling Effectiveness of a Data Center Room under Overhead Airflow via Entropy Generation Assessment in Transient Scenarios
by Luis Silva-Llanca, Marcelo del Valle, Alfonso Ortega and Andrés J. Díaz
Entropy 2019, 21(1), 98; https://doi.org/10.3390/e21010098 - 21 Jan 2019
Cited by 18 | Viewed by 4968
Abstract
Forecasting data center cooling demand remains a primary thermal management challenge in an increasingly large global energy-consuming industry. This paper proposes a dynamic modeling approach to evaluate two different strategies for delivering cold air into a data center room. The common cooling method provides air through perforated floor tiles by means of a centralized distribution system, hindering flow management at the aisle level. We propose an idealized system in which five overhead heat exchangers are located above the aisle and handle the entire server cooling demand. In one case, the overhead heat exchangers force the airflow downwards into the aisle (Overhead Downward Flow (ODF)); in the other, the flow is forced to move upwards (Overhead Upward Flow (OUF)). A complete fluid dynamic, heat transfer, and thermodynamic analysis is proposed to model the system’s thermal performance under both steady-state and transient conditions. Inside the servers and heat exchangers, the flow and heat transfer processes are modeled using a set of differential equations solved in MATLAB™ 2017a. This solution is coupled with ANSYS-Fluent™ 18, which computes the three-dimensional velocity, temperature, and turbulence fields on the Airside. The two approaches (ODF and OUF) are evaluated and compared by estimating their cooling effectiveness and the local Entropy Generation. The latter identifies the zones within the room responsible for increasing the inefficiencies (irreversibilities) of the system. Both approaches demonstrated similar performance, with a small advantage shown by OUF. The results demonstrate a promising approach to on-demand data center cooling. Full article
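The local entropy generation analysis mentioned in the abstract rests on a standard result: for pure heat conduction, the volumetric entropy generation rate is S'''_gen = k·|∇T|²/T². A minimal numpy sketch of that formula on a 2D temperature field follows; it is illustrative only (not the paper's MATLAB/ANSYS-Fluent pipeline), and the default conductivity and grid spacings are assumptions.

```python
import numpy as np

def conduction_entropy_generation(T, k=0.026, dx=0.01, dy=0.01):
    """Local volumetric entropy generation rate due to heat conduction,
    S'''_gen = k * |grad T|^2 / T^2  [W/(m^3 K)],
    evaluated on a 2D temperature field T [K] with central finite
    differences. k defaults to the thermal conductivity of air [W/(m K)]."""
    dTdy, dTdx = np.gradient(T, dy, dx)  # spacings per axis: (axis 0, axis 1)
    return k * (dTdx**2 + dTdy**2) / T**2
```

Summing the result times the cell volume gives the total conduction entropy generation of the domain; a uniform temperature field generates none, while any gradient produces a strictly positive local rate.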
Show Figures

Figure 1
<p>Schematic of the room and boundary conditions for the two approaches: (<b>a</b>) downward flow and (<b>b</b>) upward flow.</p>
Figure 2
<p>Two-dimensional drawing for the data center room (dimensions in meters).</p>
Figure 3
<p>Schematic representation of the phenomena that destroy entropy during the cooling process.</p>
Figure 4
<p>Server average temperature evolution for different grid sizes.</p>
Figure 5
<p>Airside temperature and velocity distributions; front and side view, middle plane.</p>
Figure 6
<p>Conduction and Dissipation Entropy Generation distribution inside the room for the two approaches. Arrow indicates flow direction in the overhead heat exchanger.</p>
Figure 7
<p>Entropy Generation Number <math display="inline"><semantics> <msub> <mi>N</mi> <mi>S</mi> </msub> </semantics></math> as a function of the effectiveness ratio <math display="inline"><semantics> <mrow> <msub> <mi>ξ</mi> <mi>s</mi> </msub> <mo>/</mo> <msub> <mi>ξ</mi> <mi mathvariant="italic">HX</mi> </msub> </mrow> </semantics></math>.</p>
Figure 8
<p>Mean temperature and mean effectiveness evolution in the servers vs. time.</p>
Figure 9
<p>Mean temperature and mean effectiveness in the overhead heat exchangers vs. time.</p>
Figure 10
<p>Entropy Generation evolution associated with the Airside (room and aisle).</p>
13 pages, 3497 KiB  
Article
Objective 3D Printed Surface Quality Assessment Based on Entropy of Depth Maps
by Jarosław Fastowicz, Marek Grudziński, Mateusz Tecław and Krzysztof Okarma
Entropy 2019, 21(1), 97; https://doi.org/10.3390/e21010097 - 21 Jan 2019
Cited by 29 | Viewed by 5239
Abstract
The rapid development and growing popularity of additive manufacturing technology lead to new challenging tasks: reliably monitoring not only the progress of the 3D printing process but also the quality of the printed objects. The automatic objective assessment of the surface quality of 3D printed objects proposed in this paper, based on the analysis of depth maps, allows the quality of surfaces to be determined during printing for devices equipped with built-in 3D scanners. If low quality is detected, corrections can be made or the printing process can be aborted to save filament, time and energy. Applying entropy analysis to the 3D scans allows the surface regularity to be evaluated independently of the filament color, in contrast to many other possible methods based on the analysis of visible light images. The results obtained using the proposed approach are encouraging, and a further combination of the proposed approach with camera-based methods may be possible as well. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
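The core quantity behind this approach, the Shannon entropy of a quantized depth map, can be sketched in a few lines. This is a generic illustration, not the paper's proposed metric or its CLAHE preprocessing step; the fixed quantization range is our assumption.

```python
import numpy as np

def depth_map_entropy(depth, bins=256, value_range=None):
    """Shannon entropy (in bits) of a quantized depth map.

    Smooth, regular prints concentrate depth values in few histogram bins
    (low entropy); rough or distorted surfaces spread them out (high
    entropy). `value_range` fixes the quantization range so that maps of
    different objects are comparable."""
    hist, _ = np.histogram(depth, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A perfectly flat scan gives zero entropy, and entropy grows with surface irregularity, which is the signal used to flag low-quality prints.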
Show Figures

Figure 1
<p>Three devices used for preparation of samples used in experiments: (<b>a</b>) RepRap Pro Ormerod 2; (<b>b</b>) da Vinci 1.0 Pro 3-in-1 (view of inner parts); and (<b>c</b>) Prusa i3.</p>
Figure 2
<p>Exemplary photos of 3D printed flat surfaces: high quality sample No. 24 (<b>a</b>); and the illustration of various distortions obtained for lower quality 3D prints (<b>b</b>–<b>i</b>).</p>
Figure 3
<p>Illustration of the exemplary obtained STL model of four 3D scanned samples with visible mounting elements.</p>
Figure 4
<p>Exemplary lower quality samples: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>); and their obtained depth maps: (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>).</p>
Figure 5
<p>Exemplary high quality samples: (<b>a</b>,<b>c</b>); and their obtained depth maps: (<b>b</b>,<b>d</b>).</p>
Figure 6
<p>Results obtained for the entropy calculated for depth maps without preprocessing.</p>
Figure 7
<p>Results obtained for the entropy calculated for depth maps after the CLAHE application.</p>
Figure 8
<p>Results obtained for the proposed metric calculated for depth maps without preprocessing.</p>
Figure 9
<p>Results obtained for the proposed metric calculated for depth maps after the CLAHE application.</p>
Figure 10
<p>Exemplary moderate quality samples being problematic for the proposed method.</p>
18 pages, 2918 KiB  
Article
Fault Diagnosis of Rolling Element Bearings with a Two-Step Scheme Based on Permutation Entropy and Random Forests
by Xiaoming Xue, Chaoshun Li, Suqun Cao, Jinchao Sun and Liyan Liu
Entropy 2019, 21(1), 96; https://doi.org/10.3390/e21010096 - 21 Jan 2019
Cited by 29 | Viewed by 4474
Abstract
This study presents a two-step fault diagnosis scheme for rolling element bearings that combines statistical classification and random forests-based classification. Considering the unequal sensitivity of features in the two diagnosis steps, the proposed method utilizes permutation entropy and variational mode decomposition to characterize vibration signals at a single scale and at multiple scales. In the first step, single-scale permutation entropy features of the original signals are extracted, and a statistical classification model based on Chebyshev’s inequality is constructed to detect faults and give a preliminary picture of the bearing condition. In the second step, vibration signals with fault conditions are first decomposed into a collection of intrinsic mode functions using variational mode decomposition, and multiscale permutation entropy features derived from each mono-component are then extracted to identify the specific fault types. To improve the classification ability of the characteristic data, the out-of-bag estimation of random forests is first employed to reselect and refine the original multiscale permutation entropy features. The refined features are then used as input data to train the random forests-based classification model. Finally, condition data of bearings with different fault conditions are employed to evaluate the performance of the proposed method. The results indicate that the proposed method can effectively identify the working conditions and fault types of rolling element bearings. Full article
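The single-scale feature at the heart of this scheme, Bandt-Pompe permutation entropy, is compact enough to sketch directly (this is the standard definition, not the paper's full VMD + random forests pipeline):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal.

    Each length-`order` subsequence (sampled with the given delay) is mapped
    to the permutation that sorts it; the Shannon entropy of the resulting
    permutation distribution measures the signal's complexity."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern of each embedded vector
    patterns = np.array([tuple(np.argsort(x[i:i + order * delay:delay]))
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    return float(h / np.log2(factorial(order))) if normalize else float(h)
```

A monotone ramp uses a single ordinal pattern and has zero normalized permutation entropy, while white noise uses all order! patterns nearly uniformly and approaches 1; faulty bearing vibrations fall in between, which is what makes the feature discriminative.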
Show Figures

Figure 1
<p>The typical classification model of random forests.</p>
Figure 2
<p>System framework of the proposed model.</p>
Figure 3
<p>Fault test platform of rolling element bearings.</p>
Figure 4
<p>Time domain waveforms and the corresponding envelope spectra of the vibration signals under different working conditions.</p>
Figure 5
<p>The PE distribution of the vibration signals of Case 1.</p>
Figure 6
<p>The permutation entropy (PE) distribution of the vibration signals of Case 2.</p>
Figure 7
<p>Decomposed results obtained by variational mode decomposition (VMD) and envelope spectra of the corresponding band-limited intrinsic mode function (BLIMF) components.</p>
Figure 8
<p>Dissimilarity and aggregation of the VMD-PE distributions under different fault conditions.</p>
Figure 9
<p>Importance evaluation of multiscale permutation entropy (MPE) features based on out-of-bag (OOB) estimation.</p>
17 pages, 710 KiB  
Article
Co-Association Matrix-Based Multi-Layer Fusion for Community Detection in Attributed Networks
by Sheng Luo, Zhifei Zhang, Yuanjian Zhang and Shuwen Ma
Entropy 2019, 21(1), 95; https://doi.org/10.3390/e21010095 - 20 Jan 2019
Cited by 10 | Viewed by 4192
Abstract
Community detection is a challenging task in attributed networks, due to the data inconsistency between the network’s topological structure and its node attributes. How to effectively and robustly fuse multi-source heterogeneous data plays an important role in community detection algorithms. Although some algorithms taking both topological structure and node attributes into account have been proposed in recent years, their fusion strategy is simple and usually adopts a linear combination. As a consequence, the detected community structure is vulnerable to small variations of the input data. To overcome this challenge, we develop a novel two-layer representation to capture the latent knowledge in both the topological structure and the node attributes of attributed networks. We then propose a weighted co-association matrix-based fusion algorithm (WCMFA) to detect the inherent community structure in attributed networks using multi-layer fusion strategies. It extends community detection from a single-view to a multi-view style, consistent with how human beings integrate multiple sources of information. Experiments show that our method is superior to state-of-the-art community detection algorithms for attributed networks. Full article
(This article belongs to the Section Complexity)
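The co-association idea underlying such fusion algorithms can be sketched generically: given several base clusterings (e.g., one from topology, one from attributes), entry (i, j) of the co-association matrix is the fraction of clusterings that place nodes i and j together. The per-partition weighting shown here is our reading of what "weighted" suggests, not the paper's exact scheme.

```python
import numpy as np

def co_association_matrix(partitions, weights=None):
    """Build a (weighted) co-association matrix from several clusterings.

    `partitions` is a list of label arrays, one per base clustering.
    Entry (i, j) is the weighted fraction of partitions that assign
    nodes i and j to the same cluster."""
    partitions = [np.asarray(p) for p in partitions]
    n = len(partitions[0])
    if weights is None:
        weights = np.ones(len(partitions))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    C = np.zeros((n, n))
    for w, labels in zip(weights, partitions):
        # Pairwise same-cluster indicator for this partition
        C += w * (labels[:, None] == labels[None, :])
    return C
```

The resulting symmetric matrix can then be fed to a final consensus clustering (e.g., hierarchical clustering on 1 − C) to extract communities.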
Show Figures

Figure 1
<p>The framework of weighted co-association matrix-based fusion algorithm (WCMFA).</p>
Figure 2
<p>Comparison results with the varying size of <math display="inline"><semantics> <mrow> <mo>|</mo> <mi mathvariant="script">P</mi> <mo>|</mo> </mrow> </semantics></math>.</p>
Figure 3
<p>Comparison results with the varying size of <span class="html-italic">N</span>.</p>
13 pages, 991 KiB  
Review
Transients as the Basis for Information Flow in Complex Adaptive Systems
by William Sulis
Entropy 2019, 21(1), 94; https://doi.org/10.3390/e21010094 - 20 Jan 2019
Cited by 6 | Viewed by 3896
Abstract
Information is the fundamental currency of naturally occurring complex adaptive systems, whether they are individual organisms or collective social insect colonies. Information appears to be more important than energy in determining the behavior of these systems. However, it is not the quantity of information but rather its salience or meaning which is significant. Salience is not, in general, associated with instantaneous events but rather with spatio-temporal transients of events. This requires a shift in theoretical focus from instantaneous states towards spatio-temporal transients as the proper object for studying information flow in naturally occurring complex adaptive systems. A primitive form of salience appears in simple complex systems models in the form of transient induced global response synchronization (TIGoRS). Sparse random samplings of spatio-temporal transients may induce stable collective responses from the system, establishing a stimulus–response relationship between the system and its environment, with the system parsing its environment into salient and non-salient stimuli. In the presence of TIGoRS, an embedded complex dynamical system becomes a primitive automaton, modeled as a Sulis machine. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
Show Figures

Figure 1
<p>The pictures illustrate transient induced global response synchronization (TIGoRS) in a cocktail party automaton. The first two pictures show individual runs under different initial conditions and different low frequency samples of the same stimulus pattern. The third picture shows discordance between the first two runs. The fourth picture shows discordance between the first run and the stimulus. The final two pictures show the distributions of the rules at the end of each run.</p>
Figure 2
<p>The graphs depict the Hamming distance between the stimulus and response and efficacy as a function of the stimulus sampling rate in the absence of TIGoRS.</p>
Figure 3
<p>The graphs depict the Hamming distance between the stimulus and response and efficacy as a function of the stimulus sampling rate in the presence of TIGoRS.</p>
Figure 4
<p>Hamming distance curves for three different automaton classes under the same stimulus.</p>
18 pages, 1206 KiB  
Article
Bayesian Analysis of Femtosecond Pump-Probe Photoelectron-Photoion Coincidence Spectra with Fluctuating Laser Intensities
by Pascal Heim, Michael Rumetshofer, Sascha Ranftl, Bernhard Thaler, Wolfgang E. Ernst, Markus Koch and Wolfgang von der Linden
Entropy 2019, 21(1), 93; https://doi.org/10.3390/e21010093 - 19 Jan 2019
Cited by 2 | Viewed by 4252
Abstract
This paper employs Bayesian probability theory to analyze data generated in femtosecond pump-probe photoelectron-photoion coincidence (PEPICO) experiments, which allow investigating ultrafast dynamical processes in photoexcited molecules. Bayesian probability theory is consistently applied to the data analysis problems occurring in these experiments, such as background subtraction and false coincidences. We previously demonstrated that the Bayesian formalism has many advantages, among which are compensation for false coincidences, no overestimation of pump-only contributions, a significantly increased signal-to-noise ratio, and applicability to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, our approach allows experiments to run at higher ionization rates, resulting in an appreciable reduction of data acquisition times. Extending our previous work, we here include fluctuating laser intensities, whose straightforward implementation highlights yet another advantage of the Bayesian formalism. Our method is thoroughly scrutinized with challenging mock data, where we find a minor impact of laser fluctuations on false coincidences, yet a noteworthy influence on background subtraction. We apply our algorithm to data obtained in experiments and discuss the impact of laser fluctuations on the data analysis. Full article
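Why fluctuating intensities matter can be illustrated with a toy mock-data generator: the ionization count per laser pulse is Poisson-distributed with mean λ, and pulses with two or more ionization events produce false coincidences. The clipped-normal λ distribution and all parameter names below are our illustrative assumptions, not the paper's model.

```python
import numpy as np

def coincidence_fractions(lam_mean, lam_sigma, n_pulses=100_000, seed=0):
    """Fractions of pulses with a true coincidence (exactly one ionization)
    and a false coincidence (two or more), when the per-pulse ionization
    count is Poisson with a fluctuating mean drawn from
    Normal(lam_mean, lam_sigma), clipped at zero."""
    rng = np.random.default_rng(seed)
    lam = np.clip(rng.normal(lam_mean, lam_sigma, n_pulses), 0, None)
    counts = rng.poisson(lam)
    true_frac = np.mean(counts == 1)
    false_frac = np.mean(counts >= 2)
    return true_frac, false_frac
```

At a fixed mean ionization rate below one event per pulse, adding λ-fluctuations increases the false-coincidence fraction (P(N ≥ 2) is convex in λ there), which is the kind of effect a false-coincidence model has to absorb.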
Show Figures

Figure 1
<p>Highly simplified sketch of a time-resolved photoionization study carried out with a pump-probe setup and a time-of-flight spectrometer. A commercial Ti:sapphire laser system delivers pulses of <math display="inline"> <semantics> <mrow> <mn>800</mn> </mrow> </semantics> </math> nm in center wavelength and <math display="inline"> <semantics> <mrow> <mn>25</mn> </mrow> </semantics> </math> fs in temporal length at a repetition rate of <math display="inline"> <semantics> <mrow> <mn>3</mn> </mrow> </semantics> </math> kHz. The delay stage is used to control the length of the optical path, and hence the time delay. The energy level diagram shows how the electron kinetic energy, given the energy of the states and the photons, identifies the state the system was in at the moment of ionization. A detailed description of the setup can be found in our previous publications [<a href="#B7-entropy-21-00093" class="html-bibr">7</a>,<a href="#B28-entropy-21-00093" class="html-bibr">28</a>].</p>
Figure 2
<p>Pump-probe ionization scheme to investigate excited state dynamics in molecules.</p>
Figure 3
<p>Simulation with mock data for studying the influence of <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations on false coincidences. The black lines are the spectra used to generate the data; the green (blue) lines including <math display="inline"> <semantics> <mrow> <mo>±</mo> <mi>σ</mi> </mrow> </semantics> </math> error bands are the reconstructed spectra (not) including <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations in the reconstruction. The parameters are <math display="inline"> <semantics> <mrow> <msub> <mi>ξ</mi> <mi>i</mi> </msub> <mo>=</mo> <msub> <mi>ξ</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="script">N</mi> <mi>p</mi> </msub> <mo>=</mo> <msup> <mn>10</mn> <mn>7</mn> </msup> </mrow> </semantics> </math>. For <math display="inline"> <semantics> <mrow> <msub> <munder> <mi>λ</mi> <mo>̲</mo> </munder> <mn>2</mn> </msub> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics> </math>, differences between the algorithms are negligible even at relatively high <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations with <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math>; see spectra (<b>a</b>,<b>b</b>). When choosing <math display="inline"> <semantics> <mrow> <msub> <munder> <mi>λ</mi> <mo>̲</mo> </munder> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math> (<b>c</b>,<b>d</b>), the algorithm not including <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations produces small deviations, e.g., underestimation of the false coincidences at the first Gaussian in the fragment spectrum.</p>
Figure 4
<p>Simulated test spectra for studying the influence of <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations on the background subtraction. The parameters are <math display="inline"> <semantics> <mrow> <msub> <munder> <mi>λ</mi> <mo>̲</mo> </munder> <mn>1</mn> </msub> <mo>=</mo> <msub> <munder> <mi>λ</mi> <mo>̲</mo> </munder> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <msub> <mi>ξ</mi> <mi>i</mi> </msub> <mo>=</mo> <msub> <mi>ξ</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math>, and <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="script">N</mi> <mi>p</mi> </msub> <mo>=</mo> <msup> <mn>10</mn> <mn>7</mn> </msup> </mrow> </semantics> </math>. <math display="inline"> <semantics> <msub> <mi>σ</mi> <mn>1</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>σ</mi> <mn>2</mn> </msub> </semantics> </math> are different for every sub-figure. If <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>σ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics> </math> (<b>a</b>) or <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>σ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math> (<b>b</b>), both algorithms (with (green line) and without (blue line) including <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations) reconstruct the spectra correctly. 
<math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math> lead to an underestimation of the background when neglecting <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math>-fluctuations (<b>c</b>). Overestimation of the background happens in the case of <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics> </math> (<b>d</b>).</p>
18 pages, 375 KiB  
Article
Remote Sampling with Applications to General Entanglement Simulation
by Gilles Brassard, Luc Devroye and Claude Gravel
Entropy 2019, 21(1), 92; https://doi.org/10.3390/e21010092 - 19 Jan 2019
Cited by 7 | Viewed by 4183
Abstract
We show how to sample exactly from discrete probability distributions whose defining parameters are distributed among remote parties. For this purpose, von Neumann’s rejection algorithm is turned into a distributed sampling communication protocol. We study the expected number of bits communicated among the parties and also exhibit a trade-off between the number of rounds of the rejection algorithm and the number of bits transmitted in the initial phase. Finally, we apply remote sampling to the simulation of quantum entanglement in essentially its most general possible form, in which an arbitrary finite number m of parties share systems of arbitrary finite dimensions on which they apply arbitrary measurements (not restricted to projective measurements, but restricted to finitely many possible outcomes). When the dimension of the systems and the number of possible outcomes per party are bounded by a constant, it suffices to communicate an expected O(m²) bits in order to simulate exactly the outcomes that these measurements would have produced on those systems. Full article
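The building block being distributed here, von Neumann's rejection algorithm, is short enough to sketch in the ordinary centralized setting. In the paper's protocol the accept/reject test is evaluated by exchanging bits about p(x) among the parties; in this illustration p is simply local.

```python
import numpy as np

def rejection_sample(p, q_sample, q_pmf, c, n, seed=0):
    """von Neumann rejection sampling from a discrete target pmf `p`
    (indexable by outcome), using a proposal q with p(x) <= c * q(x)
    for all x. Each proposed x is accepted with probability
    p(x) / (c * q(x)); accepted draws are exactly distributed as p."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = q_sample(rng)
        if rng.uniform() < p[x] / (c * q_pmf(x)):
            out.append(x)
    return np.array(out)
```

The expected number of proposals per accepted sample is c, which is why the protocol's communication cost is governed by how tightly the proposal dominates the target.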
Show Figures

Figure 1
<p>The communication scheme.</p>
32 pages, 4657 KiB  
Article
Quantitative Quality Evaluation of Software Products by Considering Summary and Comments Entropy of a Reported Bug
by Madhu Kumari, Ananya Misra, Sanjay Misra, Luis Fernandez Sanz, Robertas Damasevicius and V.B. Singh
Entropy 2019, 21(1), 91; https://doi.org/10.3390/e21010091 - 19 Jan 2019
Cited by 23 | Viewed by 4833
Abstract
A software bug is characterized by its attributes. Various prediction models have been developed using these attributes to enhance the quality of software products. Bug reporting exhibits highly irregular patterns, and repository size is growing at an enormous rate, resulting in uncertainty and irregularities. This uncertainty and these irregular patterns are termed veracity in the context of big data. In order to quantify them, the authors applied entropy-based measures to the terms reported in the summary and in the comments submitted by users; both the uncertainty and the irregular patterns are captured by these entropy-based measures. In this paper, the authors consider that the bug fixing process depends not only on calendar time, testing effort and testing coverage, but also on the bug summary description and comments. The paper proposes bug dependency-based mathematical models that account for the summary description of bugs and the comments submitted by users in terms of entropy-based measures. The models were validated on different Eclipse project products. The models proposed in the literature have different types of growth curves, mainly exponential, S-shaped, or mixtures of both. In this paper, the proposed models were compared with models following exponential, S-shaped and mixed curves. Full article
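An entropy measure over the terms of bug summaries and comments can be sketched simply: tokenize, count term frequencies, and take the Shannon entropy of the resulting distribution. This is a generic illustration of the idea, not the paper's specific measure or tokenization.

```python
import numpy as np
from collections import Counter

def term_entropy(texts):
    """Shannon entropy (in bits) of the term-frequency distribution over a
    collection of bug summaries or comments. More varied, irregular
    reporting vocabulary yields a higher entropy."""
    counts = Counter(w for t in texts for w in t.lower().split())
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))
```

A report stream dominated by one repeated term has entropy near zero, while diverse reporting drives the entropy toward log2 of the vocabulary size; time series of such values are what entropy-based reliability growth models consume.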
Show Figures
Figure 1: A part of the bug report for bug id 139050 of BIRT products of Eclipse projects, with its three comments and summary.
Figure 2: Block diagram of the proposed methodology.
Figures 3–10: Goodness-of-fit curves of the proposed models for BIRT, CDT, Community, EclipseLink, EMF, Equinox, Orion, and Platform.
Figures 11–18: Goodness-of-fit curves of the proposed models for the same eight products.
12 pages, 2789 KiB  
Article
Using Multiscale Entropy to Assess the Efficacy of Local Cooling on Reactive Hyperemia in People with a Spinal Cord Injury
by Fuyuan Liao, Tim D. Yang, Fu-Lien Wu, Chunmei Cao, Ayman Mohamed and Yih-Kuen Jan
Entropy 2019, 21(1), 90; https://doi.org/10.3390/e21010090 - 18 Jan 2019
Cited by 10 | Viewed by 4445
Abstract
Pressure ulcers are one of the most common complications of a spinal cord injury (SCI). Prolonged unrelieved pressure is thought to be the primary causative factor resulting in tissue ischemia and eventually pressure ulcers. Previous studies suggested that local cooling reduces skin ischemia [...] Read more.
Pressure ulcers are one of the most common complications of a spinal cord injury (SCI). Prolonged unrelieved pressure is thought to be the primary causative factor resulting in tissue ischemia and eventually pressure ulcers. Previous studies suggested that local cooling reduces skin ischemia of the compressed soft tissues based on smaller hyperemic responses. However, the effect of local cooling on nonlinear properties of skin blood flow (SBF) during hyperemia is unknown. In this study, 10 wheelchair users with SCI and 10 able-bodied (AB) controls underwent three experimental protocols, each of which included a 10-min baseline period, a 20-min intervention period, and a 20-min period for SBF recovery. SBF was measured using laser Doppler flowmetry. During the intervention period, a pressure of 60 mmHg was applied to the sacral skin while one of three skin temperature settings was tested: no temperature change, a decrease of 10 °C, or an increase of 10 °C. A multiscale entropy (MSE) method was employed to quantify the degree of regularity of blood flow oscillations (BFO) associated with the SBF control mechanisms during baseline and reactive hyperemia. The results showed that under pressure with cooling, skin BFO in both people with SCI and AB controls were more regular at multiple time scales during hyperemia compared to baseline, whereas under pressure with no temperature change, and particularly pressure with heating, BFO were more irregular during hyperemia compared to baseline. Moreover, the results of surrogate tests indicated that changes in the degree of regularity of BFO from baseline to hyperemia were only partially attributable to changes in the relative amplitudes of the endothelial, neurogenic, and myogenic components of BFO. These findings support the use of MSE to assess the efficacy of local cooling on reactive hyperemia and to assess the degree of skin ischemia in people with SCI. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
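The multiscale entropy procedure the abstract relies on (coarse-grain the signal at each scale τ, then compute sample entropy) can be sketched as follows. This is a compact, unoptimized variant with the commonly assumed defaults m = 2 and r = 0.2 × SD; it is not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (within r = r_factor * SD, Chebyshev distance) also
    match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(templates)  # exclude self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r_factor=0.2):
    """Coarse-grain x by non-overlapping averaging at each scale, then SampEn."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in scales:
        cg = x[:len(x) // tau * tau].reshape(-1, tau).mean(axis=1)
        out.append(sample_entropy(cg, m, r_factor))
    return out
```

A regular signal (e.g., a sinusoid) yields low sample entropy at each scale, while uncorrelated noise yields high values, which is how the paper distinguishes more regular from more irregular blood flow oscillations.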
Show Figures
Figure 1: Sacral skin blood flow (SBF) responses to a surface pressure of 60 mmHg without skin temperature change (A), with cooling (B), and with heating (C) in a participant with a spinal cord injury (SCI). p.u., perfusion unit.
Figure 2: Multiscale entropy E_ms(m, r, τ, N) of simulated sinusoids sin(2π·0.01t), sin(2π·0.03t), and sin(2π·0.1t) sampled at 32 Hz (N = 9600, a 5-min period; m = 2, r = 0.2 × SD).
Figure 3: E_ms of blood flow oscillations (BFO) associated with endothelial, neurogenic, and myogenic activities in able-bodied (AB) controls during baseline and hyperemia, for the three interventions.
Figure 4: E_ms of BFO in people with SCI during baseline and hyperemia, for the three interventions.
Figure 5: Relative wavelet amplitudes (A_r) of endothelial, neurogenic, and myogenic oscillations in AB controls and people with SCI during baseline and reactive hyperemia.
Figure 6: Wavelet amplitude spectra of filtered SBF signals (endothelial, neurogenic, and myogenic components only) from an AB control versus 30 surrogate time series, under pressure with cooling and with heating.
Figure 7: E_ms of the filtered SBF signals versus the surrogate time series of Figure 6.
15 pages, 325 KiB  
Article
Poincaré and Log–Sobolev Inequalities for Mixtures
by André Schlichting
Entropy 2019, 21(1), 89; https://doi.org/10.3390/e21010089 - 18 Jan 2019
Cited by 10 | Viewed by 4404
Abstract
This work studies mixtures of probability measures on R^n and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures provided that each component satisfies the functional inequality, and both components are close in the χ²-distance. The estimation [...] Read more.
This work studies mixtures of probability measures on R^n and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures provided that each component satisfies the functional inequality, and both components are close in the χ²-distance. The estimation of those constants for a mixture can be far more subtle than it is for its parts. Even mixing Gaussian measures may produce a measure with a Hamiltonian potential possessing multiple wells, leading to metastability and large constants in Sobolev-type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log–Sobolev constant may blow up as the mixture ratio goes to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. A class of examples shows that this behavior is not a mere artifact of the method. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
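For reference, the two functional inequalities whose constants the paper bounds can be stated as follows; the notation here is the conventional one, not necessarily the paper's:

```latex
% Poincaré inequality for a probability measure \mu on \mathbb{R}^n: for all smooth f,
\operatorname{Var}_{\mu}(f) \;\le\; C_{\mathrm{P}}(\mu) \int_{\mathbb{R}^n} |\nabla f|^2 \, \mathrm{d}\mu ,
% log--Sobolev inequality:
\operatorname{Ent}_{\mu}(f^2) \;\le\; 2\, C_{\mathrm{LS}}(\mu) \int_{\mathbb{R}^n} |\nabla f|^2 \, \mathrm{d}\mu ,
% two-component mixture with mixture parameter p:
\mu_p \;=\; p\,\mu_0 + (1-p)\,\mu_1 , \qquad 0 \le p \le 1 .
```

The paper's observation is that C_P(μ_p) stays bounded in p, while C_LS(μ_p) may blow up as p → 0 or p → 1.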
17 pages, 2879 KiB  
Article
Symmetries among Multivariate Information Measures Explored Using Möbius Operators
by David J. Galas and Nikita A. Sakhanenko
Entropy 2019, 21(1), 88; https://doi.org/10.3390/e21010088 - 18 Jan 2019
Cited by 6 | Viewed by 4021
Abstract
Relations between common information measures include the duality relations based on Möbius inversion on lattices, which are the direct consequence of the symmetries of the lattices of the sets of variables (subsets ordered by inclusion). In this paper we use the lattice and [...] Read more.
Relations between common information measures include the duality relations based on Möbius inversion on lattices, which are the direct consequence of the symmetries of the lattices of the sets of variables (subsets ordered by inclusion). In this paper we use the lattice and functional symmetries to provide a unifying formalism that reveals some new relations and systematizes the symmetries of the information functions. To our knowledge, this is the first systematic examination of the full range of relationships of this class of functions. We define operators on functions on these lattices, based on the Möbius inversions, that map functions into one another; we call these Möbius operators and show that they form a simple group isomorphic to the symmetric group S3. Relations among the set of functions on the lattice are transparently expressed in terms of the operator algebra and, when applied to the information measures, can be used to derive a wide range of relationships among diverse information measures. The Möbius operator algebra is then naturally generalized, which yields an even wider range of new relationships. Full article
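Möbius inversion on the subset lattice, the operation underlying the paper's operators, can be sketched as follows; the function names and the dictionary representation of lattice functions are illustrative assumptions:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as tuples, ordered by size."""
    s = tuple(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mobius_invert(f, universe):
    """Mobius inversion on the subset lattice: if f(tau) = sum over nu <= tau
    of g(nu), recover g(tau) = sum over nu <= tau of (-1)^(|tau|-|nu|) f(nu)."""
    return {
        frozenset(tau): sum(
            (-1) ** (len(tau) - len(nu)) * f[frozenset(nu)]
            for nu in subsets(tau))
        for tau in subsets(universe)}
```

Applied to, e.g., the joint-entropy function on subsets of variables, this alternating sum produces (up to sign convention) the multivariate information measures that the paper organizes into its operator algebra.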
Show Figures
Figure 1: The Hasse diagram of the subset lattice for three variables; the variable subsets are shown in black, and the Möbius function μ(ν, τ) on this lattice (1 or −1) is indicated in red.
Figure 2: The Möbius operators define the duality relationships between the functions on the subset lattice.
Figure 3: Mappings of the functions on the subset lattice into one another by the operators, with P̂ = X̂M̂ and R̂ = X̂m̂.
Figure 4: The four-variable lattice showing the four join-irreducible elements that generate the symmetric deltas as in Equation (16c); the elements of the delta function Δ(234; 1) form a 3D cube.
Figure 5: Modifying one of the functions by lattice complementation changes its position in the mapping diagram (original on the left, complemented Δ on the right).
Figure 6: Generalized Möbius operator relations; the S3 structure mirrors Figure 3, to which the diagram reduces when η = ∅.
Figure 7: Decomposing the 3D-cube Hasse diagram into two squares (2D hypercubes) by passing a plane through the center of the cube in three different ways.
13 pages, 6061 KiB  
Article
Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
by Gilles Brassard and Paul Raymond-Robichaud
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087 - 18 Jan 2019
Cited by 20 | Viewed by 9248
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk the myth that equates local realism with local hidden variables in [...] Read more.
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk, in the simplest possible manner, the myth that equates local realism with local hidden variables. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right in questioning the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
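The "nonlocal" boxes of the title can be simulated as pure input-output behavior (which says nothing, of course, about the local-realistic mechanism the paper constructs). The sketch below shows a Popescu–Rohrlich box winning the CHSH game every round, beating the 75% classical bound; names and the trial count are illustrative:

```python
import random

def pr_box(x, y):
    """A Popescu-Rohrlich ("nonlocal") box: each output is individually
    uniform, but the outputs always satisfy a XOR b = x AND y."""
    a = random.randint(0, 1)
    return a, a ^ (x & y)

def chsh_win_rate(strategy, trials=20000):
    """Fraction of rounds with a XOR b = x AND y, for uniformly random inputs."""
    wins = 0
    for _ in range(trials):
        x, y = random.randint(0, 1), random.randint(0, 1)
        a, b = strategy(x, y)
        wins += (a ^ b) == (x & y)
    return wins / trials
```

The PR box wins with probability 1, exceeding both the classical bound of 75% and the quantum (Tsirelson) bound of about 85.4%, which is the sense in which such boxes violate a Bell inequality "more than does quantum theory".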
Show Figures
Figure 1: Nonlocal boxes.
11 pages, 584 KiB  
Article
Information Entropy of Tight-Binding Random Networks with Losses and Gain: Scaling and Universality
by C. T. Martínez-Martínez and J. A. Méndez-Bermúdez
Entropy 2019, 21(1), 86; https://doi.org/10.3390/e21010086 - 18 Jan 2019
Cited by 9 | Viewed by 3688
Abstract
We study the localization properties of the eigenvectors, characterized by their information entropy, of tight-binding random networks with balanced losses and gain. The random network model, which is based on Erdős–Rényi (ER) graphs, is defined by three parameters: the network size N, [...] Read more.
We study the localization properties of the eigenvectors, characterized by their information entropy, of tight-binding random networks with balanced losses and gain. The random network model, which is based on Erdős–Rényi (ER) graphs, is defined by three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ. Here, N and α are the standard parameters of ER graphs, while we introduce losses and gain by including complex self-loops on all vertices with imaginary amplitude iγ and random balanced signs, thus breaking the Hermiticity of the corresponding adjacency matrices and inducing complex spectra. By the use of extensive numerical simulations, we define a scaling parameter ξ ≡ ξ(N, α, γ) that fixes the localization properties of the eigenvectors of our random network model, such that when ξ < 0.1 (respectively 10 < ξ) the eigenvectors are localized (respectively extended), while the localization-to-delocalization transition occurs for 0.1 < ξ < 10. Moreover, to extend the applicability of our findings, we demonstrate that for fixed ξ, the spectral properties (characterized by the position of the eigenvalues on the complex plane) of our network model are also universal; i.e., they do not depend on the specific values of the network parameters. Full article
(This article belongs to the Special Issue Complex Networks from Information Measures)
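The model the abstract defines (an ER adjacency matrix plus balanced ±iγ self-loops) and the information entropy of its eigenvectors can be sketched as follows; parameter values and function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def er_network_entropy(N, alpha, gamma, rng):
    """Mean information (Shannon) entropy of the eigenvectors of an
    Erdos-Renyi adjacency matrix with balanced +/- i*gamma self-loops."""
    A = (rng.random((N, N)) < alpha).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                # symmetric off-diagonal part
    signs = rng.permutation(np.r_[np.ones(N // 2), -np.ones(N - N // 2)])
    M = A + 1j * gamma * np.diag(signs)        # balanced losses and gain
    _, vecs = np.linalg.eig(M)                 # complex spectrum, non-Hermitian M
    p = np.abs(vecs) ** 2
    p /= p.sum(axis=0)                         # normalize each eigenvector
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    return (-plogp.sum(axis=0)).mean()
```

For small α the eigenvectors are localized (entropy well below ln N), while for dense networks the entropy approaches the extended-state value S_GOE ≈ ln(N/2.07) quoted in the figures.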
Show Figures
Figure 1: Average information entropy ⟨S⟩ normalized to S_GOE ≈ ln(N/2.07) versus connectivity α of ER tight-binding random networks (sizes N = 250–2000) with balanced losses and gain, for γ = 0.01, 1, and 2; each symbol averages 10^6 eigenvectors.
Figure 2: ⟨S⟩/S_GOE versus α for N = 250, 1000, and 4000 with different loss-and-gain strengths γ; insets enlarge the localization-to-delocalization transition region.
Figure 3: The transition point α* (defined by ⟨S⟩/S_GOE ≈ 0.5) versus the network size N and versus γ; dashed guide lines proportional to N^−0.98 and to γ illustrate Equations (5) and (6).
Figure 4: ⟨S⟩/S_GOE versus the scaling parameter ξ of Equation (7), for various combinations of N and γ.
Figure 5: Density plots of eigenvalues λ in the complex plane for several combinations of α and γ at N = 1000; 10^6 eigenvalues per plot.
Figure 6: Density plots of eigenvalues λ for N = 500, 1000, and 2000 at increasing values of ξ; 10^6 eigenvalues per plot.
12 pages, 1601 KiB  
Article
Performance Analysis of a Proton Exchange Membrane Fuel Cell Based Syngas
by Xiuqin Zhang, Qiubao Lin, Huiying Liu, Xiaowei Chen, Sunqing Su and Meng Ni
Entropy 2019, 21(1), 85; https://doi.org/10.3390/e21010085 - 18 Jan 2019
Cited by 5 | Viewed by 3594
Abstract
External chemical reactors for steam reforming and water gas shift reactions are needed for a proton exchange membrane (PEM) fuel cell system using syngas fuel. For the preheating of syngas and stable steam reforming reaction at 600 °C, residual hydrogen from a fuel [...] Read more.
External chemical reactors for steam reforming and water gas shift reactions are needed for a proton exchange membrane (PEM) fuel cell system using syngas fuel. For the preheating of syngas and a stable steam reforming reaction at 600 °C, residual hydrogen from the fuel cell and a certain amount of additional syngas are burned. The combustion temperature is calculated, and the molar ratio of syngas fed to the burner and the steam reformer is determined. Based on thermodynamics and electrochemistry, the electric power density and energy conversion efficiency of a syngas-fueled PEM fuel cell are expressed. The effects of the temperature, the hydrogen utilization factor at the anode, and the molar ratio of syngas fed to the burner and the steam reformer on the performance of a PEM fuel cell are discussed. To achieve the maximum power density or efficiency, the key parameters are determined. This manuscript presents the detailed operating process of a PEM fuel cell, the allocation of the syngas for combustion and electricity generation, and the feasibility of a PEM fuel cell using syngas. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics III)
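One ingredient of the abstract's electrochemical analysis, the link between cell voltage, hydrogen utilization, and energy conversion efficiency, can be sketched as below. This is a standard textbook relation (2 electrons per H2 molecule, efficiency referenced to the fuel's higher heating value), not the paper's full model:

```python
F = 96485.0        # Faraday constant, C/mol
HHV_H2 = 285.8e3   # higher heating value of hydrogen, J/mol

def cell_efficiency(v_cell, u_h2):
    """Energy conversion efficiency of a hydrogen fuel cell: electrical work
    delivered per mole of H2 fed (2 electrons per molecule, utilization
    factor u_h2) divided by the fuel's heating value."""
    return u_h2 * 2.0 * F * v_cell / HHV_H2
```

At the thermoneutral voltage (~1.48 V) with full hydrogen utilization the efficiency approaches 100%, which is why lowering the operating voltage or the utilization factor trades efficiency for power density, the trade-off explored in the paper's figures.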
Show Figures
Figure 1: Schematic diagram of the syngas-fueled PEM fuel cell system (HE1–HE3: heat exchangers; SR: steam reformer; HTS and LTS: water gas shift reactors; PROX: preferential oxidizer; AB: auxiliary burner).
Figure 2: x versus the hydrogen utilization factor u_H2, at T_C′ = 30 °C.
Figure 3: Combustion temperature T_C versus u_H2, at T_C′ = 30 °C.
Figure 4: Power density versus n*: the maximum power density (solid) and the power density under the maximum efficiency (dashed), at T = 70 °C and T_C′ = 30 °C.
Figure 5: Hydrogen utilization factor versus n* under the maximum power density (solid) and the maximum efficiency (dashed).
Figure 6: x versus n* under the maximum power density (solid) and the maximum efficiency (dashed).
Figure 7: Efficiency versus n*: the maximum efficiency (dashed) and the efficiency under the maximum power density (solid).
14 pages, 3363 KiB  
Article
Desalination Processes’ Efficiency and Future Roadmap
by Muhammad Wakil Shahzad, Muhammad Burhan, Doskhan Ybyraiymkul and Kim Choon Ng
Entropy 2019, 21(1), 84; https://doi.org/10.3390/e21010084 - 18 Jan 2019
Cited by 63 | Viewed by 8655
Abstract
For future sustainable seawater desalination, the importance of achieving better energy efficiency in the existing 19,500 commercial-scale desalination plants cannot be overemphasized. The major concern of the desalination industry is the inadequate approach to energy efficiency evaluation of diverse seawater desalination processes, which omits the grade of energy supplied. These conventional approaches would suffice if the efficacy comparison were conducted between processes with the same energy input. The misconception of considering all derived energies as equivalent in the desalination industry has severe economic and environmental consequences. Among energy and desalination system planners, serious judgmental errors in the process selection of green installations are made unconsciously because the efficacy data are either flawed or inaccurate. Decisions to implement technologies of inferior efficacy have been observed in many water-stressed countries; these can immediately burden a country’s economy with higher unit energy costs and cause more undesirable environmental effects on the surroundings. In this article, a standard primary energy-based thermodynamic framework is presented that addresses energy efficacy fairly and accurately. It shows clearly that a thermally driven process consumes 2.5–3% of standard primary energy (SPE) when combined with power plants. A standard universal performance ratio-based evaluation method is proposed, which shows that the performance of all desalination processes falls within 10–14% of the thermodynamic limit. To achieve the 2030 sustainability goals, innovative processes are required that reach 25–30% of the thermodynamic limit. Full article
Show Figures

Figure 1
<p>(<b>a</b>) World and Gulf Cooperation Countries (GCC) desalination capacities trend from 1985 to 2030 and (<b>b</b>) energy consumption by desalination processes [<a href="#B1-entropy-21-00084" class="html-bibr">1</a>,<a href="#B2-entropy-21-00084" class="html-bibr">2</a>,<a href="#B3-entropy-21-00084" class="html-bibr">3</a>,<a href="#B4-entropy-21-00084" class="html-bibr">4</a>,<a href="#B5-entropy-21-00084" class="html-bibr">5</a>,<a href="#B6-entropy-21-00084" class="html-bibr">6</a>,<a href="#B7-entropy-21-00084" class="html-bibr">7</a>,<a href="#B8-entropy-21-00084" class="html-bibr">8</a>,<a href="#B9-entropy-21-00084" class="html-bibr">9</a>,<a href="#B10-entropy-21-00084" class="html-bibr">10</a>,<a href="#B11-entropy-21-00084" class="html-bibr">11</a>,<a href="#B12-entropy-21-00084" class="html-bibr">12</a>,<a href="#B13-entropy-21-00084" class="html-bibr">13</a>,<a href="#B14-entropy-21-00084" class="html-bibr">14</a>].</p>
Full article ">Figure 2
<p>Combined cycle efficiency and environment impact trend from 1870–2018 [<a href="#B15-entropy-21-00084" class="html-bibr">15</a>,<a href="#B16-entropy-21-00084" class="html-bibr">16</a>,<a href="#B17-entropy-21-00084" class="html-bibr">17</a>,<a href="#B18-entropy-21-00084" class="html-bibr">18</a>,<a href="#B19-entropy-21-00084" class="html-bibr">19</a>,<a href="#B20-entropy-21-00084" class="html-bibr">20</a>,<a href="#B21-entropy-21-00084" class="html-bibr">21</a>,<a href="#B22-entropy-21-00084" class="html-bibr">22</a>].</p>
Full article ">Figure 3
<p>The standard primary energy (SPE) concept to emulate actual desalination processes.</p>
Full article ">Figure 4
<p>Typical combined power and desalination system schematic and state points.</p>
Full article ">Figure 5
<p>Commercial-scale seawater desalination processes performance trend from 1983–2016.</p>
Full article ">Figure 6
<p>Desalination process development over the last three decades. A paradigm shift in technology can help achieve a quantum jump in performance.</p>
Full article ">
10 pages, 598 KiB  
Article
Reconstruction of PET Images Using Cross-Entropy and Field of Experts
by Jose Mejia, Alberto Ochoa and Boris Mederos
Entropy 2019, 21(1), 83; https://doi.org/10.3390/e21010083 - 18 Jan 2019
Cited by 4 | Viewed by 3630
Abstract
The reconstruction of positron emission tomography data is a difficult task, particularly at low count rates, because Poisson noise has a significant influence on the statistical uncertainty of positron emission tomography (PET) measurements. Prior information is frequently used to improve image quality. In this paper, we propose the use of a field of experts to model a priori structure and capture anatomical spatial dependencies of PET images, addressing the problems of noise and low count data that make image reconstruction difficult. We reconstruct PET images using a modified MXE algorithm, which minimizes an objective function with the cross-entropy as a fidelity term, while the field-of-experts model is incorporated as a regularizing term. Comparisons with the expectation maximization algorithm and an iterative method with a prior penalizing relative differences showed that the proposed method can lead to accurate estimation of the image, especially for acquisitions at low count rates. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
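The objective described above, a cross-entropy (Poisson log-likelihood) fidelity term plus a learned regularizer, can be sketched with NumPy. Everything below is a toy stand-in: a random system matrix `A` replaces the scanner geometry, and a simple neighbour-difference smoothness penalty replaces the trained field-of-experts model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): A plays the role of the system matrix and
# y of a measured Poisson sinogram; the trained field-of-experts prior is
# replaced by a simple neighbour-difference smoothness penalty.
n_bins, n_pix = 64, 32
A = rng.random((n_bins, n_pix))
x_true = rng.random(n_pix) + 0.1
y = rng.poisson(A @ x_true).astype(float)

def objective(x, beta=0.05):
    # Cross-entropy (Poisson KL) fidelity + smoothness surrogate.
    p = A @ x + 1e-12
    kl = np.sum(p - y + y * np.log((y + 1e-12) / p))
    return kl + beta * np.sum(np.diff(x) ** 2)

x = np.ones(n_pix)               # non-negative initial estimate
obj_start = objective(x)
step, beta = 1e-3, 0.05
for _ in range(500):
    p = A @ x + 1e-12
    grad = A.T @ (1.0 - y / p)   # gradient of the KL fidelity term
    d = np.diff(x)               # gradient of the smoothness penalty
    grad[:-1] -= 2 * beta * d
    grad[1:] += 2 * beta * d
    x = np.maximum(x - step * grad, 0.0)   # projected step keeps x >= 0
obj_end = objective(x)
```

The projected gradient step mirrors the structure of the paper's approach (data fidelity plus regularizer), not its exact update rule.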
Show Figures

Figure 1
<p>The <math display="inline"><semantics> <mrow> <mn>5</mn> <mo>×</mo> <mn>5</mn> </mrow> </semantics></math> filters obtained by training the product-of-experts model on a positron emission tomography (PET) image database. The colors in each frame are proportional to the magnitude of the filter coefficients, using a gray scale.</p>
Full article ">Figure 2
<p>Cylindrical software phantom: (<b>a</b>) ground truth; (<b>b</b>) simulated sinogram data at 30 M counts; and (<b>c</b>) reconstruction with expectation maximization (EM).</p>
Full article ">Figure 3
<p>Cylindrical software phantom: (<b>a</b>) Input sinogram; (<b>b</b>) low count reconstruction with EM; (<b>c</b>) low count reconstruction with CP; and (<b>d</b>) low count reconstruction with the proposed method.</p>
Full article ">Figure 4
<p>Profiles of the different methods with the cylindrical software phantom. Each row shows the same hole, and each column the same method. The maximum of each surface is indicated next to it.</p>
Full article ">Figure 5
<p>Slices of the Digimouse software phantom: ground truth (<b>a</b>,<b>e</b>); low count reconstruction with EM (<b>b</b>,<b>f</b>); low count reconstruction with CP (<b>c</b>,<b>g</b>); and (<b>d</b>,<b>h</b>) low count reconstruction with the proposed method. In (<b>a</b>), the arrow indicates a lesion.</p>
Full article ">Figure 6
<p>ROC analysis evaluated on the Digimouse phantom.</p>
Full article ">Figure 7
<p>Reconstruction of measured data with: (<b>a</b>) EM; (<b>b</b>) CP; and (<b>c</b>) the proposed method.</p>
Full article ">
14 pages, 403 KiB  
Article
Approximating Ground States by Neural Network Quantum States
by Ying Yang, Chengyang Zhang and Huaixin Cao
Entropy 2019, 21(1), 82; https://doi.org/10.3390/e21010082 - 17 Jan 2019
Cited by 5 | Viewed by 4318
Abstract
Motivated by Carleo’s work (Science, 2017, 355: 602), we focus on finding the neural network quantum states (NNQS) approximation of the unknown ground state of a given Hamiltonian H in terms of the best relative error, and explore the influences of the sum, tensor product, and local unitary of Hamiltonians on the best relative error. In addition, we illustrate our method with some examples. Full article
(This article belongs to the Collection Quantum Information)
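The restricted Boltzmann machine ansatz behind an NNQS (see the Figure 1 caption below) assigns each spin configuration the amplitude Ψ(s) = exp(a·s) ∏<sub>j</sub> 2cosh(b<sub>j</sub> + Σ<sub>i</sub> W<sub>ji</sub> s<sub>i</sub>). A minimal sketch with real random parameters (complex parameters are used in general; real ones are an assumption for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 3                        # visible spins and hidden units
a = 0.1 * rng.normal(size=N)       # visible biases (assumed real for simplicity)
b = 0.1 * rng.normal(size=M)       # hidden biases
W = 0.1 * rng.normal(size=(M, N))  # visible-hidden couplings

def psi(s):
    """Unnormalised RBM amplitude for a spin configuration s in {-1,+1}^N:
    Psi(s) = exp(a.s) * prod_j 2*cosh(b_j + sum_i W_ji s_i)."""
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# Enumerate all 2^N basis configurations and normalise the state.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(N)]
                    for k in range(2 ** N)])
amps = np.array([psi(s) for s in configs])
probs = amps ** 2 / np.sum(amps ** 2)
```

For small N the normalised state can be compared directly against an exact ground state to evaluate the relative error the paper studies.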
Show Figures

Figure 1
<p>Artificial neural network encoding an NNQS. It is a restricted Boltzmann machine architecture that features a set of <span class="html-italic">N</span> visible artificial neurons (blue disks) and a set of <span class="html-italic">M</span> hidden neurons (yellow disks). For each value <math display="inline"><semantics> <msub> <mo>Λ</mo> <mrow> <msub> <mi>k</mi> <mn>1</mn> </msub> <msub> <mi>k</mi> <mn>2</mn> </msub> <mo>…</mo> <msub> <mi>k</mi> <mi>N</mi> </msub> </mrow> </msub> </semantics></math> of the input observable <span class="html-italic">S</span>, the neural network computes the value of the <math display="inline"><semantics> <mrow> <msub> <mo>Ψ</mo> <mrow> <mi>S</mi> <mo>,</mo> <mo>Ω</mo> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>λ</mi> <msub> <mi>k</mi> <mn>1</mn> </msub> </msub> <mo>,</mo> <msub> <mi>λ</mi> <msub> <mi>k</mi> <mn>2</mn> </msub> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>λ</mi> <msub> <mi>k</mi> <mi>N</mi> </msub> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Quantum artificial neural network with parameter <math display="inline"><semantics> <mrow> <mo>Ω</mo> <mo>=</mo> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mi>b</mi> <mo>,</mo> <mi>W</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Numerically minimize <span class="html-italic">g</span> over <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>x</mi> <mn>8</mn> </msub> </mrow> </semantics></math> by optimization.</p>
Full article ">Figure 4
<p>Numerically minimize <span class="html-italic">ϵ</span> by optimization.</p>
Full article ">Figure 5
<p>Numerically minimize <span class="html-italic">ϵ</span> by optimization.</p>
Full article ">
19 pages, 5969 KiB  
Article
Partial Discharge Fault Diagnosis Based on Multi-Scale Dispersion Entropy and a Hypersphere Multiclass Support Vector Machine
by Haikun Shang, Feng Li and Yingjie Wu
Entropy 2019, 21(1), 81; https://doi.org/10.3390/e21010081 - 17 Jan 2019
Cited by 22 | Viewed by 4402
Abstract
Partial discharge (PD) fault analysis is an important tool for insulation condition diagnosis of electrical equipment. In order to overcome the limitations of traditional PD fault diagnosis, a novel feature extraction approach based on variational mode decomposition (VMD) and multi-scale dispersion entropy (MDE) is proposed. In addition, a hypersphere multiclass support vector machine (HMSVM) is used for PD pattern recognition with the extracted PD features. Firstly, the original PD signal is decomposed with VMD to obtain intrinsic mode functions (IMFs). Secondly, proper IMFs are selected according to central frequency observation, and the MDE values of each IMF are calculated. Then, principal component analysis (PCA) is introduced to extract the effective principal components of the MDE. Finally, the extracted principal factors are used as PD features and sent to the HMSVM classifier. Experimental results demonstrate that the PD feature extraction method based on VMD-MDE can extract effective characteristic parameters representing dominant PD features. Recognition results verify the effectiveness and superiority of the proposed PD fault diagnosis method. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
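Dispersion entropy, the building block of the MDE features above, can be sketched for a single scale: samples are mapped to c classes through the normal CDF, dispersion patterns of length m are counted, and the normalised Shannon entropy of the pattern distribution is returned. This is an illustrative reconstruction (parameter defaults m = 2, c = 6 are assumptions); the multi-scale version would first coarse-grain the signal by averaging non-overlapping windows at each scale factor.

```python
import math
from collections import Counter
import numpy as np

def dispersion_entropy(x, m=2, c=6, delay=1):
    """Single-scale dispersion entropy, normalised to [0, 1]."""
    x = np.asarray(x, dtype=float)
    # 1) Map samples to classes 1..c through the normal CDF of the
    #    standardised signal.
    ncdf = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    y = ncdf((x - x.mean()) / x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2) Count dispersion patterns of length m.
    n = len(z) - (m - 1) * delay
    patterns = Counter(tuple(z[i + j * delay] for j in range(m)) for i in range(n))
    # 3) Shannon entropy of the pattern distribution, normalised by ln(c^m).
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(c ** m))

rng = np.random.default_rng(2)
de_noise = dispersion_entropy(rng.normal(size=2000))                    # irregular
de_sine = dispersion_entropy(np.sin(np.linspace(0, 20 * np.pi, 2000)))  # regular
```

A regular signal (the sine) scores lower than white noise, which is the discriminative property the MDE feature vector exploits.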
Show Figures

Figure 1
<p>Classification model of HMSVM.</p>
Full article ">Figure 2
<p>PD fault diagnosis procedure based on VMD-MDE and HMSVM.</p>
Full article ">Figure 3
<p>PD models.</p>
Full article ">Figure 4
<p>Photograph of experimental setup.</p>
Full article ">Figure 5
<p>The connection diagram of PD experiment. 1—AC power source; 2—step up transformer; 3—resistance; 4—capacitor; 5—high voltage bushing; 6—small bushing; 7—PD model; 8—UHF sensor; 9—current sensor; 10—console.</p>
Full article ">Figure 6
<p>PD signals.</p>
Full article ">Figure 7
<p>Results of EMD decomposition. (<b>a</b>) IMF of decomposition; (<b>b</b>) Frequency spectrum of decomposition.</p>
Full article ">Figure 8
<p>Results of VMD decomposition. (<b>a</b>) IMF of decomposition; (<b>b</b>) Frequency spectrum of decomposition.</p>
Full article ">Figure 9
<p>MDE variation with scale factors.</p>
Full article ">Figure 10
<p>MDE values of IMFs using VMD and EMD.</p>
Full article ">Figure 11
<p>The variation of contribution rate with principle components.</p>
Full article ">Figure 12
<p>Recognition results using EMD decomposition.</p>
Full article ">Figure 13
<p>Recognition results using VMD decomposition.</p>
Full article ">Figure 14
<p>Recognition results using VMD-MDE method.</p>
Full article ">
15 pages, 626 KiB  
Article
Efficient High-Dimensional Quantum Key Distribution with Hybrid Encoding
by Yonggi Jo, Hee Su Park, Seung-Woo Lee and Wonmin Son
Entropy 2019, 21(1), 80; https://doi.org/10.3390/e21010080 - 17 Jan 2019
Cited by 10 | Viewed by 5372
Abstract
We propose a schematic setup of quantum key distribution (QKD) with an improved secret key rate based on high-dimensional quantum states. Two degrees of freedom of a single photon, orbital angular momentum modes and multi-path modes, are used to encode the secret key information. Its practical implementation consists of optical elements that are within reach of current technologies, such as a multiport interferometer. We show that the proposed protocol is feasible and improves the secret key rate compared with the previous 2-dimensional protocol known as detector-device-independent QKD. Full article
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)
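For intuition on why higher dimensions improve the key rate, a commonly used lower bound for d-dimensional two-basis QKD (a standard formula from the high-dimensional QKD literature, not necessarily the exact rate expression plotted in this paper's figures) is r = log₂d − 2h_d(Q), with h_d(Q) = −Q log₂[Q/(d−1)] − (1−Q) log₂(1−Q):

```python
import math

def h_d(q, d):
    """d-dimensional entropy of error rate q spread evenly over d-1 wrong symbols."""
    if q <= 0.0:
        return 0.0
    return -q * math.log2(q / (d - 1)) - (1 - q) * math.log2(1 - q)

def key_rate(q, d):
    """Secret key rate lower bound r = log2(d) - 2*h_d(q), bits per sifted pulse."""
    return math.log2(d) - 2.0 * h_d(q, d)

# At a fixed 3% error rate, the rate grows with the dimension d.
r2, r3, r5 = key_rate(0.03, 2), key_rate(0.03, 3), key_rate(0.03, 5)
```

The monotone increase of r with d at fixed Q is the qualitative behaviour behind Figure 4.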
Show Figures

Figure 1
<p>A schematic setup of 3-dimensional quantum key distribution (QKD) with hybrid encoding. Alice uses orbital angular momentum (OAM) modes of a single photon, and Bob controls the phase of each path to encode their information in the single photon. The encoded photon enters a 3-port interferometer. After single-photon interference, the OAM value and the output path of the single photon are measured. SLM: spatial light modulator; BS1: 50:50 beam splitter; BS2: beam splitter whose transmissivity is <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>; BS3: beam splitter whose transmissivity is <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>; OAM CT: cyclic transformation of OAM modes.</p>
Full article ">Figure 2
<p>Schematic setups of Bob’s two encoding systems. (<b>a</b>) Bob chooses one path to encode his information by using an optical switch; (<b>b</b>) Bob encodes his information by controlling phase shifters B1 and B2. Details are described in the main text. BS1: 50:50 beam splitter; BS2: beam splitter whose transmissivity is <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>; PS: phase shifter</p>
Full article ">Figure 3
<p>A schematic diagram of experimental setup of three-fold cyclic transformation of OAM modes. There are OAM beam splitters (OAM BSs) which consist of a Mach-Zehnder interferometer with Dove prisms. <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> means relative angle between the two Dove prisms. The first OAM BS (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>π</mi> </mrow> </semantics></math>) and the final OAM BS (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mo>−</mo> <mi>π</mi> </mrow> </semantics></math>) change a direction of propagation of a photon whose OAM value is odd and even, respectively. The second OAM BS (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>π</mi> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math>) separates a photon whose OAM value is 0 and 2. With OAM holograms, the three-fold cyclic transformation of OAM modes <math display="inline"><semantics> <mrow> <mo>{</mo> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>}</mo> </mrow> </semantics></math> is accomplished.</p>
Full article ">Figure 4
<p>The secret key rate of the original detector-device-independent QKD (DDI-QKD) (black dotted line), 3<span class="html-italic">d</span>- (red dashed line), 4<span class="html-italic">d</span>- (blue dot-dashed line), and 5<span class="html-italic">d</span>-QKD with hybrid encoding (orange solid line). (<b>a</b>) Plot of the secret key rate <span class="html-italic">r</span> (bits/sifted pulse) vs. state error rate <span class="html-italic">Q</span>; (<b>b</b>) Plot of the secret key rate <span class="html-italic">r</span> (bits/sifted pulse) vs. transmission loss <math display="inline"><semantics> <mi>η</mi> </semantics></math> (dB). Dark count rate of single photon detectors is assumed as <math display="inline"><semantics> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </semantics></math> per pulse.</p>
Full article ">Figure 5
<p>The secret key rate of 3<span class="html-italic">d</span>-measurement-device-independent QKD (MDI-QKD) (red dashed line) and 3<span class="html-italic">d</span>-QKD with hybrid encoding (black solid line). Plot of the secret key rate <span class="html-italic">R</span> (bits/total pulse) vs. transmission loss <math display="inline"><semantics> <mi>η</mi> </semantics></math> (dB). The secret key rate per total signal is obtained from (the secret key rate per sifted key) × (the signal sifting rate). Details are described in the main text. Dark count rate of single photon detectors is assumed to be <math display="inline"><semantics> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </semantics></math> per pulse.</p>
Full article ">Figure 6
<p>The secret key rate of 3<span class="html-italic">d</span>-QKD hybrid encoding (black solid line) and a prepare-and-measure 3<span class="html-italic">d</span>-QKD (red dashed line). Dark count rate of single photon detectors is assumed as <math display="inline"><semantics> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </semantics></math> per pulse.</p>
Full article ">
11 pages, 918 KiB  
Article
Quaternion Entropy for Analysis of Gait Data
by Agnieszka Szczęsna
Entropy 2019, 21(1), 79; https://doi.org/10.3390/e21010079 - 17 Jan 2019
Cited by 13 | Viewed by 4429
Abstract
Nonlinear dynamical analysis is a powerful approach to understanding biological systems. One of the most used metrics of system complexity is the Kolmogorov entropy. Long input signals without noise are required for its calculation, which are very hard to obtain in real situations. Techniques allowing the estimation of entropy directly from time signals are statistics such as approximate and sample entropy. Based on these, a new measure for quaternion signals is introduced. This work presents an example of nonlinear time series analysis using the new quaternion approximate entropy to analyse human gait kinematic data. The quaternion entropy was applied to analyse the quaternion signal representing the segment orientations over time during human gait. The research was aimed at assessing the influence of both walking speed and ground slope on gait control during treadmill walking. Gait data were obtained with an optical motion capture system. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
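A quaternion approximate entropy can be sketched along classical ApEn lines, replacing the Euclidean template distance with a quaternion cosine distance. The exact form of d_cosine and the statistic in the paper may differ; this is an illustrative reconstruction using m = 2 and r = mean(d_cosine), matching the parameter choices quoted in the figure captions.

```python
import numpy as np

def quat_cosine_dist(p, q):
    # Cosine distance between unit quaternions; |dot| identifies q and -q,
    # which represent the same rotation (assumed form of d_cosine).
    return 1.0 - abs(float(np.dot(p, q)))

def ap_quat_en(Q, m=2, r=None):
    """ApEn-style statistic for a sequence of unit quaternions Q, shape (n, 4)."""
    n = len(Q)
    d = np.array([[quat_cosine_dist(Q[i], Q[j]) for j in range(n)] for i in range(n)])
    if r is None:
        r = float(d.mean())          # r = mean(d_cosine), as in the figures
    def phi(mm):
        logs = []
        js = np.arange(n - mm + 1)
        for i in range(n - mm + 1):
            # Template at i matches template at j when every element pair
            # is within tolerance r.
            match = np.ones(n - mm + 1, dtype=bool)
            for k in range(mm):
                match &= d[i + k, js + k] <= r
            logs.append(np.log(match.mean()))   # self-match keeps this > 0
        return float(np.mean(logs))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
Qn = rng.normal(size=(60, 4))
Qn /= np.linalg.norm(Qn, axis=1, keepdims=True)          # random unit quaternions
en_noise = ap_quat_en(Qn)                                 # irregular orientations
en_const = ap_quat_en(np.tile(Qn[:1], (60, 1)), r=0.1)    # constant orientation
```

A constant orientation signal yields zero entropy, while a random orientation sequence yields a positive value, the regularity contrast the gait analysis relies on.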
Show Figures

Figure 1
<p>Axis and angle of rotation of femur and foot segments during gait (three strides).</p>
Full article ">Figure 2
<p>Results values of <math display="inline"><semantics> <mi mathvariant="italic">ApQuatEn</mi> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mi mathvariant="italic">mean</mi> <mo>(</mo> <msub> <mi>d</mi> <mi mathvariant="italic">cosine</mi> </msub> <mo>)</mo> </mrow> </semantics></math>) for left and right femur segments.</p>
Full article ">Figure 3
<p>Results values of <math display="inline"><semantics> <mi mathvariant="italic">ApQuatEn</mi> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mi mathvariant="italic">mean</mi> <mo>(</mo> <msub> <mi>d</mi> <mi mathvariant="italic">cosine</mi> </msub> <mo>)</mo> </mrow> </semantics></math>) for left and right tibia segments.</p>
Full article ">Figure 4
<p>Results values of <math display="inline"><semantics> <mi mathvariant="italic">ApQuatEn</mi> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mi mathvariant="italic">mean</mi> <mo>(</mo> <msub> <mi>d</mi> <mi mathvariant="italic">cosine</mi> </msub> <mo>)</mo> </mrow> </semantics></math>) for left and right foot segments.</p>
Full article ">Figure 5
<p>The value of entropy <math display="inline"><semantics> <mi mathvariant="italic">ApQuatEn</mi> </semantics></math> in relation to the length of vector (<span class="html-italic">m</span>) and threshold distance <span class="html-italic">r</span> value for left femur segments.</p>
Full article ">Figure 6
<p>The value of entropy for left femur segments (<span class="html-italic">Normal</span> speed) <math display="inline"><semantics> <mi mathvariant="italic">ApQuatEn</mi> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mi mathvariant="italic">mean</mi> <mo>(</mo> <msub> <mi>d</mi> <mi mathvariant="italic">cosine</mi> </msub> <mo>)</mo> </mrow> </semantics></math>) in relation to data length <span class="html-italic">N</span>.</p>
Full article ">
15 pages, 3969 KiB  
Article
Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest
by Majid Shadman Roodposhti, Jagannath Aryal, Arko Lucieer and Brett A. Bryan
Entropy 2019, 21(1), 78; https://doi.org/10.3390/e21010078 - 16 Jan 2019
Cited by 32 | Viewed by 7297
Abstract
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment have been based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation in classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on test data availability and enable the spatial characterisation of classification accuracy before the validation phase, promoting the assessment of error propagation within the classified imagery products. We compared the performance of the emerging deep neural network (DNN) technique with the popular random forest (RF) technique. Uncertainty assessment was implemented by calculating the Shannon entropy of the class probabilities predicted by DNN and RF for every pixel. The classification uncertainties of DNN and RF were quantified for two different hyperspectral image datasets, Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of the techniques, represented by a modified root mean square error (RMSE). The results indicate that, considering modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN algorithm is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
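The pixel-wise uncertainty computation, Shannon entropy of the predicted class-probability vector, can be sketched as follows. Normalising by the log of the number of classes to obtain a 0–1 scale is an assumption for illustration; the paper's exact scaling may differ.

```python
import numpy as np

def pixel_uncertainty(probs, eps=1e-12):
    """Normalised Shannon entropy of per-pixel class-probability vectors.

    probs: (n_pixels, n_classes) array of predicted class probabilities.
    Returns values in [0, 1]: ~0 when one class takes all the probability
    (the best case of Figure 2a), 1 when all classes are equally likely
    (the worst case of Figure 2b).
    """
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    h = -np.sum(p * np.log(p), axis=1)
    return h / np.log(p.shape[1])

certain = pixel_uncertainty([[1.0, 0.0, 0.0]])    # confident pixel
uniform = pixel_uncertainty([[1/3, 1/3, 1/3]])    # maximally uncertain pixel
```

Mapping this quantity over every pixel gives the spatial uncertainty surfaces compared against RMSE in the figures below.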
Show Figures

Figure 1
<p>Flowchart of methodology implementation labelled with the main R packages utilized.</p>
Full article ">Figure 2
<p>The best-case scenarios for every pixel representing low uncertainty (<b>a</b>) versus the worst-case scenario denoting high uncertainty (<b>b</b>). The other instances would be intermediate states of these two.</p>
Full article ">Figure 3
<p>Ground truth data of two datasets including the Salinas (<b>a</b>) and the Indian Pines (<b>b</b>). The bottom images represent the location of the train and test data for the Salinas (<b>c</b>) and the Indian Pines (<b>d</b>).</p>
Full article ">Figure 4
<p>Results of uncertainty assessment for DNN (<b>a</b>) and RF (<b>b</b>) using different portions of training sample (S, in %) and mode of correct/incorrect classified test data for the Salinas dataset. The estimated overall accuracy (OA, in %) of the whole classification scheme is also demonstrated for each training sample.</p>
Full article ">Figure 5
<p>The estimated RMSE values of uncertainty assessment for test datasets (<span class="html-italic">y</span>-axis) where the algorithm is trained with different portions of the training sample (<span class="html-italic">x</span>-axis) of Salinas dataset. Dashed lines represent the minimum and maximum RMSE values for each sample size achieved in five consecutive simulation runs.</p>
Full article ">Figure 6
<p>Class entropy/uncertainty (<span class="html-italic">x</span>-axis) versus class accuracy (<span class="html-italic">y</span>-axis) plots of the Salinas dataset using the DNN (<b>a</b>, left) and RF (<b>b</b>, right) algorithms, observed by applying 50% of the training data. The bubble size represents the frequency of land-use class labels: larger bubbles indicate higher frequency.</p>
Full article ">Figure 7
<p>Results of uncertainty assessment for DNN (<b>a</b>) and RF (<b>b</b>) using different portions of training sample (S, in %) and mode of correct/incorrect classified test data for the Indian Pines dataset. The estimated overall accuracy (OA, in %) of the whole classification scheme is also demonstrated for each training sample.</p>
Full article ">Figure 8
<p>The estimated RMSE values of uncertainty assessment for test datasets (<span class="html-italic">y</span>-axis) where the algorithm is trained with different portions of training sample (<span class="html-italic">x</span>-axis) of Indian Pines dataset. Dashed lines represent the minimum and maximum RMSE values for each sample size achieved in five consecutive simulation runs.</p>
Full article ">Figure 9
<p>Class entropy/uncertainty (<span class="html-italic">x</span>-axis) versus class accuracy (<span class="html-italic">y</span>-axis) plots of the Indian Pines dataset using the DNN (<b>a</b>, left) and RF (<b>b</b>, right) algorithms, observed by applying 50% of the training sample size. The bubble size represents the frequency of land-use class labels: larger bubbles indicate higher frequency.</p>
Full article ">
13 pages, 259 KiB  
Article
Logical Structures Underlying Quantum Computing
by Federico Holik, Giuseppe Sergioli, Hector Freytes and Angel Plastino
Entropy 2019, 21(1), 77; https://doi.org/10.3390/e21010077 - 16 Jan 2019
Cited by 6 | Viewed by 4117
Abstract
In this work we advance a generalization of quantum computational logics capable of dealing with some important examples of quantum algorithms. We outline an algebraic axiomatization of these structures. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
9 pages, 892 KiB  
Article
Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis
by Zhi-Qin John Xu, Douglas Zhou and David Cai
Entropy 2019, 21(1), 76; https://doi.org/10.3390/e21010076 - 16 Jan 2019
Cited by 2 | Viewed by 3506
Abstract
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we found a relation between the dynamical structure, i.e., the effective interactions in MEP analysis, and the anatomical coupling structure of pulse-coupled networks, which helps to explain how a sparse coupling structure can lead to sparse coding by effective interactions. This relation quantitatively displays how closely the dynamical structure is related to the anatomical coupling structure. Full article
Show Figures

Figure 1
<p>Structure vs. simplified structure.</p>
Full article ">Figure 2
<p>Anatomical structure vs. effective interactions of integrate-and-fire networks. Each row shows a numerical case. In the first column, black arrows and red arrows represent excitatory and inhibitory connections, respectively. In the second column, red and green dots are the strengths of <math display="inline"><semantics> <mrow> <msub> <mo>Δ</mo> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>H</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> of dependent and independent pairs, respectively. Blue dots and cyan dots are the strengths of <math display="inline"><semantics> <mrow> <msub> <mo>Δ</mo> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>H</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> of dependent and independent pairs from ten shuffled spike trains, respectively. Each dot is for one <math display="inline"><semantics> <mrow> <msub> <mo>Δ</mo> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>H</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>. The third and fourth columns display absolute effective interaction strengths (blue bars). The corresponding node indexes for each effective interaction are shown in the abscissa. The mean and standard deviation of absolute strengths of each effective interaction of ten shuffled spike trains are also displayed by garnet bars. The simulation time for each network is <math display="inline"><semantics> <mrow> <mn>1.2</mn> <mo>×</mo> <msup> <mn>10</mn> <mn>8</mn> </msup> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math>. The time bin size for analysis is <math display="inline"><semantics> <mrow> <mn>10</mn> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math> [<a href="#B12-entropy-21-00076" class="html-bibr">12</a>,<a href="#B13-entropy-21-00076" class="html-bibr">13</a>]. 
Independent Poisson inputs for each network are <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>0.1</mn> <mspace width="0.166667em"/> <msup> <mi>ms</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>0.1</mn> <mspace width="0.166667em"/> <msup> <mi>ms</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>. The firing rate of each node is about <math display="inline"><semantics> <mrow> <mn>50</mn> <mspace width="0.166667em"/> <mi>Hz</mi> </mrow> </semantics></math>. Parameters are chosen [<a href="#B28-entropy-21-00076" class="html-bibr">28</a>] as <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>ex</mi> </msub> <mo>=</mo> <mn>14</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>in</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>2</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mi>ex</mi> </msup> <mo>=</mo> <mn>2</mn> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mi>in</mi> </msup> <mo>=</mo> <mn>5</mn> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>20</mn> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>th</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>ref</mi> </msub> <mo>=</mo> <mn>2</mn> <mspace width="0.166667em"/> <mi>ms</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msubsup> 
<mi>S</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>ex</mi> </msubsup> <mo>=</mo> <msubsup> <mi>S</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>in</mi> </msubsup> <mo>=</mo> <mn>0.02</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Non-zero effective interactions in the Erdos-Renyi random networks. We generate 1000 Erdos-Renyi random networks of 100 nodes (the same connection probability but different random samples). The connection probability between two nodes is <math display="inline"><semantics> <mrow> <mn>0.05</mn> </mrow> </semantics></math>. The number of non-zero effective interaction is plotted against effective interaction order. The mean and standard deviation are respectively shown by the black line and shaded area.</p>
Full article ">
11 pages, 4743 KiB  
Article
Effects of Silicon Content on the Microstructures and Mechanical Properties of (AlCrTiZrV)-Si<sub>x</sub>-N High-Entropy Alloy Films
by Jingrui Niu, Wei Li, Ping Liu, Ke Zhang, Fengcang Ma, Xiaohong Chen, Rui Feng and Peter K. Liaw
Entropy 2019, 21(1), 75; https://doi.org/10.3390/e21010075 - 16 Jan 2019
Cited by 10 | Viewed by 5175
Abstract
A series of (AlCrTiZrV)-Si<sub>x</sub>-N films with different silicon contents were deposited on monocrystalline silicon substrates by direct-current (DC) magnetron sputtering. The films were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), and nano-indentation techniques. [...] Read more.
A series of (AlCrTiZrV)-Si<sub>x</sub>-N films with different silicon contents were deposited on monocrystalline silicon substrates by direct-current (DC) magnetron sputtering. The films were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), and nano-indentation techniques. The effects of the silicon content on the microstructures and mechanical properties of the films were investigated. The experimental results show that the (AlCrTiZrV)N films grow as columnar grains and present a (200) preferential growth orientation. The addition of silicon leads to the disappearance of the (200) peak and to grain refinement in the (AlCrTiZrV)-Si<sub>x</sub>-N films. Meanwhile, a reticular amorphous phase forms, producing a nanocomposite structure in which the nanocrystalline grains are encapsulated by the amorphous phase. As the silicon content increases, the mechanical properties first improve and then deteriorate. The maximal hardness and modulus of the film reach 34.3 GPa and 301.5 GPa, respectively, at a silicon content (x) of 8% (volume percent). The strengthening effect of the (AlCrTiZrV)-Si<sub>x</sub>-N film can be mainly attributed to the formation of the nanocomposite structure. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Show Figures

Figure 1
<p>Schematic illustration of the AlCrTiZrVSi<sub>x</sub> composite target.</p>
Full article ">Figure 2
<p>X-ray diffraction (XRD) patterns of (AlCrTiZrV)-Si<sub>x</sub>-N films with different silicon contents.</p>
Full article ">Figure 3
<p>Low-magnification cross-sectional transmission electron microscope (TEM) images of (AlCrTiZrV)-Si<sub>x</sub>-N films: (<b>a</b>) (AlCrTiZrV)N; (<b>b</b>) (AlCrTiZrV)-Si<sub>0.08</sub>-N.</p>
Full article ">Figure 4
<p>Cross-sectional SEM images of the (AlCrTiZrV)-Si<sub>x</sub>-N films. (<b>a</b>) (AlCrTiZrV)N; (<b>b</b>) (AlCrTiZrV)-Si<sub>0.04</sub>-N; (<b>c</b>) (AlCrTiZrV)-Si<sub>0.08</sub>-N; (<b>d</b>) (AlCrTiZrV)-Si<sub>0.12</sub>-N; (<b>e</b>) (AlCrTiZrV)-Si<sub>0.16</sub>-N.</p>
Full article ">Figure 5
<p>Cross-sectional HRTEM images and selected-area electron diffraction (SAED) patterns of the (<b>a</b>,<b>c</b>,<b>e</b>) (AlCrTiZrV)N and (<b>b</b>,<b>d</b>,<b>f</b>) (AlCrTiZrV)-Si<sub>0.08</sub>-N films: (<b>a</b>,<b>b</b>) low-magnification HRTEM images; (<b>c</b>,<b>d</b>) high-magnification HRTEM images; (<b>e</b>,<b>f</b>) SAED patterns.</p>
Full article ">Figure 6
<p>Effect of silicon content on mechanical properties of (AlCrTiZrV)-Si<sub>x</sub>-N films.</p>
Full article ">Figure 7
<p>Schematic diagram of the nanocomposite structure of the (AlCrTiZrV)-Si<sub>x</sub>-N films.</p>
Full article ">
10 pages, 2022 KiB  
Article
Thermodynamic Analysis of Entropy Generation Minimization in Thermally Dissipating Flow Over a Thin Needle Moving in a Parallel Free Stream of Two Newtonian Fluids
by Ilyas Khan, Waqar A. Khan, Muhammad Qasim, Idrees Afridi and Sayer O. Alharbi
Entropy 2019, 21(1), 74; https://doi.org/10.3390/e21010074 - 16 Jan 2019
Cited by 21 | Viewed by 4900
Abstract
This article is devoted to the study of entropy generation in an incompressible thermal flow of Newtonian fluids over a thin needle moving in a parallel stream. Two types of Newtonian fluids (water and air) are considered in this work. The [...] Read more.
This article is devoted to the study of entropy generation in an incompressible thermal flow of Newtonian fluids over a thin needle moving in a parallel stream. Two types of Newtonian fluids (water and air) are considered in this work. The energy dissipation term is included in the energy equation. Here, it is presumed that u (the free-stream velocity) is in the positive axial direction (x-axis) and that the thin needle moves in the same or opposite direction as the free stream. The reduced self-similar governing equations are solved numerically with the aid of the shooting technique combined with the fourth-order Runge-Kutta method. Using similarity transformations, it is possible to obtain expressions for the dimensionless volumetric entropy generation rate and the Bejan number. The effects of the Prandtl number, Eckert number, and dimensionless temperature parameter are discussed graphically in detail for water and air taken as Newtonian fluids. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)
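The shooting-plus-RK4 procedure mentioned in the abstract can be illustrated on a classic boundary-layer similarity problem. The sketch below solves the Blasius equation (a stand-in example, not the paper's needle equations, which are not reproduced here) by bisecting on the unknown initial curvature and integrating with classical fourth-order Runge-Kutta:

```python
import numpy as np

# Blasius boundary-layer equation: f''' + 0.5*f*f'' = 0,
# with f(0) = f'(0) = 0 and f'(inf) = 1.

def rhs(state):
    f, fp, fpp = state
    return np.array([fp, fpp, -0.5 * f * fpp])

def integrate(s, eta_max=10.0, n=2000):
    """RK4 integration from eta = 0 with guessed curvature f''(0) = s;
    returns f'(eta_max), which should approach 1 for the right s."""
    h = eta_max / n
    y = np.array([0.0, 0.0, s])
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]

def shoot(lo=0.1, hi=1.0, tol=1e-10):
    """Bisect on f''(0) until the far-field condition f'(inf) = 1 is met;
    f'(eta_max) is monotone in s, so bisection converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The converged curvature reproduces the well-known Blasius value f''(0) ≈ 0.332; the paper's similarity system would be handled the same way with a different right-hand side and boundary targets.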
Show Figures

Figure 1
<p>Flow model and coordinate system.</p>
Full article ">Figure 2
<p>Variation of <span class="html-italic">Ns<sub>h</sub></span> when <span class="html-italic">u</span><sub>∞</sub> &gt; 0 and the <span class="html-italic">ε</span> = 0 for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 3
<p>Variation of <span class="html-italic">Ns<sub>f</sub></span> when <span class="html-italic">u</span><sub>∞</sub> &gt; 0 and the <span class="html-italic">ε</span> &lt; 0 for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 4
<p>Variation of <span class="html-italic">Ns<sub>t</sub></span> when <span class="html-italic">u</span><sub>∞</sub> &gt; 0 and the <span class="html-italic">ε</span> &lt; 0 for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 5
<p>Variation of <span class="html-italic">Be</span> when <span class="html-italic">u</span><sub>∞</sub> &gt; 0 and the <span class="html-italic">ε</span> &lt; 0 for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 6
<p>Variation of <span class="html-italic">Ns<sub>h</sub></span> when (0 &lt; <span class="html-italic">ε</span> &lt; 1) for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 7
<p>Variation of <span class="html-italic">Ns<sub>f</sub></span> when (0 &lt; <span class="html-italic">ε</span> &lt; 1) for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 8
<p>Variation of <span class="html-italic">Ns<sub>t</sub></span> when (0 &lt; <span class="html-italic">ε</span> &lt; 1) for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">Figure 9
<p>Variation of <span class="html-italic">Be</span> when (0 &lt; <span class="html-italic">ε</span> &lt; 1) for (<b>a</b>) air and (<b>b</b>) water.</p>
Full article ">
19 pages, 342 KiB  
Article
Negation of Belief Function Based on the Total Uncertainty Measure
by Kangyang Xie and Fuyuan Xiao
Entropy 2019, 21(1), 73; https://doi.org/10.3390/e21010073 - 15 Jan 2019
Cited by 21 | Viewed by 3873
Abstract
The negation of probability provides a new way of looking at information representation. However, the negation of basic probability assignment (BPA) is still an open issue. To address this issue, a novel negation method of basic probability assignment based on total uncertainty measure [...] Read more.
The negation of probability provides a new way of looking at information representation. However, the negation of basic probability assignment (BPA) is still an open issue. To address this issue, a novel negation method of basic probability assignment based on total uncertainty measure is proposed in this paper. The uncertainty of non-singleton elements in the power set is taken into account. Compared with the negation method of a probability distribution, the proposed negation method of BPA differs because the BPA of a certain element is reassigned to the other elements in the power set, where the weight of reassignment is proportional to the cardinality of the intersection of the element and each remaining element in the power set. Notably, the proposed negation method of BPA reduces to the negation of a probability distribution as the BPA reduces to a classical probability. Furthermore, it is proved mathematically that our proposed negation method of BPA is indeed based on the maximum uncertainty. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
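The probability-distribution negation that the proposed BPA negation generalizes has a simple closed form: each mass is replaced by its normalized complement, and repeated negation converges to the maximum-entropy (uniform) distribution. A minimal sketch (illustrative; function names are our own, and the BPA generalization itself is only in the paper):

```python
import numpy as np

def negate(p):
    """Negation of a discrete probability distribution:
        neg(p)_i = (1 - p_i) / (n - 1).
    The result is again a distribution, and iterating the map
    converges to the uniform (maximum-entropy) distribution."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)

def entropy(p):
    """Shannon entropy in bits, ignoring zero-mass outcomes."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))
```

Each negation step strictly increases the entropy of a non-uniform distribution, which is the "maximum uncertainty" behaviour the paper proves for its BPA counterpart.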
Show Figures

Figure 1
<p>Reallocation weight of <math display="inline"><semantics> <mrow> <mi>m</mi> <mo stretchy="false">(</mo> <mi>a</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Evolution of basic probability assignment (BPA) as iteration of negation process increases.</p>
Full article ">Figure 3
<p>Uncertainty measured by <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mrow> <mi>r</mi> <mi>p</mi> </mrow> </msub> <mrow> <mo stretchy="false">(</mo> <mi>m</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> as iteration of negation process increases.</p>
Full article ">Figure 4
<p>Uncertainty measured by <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mi>d</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mi>m</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> as iteration of negation process increases.</p>
Full article ">Figure 5
<p>Evolution of total uncertainty as the iteration of negation process increases.</p>
Full article ">
17 pages, 321 KiB  
Article
A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio
by Monika Pinchas
Entropy 2019, 21(1), 72; https://doi.org/10.3390/e21010072 - 15 Jan 2019
Cited by 7 | Viewed by 3213
Abstract
In the literature, we can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), involving the maximum entropy density approximation technique. The main drawback of [...] Read more.
In the literature, we can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), involving the maximum entropy density approximation technique. The main drawback of these algorithms is the heavy computational burden involved in calculating the expression for the conditional expectation. In addition, none of these techniques is applicable at signal-to-noise ratios lower than 7 dB. In this paper, I propose a new closed-form approximated expression for the conditional expectation based on a previously obtained expression in which the equalized output probability density function is calculated via the approximated input probability density function, itself approximated with the maximum entropy density approximation technique. This newly proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation based on the maximum entropy approximation technique. The simulation results indicate that the newly proposed algorithm with the newly proposed Lagrange multipliers is suitable for signal-to-noise ratios down to 0 dB and offers improved equalization performance, in terms of residual inter-symbol interference, compared with the previously obtained algorithms based on the conditional expectation obtained via the maximum entropy technique. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
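For readers unfamiliar with blind adaptive deconvolution, the constant-modulus (Godard) algorithm is a standard baseline member of this algorithm family; the sketch below is illustrative only and is not the maximum-entropy conditional-expectation algorithm proposed in the paper. All parameter values here are arbitrary demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
    """Constant-modulus (Godard) blind equalizer for a real-valued
    received signal x: adapts tap weights w by stochastic gradient so
    that the output modulus |y|^2 is driven toward r2."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                    # center-spike initialization
    errs = []
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # regressor, most recent first
        y = w @ u                           # equalizer output
        e = y * (y * y - r2)                # CMA error term (real case)
        w -= mu * e * u                     # gradient-descent update
        errs.append((y * y - r2) ** 2)      # dispersion cost sample
    return w, np.array(errs)

# BPSK symbols through a mild inter-symbol-interference channel
s = rng.choice([-1.0, 1.0], size=20000)
x = np.convolve(s, [1.0, 0.4], mode="full")[:len(s)]
w, errs = cma_equalize(x)
```

The dispersion cost decreases as the equalizer opens the eye; the paper's contribution replaces this kind of fixed nonlinearity with a conditional-expectation estimator that remains usable at low SNR.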
Show Figures

Figure 1
<p>Block diagram of the system.</p>
Full article ">Figure 2
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for a signal-to-noise ratio (SNR) = 10 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>7</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00009</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00008</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 7 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>2.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00008</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00008</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 0 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>4</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00002</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>6</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 2. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 7 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>2.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00008</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 3. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 7 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>2.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00007</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 10 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>7</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00009</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00007</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>7</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for an SNR = 7 dB. <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>G</mi> </msub> <mo>=</mo> <mn>2.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00008</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>A</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>0.00006</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>B</mi> <mi>N</mi> <mi>E</mi> <mi>W</mi> </mrow> </msub> <mo>=</mo> <mn>6</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math>.</p>
Full article ">
32 pages, 308 KiB  
Editorial
Acknowledgement to Reviewers of Entropy in 2018
by Entropy Editorial Office
Entropy 2019, 21(1), 71; https://doi.org/10.3390/e21010071 - 15 Jan 2019
Viewed by 3751
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
13 pages, 1460 KiB  
Article
The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
by Thomas Haid and Peter Federolf
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070 - 15 Jan 2019
Cited by 13 | Viewed by 5271
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to study the control of movement components (PM) and n-shapes have been found [...] Read more.
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to studying the control of movement components (PM), and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs, their respective control, and the control tightness would display the n-shape. Furthermore, according to the minimal intervention principle, (H2) different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80 s tandem stances in five conditions (single-task and auditory n-back task; n = 1–4), and computed sample entropies on PM time series and two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily and, in contrast to H3, decreased further in the younger group. Nevertheless, we found n-shapes in most variables, with differing magnitudes, supporting H1 and H2. These results suggest that control tightness might deteriorate steadily with increased cognitive load in critical movements, despite the otherwise eminent n-shaped relationship. Full article
(This article belongs to the Section Complexity)
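Sample entropy, the irregularity measure used in this study, admits a compact implementation. The sketch below follows the usual SampEn(m, r) definition; the parameter defaults (m = 2, r = 0.2 times the standard deviation) are common conventions, not necessarily those of the study:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D time series:
    -ln(A/B), where B counts template matches of length m and A those
    of length m+1 (Chebyshev distance < r, self-matches excluded).
    Higher values indicate a more irregular signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count_matches(mm):
        # All overlapping templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-match)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d < r)
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A regular (e.g., sinusoidal) signal yields a low SampEn, while white noise yields a high one, which is the contrast the study exploits when comparing single-task and dual-task sway.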
Show Figures

Figure 1
<p>Visualization of the first six principal movements (PM<sub>1</sub>–PM<sub>6</sub>) of the tandem stance with respective amplification factors (AmpFac). For each PM the minimal and maximal deviation from the mean posture are displayed.</p>
Full article ">Figure 2
<p>Post-hoc analysis and descriptive statistics of the variables that displayed dual-tasking effects. <math display="inline"><semantics> <mrow> <msubsup> <mrow> <mi>SaEn</mi> </mrow> <mi>k</mi> <mrow> <mi>P</mi> <mi>P</mi> <mo>/</mo> <mi>P</mi> <mi>A</mi> </mrow> </msubsup> </mrow> </semantics></math> stands for sample entropy of the principal position PP or principal acceleration PA of the <span class="html-italic">k</span>th principal movement. N<span class="html-italic"><sub>k</sub></span> and σ<span class="html-italic"><sub>k</sub></span> stand for the number of control interventions and the timing variability of the interventions of the <span class="html-italic">k</span>th component, respectively. Significant post-hoc results are symbolized with asterisks. ST = single task; DT<span class="html-italic"><sub>n</sub></span> = dual task with n-back auditory working task.</p>
Full article ">Figure 3
<p>Descriptive statistics of the variables that displayed dual-tasking age interaction effects. <math display="inline"><semantics> <mrow> <msubsup> <mrow> <mi>SaEn</mi> </mrow> <mi>k</mi> <mrow> <mi>P</mi> <mi>P</mi> <mo>/</mo> <mi>P</mi> <mi>A</mi> </mrow> </msubsup> </mrow> </semantics></math> stands for sample entropy of the principal position PP or principal acceleration PA of the <span class="html-italic">k</span>th principal movement. Significant post-hoc results are symbolized with asterisks (dual-tasking effects were found only in the older group). ST = single task; DT<span class="html-italic"><sub>n</sub></span> = dual task with n-back auditory working task.</p>
Full article ">Figure 4
<p>Descriptive statistics of the variables that displayed age effects. <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>SaEn</mi> </mrow> <mrow> <mi>P</mi> <mi>P</mi> <mo>/</mo> <mi>P</mi> <mi>A</mi> </mrow> </msup> </mrow> </semantics></math> stands for sample entropy of the principal position PP or principal acceleration PA and PM<span class="html-italic"><sub>k</sub></span> for the <span class="html-italic">k</span>th principal movement. N<span class="html-italic"><sub>k</sub></span> stands for the number of control interventions in the <span class="html-italic">k</span>th component. Significant post-hoc results are symbolized with asterisks. ST = single task; DT<span class="html-italic"><sub>n</sub></span> = dual task with n-back auditory working task.</p>
Full article ">