
Next Issue: Volume 25, July
Previous Issue: Volume 25, May
Entropy, Volume 25, Issue 6 (June 2023) – 129 articles

Cover Story: Neural networks (NNs) are very powerful function approximators, achieving super-human performance in a variety of tasks. However, under distribution shifts, they struggle to retain performance on previously learned skills. Sequential Bayesian inference provides a principled framework for updating models given new data. We show a number of important caveats when working with sequential Bayesian inference over NN weights for continual learning. For example, applying exact sequential Bayesian inference is extremely difficult in practice. Also, by working with linear models, we show that, despite exact inference, we may still have poor performance with a misspecified model. We conclude that a more fruitful approach to continual learning is to model the data-generating process.
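The sequential Bayesian recursion the cover article examines is exact in the linear-Gaussian case. The sketch below (illustrative prior, noise level, and synthetic data, not the paper's experiments) chains three conjugate updates, with each task's posterior becoming the next task's prior:

```python
import numpy as np

def bayes_linear_update(mu, Lam, X, y, noise_var=0.25):
    # Exact conjugate update for a linear-Gaussian model with prior
    # N(mu, Lam^{-1}): the posterior of one task becomes the prior of
    # the next -- the recursion that is intractable for NN weights.
    Lam_new = Lam + X.T @ X / noise_var
    mu_new = np.linalg.solve(Lam_new, Lam @ mu + X.T @ y / noise_var)
    return mu_new, Lam_new

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0])              # illustrative "true" weights
mu, Lam = np.zeros(2), np.eye(2)            # prior N(0, I)
for _ in range(3):                          # three sequential "tasks"
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.5 * rng.normal(size=50)
    mu, Lam = bayes_linear_update(mu, Lam, X, y)
print(mu.round(2))
```

Because the model is well-specified here, the posterior mean converges to the true weights; the article's point is that this clean picture breaks down once the model is misspecified or the weights belong to a neural network.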
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
14 pages, 7993 KiB  
Article
An Improved Toeplitz Approximation Method for Coherent DOA Estimation in Impulsive Noise Environments
by Jiang’an Dai, Tianshuang Qiu, Shengyang Luan, Quan Tian and Jiacheng Zhang
Entropy 2023, 25(6), 960; https://doi.org/10.3390/e25060960 - 20 Jun 2023
Cited by 2 | Viewed by 1643
Abstract
Direction of arrival (DOA) estimation is an important research topic in array signal processing and is widely applied in practical engineering. However, when signal sources are highly correlated or coherent, conventional subspace-based DOA estimation algorithms perform poorly due to the rank deficiency of the received data covariance matrix. Moreover, conventional DOA estimation algorithms are usually developed under Gaussian-distributed background noise and deteriorate significantly in impulsive noise environments. In this paper, a novel method is presented to estimate the DOA of coherent signals in impulsive noise environments. A novel correntropy-based generalized covariance (CEGC) operator is defined, and a proof of boundedness is given to ensure the effectiveness of the proposed method in impulsive noise environments. Furthermore, an improved Toeplitz approximation method combined with the CEGC operator is proposed to estimate the DOA of coherent sources. Compared to other existing algorithms, the proposed method avoids array aperture loss and performs more effectively, even in cases of intense impulsive noise and low snapshot numbers. Finally, comprehensive Monte Carlo simulations verify the superiority of the proposed method under various impulsive noise conditions.
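The rank-restoring idea behind Toeplitz approximation can be shown on a toy array: averaging each diagonal of a rank-deficient coherent covariance yields a Toeplitz matrix whose rank subspace methods can work with. This sketch covers only that step (the CEGC operator and the paper's full estimator are not reproduced; the array geometry and angles are invented for illustration):

```python
import numpy as np

def toeplitzify(R):
    # Toeplitz approximation: replace each diagonal of the covariance
    # matrix by its average. This restores the rank that subspace
    # methods need when coherent sources make R rank deficient.
    M = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(M - 1), M):
        d = np.diagonal(R, offset=k).mean()
        T += np.diag(np.full(M - abs(k), d), k=k)
    return T

# Two fully coherent sources on a hypothetical 8-element half-wavelength
# uniform linear array; their covariance collapses to rank one.
M = 8
steer = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))
s = steer(np.deg2rad(10.0)) + steer(np.deg2rad(40.0))  # coherent sum
R = np.outer(s, s.conj())                              # rank-1 covariance
T = toeplitzify(R)
print(np.linalg.matrix_rank(R), np.linalg.matrix_rank(T))
```

The coherent covariance is exactly rank one, while its Toeplitzified version regains enough rank for a subspace scan, at the cost of some estimation bias that the paper's improved method addresses.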
Figures:
Figure 1. Spatial spectrograms comparison.
Figure 2. Experimental results vs. GSNRs.
Figure 3. Experimental results vs. characteristic exponent α.
Figure 4. Experimental results vs. number of snapshots.
19 pages, 849 KiB  
Article
Selection of an Insurance Company in Agriculture through Hybrid Multi-Criteria Decision-Making
by Adis Puška, Marija Lukić, Darko Božanić, Miroslav Nedeljković and Ibrahim M. Hezam
Entropy 2023, 25(6), 959; https://doi.org/10.3390/e25060959 - 20 Jun 2023
Cited by 3 | Viewed by 1912
Abstract
Crop insurance is used to reduce risk in agriculture. This research focuses on selecting the insurance company that provides the best policy conditions for crop insurance. A total of five insurance companies that provide crop insurance services in the Republic of Serbia were selected. To choose the insurance company offering the best policy conditions for farmers, expert opinions were solicited. In addition, fuzzy methods were used to assess the weights of the various criteria and to evaluate the insurance companies. The weight of each criterion was determined using a combined approach based on fuzzy LMAW (the logarithm methodology of additive weights) and entropy methods: fuzzy LMAW was used to determine the weights subjectively through expert ratings, while fuzzy entropy was used to determine them objectively. The results of these methods showed that the price criterion received the highest weight. The selection of the insurance company was made using the fuzzy CRADIS (compromise ranking of alternatives, from distance to ideal solution) method, which showed that the insurance company DDOR offers the best crop insurance conditions for farmers. These results were confirmed by validation and sensitivity analysis. Based on all of this, it was shown that fuzzy methods can be used in the selection of insurance companies.
(This article belongs to the Special Issue Entropy Methods for Multicriteria Decision Making)
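The objective half of the weighting scheme can be sketched with the classic (crisp) entropy weight method; the paper uses a fuzzy variant combined with fuzzy LMAW, and the decision matrix below is a hypothetical set of insurer ratings, not the paper's data:

```python
import numpy as np

def entropy_weights(X):
    # Classic entropy weighting: normalize each criterion column to a
    # distribution, compute its Shannon entropy, and give more weight
    # to criteria with lower entropy (i.e., more discriminating power).
    m = X.shape[0]
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    d = 1.0 - E                      # divergence degree per criterion
    return d / d.sum()

# Rows: five hypothetical insurers; columns: e.g. price, coverage,
# claim payout speed (invented ratings for illustration).
X = np.array([[7, 8, 6],
              [9, 6, 7],
              [6, 9, 8],
              [8, 7, 9],
              [7, 7, 7]], dtype=float)
w = entropy_weights(X)
print(w.round(3))
```

The weights sum to one by construction; a criterion on which all alternatives score identically would contribute nothing, which is exactly the "objectivity" the entropy component adds alongside the expert-driven LMAW weights.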
Figures:
Figure 1. Validation of the research results.
Figure 2. Sensitivity analysis results.
14 pages, 2416 KiB  
Article
Identifying Important Nodes in Trip Networks and Investigating Their Determinants
by Ze-Tao Li, Wei-Peng Nie, Shi-Min Cai, Zhi-Dan Zhao and Tao Zhou
Entropy 2023, 25(6), 958; https://doi.org/10.3390/e25060958 - 20 Jun 2023
Cited by 1 | Viewed by 1683
Abstract
Describing travel patterns and identifying significant locations is a crucial area of research in transportation geography and social dynamics. Our study aims to contribute to this field by analyzing taxi trip data from Chengdu and New York City. Specifically, we investigate the probability density distribution of trip distance in each city, which enables us to construct long- and short-distance trip networks. To identify critical nodes within these networks, we employ the PageRank algorithm and categorize them using centrality and participation indices. Furthermore, we explore the factors that contribute to their influence and observe a clear hierarchical multi-centre structure in Chengdu’s trip networks, while no such phenomenon is evident in New York City’s. Our study provides insight into the impact of trip distance on important nodes within trip networks in both cities and serves as a reference for distinguishing between long and short taxi trips. Our findings also reveal substantial differences in network structures between the two cities, highlighting the nuanced relationship between network structure and socio-economic factors. Ultimately, our research sheds light on the underlying mechanisms shaping transportation networks in urban areas and offers valuable insights into urban planning and policy making.
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information)
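The node-importance step can be sketched with a hand-rolled PageRank power iteration on a toy trip network; the cell layout and trip counts below are invented for illustration and do not come from the Chengdu or NYC data:

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    # Power-iteration PageRank on a weighted directed adjacency matrix
    # A, where A[i, j] counts trips from cell i to cell j.
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    # Dangling cells (no outgoing trips) link uniformly everywhere.
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * r @ P
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Four cells; cell 0 receives most trips from the others (a "hub").
A = np.array([[0, 1, 1, 0],
              [3, 0, 1, 0],
              [4, 1, 0, 1],
              [2, 0, 0, 0]], dtype=float)
r = pagerank(A)
print(r.round(3))
```

The cell with the heaviest inflow dominates the ranking, which is the behavior the study exploits to locate critical nodes before classifying them with centrality and participation indices.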
Figures:
Figure 1. (a) The administrative map of Chengdu, divided into central and suburban areas; the former encompasses six districts (1: Jinniu; 2: Chenghua; 3: Jinjiang; 4: Gaoxin; 5: Wuhou; 6: Qingyang), and the latter includes the remaining districts. (b) The administrative map of NYC, divided into five regions (1: The Bronx; 2: Manhattan; 3: Queens; 4: Brooklyn; 5: Staten Island).
Figure 2. Schematic diagram of constructing the trip network. (a) The division of the administrative map into cells. (b) The connections between cells according to taxi trajectories. (c) The resulting trip network generated by taxi trajectories.
Figure 3. (a,b) Linear–logarithmic plots of the distance probability density distributions of the Chengdu and NYC trip networks; the red line shows the fit. (c,d) Linear–linear plots of the cumulative distributions for the Chengdu and NYC trip networks.
Figure 4. (a,b) Linear–logarithmic plots of the distance probability density distributions of the Chengdu and NYC short-distance trip networks; (c,d) logarithmic–logarithmic plots for the corresponding long-distance trip networks. Coloured lines show the fits.
Figure 5. Distribution of hub nodes in the Chengdu long- and short-distance trip networks: (a) internal inclusive and internal exclusive hubs in the long-distance network; (b) internal extensive hubs in the short-distance network; (c) external extensive hubs in the long-distance network; (d) external extensive hubs in the short-distance network.
Figure 6. Distribution of hub nodes in the NYC long- and short-distance trip networks: (a) internal inclusive and internal extensive hubs in the long-distance network; (b) internal extensive hubs in the short-distance network; (c) external extensive hubs in the long-distance network; (d) external extensive hubs in the short-distance network.
15 pages, 604 KiB  
Article
Finite-Size Relaxational Dynamics of a Spike Random Matrix Spherical Model
by Pedro H. de Freitas Pimenta and Daniel A. Stariolo
Entropy 2023, 25(6), 957; https://doi.org/10.3390/e25060957 - 20 Jun 2023
Viewed by 1288
Abstract
We present a thorough numerical analysis of the relaxational dynamics of the Sherrington–Kirkpatrick spherical model with an additive non-disordered perturbation for large but finite sizes N. In the thermodynamic limit and at low temperatures, the perturbation is responsible for a phase transition from a spin glass to a ferromagnetic phase. We show that finite-size effects induce the appearance of a distinctive slow regime in the relaxation dynamics, the extension of which depends on the size of the system and also on the strength of the non-disordered perturbation. The long-time dynamics are characterized by the two largest eigenvalues of the spike random matrix which defines the model, and particularly by the statistics of the gap between them. We characterize the finite-size statistics of the two largest eigenvalues of the spike random matrices in the different regimes, sub-critical, critical, and super-critical, confirming some known results and anticipating others, even in the less-studied critical regime. We also numerically characterize the finite-size statistics of the gap, which we hope may encourage analytical work, which is lacking. Finally, we compute the finite-size scaling of the long-time relaxation of the energy, showing the existence of power laws with exponents that depend on the strength of the non-disordered perturbation in a way that is governed by the finite-size statistics of the gap.
(This article belongs to the Special Issue Non-equilibrium Phase Transitions)
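The spike ensemble can be reproduced in miniature: a Wigner matrix plus a rank-one perturbation of strength θ, whose two largest eigenvalues and gap behave as the abstract describes. The normalization below (bulk edge at 2, super-critical λ1 near θ + 1/θ) is a standard choice for this model class, not necessarily the paper's exact conventions:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_matrix(N, theta, rng):
    # GOE-like Wigner matrix normalized so the bulk edge sits at 2,
    # plus a rank-one "spike" theta * v v^T.
    G = rng.normal(size=(N, N)) / np.sqrt(N)
    W = (G + G.T) / np.sqrt(2)
    v = np.ones(N) / np.sqrt(N)
    return W + theta * np.outer(v, v)

N, theta = 1000, 2.0
evals = np.linalg.eigvalsh(spike_matrix(N, theta, rng))
lam1, lam2 = evals[-1], evals[-2]
# Super-critical (theta > 1) behavior: lam1 concentrates near
# theta + 1/theta while lam2 stays near the bulk edge 2, so the
# gap between them is macroscopic at finite N.
print(lam1, lam2, lam1 - lam2)
```

Sweeping θ through 1 in this sketch reproduces the sub-critical, critical, and super-critical regimes whose finite-size statistics the paper characterizes.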
Figures:
Figure 1. Scaled probability density distributions of an ensemble of 10^4 spike random matrices with N = 100. The distributions are centered relative to the ensemble average of λ1, and σ_θ stands for the predicted standard deviation when θ > 1. The centered TW distribution (15) and the normal distribution N[0, 1] (16) have been scaled similarly to the data.
Figure 2. Shift of the semicircle as a consequence of the center-of-mass conservation described in the text. (a) For fixed N = 100, the semicircle moves to the left proportionally to the perturbation intensity. (b) For fixed θ = 50, the shift depends on the size of the matrix: larger sizes N suffer smaller shifts.
Figure 3. Shift of the numerical average of λ1 relative to the theoretical prediction (17), for θ < 1. The left panel shows results without the center-of-mass correction; in the right panel, after inclusion of the correction, the data show a good collapse for growing sizes N and sufficiently small values of θ.
Figure 4. Shift of the numerical average of λ1 relative to the theoretical prediction (18), for θ > 1. The left panel shows results without the center-of-mass correction; in the right panel, after inclusion of the correction, the data show a good collapse, improving as N grows.
Figure 5. Fluctuations of λ1 in the critical regime.
Figure 6. Shift of the numerical average of λ2 relative to the theoretical prediction (17). The left panel shows results without the center-of-mass correction; in the right panel, after inclusion of the correction, the data show a good collapse for growing sizes N and sufficiently large values of θ.
Figure 7. Data collapse of the fluctuations of λ2.
Figure 8. Probability distribution functions of the gap between the two largest eigenvalues for an ensemble of spike random matrices of size N = 1000 and different values of the deterministic term intensity θ. The double-log scale shows algebraic behavior at small g.
Figure 9. The gap exponent a(θ, N).
Figure 10. The gap parameter b(θ, N).
Figure 11. Data collapse of the time decay of the average excess energy for the spike SSK model, according to Equation (30), for different values of the parameter a(θ, N). The slope of the algebraic regime is governed by the one-parameter scaling function f_a(x).
10 pages, 251 KiB  
Article
Improving the Performance of Quantum Cryptography by Using the Encryption of the Error Correction Data
by Valeria A. Pastushenko and Dmitry A. Kronberg
Entropy 2023, 25(6), 956; https://doi.org/10.3390/e25060956 - 20 Jun 2023
Cited by 9 | Viewed by 2120
Abstract
Security of quantum key distribution (QKD) protocols relies solely on the laws of quantum physics, namely, on the impossibility of distinguishing between non-orthogonal quantum states with absolute certainty. Due to this, a potential eavesdropper cannot extract full information from the states stored in their quantum memory after an attack, despite knowing all the information disclosed during the classical post-processing stages of QKD. Here, we introduce the idea of encrypting the classical communication related to error correction in order to decrease the amount of information available to the eavesdropper and hence improve the performance of quantum key distribution protocols. We analyze the applicability of the method under additional assumptions concerning the eavesdropper's quantum memory coherence time and discuss the similarity of our proposal to the quantum data locking (QDL) technique.
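A rough sense of why encrypting the error-correction messages helps comes from standard key-rate bookkeeping, where classical error correction leaks roughly f·h2(Q) bits per sifted bit. The sketch below is illustrative accounting only, not the paper's security analysis; in particular it ignores the key consumed by the encryption itself, which the paper's assumptions on the eavesdropper's memory are needed to justify:

```python
import numpy as np

def h2(q):
    # Binary Shannon entropy in bits.
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

def key_fraction(q, f=1.2, ec_encrypted=False):
    # Illustrative BB84-style asymptotic accounting: privacy
    # amplification costs h2(q); error-correction communication leaks
    # about f * h2(q) bits per sifted bit unless those messages are
    # encrypted. (Real accounting must also charge for the key used
    # to encrypt the EC messages; this toy model ignores that.)
    leak = 0.0 if ec_encrypted else f * h2(q)
    return max(0.0, 1.0 - h2(q) - leak)

for q in (0.05, 0.08, 0.11):
    print(q, round(key_fraction(q), 3),
          round(key_fraction(q, ec_encrypted=True), 3))
```

Under this toy accounting, removing the leakage term raises the key fraction at every error rate and extends the range of tolerable error rates, which is the qualitative effect the paper quantifies rigorously.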
16 pages, 464 KiB  
Article
Shannon Entropy and Herfindahl-Hirschman Index as Team’s Performance and Competitive Balance Indicators in Cyclist Multi-Stage Races
by Marcel Ausloos
Entropy 2023, 25(6), 955; https://doi.org/10.3390/e25060955 - 19 Jun 2023
Cited by 3 | Viewed by 2044
Abstract
It seems that one cannot find many papers relating entropy to sport competitions. Thus, in this paper, I use (i) the Shannon intrinsic entropy (S) as an indicator of “team sporting value” (or “competition performance”) and (ii) the Herfindahl-Hirschman index (HHi) as a “team competitive balance” indicator, in the case of (professional) cyclist multi-stage races. The 2022 Tour de France and the 2023 Tour of Oman are used for numerical illustrations and discussion. The numerical values are obtained from classical and new ranking indices which measure the teams’ “final time”, on one hand, and “final place”, on the other hand, based on the “best three” riders in each stage, but also the corresponding times and places throughout the race for these finishing riders. The analysis of the data demonstrates that the constraint “only the finishing riders count” makes much sense for obtaining a more objective measure of “team value” and “team performance” at the end of a multi-stage race. A graphical analysis allows us to distinguish various team levels, each exhibiting a Feller-Pareto distribution, thereby indicating self-organized processes. In so doing, one hopefully better relates objective scientific measures to sport team competitions. Moreover, this analysis proposes some paths to elaborate on forecasting through standard probability concepts.
(This article belongs to the Special Issue Selected Featured Papers from Entropy Editorial Board Members)
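Both indicators are elementary to compute once team-level values are normalized into shares; the sketch below uses invented scores, not the Tour data:

```python
import numpy as np

def team_indicators(values):
    # Normalize team values into shares p_i, then compute the Shannon
    # entropy S = -sum(p log p) and the Herfindahl-Hirschman index
    # HHi = sum(p^2) of those shares.
    p = np.asarray(values, dtype=float)
    p = p / p.sum()
    S = float(-np.sum(p * np.log(p)))   # natural log
    HHi = float(np.sum(p ** 2))
    return S, HHi

# Hypothetical team scores (not Tour data): a perfectly balanced field
# has S = log(n) and HHi = 1/n; a dominated field has low S, high HHi.
S, HHi = team_indicators([120, 95, 80, 60, 45])
print(round(S, 3), round(HHi, 3))
```

With n teams, S is maximal at log(n) and HHi minimal at 1/n for a perfectly balanced competition, which is the baseline against which the paper's "team value" and "competitive balance" readings are interpreted.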
Figures:
Figure 1. Best-fit curve to an empirical cubic function of the distributions of final time T_L and adjusted final time A_L, both defined in the main text, of the 22 teams having competed in the Tour de France 2022; the y-axis scale is in days:hours.
Figure 2. Distributions of final time T_L and adjusted final time A_L of the 18 teams having competed in the Tour of Oman 2023; in order to emphasize the 2 sub-distributions, the best-fit curve to an empirical cubic function is not shown; the y-axis scale is in days:hours:minutes.
Figure 3. Distributions of final place P_L and adjusted final place B_L measures of the 22 teams having competed in the Tour de France 2022; in order to emphasize the four sub-distributions, the best-fit curve to an empirical cubic function is not shown, but the resulting R^2 for a fit to the whole data is given for information.
Figure 4. Distributions of final place P_L and adjusted final place B_L measures of the 18 teams having competed in the Tour of Oman 2023; the best-fit curve to an empirical cubic function is made on the first 15 teams only.
Figure 5. Best-fit curve to an empirical cubic function of the team entropy derived from the final time T_L and adjusted final time A_L distributions of the 22 teams having competed in the Tour de France 2022.
Figure 6. Team entropy derived from the final time T_L and adjusted final time A_L distributions of the 18 teams having competed in the Tour of Oman 2023.
Figure 7. Best-fit curve to an empirical cubic function of the team entropy derived from the final place P_L and adjusted final place B_L distributions for the 22 teams having competed in the Tour de France 2022.
Figure 8. Best-fit curve to an empirical cubic function of the team entropy derived from the final place P_L and adjusted final place B_L distributions for the 18 teams having competed in the Tour of Oman 2023.
Figure 9. Best-fit curve to an empirical cubic function of the team HHi derived from the final time T_L and adjusted final time A_L distributions of the 22 teams having competed in the Tour de France 2022.
Figure 10. Best-fit curve to an empirical cubic function of the team HHi derived from the final time T_L and adjusted final time A_L distributions of the 18 teams having competed in the Tour of Oman 2023; the fit is on the first 15 teams' results.
Figure 11. Best-fit curve to an empirical cubic function of the team HHi derived from the final place P_L and adjusted final place B_L distributions for the 22 teams having competed in the Tour de France 2022.
Figure 12. Best-fit curve to an empirical cubic function of the team HHi derived from the final place P_L and adjusted final place B_L distributions for the 18 teams having competed in the Tour of Oman 2023; the fit is on the first 15 teams' results.
27 pages, 381 KiB  
Article
Uniform Treatment of Integral Majorization Inequalities with Applications to Hermite-Hadamard-Fejér-Type Inequalities and f-Divergences
by László Horváth
Entropy 2023, 25(6), 954; https://doi.org/10.3390/e25060954 - 19 Jun 2023
Cited by 4 | Viewed by 2033
Abstract
In this paper, we present a general framework that provides a comprehensive and uniform treatment of integral majorization inequalities for convex functions and finite signed measures. Along with new results, we present unified and simple proofs of classical statements. To apply our results, we deal with Hermite-Hadamard-Fejér-type inequalities and their refinements. We present a general method to refine both sides of Hermite-Hadamard-Fejér-type inequalities. The results of many papers on the refinement of the Hermite-Hadamard inequality, whose proofs are based on different ideas, can be treated in a uniform way by this method. Finally, we establish a necessary and sufficient condition for when a fundamental inequality of f-divergences can be refined by another f-divergence.
(This article belongs to the Special Issue Shannon Entropy: Mathematical View)
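The inequality being refined can be checked numerically for any convex function. The sketch below verifies the two-sided Hermite-Hadamard chain, f((a+b)/2) ≤ (1/(b-a))∫f ≤ (f(a)+f(b))/2, for the convex exponential on [0, 1]:

```python
import numpy as np

def hermite_hadamard_chain(f, a, b, n=100001):
    # Returns the three members of the Hermite-Hadamard chain for f:
    # midpoint value, integral mean (trapezoid rule), and endpoint
    # average. For a convex f, they must come out in increasing order.
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    integral = h * (0.5 * (y[0] + y[-1]) + y[1:-1].sum())
    return f((a + b) / 2), integral / (b - a), (f(a) + f(b)) / 2

lo, mid, hi = hermite_hadamard_chain(np.exp, 0.0, 1.0)
print(lo < mid < hi)  # -> True
```

For exp on [0, 1], the chain is e^{1/2} ≈ 1.649 ≤ e − 1 ≈ 1.718 ≤ (1 + e)/2 ≈ 1.859; the paper's refinements insert further terms between these bounds.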
16 pages, 1731 KiB  
Article
Improved Recurrence Plots Compression Distance by Learning Parameter for Video Compression Quality
by Tatsumasa Murai and Hisashi Koga
Entropy 2023, 25(6), 953; https://doi.org/10.3390/e25060953 - 19 Jun 2023
Cited by 1 | Viewed by 1531
Abstract
As the Internet-of-Things is deployed widely, a great deal of time-series data is generated every day. Thus, classifying time series automatically has become important. Compression-based pattern recognition has attracted attention because it can analyze various kinds of data universally with few model parameters. RPCD (Recurrence Plots Compression Distance) is a well-known compression-based time-series classification method. First, RPCD transforms time-series data into an image called a “Recurrence Plot (RP)”. Then, the distance between two time series is determined as the dissimilarity between their RPs. Here, the dissimilarity between two images is computed from the file size obtained when an MPEG-1 encoder compresses the video that serializes the two images in order. In this paper, by analyzing RPCD, we give the important insight that the quality parameter for the MPEG-1 encoding, which controls the resolution of compressed videos, influences the classification performance very much. We also show that the optimal parameter value depends strongly on the dataset to be classified: interestingly, the optimal value for one dataset can make RPCD fall behind a naive random classifier for another dataset. Supported by these insights, we propose an improved version of RPCD named qRPCD, which searches for the optimal parameter value by means of cross-validation. Experimentally, qRPCD outperforms the original RPCD by about 4% in terms of classification accuracy.
(This article belongs to the Special Issue Information Theory in Image Processing and Pattern Recognition)
Show Figures
Figure 1: Intra quantization matrix QM_intra.
Figure 2: Zig-zag scanning.
Figure 3: Inter-frame quantization matrix QM_inter.
Figure 4: Recurrence plots for a sine wave; left: binary image, right: grayscale image.
Figure 5: Cross recurrence plots between two time-series data x and y.
Figure 6: Classification accuracy for various q values.
Figure 7: Recurrence plots for different classes in OliveOil.
Figure 8: Accuracy on validation data for the Beef dataset.
Figure 9: Smoothed accuracy on validation data for the Beef dataset.
11 pages, 275 KiB  
Article
A Special Relativistic Exploitation of the Second Law of Thermodynamics and Its Non-Relativistic Limit
by Christina Papenfuss
Entropy 2023, 25(6), 952; https://doi.org/10.3390/e25060952 - 17 Jun 2023
Viewed by 1088
Abstract
A thermodynamic process is a solution of the balance equations fulfilling the second law of thermodynamics. This implies restrictions on the constitutive relations. The most general way to exploit these restrictions is the method introduced by Liu. This method is applied here, in contrast to most of the literature on relativistic thermodynamic constitutive theory, which goes back to a relativistic extension of the Thermodynamics of Irreversible Processes. In the present work, the balance equations and the entropy inequality are formulated in the special relativistic four-dimensional form for an observer with four-velocity parallel to the particle current. The restrictions on constitutive functions are exploited in the relativistic formulation. The domain of the constitutive functions, the state space, is chosen to include the particle number density, the internal energy density, the space derivatives of these quantities, and the space derivative of the material velocity for a chosen observer. The resulting restrictions on constitutive functions, as well as the resulting entropy production are investigated in the non-relativistic limit, and relativistic correction terms of the lowest order are derived. The restrictions on constitutive functions and the entropy production in the low energy limit are compared to the results of an exploitation of the non-relativistic balance equations and entropy inequality. In the next order of approximation our results are compared to the Thermodynamics of Irreversible Processes. Full article
(This article belongs to the Special Issue Thermodynamic Constitutive Theory and Its Application)
19 pages, 8211 KiB  
Article
Multi-Focus Image Fusion for Full-Field Optical Angiography
by Yuchan Jie, Xiaosong Li, Mingyi Wang and Haishu Tan
Entropy 2023, 25(6), 951; https://doi.org/10.3390/e25060951 - 16 Jun 2023
Cited by 5 | Viewed by 1580
Abstract
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable with optical lenses, existing FFOA imaging techniques can only acquire blood-flow information in the plane within the depth of field, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform and contrast spatial frequency is proposed. Firstly, an imaging system is constructed, and the FFOA images are acquired via the intensity-fluctuation modulation effect. Secondly, we decompose the source images into low-pass and bandpass images using the nonsubsampled contourlet transform. A sparse-representation-based rule is introduced to fuse the low-pass images to effectively retain the useful energy information, while a contrast spatial frequency rule, which considers the neighborhood correlation and gradient relationships of pixels, is proposed to fuse the bandpass images. Finally, the fully focused image is produced by reconstruction. The proposed method significantly expands the focal range of optical angiography and can be effectively extended to public multi-focus datasets. Experimental results confirm that the proposed method outperforms some state-of-the-art methods in both qualitative and quantitative evaluations. Full article
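The "spatial frequency" ingredient of the bandpass fusion rule builds on a classical image-sharpness measure. The sketch below shows only that classical measure, not the authors' contrast spatial frequency rule, and the test images are synthetic:

```python
import numpy as np

def spatial_frequency(img):
    """Classical spatial frequency of a grayscale image:
    SF = sqrt(RF^2 + CF^2), from row/column first differences."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

# A sharply focused patch scores higher than a smoothed copy of itself.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3
print(spatial_frequency(sharp) > spatial_frequency(blurred))  # True
```

Fusion rules of this family pick, per region, the source whose patch is sharper by such a measure.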
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)
Show Figures
Figure 1: Imaging system. Mouse ear samples were fixed on the OMP in the experiments. T_t (t = 1, 2, …, χ) represents the multiple multi-focus source images; C denotes the container, L the light source, and EZL the electric zoom lens.
Figure 2: Schematic diagram of the proposed multi-focus image fusion method.
Figure 3: Parts of four groups of source images.
Figure 4: Fusion results produced by different methods on group A: two randomly selected source images, the fused images obtained by MFF-GAN, U2Fusion, SwinFusion, NSCT, CSSA, CPFA, and the proposed method, and the corresponding difference images. The red box magnifies a local detail.
Figure 5: Fusion results produced by different methods on groups B and C: randomly selected source images, the fused images obtained by MFF-GAN, U2Fusion, SwinFusion, NSCT, CSSA, CPFA, and the proposed method, and the corresponding difference images. The green and blue boxes enlarge two local details.
Figure 6: Quantitative evaluations of different methods for four groups of source images.
Figure 7: Quantitative evaluations of the intermediate and final fused images obtained by different methods for four groups of source images.
Figure 8: Quantitative evaluations of 15 randomly selected intermediate fusion results.
15 pages, 320 KiB  
Article
Dynamics of Fractional Delayed Reaction-Diffusion Equations
by Linfang Liu and Juan J. Nieto
Entropy 2023, 25(6), 950; https://doi.org/10.3390/e25060950 - 16 Jun 2023
Cited by 1 | Viewed by 1327
Abstract
The long-term behavior of the weak solution of a fractional delayed reaction–diffusion equation with a generalized Caputo derivative is investigated. Using the classic Galerkin approximation method and the comparison principle, the existence and uniqueness of the solution are proved in the sense of a weak solution. In addition, the global attracting set of the considered system is obtained with the help of the Sobolev embedding theorem and the Halanay inequality. Full article
20 pages, 754 KiB  
Article
Hierarchical Wilson–Cowan Models and Connection Matrices
by W. A. Zúñiga-Galindo and B. A. Zambrano-Luna
Entropy 2023, 25(6), 949; https://doi.org/10.3390/e25060949 - 16 Jun 2023
Cited by 2 | Viewed by 1525
Abstract
This work aims to study the interplay between the Wilson–Cowan model and connection matrices. These matrices describe cortical neural wiring, while Wilson–Cowan equations provide a dynamical description of neural interaction. We formulate Wilson–Cowan equations on locally compact Abelian groups. We show that the Cauchy problem is well posed. We then select a type of group that allows us to incorporate the experimental information provided by the connection matrices. We argue that the classical Wilson–Cowan model is incompatible with the small-world property. A necessary condition to have this property is that the Wilson–Cowan equations be formulated on a compact group. We propose a p-adic version of the Wilson–Cowan model, a hierarchical version in which the neurons are organized into an infinite rooted tree. We present several numerical simulations showing that the p-adic version matches the predictions of the classical version in relevant experiments. The p-adic version allows the incorporation of the connection matrices into the Wilson–Cowan model. We present several numerical simulations using a neural network model that incorporates a p-adic approximation of the connection matrix of the cat cortex. Full article
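For orientation, the classical (space-clamped, non-spatial) Wilson–Cowan equations that the paper generalizes can be integrated with a simple Euler step. The coupling weights below are common textbook values, not taken from this paper, and the p-adic/hierarchical structure is not modeled here:

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Sigmoidal firing-rate function, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan_step(E, I, h_E, h_I, dt=0.05,
                      wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0, tau=1.0):
    """One explicit-Euler step of the space-clamped Wilson-Cowan equations."""
    dE = (-E + sigmoid(wEE * E - wEI * I + h_E)) / tau
    dI = (-I + sigmoid(wIE * E - wII * I + h_I)) / tau
    return E + dt * dE, I + dt * dI

# Excitatory/inhibitory activities stay bounded in [0, 1] under iteration.
E, I = 0.1, 0.05
for _ in range(200):
    E, I = wilson_cowan_step(E, I, h_E=1.5, h_I=0.0)
print(0.0 <= E <= 1.0 and 0.0 <= I <= 1.0)  # True
```

The paper replaces the spatial domain of the full integro-differential version with a compact (p-adic) group so that connection matrices can be incorporated.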
(This article belongs to the Special Issue New Trends in Theoretical and Mathematical Physics)
Show Figures
Figure 1: The rooted tree associated with the group Z_2/2^3 Z_2, whose elements have the form i = i_0 + i_1*2 + i_2*2^2 with i_0, i_1, i_2 in {0, 1}. The distance satisfies -log_2 |i - j|_2 = level of the first common ancestor of i and j.
Figure 2: Heat map of the function phi(x); see (18). Here phi(0) = phi(1) = phi(7) = 1 (white), phi(2) = -1 (black), and phi(x) = 0 (red) for x != 0, 1, 7, 2.
Figure 3: An approximation of E(x, t) with Q = 0 and delta = 5; the time axis goes from 0 to 100 with a step of 0.05. The response of the network to a brief localized stimulus (the pulse given in (19)) is also a pulse, consistent with the numerical results in [2] (Section 2.2.1, Figure 3).
Figure 4: An approximation of E(x, t) with Q = 0 and delta = 100; the time axis goes from 0 to 200 with a step of 0.05. The response to a maintained stimulus (see (19)) is a pulse train, consistent with the numerical results in [2] (Section 2.2.5, Figure 7).
Figure 5: An approximation of E(x, t) with Q = -30 and delta = 100; the time axis goes from 0 to 100 with a step of 0.05. The response to a maintained stimulus (see (19) and (20)) is a pulse train in space and time, consistent with the numerical results in [2] (Section 2.2.7, Figure 9).
Figure 6: An approximation of h~_E(x, t) and E(x, t), with h_I(x, t) = 0, p = 3, and l = 6; the kernels w_AB are as in Simulation 1, and h_E(x, t) is as in (21). The time axis goes from 0 to 60 with a step of 0.05. The first panel shows the stimuli; the second, the network response.
Figure 7: An approximation of h_E(x, t) and E(x, t), with h_I(x, t) = 0, p = 3, and l = 6; the kernels w_AB are as in Simulation 1, and h_E(x, t) is as in (22). The time axis goes from 0 to 60 with a step of 0.05. The first panel shows the stimuli; the second, the network response.
Figure 8: The left matrix is the connection matrix of the cat cortex. The right matrix is a discretization of the kernel w_EE used in Simulation 1.
Figure 9: Three p-adic approximations of the connection matrix of the cat cortex, with p = 2 and l = 6, using r = 0, r = 3, and r = 5.
Figure 10: With p = 2 and l = 6, the time axis goes from 0 to 150 with a step of 0.05. The left image uses r = 0; the right one, r = 3; and the central one, r = 5.
13 pages, 2290 KiB  
Article
A Novel Evidence Combination Method Based on Improved Pignistic Probability
by Xin Shi, Fei Liang, Pengjie Qin, Liang Yu and Gaojie He
Entropy 2023, 25(6), 948; https://doi.org/10.3390/e25060948 - 16 Jun 2023
Cited by 3 | Viewed by 1484
Abstract
Evidence theory is widely used to deal with the fusion of uncertain information, but the fusion of conflicting evidence remains an open question. To solve the problem of conflicting evidence fusion in single-target recognition, we propose a novel evidence combination method based on an improved pignistic probability function. Firstly, the improved pignistic probability function redistributes the probability of multi-subset propositions according to the weight of single-subset propositions in a basic probability assignment (BPA), which reduces the computational complexity and information loss of the conversion process. A combination of the Manhattan distance and the evidence angle is proposed to measure evidence certainty and to obtain the mutual support between pieces of evidence; entropy is then used to calculate the uncertainty of the evidence, and the weighted average method is used to correct and update the original evidence. Finally, the Dempster combination rule is used to fuse the updated evidence. On highly conflicting examples with single-subset and multi-subset propositions, compared to the Jousselme distance method, the combined Lance distance and reliability entropy method, and the combined Jousselme distance and uncertainty measure method, our approach achieved better convergence, and the average accuracy improved by 0.51% and 2.43%. Full article
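The Dempster combination rule used in the final fusion step can be sketched for two BPAs whose focal elements are represented as frozensets; the numeric masses below are illustrative, not from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: m(A) ∝ sum of m1(B)*m2(C) over B ∩ C = A,
    normalized by 1 - K, where K is the total conflicting mass."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
fused = dempster_combine({A: 0.8, B: 0.2}, {A: 0.6, B: 0.4})
print(round(fused[A], 3))  # 0.857
```

The high sensitivity of the normalization 1/(1 - K) to large conflict K is exactly why methods like the one above pre-correct the evidence before applying this rule.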
Show Figures
Figure 1: The flow graph of the proposed method.
Figure 2: Fusion results of multi-subset proposition conflicting examples.
Figure 3: Comparison of different methods on the fusion of several pieces of single-subset proposition evidence.
Figure 4: Fusion results of multi-subset proposition conflict.
Figure 5: Comparison of different methods on the fusion of several pieces of multi-subset proposition evidence.
26 pages, 4874 KiB  
Article
Increasing Extractable Work in Small Qubit Landscapes
by Unnati Akhouri, Sarah Shandera and Gaukhar Yesmurzayeva
Entropy 2023, 25(6), 947; https://doi.org/10.3390/e25060947 - 16 Jun 2023
Viewed by 1294
Abstract
An interesting class of physical systems, including those associated with life, demonstrates the ability to hold thermalization at bay and to perpetuate states of high free energy compared to a local environment. In this work we study quantum systems with no external sources or sinks for energy, heat, work, or entropy that allow high-free-energy subsystems to form and persist. We initialize systems of qubits in mixed, uncorrelated states and evolve them subject to a conservation law. We find that four qubits make up the minimal system for which these restricted dynamics and initial conditions allow an increase in extractable work for a subsystem. On landscapes of eight co-evolving qubits, interacting in randomly selected subsystems at each step, we demonstrate that restricted connectivity and an inhomogeneous distribution of initial temperatures both lead to landscapes with longer intervals of increasing extractable work for individual qubits. We demonstrate the role of the correlations that develop on the landscape in enabling a positive change in extractable work. Full article
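The extractable work (ergotropy) of a single qubit reduces to a closed form for population-diagonal states. This sketch assumes a Hamiltonian H = E|1><1| and a state diag(1-p, p); it is an illustrative special case, not the paper's landscape computation:

```python
def qubit_ergotropy(p_excited, energy=1.0):
    """Extractable work from a diagonal qubit state diag(1-p, p)
    with H = energy * |1><1|. The passive state puts the larger
    population on the ground level, so work is extractable only
    under population inversion (p > 1/2)."""
    p = p_excited
    passive_excited = min(p, 1.0 - p)  # excited-level population after reordering
    return energy * (p - passive_excited)

print(qubit_ergotropy(0.2))  # 0.0: thermal-like state, nothing extractable
print(qubit_ergotropy(0.8) > 0)  # True: inverted population stores work
```

On the landscapes studied here, no qubit starts inverted, so positive changes in extractable work must be enabled by the correlations that develop under the joint unitary evolution.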
(This article belongs to the Section Non-equilibrium Phenomena)
Show Figures

Figure 1

Figure 1
<p>Pictorial representation of the action of qubit machines, where individual square blocks represent thermal qubits with density matrix <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>i</mi> </msub> </semantics></math> as given in Equation (<a href="#FD1-entropy-25-00947" class="html-disp-formula">1</a>). Each collection of boxes highlighted in blue contains both a block with a black border, representing the qubit that evolves under the stochastic process (the “actor” that activates the machine), as well as the qubits that contribute to the definition of the process (the machine). The map for evolution is denoted by <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Φ</mi> <mo>(</mo> <mo>.</mo> <mo>|</mo> <msub> <mi>ρ</mi> <mi>ref</mi> </msub> <mo>,</mo> <msub> <mi>ρ</mi> <mrow> <mi>en</mi> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>ρ</mi> <mrow> <mi>en</mi> <mn>2</mn> </mrow> </msub> <mo>,</mo> <mo>…</mo> <mo>;</mo> <mi>θ</mi> <mo>)</mo> </mrow> </semantics></math> defined by the initial density matrices for the qubits in the machine and the unitary operation that couples the qubits, labeled by <math display="inline"><semantics> <mi>θ</mi> </semantics></math>. The top half of the diagram shows the evolution of the reference temperature qubit <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>ρ</mi> <mi>ref</mi> </msub> <mo>)</mo> </mrow> </semantics></math>, and the bottom shows the evolution of the qubit system of interest, the “actor” qubit <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>ρ</mi> <mi>sys</mi> </msub> <mo>)</mo> </mrow> </semantics></math>. The other diverse qubits, represented by <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>en</mi> </msub> </semantics></math> in the machine, <span class="html-italic">enable</span> non-trivial transformations. Three qubits are required for a non-trivial evolution of the reference temperature. 
Four are required for a subsystem that has an increase in extractable work after evolution.</p>
Full article ">Figure 2
<p>Symmetric connectivities considered on the landscape of eight qubits. The nodes represent the qubits and the links connect the qubits to the others they can directly interact with under unitary evolution.</p>
Full article ">Figure 3
<p>Connectivity (<b>a</b>) and circuit diagram (<b>b</b>) (for the first six steps of the evolution) for the messenger-qubit system. The two subsystems of size four that initially interact together are {<math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>4</mn> </msub> </mrow> </semantics></math>} and {<math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mn>6</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>7</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>8</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>5</mn> </msub> </mrow> </semantics></math>}, but <math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mn>4</mn> </msub> <mspace width="4.pt"/> <mi>and</mi> <mspace width="4.pt"/> <msub> <mi>Q</mi> <mn>5</mn> </msub> </mrow> </semantics></math> are then exchanged so that at the next step, the subsystems that interact together are {<math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>5</mn> </msub> </mrow> </semantics></math>} and {<math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mn>6</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>7</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>8</mn> </msub> <mo>,</mo> <msub> <mi>Q</mi> <mn>4</mn> </msub> </mrow> </semantics></math>}. Compared to the connectivities shown in <a href="#entropy-25-00947-f002" class="html-fig">Figure 2</a>, the messenger-qubit system has asymmetric structure since the messenger qubits can participate in interactions with six other qubits of the landscape, while the members of subsystems only interact with four qubits on the landscape.</p>
Full article ">Figure 4
<p>Quantum circuit for the first six steps of an example eight-qubit landscape with connectivity six (<b>a</b>) and five (<b>b</b>). The colors denote the four-qubit subsystem grouping that is randomly chosen to undergo a unitary evolution. We use a periodic boundary so that the first qubit, <math display="inline"><semantics> <msub> <mi mathvariant="script">Q</mi> <mn>1</mn> </msub> </semantics></math>, is connected to last qubit, <math display="inline"><semantics> <msub> <mi mathvariant="script">Q</mi> <mn>8</mn> </msub> </semantics></math>. The degree of connectivity manifests in the number of possible groupings. Panel (<b>a</b>) shows the four possible groupings allowed for connectivity six, and (<b>b</b>) shows that more restricted connectivity results in fewer possible groupings. In the paper we also consider the connectivity seven, full connectivity for an eight-qubit landscape, which allows grouping any four qubits into a subsystem.</p>
Full article ">Figure 5
<p>Evolution of the temperature of the 8 qubits under 500 steps in energy subspace <math display="inline"><semantics> <mrow> <mi>E</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> for an angle <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, under different levels of connectivity. The hot qubit starts with a population fraction of <math display="inline"><semantics> <mrow> <msub> <mi>p</mi> <mi>h</mi> </msub> <mo>=</mo> <mn>0.4</mn> </mrow> </semantics></math>, and the colder qubits start at a population fraction of <math display="inline"><semantics> <mrow> <msub> <mi>p</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math>. The diffusion for a connectivity of seven follows a more gradual trend towards looking homogeneous, whereas for the landscape of connectivity five and six, and the messenger system, pockets of hot and cold regions develop on the landscape.</p>
Full article ">Figure 6
<p>The change in extractable work from each qubit across 500 steps corresponding to the landscapes shown in <a href="#entropy-25-00947-f005" class="html-fig">Figure 5</a>. The change in work is computed between two consecutive steps and thus a positive change corresponds to change with respect to the previous step. Fully connected landscape of connectivity seven shows dilution of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> pockets, while the restricted connectivity show persistent instances of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>. Furthermore, the restriction in connectivity results in slow diffusion of energy from the hot qubit onto the landscape resulting in higher magnitude for extractable work early on on the landscape (as is depicted by the colors of the swatch).</p>
Full article ">Figure 7
<p>(<b>a</b>) Box plots showing the distribution in the percent of steps for which <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> for the qubits starting cold and hot, under 500 steps for the different levels of connectivity, for 100 trials. (<b>b</b>) Total <math display="inline"><semantics> <mrow> <mfrac> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> </mrow> <mrow> <mo>〈</mo> <mi>T</mi> <mo>〉</mo> </mrow> </mfrac> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math> as a function of the fraction of hot qubits on the landscape, where <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>T</mi> <mo>〉</mo> </mrow> </semantics></math> is the average initial temperature on the whole landscape. The fit for the connectivity-six curve is proportional to <math display="inline"><semantics> <msup> <mi>e</mi> <mrow> <mo>−</mo> <mn>0.25</mn> <mi>x</mi> </mrow> </msup> </semantics></math>, where <span class="html-italic">x</span> is the initial number of hot qubits on the landscape.</p>
Full article ">Figure 8
<p>The distribution, over 50 trials, of the percent of steps for which <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> for a qubit starting off as cold or hot. All landscapes have connectivity seven but undergo evolution via different types of unitary rotations: simultaneous swap of two pairs of qubits, two-qubit conditional swap, and two-qubit unconditional swap. A random unitary from each class was chosen at the initialization step and then applied to the corresponding landscape in random subsystems for 500 steps.</p>
Full article ">Figure 9
<p>(<b>a</b>) Log-linear scale histogram (normalized to relative frequency) of different length intervals for which the qubit consecutively exhibits <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>. The histogram shows data for 100 trials of 500 steps each for three types of connectivity: fully connected (seven), connectivity six, five and a messenger-qubit system where two subsystem of size four make up the landscape and under each iteration a messenger qubit is exchanged between the subsystems. The landscape is initialized with a hot qubit and seven cold qubits. The horizontal axis shows the number of steps in the interval over which <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> persists and the vertical axis shows the percent of total steps that occur within intervals of that length. (<b>b</b>) Log–log scale plot showing development of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> of different lengths under 500 steps for 100 trials for three types of connectivity. 
Up until <math display="inline"><semantics> <mrow> <mn>10</mn> <mo>%</mo> </mrow> </semantics></math> of the evolution, the fully connected landscape with connectivity seven shows more instances of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math> than the other two landscapes, but in the later part of the evolution, the sparsely connected landscapes (connectivity six and the messenger-qubit system) dominate, showing more long-lived instances of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Time evolution of the extractable work divided by mean initial temperature of the landscape, <math display="inline"><semantics> <mfrac> <msup> <mi>W</mi> <mi>ex</mi> </msup> <msub> <mrow> <mo>〈</mo> <mi>T</mi> <mo>〉</mo> </mrow> <mrow> <mi>i</mi> <mi>n</mi> </mrow> </msub> </mfrac> </semantics></math>, for a cold (<b>a</b>) and hot (<b>b</b>) one-qubit subsystem on the landscape. The dark gray line shows the extractable work from a qubit on a landscape that undergoes subsequent collisions with qubits at the initial mean temperature of the landscape. The purple, orange, yellow and blue lines show the time evolution of normalized extractable work of one-qubit subsystems on closed landscapes where the allowed interactions occur in subsystems chosen under several different connectivity constraints as indicated in the legend. In the collisional model there is a monotonic decrease in extractable work, while in all three closed landscapes revivals of work extraction can be observed.</p>
Full article ">Figure 11
<p>Here, we show the evolution of the change in the ambient temperature, Equation (<a href="#FD27-entropy-25-00947" class="html-disp-formula">27</a>), the change in relative entropy and change in extractable work for an initially cold qubit that is evolving on a landscape with and without correlations kept. Panel (<b>a</b>) shows values for the landscape where all the correlations are retained. Panel (<b>b</b>) shows the evolution when any correlations formed are erased at each step. We see that on a landscape where correlations are erased, all three parameters for <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math> show an overall diminishing trend whereas for the closed landscape with correlations, the generation of <math display="inline"><semantics> <mrow> <mi mathvariant="normal">Δ</mi> <msup> <mi>W</mi> <mi>ex</mi> </msup> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math> persists.</p>
Full article ">Figure A1
<p>The left panel shows contours of the maximum entanglement that can develop under energy-preserving unitaries acting on two-qubit systems initialized in a product of Gibbs states (Equation (<a href="#FD1-entropy-25-00947" class="html-disp-formula">1</a>)). The initial excited state populations are labelled <math display="inline"><semantics> <msub> <mi>p</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>p</mi> <mn>2</mn> </msub> </semantics></math>. The contour labels indicate the maximum value of concurrence possible for the initial state, which develops after rotating by an angle <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mfrac> <mi>π</mi> <mn>4</mn> </mfrac> </mrow> </semantics></math>. On the right, we show the space of accessible linear entropies and concurrence for all initial states of the closed two-qubit system. The shaded region in blue shows the range of concurrence for states in our system, given a fixed linear entropy. The curve in green is the “frontier” line [<a href="#B61-entropy-25-00947" class="html-bibr">61</a>] characterizing the states that, for a given mixedness or linear entropy, are maximally entangled. The shaded region in yellow shows an example three-qubit calculation that demonstrates that the range of concurrence for a given linear entropy can be larger when the two qubits are embedded in a larger system. For more details, see <a href="#secAdot2-entropy-25-00947" class="html-sec">Section Appendix A.2</a>.</p>
Full article ">
20 pages, 5199 KiB  
Article
Combined Gaussian Mixture Model and Pathfinder Algorithm for Data Clustering
by Huajuan Huang, Zepeng Liao, Xiuxi Wei and Yongquan Zhou
Entropy 2023, 25(6), 946; https://doi.org/10.3390/e25060946 - 16 Jun 2023
Cited by 1 | Viewed by 2536
Abstract
Data clustering is one of the most influential branches of machine learning and data analysis, and Gaussian Mixture Models (GMMs) are frequently adopted in data clustering due to their ease of implementation. However, there are certain limitations to this approach that need to [...] Read more.
Data clustering is one of the most influential branches of machine learning and data analysis, and Gaussian Mixture Models (GMMs) are frequently adopted in data clustering due to their ease of implementation. However, this approach has certain limitations. GMMs require the number of clusters to be set manually, and they may fail to extract the information within the dataset during initialization. To address these issues, a new clustering algorithm called PFA-GMM is proposed. PFA-GMM is based on GMMs and the Pathfinder algorithm (PFA), and it aims to overcome the shortcomings of GMMs. The algorithm automatically determines the optimal number of clusters based on the dataset. Subsequently, PFA-GMM treats the clustering problem as a global optimization problem so as to avoid getting trapped in local convergence during initialization. Finally, we conducted a comparative study of our proposed clustering algorithm against other well-known clustering algorithms using both synthetic and real-world datasets. The results of our experiments indicate that PFA-GMM outperformed the competing approaches. Full article
(This article belongs to the Section Signal and Data Analysis)
Show Figures

Figure 1
<p>Clustering results on synthetic dataset-Aggregation.</p>
Full article ">Figure 2
<p>Clustering results for the eight methods on synthetic dataset-Compound.</p>
Full article ">Figure 3
<p>Clustering results for the eight methods on synthetic dataset-Pathbased.</p>
Full article ">Figure 4
<p>Clustering results for the eight methods on synthetic dataset-Four Lines.</p>
Full article ">Figure 5
<p>Clustering results for the eight methods on synthetic dataset-Jain.</p>
Full article ">Figure 6
<p>Clustering results for the eight methods on synthetic dataset-S2.</p>
Full article ">
12 pages, 1043 KiB  
Article
Influence of Removing Leaf Node Neighbors on Network Controllability
by Chengpei Wu, Siyi Xu, Zhuoran Yu and Junli Li
Entropy 2023, 25(6), 945; https://doi.org/10.3390/e25060945 - 15 Jun 2023
Viewed by 1727
Abstract
From the perspective of network attackers, finding attack sequences that can cause significant damage to network controllability is an important task, which also helps defenders improve robustness during network constructions. Therefore, developing effective attack strategies is a key aspect of research on network [...] Read more.
From the perspective of network attackers, finding attack sequences that can cause significant damage to network controllability is an important task, which also helps defenders improve robustness during network constructions. Therefore, developing effective attack strategies is a key aspect of research on network controllability and its robustness. In this paper, we propose a Leaf Node Neighbor-based Attack (LNNA) strategy that can effectively disrupt the controllability of undirected networks. The LNNA strategy targets the neighbors of leaf nodes, and when there are no leaf nodes in the network, the strategy attacks the neighbors of nodes with a higher degree to produce the leaf nodes. Results from simulations on synthetic and real-world networks demonstrate the effectiveness of the proposed method. In particular, our findings suggest that removing neighbors of low-degree nodes (i.e., nodes with degree 1 or 2) can significantly reduce the controllability robustness of networks. Thus, protecting such low-degree nodes and their neighbors during network construction can lead to networks with improved controllability robustness. Full article
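Read literally, the LNNA procedure summarized above lends itself to a short sketch: attack a neighbor of a leaf (degree-1) node when one exists, otherwise attack a low-degree neighbor of a high-degree node to create new leaves. The function name, adjacency representation, and tie-breaking rules below are illustrative assumptions, not the authors' implementation:

```python
def lnna_attack_order(adj):
    """Return a node-removal sequence following a leaf-neighbor heuristic.

    adj: dict mapping node -> set of neighbor nodes (undirected graph).
    Tie-breaking (sorted/min choices) is an assumption for reproducibility.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    order = []

    def remove(u):
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
        order.append(u)

    while len(adj) > 1:
        leaves = sorted(u for u, vs in adj.items() if len(vs) == 1)
        if leaves:
            target = min(adj[leaves[0]])          # a neighbor of a leaf node
        else:
            # no leaves: pick a highest-degree node and attack its
            # lowest-degree neighbor, which tends to produce new leaves
            hub = max(adj, key=lambda u: (len(adj[u]), u))
            target = min(adj[hub]) if adj[hub] else hub  # isolated fallback
        remove(target)
    return order
```

On a path graph 0-1-2-3, the first node removed is 1 (the neighbor of leaf 0), which immediately isolates node 0 and damages controllability.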
(This article belongs to the Topic Complex Systems and Network Science)
Show Figures

Figure 1
<p>Controllability robustness of networks with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1000</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Controllability robustness of networks with <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>k</mi> <mo>〉</mo> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Controllability robustness of real-world networks.</p>
Full article ">Figure 4
<p>Visualization of targeted nodes in the attack process of LNNA.</p>
Full article ">Figure 5
<p>The proportion of attacked node types in the three networks under LNNA, <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1000</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>k</mi> <mo>〉</mo> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">
17 pages, 338 KiB  
Review
Irreversible Geometrothermodynamics of Open Systems in Modified Gravity
by Miguel A. S. Pinto, Tiberiu Harko and Francisco S. N. Lobo
Entropy 2023, 25(6), 944; https://doi.org/10.3390/e25060944 - 15 Jun 2023
Cited by 3 | Viewed by 1116
Abstract
In this work, we explore the formalism of the irreversible thermodynamics of open systems and the possibility of gravitationally generated particle production in modified gravity. More specifically, we consider the scalar–tensor representation of f(R,T) gravity, in which the [...] Read more.
In this work, we explore the formalism of the irreversible thermodynamics of open systems and the possibility of gravitationally generated particle production in modified gravity. More specifically, we consider the scalar–tensor representation of f(R,T) gravity, in which the matter energy–momentum tensor is not conserved due to a nonminimal curvature–matter coupling. In the context of the irreversible thermodynamics of open systems, this non-conservation of the energy–momentum tensor can be interpreted as an irreversible flow of energy from the gravitational sector to the matter sector, which, in general, could result in particle creation. We obtain and discuss the expressions for the particle creation rate, the creation pressure, and the entropy and temperature evolutions. Applied together with the modified field equations of scalar–tensor f(R,T) gravity, the thermodynamics of open systems lead to a generalization of the ΛCDM cosmological paradigm, in which the particle creation rate and pressure are considered effectively as components of the cosmological fluid energy–momentum tensor. Thus, generally, modified theories of gravity in which these two quantities do not vanish provide a macroscopic phenomenological description of particle production in the cosmological fluid filling the Universe and also lead to the possibility of cosmological models that start from empty conditions and gradually build up matter and entropy. Full article
(This article belongs to the Special Issue Geometrothermodynamics and Its Applications)
13 pages, 6404 KiB  
Article
Software-Defined Networking Orchestration for Interoperable Key Management of Quantum Key Distribution Networks
by Dong-Hi Sim, Jongyoon Shin and Min Hyung Kim
Entropy 2023, 25(6), 943; https://doi.org/10.3390/e25060943 - 15 Jun 2023
Cited by 3 | Viewed by 2235
Abstract
This paper demonstrates the use of software-defined networking (SDN) orchestration to integrate regionally separated networks in which different network parts use incompatible key management systems (KMSs) managed by different SDN controllers to ensure end-to-end QKD service provisioning to deliver the QKD keys between [...] Read more.
This paper demonstrates the use of software-defined networking (SDN) orchestration to integrate regionally separated networks in which different network parts use incompatible key management systems (KMSs) managed by different SDN controllers, ensuring end-to-end QKD service provisioning and the delivery of QKD keys between geographically separate QKD networks. The study focuses on scenarios in which different parts of the network are managed separately by different SDN controllers, requiring an SDN orchestrator to coordinate and manage these controllers. In practical network deployments, operators often utilize multiple vendors for their network equipment. This practice also enables the expansion of the QKD network’s coverage by interconnecting various QKD networks equipped with devices from different vendors. However, as coordinating different parts of the QKD network is a complex task, this paper proposes the implementation of an SDN orchestrator which acts as a central entity to manage multiple SDN controllers, ensuring end-to-end QKD service provisioning to address this challenge. For instance, when there are multiple border nodes to interconnect different networks, the SDN orchestrator calculates the path in advance for the end-to-end delivery of keys between initiating and target applications belonging to different networks. This path selection requires the SDN orchestrator to gather information from each SDN controller managing the respective parts of the QKD network. This work shows the practical implementation of SDN orchestration for interoperable KMS in commercial QKD networks in South Korea. By employing an SDN orchestrator, it becomes possible to coordinate multiple SDN controllers and ensure the efficient and secure delivery of QKD keys between different QKD networks with varying vendor equipment. Full article
Show Figures

Figure 1
<p>Network topology of the testbed.</p>
Full article ">Figure 2
<p>QKD node configuration and QKD/KMS/encryptor links.</p>
Full article ">Figure 3
<p>QKD node in SK Telecom in Seoul.</p>
Full article ">Figure 4
<p>QBER (left side) and key rate (right side) of each link in the testbed. (<b>a</b>) Link between Seoul and Pangyo. (<b>b</b>) Link between Seoul and Suwon. (<b>c</b>) Link between Suwon and Pangyo.</p>
Full article ">Figure 5
<p>A use case for the interoperable key management with a border node, i.e., Site 2, between two different QKD networks with each different vendor. Examples of Site 2 are border nodes in Pangyo and Daejeon in <a href="#entropy-25-00943-f001" class="html-fig">Figure 1</a>.</p>
Full article ">Figure 6
<p>An example of the creation of the service_link with the multi-segment for the use case of delivering the key between two QKD networks via the border node is depicted.</p>
Full article ">Figure 7
<p>An example of the YANG data model within one QKD network for an interoperable KMS.</p>
Full article ">Figure 8
<p>Sequence diagram of application registrations for interworking with RPC for the connectivity services.</p>
Full article ">
21 pages, 2415 KiB  
Article
Statistical Analysis of Plasma Dynamics in Gyrokinetic Simulations of Stellarator Turbulence
by Aristeides D. Papadopoulos, Johan Anderson, Eun-jin Kim, Michail Mavridis and Heinz Isliker
Entropy 2023, 25(6), 942; https://doi.org/10.3390/e25060942 - 15 Jun 2023
Cited by 1 | Viewed by 1436
Abstract
A geometrical method for assessing stochastic processes in plasma turbulence is investigated in this study. The thermodynamic length methodology allows using a Riemannian metric on the phase space; thus, distances between thermodynamic states can be computed. It constitutes a geometric methodology to understand [...] Read more.
A geometrical method for assessing stochastic processes in plasma turbulence is investigated in this study. The thermodynamic length methodology allows the use of a Riemannian metric on the phase space; thus, distances between thermodynamic states can be computed. It constitutes a geometric methodology to understand stochastic processes involved in, e.g., order–disorder transitions, where a sudden increase in distance is expected. We consider gyrokinetic simulations of ion-temperature-gradient (ITG)-mode-driven turbulence in the core region of the stellarator W7-X with realistic quasi-isodynamic topologies. In gyrokinetic plasma turbulence simulations, avalanches, e.g., of heat and particles, are often found, and in this work, a novel method for their detection is investigated. This new method combines the singular spectrum analysis algorithm with a hierarchical clustering method such that the time series is decomposed into two parts: useful physical information and noise. The informative component of the time series is used for the calculation of the Hurst exponent, the information length, and the dynamic time. Based on these measures, the physical properties of the time series are revealed. Full article
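The singular spectrum analysis (SSA) step used by this method can be sketched in a few lines of NumPy: embed the series in a Hankel trajectory matrix, take its SVD, and Hankelize each rank-one term back into an elementary component. The function name and window length are illustrative, and the paper's hierarchical-clustering grouping stage (which separates physical components from noise) is not shown:

```python
import numpy as np

def ssa_components(y, L):
    """Elementary SSA components of series y with window length L.

    Summing all components reconstructs y exactly (SVD + linear
    diagonal averaging), which is a useful sanity check.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    K = N - L + 1
    # trajectory (Hankel) matrix, shape L x K: X[i, j] = y[i + j]
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # diagonal averaging: mean over each anti-diagonal gives a
        # length-N series for this rank-one term
        comp = np.array([Xk[::-1].diagonal(t - (L - 1)).mean()
                         for t in range(N)])
        comps.append(comp)
    return comps
```

Applied to the caption's validation signal y = x + sin(2πx) + noise, the leading components capture the trend and oscillation while the trailing ones carry mostly noise.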
(This article belongs to the Special Issue Energy Transfer and Dissipation in Plasma Turbulence)
Show Figures

Figure 1
<p>Variation of the value of the normalized magnetic field <span class="html-italic">B</span> with respect to <span class="html-italic">z</span> coordinate as defined for <span class="html-small-caps">GENE</span> for both flux tubes.</p>
Full article ">Figure 2
<p>Ion heat flux <math display="inline"><semantics> <msub> <mi>Q</mi> <mi>i</mi> </msub> </semantics></math> in gyro-Bohm units <math display="inline"><semantics> <msub> <mi>Q</mi> <mrow> <mi>g</mi> <mi>B</mi> </mrow> </msub> </semantics></math> for the cases without density gradients.</p>
Full article ">Figure 3
<p>Validation of the SSA method by applying it on the time series <span class="html-italic">y</span>, where <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mi>x</mi> <mo>+</mo> <mo form="prefix">sin</mo> <mfenced separators="" open="(" close=")"> <mn>2</mn> <mi>π</mi> <mi>x</mi> </mfenced> <mo>+</mo> <mi>n</mi> </mrow> </semantics></math> and <span class="html-italic">n</span> is white Gaussian noise. Obviously, <span class="html-italic">y</span> contains a noise part, a trend part, and an oscillatory part. All three components are correctly identified by the SSA method.</p>
Full article ">Figure 4
<p>Information length (<span class="html-italic">L</span>) as a function of time samples, where running windows of samples have been used with different lengths <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> </mrow> </semantics></math> (<b>a</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>21</mn> </mrow> </semantics></math>, (<b>b</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>101</mn> </mrow> </semantics></math>, and (<b>c</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4 Cont.
<p>Information length (<span class="html-italic">L</span>) as a function of time samples, where running windows of samples have been used with different lengths <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> </mrow> </semantics></math> (<b>a</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>21</mn> </mrow> </semantics></math>, (<b>b</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>101</mn> </mrow> </semantics></math>, and (<b>c</b>) for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p><math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mi>τ</mi> </mrow> </semantics></math> as a function of time samples for various values of the Hurst exponent <span class="html-italic">H</span> of <a href="#entropy-25-00942-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 6
<p>Case 1: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculations for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">Figure 7
<p>Case 2: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculations for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">Figure 8
<p>Case 3: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculation for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">Figure 9
<p>Case 4: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculations for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">Figure 10
<p>Case 5: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculations for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">Figure 11
<p>Case 6: (<b>a</b>) Hierarchical clustering results and (<b>b</b>) dynamic time calculations for <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>L</mi> <mo>=</mo> <mn>201</mn> </mrow> </semantics></math>. Here, nc is the maximal number of clusters, as defined in <a href="#sec3-entropy-25-00942" class="html-sec">Section 3</a>.</p>
Full article ">
23 pages, 5818 KiB  
Article
The Structure Entropy-Based Node Importance Ranking Method for Graph Data
by Shihu Liu and Haiyan Gao
Entropy 2023, 25(6), 941; https://doi.org/10.3390/e25060941 - 15 Jun 2023
Cited by 2 | Viewed by 1714
Abstract
Due to its wide application across many disciplines, how to make an efficient ranking for nodes in graph data has become an urgent topic. It is well-known that most classical methods only consider the local structure information of nodes, but ignore the global [...] Read more.
Due to its wide application across many disciplines, how to efficiently rank nodes in graph data has become an urgent topic. It is well known that most classical methods only consider the local structure information of nodes but ignore the global structure information of graph data. In order to further explore the influence of structure information on node importance, this paper designs a structure entropy-based node importance ranking method. Firstly, the target node and its associated edges are removed from the initial graph data. Next, the structure entropy of the graph data is computed by considering the local and global structure information at the same time, so that all nodes can be ranked. The effectiveness of the proposed method was tested by comparing it with five benchmark methods. The experimental results show that the structure entropy-based node importance ranking method performs well on eight real-world datasets. Full article
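The remove-and-recompute ranking scheme described in the abstract can be illustrated with a toy stand-in for the entropy term. The paper's structure entropy combines local and global structure information; the degree-distribution entropy below is only a hypothetical proxy, and the function names are assumptions:

```python
import math

def degree_entropy(adj):
    """Shannon entropy of the normalized degree distribution.

    A simple stand-in for the paper's structure entropy, which is
    not specified in this listing.
    """
    degs = [len(vs) for vs in adj.values()]
    total = sum(degs)
    if total == 0:
        return 0.0
    return -sum((d / total) * math.log2(d / total) for d in degs if d > 0)

def rank_by_entropy_change(adj):
    """Rank nodes by how much their removal changes the graph's entropy."""
    base = degree_entropy(adj)

    def score(u):
        # remove the target node and its associated edges, then recompute
        sub = {v: vs - {u} for v, vs in adj.items() if v != u}
        return abs(base - degree_entropy(sub))

    return sorted(adj, key=score, reverse=True)
```

On a star graph, removing the hub changes the degree distribution far more than removing a leaf, so the hub ranks first, matching the intuition that structurally central nodes are more important.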
Show Figures

Figure 1
<p>The connected graph with 12 nodes and 14 edges.</p>
Full article ">Figure 2
<p>The graph data after removing node <math display="inline"><semantics> <msub> <mi>v</mi> <mn>5</mn> </msub> </semantics></math>.</p>
Full article ">Figure 3
<p>The graph data after removing node <math display="inline"><semantics> <msub> <mi>v</mi> <mn>3</mn> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>The curves of <span class="html-italic">CCDF</span> on (<b>a</b>) <span class="html-italic">CONT</span> and (<b>b</b>) <span class="html-italic">LESM</span> datasets.</p>
Full article ">Figure 5
<p>The curves of <span class="html-italic">CCDF</span> on (<b>a</b>) <span class="html-italic">POLB</span>; (<b>b</b>) <span class="html-italic">ADJN</span>; (<b>c</b>) <span class="html-italic">FOOT</span>; and (<b>d</b>) <span class="html-italic">NETS</span> datasets.</p>
Full article ">Figure 6
<p>The curves of <span class="html-italic">CCDF</span> on (<b>a</b>) <span class="html-italic">EMAI</span> and (<b>b</b>) <span class="html-italic">HAMS</span> datasets.</p>
Full article ">Figure 7
<p>The curves of <math display="inline"><semantics> <mi>ξ</mi> </semantics></math> and <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on (<b>a</b>,<b>b</b>) <span class="html-italic">CONT</span> and (<b>c</b>,<b>d</b>) <span class="html-italic">LESM</span> datasets.</p>
Full article ">Figure 8
<p>The curves of <math display="inline"><semantics> <mi>ξ</mi> </semantics></math> and <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on (<b>a</b>,<b>b</b>) <span class="html-italic">POLB</span> and (<b>c</b>,<b>d</b>) <span class="html-italic">ADJN</span> datasets.</p>
Full article ">Figure 9
<p>The curves of <math display="inline"><semantics> <mi>ξ</mi> </semantics></math> and <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on (<b>a</b>,<b>b</b>) <span class="html-italic">FOOT</span> and (<b>c</b>,<b>d</b>) <span class="html-italic">NETS</span> datasets.</p>
Full article ">Figure 10
<p>The curves of <math display="inline"><semantics> <mi>ξ</mi> </semantics></math> and <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on (<b>a</b>,<b>b</b>) <span class="html-italic">EMAI</span> and (<b>c</b>,<b>d</b>) <span class="html-italic">HAMS</span> datasets.</p>
Full article ">Figure 11
<p>The propagation ability of seeds on (<b>a</b>) <span class="html-italic">CONT</span> and (<b>b</b>) <span class="html-italic">LESM</span> datasets.</p>
Full article ">Figure 12
<p>The propagation ability of seeds on (<b>a</b>) <span class="html-italic">POLB</span>; (<b>b</b>) <span class="html-italic">ADJN</span>; (<b>c</b>) <span class="html-italic">FOOT</span>; and (<b>d</b>) <span class="html-italic">NETS</span> datasets.</p>
Full article ">Figure 13
<p>The propagation ability of seeds on (<b>a</b>) <span class="html-italic">EMAI</span> and (<b>b</b>) <span class="html-italic">HAMS</span> datasets.</p>
Full article ">
13 pages, 327 KiB  
Article
Exploring a New Application of Construct Specification Equations (CSEs) and Entropy: A Pilot Study with Balance Measurements
by Jeanette Melin, Helena Fridberg, Eva Ekvall Hansson, Daniel Smedberg and Leslie Pendrill
Entropy 2023, 25(6), 940; https://doi.org/10.3390/e25060940 - 15 Jun 2023
Cited by 2 | Viewed by 1283
Abstract
Both construct specification equations (CSEs) and entropy can be used to provide a specific, causal, and rigorously mathematical conceptualization of item attributes in order to provide fit-for-purpose measurements of person abilities. This has been previously demonstrated for memory measurements. It can also be [...] Read more.
Both construct specification equations (CSEs) and entropy can be used to provide a specific, causal, and rigorously mathematical conceptualization of item attributes in order to provide fit-for-purpose measurements of person abilities. This has been previously demonstrated for memory measurements. It can also be reasonably expected to be applicable to other kinds of measures of human abilities and task difficulty in health care, but further exploration is needed about how to incorporate qualitative explanatory variables in the CSE formulation. In this paper we report two case studies exploring the possibilities of advancing CSE and entropy to include human functional balance measurements. In case study I, physiotherapists have formulated a CSE for balance task difficulty by principal component regression of empirical balance task difficulty values from Berg’s Balance Scale transformed using the Rasch model. In case study II, four balance tasks of increasing difficulty due to diminishing bases of support and vision were briefly investigated in relation to entropy as a measure of the amount of information and order as well as physical thermodynamics. The pilot study has explored both methodological and conceptual possibilities and concerns to be considered in further work. The results should not be considered as fully comprehensive or absolute, but rather open up for further discussion and investigations to advance measurements of person balance ability in clinical practice, research, and trials. Full article
(This article belongs to the Special Issue Applications of Entropy in Health Care)
9 pages, 285 KiB  
Article
Correspondence between the Energy Equipartition Theorem in Classical Mechanics and Its Phase-Space Formulation in Quantum Mechanics
by Esteban Marulanda, Alejandro Restrepo and Johans Restrepo
Entropy 2023, 25(6), 939; https://doi.org/10.3390/e25060939 - 15 Jun 2023
Viewed by 1384
Abstract
In classical physics, a well-known theorem establishes that the energy per degree of freedom is the same. However, in quantum mechanics, due to the non-commutativity of some pairs of observables and the possibility of having non-Markovian dynamics, [...] Read more.
In classical physics, a well-known theorem establishes that the energy per degree of freedom is the same. However, in quantum mechanics, due to the non-commutativity of some pairs of observables and the possibility of non-Markovian dynamics, the energy is not equally distributed. We propose a correspondence between what is known as the classical energy equipartition theorem and its counterpart in the phase-space formulation of quantum mechanics based on the Wigner representation. Further, we show that in the high-temperature regime, the classical result is recovered. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
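The classical theorem the abstract refers to is the generalized equipartition theorem; a standard statement (with canonical-ensemble averages, phase-space coordinates x_m, Hamiltonian H, and Boltzmann constant k_B) is:

```latex
% Generalized classical equipartition theorem:
\left\langle x_m \frac{\partial H}{\partial x_n} \right\rangle = \delta_{mn}\, k_B T
% For a quadratic degree of freedom, e.g. a kinetic term p^2/2m, this yields
\left\langle \frac{p^2}{2m} \right\rangle = \tfrac{1}{2} k_B T ,
% i.e. each quadratic degree of freedom carries the same average energy.
```

It is this equality across degrees of freedom that fails in the quantum phase-space (Wigner) formulation away from the high-temperature limit.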
17 pages, 10798 KiB  
Article
Attention-Based Spatial–Temporal Convolution Gated Recurrent Unit for Traffic Flow Forecasting
by Qingyong Zhang, Wanfeng Chang, Conghui Yin, Peng Xiao, Kelei Li and Meifang Tan
Entropy 2023, 25(6), 938; https://doi.org/10.3390/e25060938 - 14 Jun 2023
Cited by 3 | Viewed by 1871
Abstract
Accurate traffic flow forecasting is very important for urban planning and traffic management. However, this is a huge challenge due to the complex spatial–temporal relationships. Although the existing methods have researched spatial–temporal relationships, they neglect the long periodic aspects of traffic flow data, [...] Read more.
Accurate traffic flow forecasting is very important for urban planning and traffic management. However, it is a huge challenge due to complex spatial–temporal relationships. Although existing methods have researched spatial–temporal relationships, they neglect the long periodic aspects of traffic flow data and thus cannot attain a satisfactory result. In this paper, we propose a novel model, the Attention-Based Spatial–Temporal Convolution Gated Recurrent Unit (ASTCG), to solve the traffic flow forecasting problem. ASTCG has two core components: the multi-input module and the STA-ConvGRU module. Based on the cyclical nature of traffic flow data, the data input to the multi-input module are divided into three parts, near-neighbor data, daily-periodic data, and weekly-periodic data, thus enabling the model to better capture the time dependence. The STA-ConvGRU module, formed by a CNN, a GRU, and an attention mechanism, can capture both temporal and spatial dependencies of traffic flow. We evaluate our proposed model using real-world datasets, and experiments show that the ASTCG model outperforms state-of-the-art models. Full article
Show Figures
Figure 1. The spatial–temporal correlation of traffic flow.
Figure 2. Spatial–temporal information structure of traffic flow.
Figure 3. ASTCG architecture.
Figure 4. ConvGRU module structure.
Figure 5. STA-ConvGRU module structure.
Figure 6. Evaluation metrics of different models on the PEMS04 dataset.
Figure 7. Evaluation metrics of different models on the PEMS08 dataset.
Figure 8. Traffic flow visualization of three spatial–temporal prediction models on the PEMS04 dataset: (a) one-day and (b) one-week traffic flow at node 104.
Figure 9. Traffic flow visualization of three spatial–temporal prediction models on the PEMS08 dataset: (a) one-day and (b) one-week traffic flow at node 58.
Figure 10. Prediction performance of the four models at node 307 of the PEMS04 dataset.
Figure 11. Prediction performance of the four models at node 100 of the PEMS08 dataset.
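The abstract's multi-input split, dividing a traffic series into near-neighbor, daily-periodic, and weekly-periodic segments, can be sketched as below. Function and parameter names, window sizes, and the 288-steps-per-day sampling (5-minute intervals, as in the PEMS datasets) are illustrative assumptions, not the paper's exact configuration.

```python
def multi_input_windows(series, t, horizon=12, n_near=12, steps_per_day=288):
    """Slice a flow series into the three ASTCG-style inputs for forecast time t:
    near-neighbor data (the last n_near steps before t), daily-periodic data
    (the same horizon-long slot one day earlier), and weekly-periodic data
    (the same slot one week earlier)."""
    steps_per_week = 7 * steps_per_day
    near = series[t - n_near:t]
    daily = series[t - steps_per_day : t - steps_per_day + horizon]
    weekly = series[t - steps_per_week : t - steps_per_week + horizon]
    return near, daily, weekly
```

Feeding these three segments to separate input branches is what lets the model see both short-term trends and the longer daily/weekly cycles the abstract emphasizes.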
12 pages, 2476 KiB  
Article
Neural Network-Based Prediction for Secret Key Rate of Underwater Continuous-Variable Quantum Key Distribution through a Seawater Channel
by Yun Mao, Yiwu Zhu, Hui Hu, Gaofeng Luo, Jinguang Wang, Yijun Wang and Ying Guo
Entropy 2023, 25(6), 937; https://doi.org/10.3390/e25060937 - 14 Jun 2023
Viewed by 1300
Abstract
Continuous-variable quantum key distribution (CVQKD) plays an important role in quantum communications because of its low-cost setup, compatible with optical implementation. For this paper, we considered a neural network approach to predicting the secret key rate of CVQKD with discrete modulation [...] Read more.
Continuous-variable quantum key distribution (CVQKD) plays an important role in quantum communications because of its low-cost setup, compatible with optical implementation. For this paper, we considered a neural network approach to predicting the secret key rate of CVQKD with discrete modulation (DM) through an underwater channel. A long-short-term-memory (LSTM)-based neural network (NN) model was employed to demonstrate the performance improvement in predicting the secret key rate. The numerical simulations showed that the lower bound of the secret key rate could be achieved for a finite-size analysis, and that the LSTM-based NN performed much better than the backward-propagation (BP)-based NN. This approach enables fast derivation of the secret key rate of CVQKD through an underwater channel, indicating that it can be used to improve performance in practical quantum communications. Full article
(This article belongs to the Special Issue Quantum Communication and Quantum Key Distribution)
Show Figures
Figure 1. Scenario diagram of DM-CVQKD through an underwater channel: (a) Underwater environment. PBS: Polarization Beam Splitter, BS: Beam Splitter, AM: Amplitude Modulator, PM: Phase Modulator, PD: Photo Diode; (b) Bayesian optimization; (c) The trained neural network.
Figure 2. The internal structure of a cell of the LSTM-based NN. The internal state C_{t−1} of the previous moment, the external state h_{t−1}, and the network input x_t of the current moment are used as the input of the cell, and the current internal state C_t and external state h_t are obtained as the output of the cell by gate operations, i.e., forget gate, input gate, and output gate, respectively.
Figure 3. Training results of the LSTM-based NN: (a) Error histogram of prediction (number of samples vs. relative error on the training dataset); (b) The training set predicted values (red) and expected values (blue).
Figure 4. The secret key rate as a function of the transmission distance for the given photon cutoff numbers N_c = 12 (black dashed), 15 (blue dashed), and 20 (green). The key rate floats by about 0.55% for N_c from 12 to 15, and 0.2% for N_c from 15 to 20; increasing N_c does not obviously improve the secret key rate, but the computation time increases significantly.
Figure 5. Effects of excess noise ξ on the secret key rate, with lines from top to bottom for ξ ∈ {0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.035, 0.04}; amplitude α = 0.66, post-selection parameter Δ = 0, depth = 100 m, and reconciliation efficiency β = 0.95.
Figure 6. Effects of excess noise ξ on the secret key rate with the given post-selection: solid line Δ = 0, dashed line Δ > 0.
Figure 7. Prediction results of the NN-based CVQKD: solid lines for the initial value with the photon cutoff method, hollow dotted lines for the LSTM-based NN, and solid dotted lines for the BP-based NN; pink for a depth of 70 m, red for 90 m.
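The gate operations described in Figure 2 (forget, input, and output gates mapping C_{t−1}, h_{t−1}, and x_t to C_t and h_t) are the standard LSTM cell equations; they can be sketched in pure Python as below. Scalar cell state and the flat weight dictionary W are simplifications for illustration, not the paper's architecture.

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W):
    """One LSTM cell step. W maps each gate name to a (weights, bias) pair
    acting on the concatenated vector [h_{t-1}, x_t]; c_prev is the scalar
    internal state C_{t-1}."""
    z = list(h_prev) + list(x_t)            # concatenated cell input
    def gate(name, act):
        w, b = W[name]
        return act(sum(wi * zi for wi, zi in zip(w, z)) + b)
    f = gate("f", _sigmoid)                 # forget gate
    i = gate("i", _sigmoid)                 # input gate
    g = gate("g", math.tanh)                # candidate update
    o = gate("o", _sigmoid)                 # output gate
    c_t = f * c_prev + i * g                # new internal state C_t
    h_t = o * math.tanh(c_t)                # new external state h_t
    return h_t, c_t
```

With all weights and biases zero, the gates sit at 0.5 and the candidate at 0, so the cell simply halves its internal state each step, a quick sanity check on the equations.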
2 pages, 181 KiB  
Editorial
Entropy-Based Statistics and Their Applications
by Zhiyi Zhang
Entropy 2023, 25(6), 936; https://doi.org/10.3390/e25060936 - 14 Jun 2023
Cited by 2 | Viewed by 1447
Abstract
During the last few decades, research activity in modeling the properties of random systems via entropies has grown noticeably across a wide spectrum of fields [...] Full article
(This article belongs to the Special Issue Entropy-Based Statistics and Their Applications)
18 pages, 1064 KiB  
Article
Topic Discovery and Hotspot Analysis of Sentiment Analysis of Chinese Text Using Information-Theoretic Method
by Changlu Zhang, Haojie Fan, Jian Zhang, Qiong Yang and Liqian Tang
Entropy 2023, 25(6), 935; https://doi.org/10.3390/e25060935 - 13 Jun 2023
Cited by 1 | Viewed by 1789
Abstract
Currently, sentiment analysis is a research hotspot in many fields such as computer science and statistical science. Topic discovery of the literature in the field of text sentiment analysis aims to provide scholars with a quick and effective understanding of its research trends. [...] Read more.
Currently, sentiment analysis is a research hotspot in many fields such as computer science and statistical science. Topic discovery of the literature in the field of text sentiment analysis aims to provide scholars with a quick and effective understanding of its research trends. In this paper, we propose a new model for the topic discovery analysis of literature. Firstly, the FastText model is applied to calculate the word vector of literature keywords, based on which cosine similarity is applied to calculate keyword similarity, to carry out the merging of synonymous keywords. Secondly, the hierarchical clustering method based on the Jaccard coefficient is used to cluster the domain literature and count the literature volume of each topic. Thirdly, the information gain method is applied to extract the high information gain characteristic words of various topics, based on which the connotation of each topic is condensed. Finally, by conducting a time series analysis of the literature, a four-quadrant matrix of topic distribution is constructed to compare the research trends of each topic within different stages. The 1186 articles in the field of text sentiment analysis from 2012 to 2022 can be divided into 12 categories. By comparing and analyzing the topic distribution matrices of the two phases of 2012 to 2016 and 2017 to 2022, it is found that the various categories of topics have obvious research development changes in different phases. The results show that: ① Among the 12 categories, online opinion analysis of social media comments represented by microblogs is one of the current hot topics. ② The integration and application of methods such as sentiment lexicon, traditional machine learning and deep learning should be enhanced. ③ Semantic disambiguation of aspect-level sentiment analysis is one of the current difficult problems this field faces. ④ Research on multimodal sentiment analysis and cross-modal sentiment analysis should be promoted. Full article
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics)
Show Figures
Figure 1. The overall framework of the topic discovery model.
Figure 2. Structure of the FastText model.
Figure 3. Hierarchical clustering dendrogram.
Figure 4. Two-stage-based sentiment analysis topic distribution matrix.
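The first step of the pipeline, computing cosine similarity between keyword vectors and merging synonymous keywords, can be sketched as follows. The greedy merge strategy, the 0.9 threshold, and the toy 2-D vectors (standing in for real FastText embeddings) are illustrative assumptions, not the paper's exact procedure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def merge_synonyms(vectors, threshold=0.9):
    """Map each keyword to a canonical representative: a keyword joins the
    first representative whose embedding is at least `threshold`-similar,
    otherwise it becomes a representative itself."""
    canon, reps = {}, []
    for word, vec in vectors.items():
        for rep in reps:
            if cosine(vec, vectors[rep]) >= threshold:
                canon[word] = rep
                break
        else:
            reps.append(word)
            canon[word] = word
    return canon
```

After this merge, clustering and information-gain extraction operate on the deduplicated keyword set.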
22 pages, 369 KiB  
Article
Genetic Algebras Associated with ξ(a)-Quadratic Stochastic Operators
by Farrukh Mukhamedov, Izzat Qaralleh, Taimun Qaisar and Mahmoud Alhaj Hasan
Entropy 2023, 25(6), 934; https://doi.org/10.3390/e25060934 - 13 Jun 2023
Cited by 4 | Viewed by 1207
Abstract
The present paper deals with a class of ξ(a)-quadratic stochastic operators, referred to as QSOs, on a two-dimensional simplex. It investigates the algebraic properties of the genetic algebras associated with ξ(a)-QSOs. Namely, the associativity, characters [...] Read more.
The present paper deals with a class of ξ(a)-quadratic stochastic operators, referred to as QSOs, on a two-dimensional simplex, and investigates the algebraic properties of the genetic algebras associated with ξ(a)-QSOs, together with the dynamics of these operators. Specifically, we focus on a particular partition that results in nine classes, which are further reduced to three nonconjugate classes. Each class gives rise to a genetic algebra denoted as Ai, and it is shown that these algebras are isomorphic. The investigation then delves into various algebraic properties within these genetic algebras, such as associativity, characters, and derivations; conditions for associativity and character behavior are provided. Furthermore, a comprehensive analysis of the dynamic behavior of these operators is conducted. Full article
(This article belongs to the Special Issue New Trends in Theoretical and Mathematical Physics)
Show Figures
Figure 1. Trajectory when α = 1.
Figure 2. Trajectory when α = 0.
Figure 3. Trajectory when α = 1.
Figure 4. Trajectory when α = 1/2.
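A quadratic stochastic operator on the simplex maps x to V(x)_k = Σ_{i,j} P_{ij,k} x_i x_j, where the heredity coefficients satisfy P_{ij,k} ≥ 0 and Σ_k P_{ij,k} = 1, so V maps the simplex into itself. A minimal sketch follows; the uniform coefficient tensor is chosen purely for illustration and is not a member of the ξ(a) class studied in the paper.

```python
def qso_step(x, P):
    """One iteration of a quadratic stochastic operator:
    V(x)_k = sum_{i,j} P[i][j][k] * x_i * x_j."""
    n = len(x)
    return [sum(P[i][j][k] * x[i] * x[j] for i in range(n) for j in range(n))
            for k in range(n)]

# Uniform heredity coefficients P[i][j][k] = 1/3 (illustrative only):
P = [[[1.0 / 3.0] * 3 for _ in range(3)] for _ in range(3)]
x = [0.5, 0.3, 0.2]        # a point on the two-dimensional simplex
y = qso_step(x, P)         # V(x) remains on the simplex
```

Because Σ_{i,j} x_i x_j = (Σ_i x_i)² = 1 on the simplex, the uniform tensor sends every point straight to the barycenter (1/3, 1/3, 1/3); the trajectories in Figures 1 to 4 arise from less trivial coefficient choices.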
13 pages, 660 KiB  
Article
Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
by Nida Sardar, Sundas Khan, Arend Hintze and Priyanka Mehra
Entropy 2023, 25(6), 933; https://doi.org/10.3390/e25060933 - 13 Jun 2023
Cited by 2 | Viewed by 1988
Abstract
Deep learning models have achieved an impressive performance in a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness. [...] Read more.
Deep learning models have achieved an impressive performance in a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness. In this study, we investigate the impact of dropout regularization on the ability of neural networks to withstand adversarial attacks, as well as the degree of “functional smearing” between individual neurons in the network. Functional smearing in this context describes the phenomenon that a neuron or hidden state is involved in multiple functions at the same time. Our findings confirm that dropout regularization can enhance a network’s resistance to adversarial attacks, and this effect is only observable within a specific range of dropout probabilities. Furthermore, our study reveals that dropout regularization significantly increases the distribution of functional smearing across a wide range of dropout rates. However, it is the fraction of networks with lower levels of functional smearing that exhibit greater resilience against adversarial attacks. This suggests that, even though dropout improves robustness to fooling, one should instead try to decrease functional smearing. Full article
(This article belongs to the Special Issue Signal and Information Processing in Networks)
Show Figures
Figure 1. Illustration of the difference between separate feature detectors and sparsely distributed functions, with a functional association matrix relating hidden nodes (feature detectors) to classified classes.
Figure 2. Illustration of FGSM fooling: the original image, the computed perturbation, the resulting fooled image, and classification accuracy across perturbation magnitudes ε ∈ [0.0, 0.3] for the most and least robust networks.
Figure 3. Illustration of relay information I_R = H(X_in; X_out; Y_R | Y_0) and the greedy algorithm that ranks nodes by the information loss upon their removal.
Figure 4. Properties of networks with different hidden layer sizes (10 to 30 in increments of 2) trained on the MNIST task: training and test accuracy, fooling robustness, and smearing (30 replicates, 20 epochs each, with 95% confidence intervals).
Figure 5. Effect of varying dropout values on robustness to adversarial attacks; the model is most robust at dropout values between 0.035 and 0.05 (Mann–Whitney U test, p < 0.05).
Figure 6. Mean training and testing accuracy of networks trained with different levels of dropout, per numeral (500 images of the indicated numeral, 50 of other numerals).
Figure 7. Correlation between functional smearing, dropout, and robustness to fooling across all trained networks.
Figure 8. Correlation coefficient between Hessian matrix diagonals (H_c) obtained from models tested on the 10 MNIST numeral categories, and its correlation with robustness (coefficient 0.42) and with functional smearing (coefficient 0.08, a very weak correlation).
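The regularizer under study is usually implemented as inverted dropout: each hidden activation is zeroed with probability p at training time and the survivors are rescaled by 1/(1−p), so expected activations match test time. A pure-Python sketch (the paper's networks are, of course, full deep models):

```python
import random

def dropout(activations, p, rng, training=True):
    """Inverted dropout: during training, zero each unit with probability p
    and scale surviving units by 1/(1-p); at test time, pass through."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() >= p else 0.0 for a in activations]
```

At p = 0.05, inside the 0.035 to 0.05 range the study found most robust, roughly one unit in twenty is silenced per forward pass, which is what drives the redistribution of function ("smearing") across the remaining units.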
18 pages, 1882 KiB  
Article
Unsupervised Low-Light Image Enhancement Based on Generative Adversarial Network
by Wenshuo Yu, Liquan Zhao and Tie Zhong
Entropy 2023, 25(6), 932; https://doi.org/10.3390/e25060932 - 13 Jun 2023
Cited by 2 | Viewed by 2633
Abstract
Low-light image enhancement aims to improve the perceptual quality of images captured under low-light conditions. This paper proposes a novel generative adversarial network to enhance low-light image quality. Firstly, it designs a generator consisting of residual modules with hybrid attention modules and parallel [...] Read more.
Low-light image enhancement aims to improve the perceptual quality of images captured under low-light conditions. This paper proposes a novel generative adversarial network to enhance low-light image quality. Firstly, it designs a generator consisting of residual modules with hybrid attention modules and parallel dilated convolution modules. The residual module is designed to prevent gradient explosion during training and to avoid feature information loss. The hybrid attention module is designed to make the network pay more attention to useful features. A parallel dilated convolution module is designed to increase the receptive field and capture multi-scale information. Additionally, a skip connection is utilized to fuse shallow features with deep features to extract more effective features. Secondly, a discriminator is designed to improve the discrimination ability. Finally, an improved loss function is proposed by incorporating pixel loss to effectively recover detailed information. The proposed method demonstrates superior performance in enhancing low-light images compared to seven other methods. Full article
(This article belongs to the Special Issue Deep Learning Models and Applications to Computer Vision)
Show Figures
Figure 1. Proposed generator: (Part A) the top-down network, (Part B) the bottom-up network.
Figure 2. Proposed residual module with a hybrid attention mechanism.
Figure 3. Proposed parallel dilated convolution module.
Figure 4. Proposed discriminator.
Figure 5. Low-light image enhancement results on the no-reference dataset.
Figure 6. Low-light image enhancement results in the full-reference image test set.
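The receptive-field gain from dilated convolution can be made concrete: for stride-1 convolutions, stacking layers with kernel size k_i and dilation d_i gives a receptive field of 1 + Σ_i (k_i − 1)·d_i, so dilation enlarges coverage with no extra parameters. A quick sketch (the layer settings are illustrative, not the paper's exact module configuration):

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions, where each layer
    is a (kernel_size, dilation) pair: RF = 1 + sum((k - 1) * d)."""
    return 1 + sum((k - 1) * d for k, d in layers)

# Three 3x3 layers with dilations 1, 2, 4 cover a 15-pixel-wide field,
# versus 7 pixels for the same stack without dilation:
wide = receptive_field([(3, 1), (3, 2), (3, 4)])
plain = receptive_field([(3, 1), (3, 1), (3, 1)])
```

Running the branches in parallel, as the module does, lets the network mix these differently sized fields to capture multi-scale information.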
15 pages, 1261 KiB  
Article
Collective Dynamics, Diversification and Optimal Portfolio Construction for Cryptocurrencies
by Nick James and Max Menzies
Entropy 2023, 25(6), 931; https://doi.org/10.3390/e25060931 - 13 Jun 2023
Cited by 11 | Viewed by 1916
Abstract
Since its conception, the cryptocurrency market has been frequently described as an immature market, characterized by significant swings in volatility and occasionally described as lacking rhyme or reason. There has been great speculation as to what role it plays in a diversified portfolio. [...] Read more.
Since its conception, the cryptocurrency market has been frequently described as an immature market, characterized by significant swings in volatility and occasionally as lacking rhyme or reason. There has been great speculation as to what role it plays in a diversified portfolio. For instance, is cryptocurrency exposure an inflationary hedge or a speculative investment that follows broad market sentiment with amplified beta? We have recently explored similar questions with a clear focus on the equity market. There, our research revealed several noteworthy dynamics such as an increase in the market’s collective strength and uniformity during crises, greater diversification benefits across equity sectors (rather than within them), and the existence of a “best value” portfolio of equities. In essence, we can now contrast any potential signatures of maturity we identify in the cryptocurrency market with the substantially larger, older and better-established equity market. This paper aims to investigate whether the cryptocurrency market has recently exhibited mathematical properties similar to those of the equity market. Instead of relying on traditional portfolio theory, which is grounded in the financial dynamics of equity securities, we adjust our experimental focus to capture the presumed behavioral purchasing patterns of retail cryptocurrency investors. Our focus is on collective dynamics and portfolio diversification in the cryptocurrency market, and on examining whether, and to what extent, previously established results in the equity market hold in the cryptocurrency market. The results reveal nuanced signatures of maturity related to the equity market, including the fact that correlations collectively spike around exchange collapses, and identify an ideal portfolio size and spread across different groups of cryptocurrencies. Full article
(This article belongs to the Special Issue Signatures of Maturity in Cryptocurrency Market)
Show Figures
Figure 1. Normalized leading eigenvalue λ̃_1(t) of the cross-correlation matrix as a function of time, for (a) the entire collection of cryptocurrencies and (b) the ten deciles. Like the equity market, collective correlations spike during market crises, such as COVID-19 and the collapses of the exchanges BitMEX and FTX.
Figure 2. Uniformity h(t) of the leading eigenvector v_1 of the cross-correlation matrix as a function of time, for (a) the entire collection of cryptocurrencies and (b) the ten deciles. The results are dramatically different from the equity market, with numerous deciles exhibiting strikingly low uniformity scores over time.
Figure 3. Results of hierarchical clustering applied to (6) between ordered pairs (m, n). A large majority cluster confirms the finding of Table 2 that the (4,4) portfolio is closely similar to the full (10,4) portfolio in its diversification benefit over time.
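The normalized leading eigenvalue tracked in Figure 1 can be computed, for a correlation matrix of N assets, as λ_1/N (the eigenvalues of a correlation matrix sum to N, its trace). A self-contained sketch using power iteration; the rolling windows and data handling of the paper are omitted, and the toy identical return series are chosen only so the expected answer is known.

```python
import math

def corr_matrix(returns):
    """Pearson correlation matrix of a list of equal-length return series."""
    n = len(returns)
    means = [sum(r) / len(r) for r in returns]
    devs = [[x - m for x in r] for r, m in zip(returns, means)]
    norms = [math.sqrt(sum(d * d for d in dv)) for dv in devs]
    return [[sum(a * b for a, b in zip(devs[i], devs[j])) / (norms[i] * norms[j])
             for j in range(n)] for i in range(n)]

def leading_eigenvalue(C, iters=200):
    """Largest eigenvalue of a symmetric positive semi-definite matrix
    via power iteration."""
    n = len(C)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

# Three perfectly co-moving series: lambda_1 = N, so the normalized value is 1,
# the "fully collective" regime seen around exchange collapses.
returns = [[0.01, -0.02, 0.03, 0.01, -0.01]] * 3
lam_norm = leading_eigenvalue(corr_matrix(returns)) / len(returns)
```

Values of λ̃_1 near 1 indicate that a single collective mode dominates the market, which is exactly the spike behavior reported during COVID-19 and the BitMEX and FTX collapses.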