Entropy, Volume 18, Issue 10 (October 2016) – 37 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation
by Yingsong Li, Zhan Jin, Yanyan Wang and Rui Yang
Entropy 2016, 18(10), 380; https://doi.org/10.3390/e18100380 - 24 Oct 2016
Cited by 23 | Viewed by 5950
Abstract
A robust sparse least-mean mixture-norm (LMMN) algorithm is proposed, and its performance is appraised in the context of estimating a broadband multi-path wireless channel. The proposed algorithm is implemented by integrating a correntropy-induced metric (CIM) penalty into the conventional LMMN cost function, and is denoted the CIM-based LMMN (CIM-LMMN) algorithm. The CIM-LMMN algorithm is derived in detail within the kernel framework. Its updating equation provides a zero attractor that pulls the non-dominant channel coefficients toward zero, and it also gives a tradeoff between sparsity and estimation misalignment. Moreover, the channel estimation behavior is investigated over a broadband sparse multi-path wireless channel, and the simulation results are compared with the least mean square/fourth (LMS/F), least mean square (LMS), least mean fourth (LMF) and recently developed sparse channel estimation algorithms. The results for the designated sparse channel demonstrate that CIM-LMMN outperforms the recently developed sparse LMMN algorithms and the relevant sparse channel estimation algorithms, and that it is robust and superior to these algorithms in terms of both convergence speed and channel estimation misalignment when estimating a sparse channel. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
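For readers who want a concrete picture of the update described in the abstract, the following is a minimal Python sketch of an LMMN-style adaptive filter with a CIM-like zero attractor. It is an illustration under assumed forms of the cost terms, not the paper's exact algorithm; the step size mu, mixing parameter delta, zero-attraction strength rho, and kernel width sigma are arbitrary placeholders.

```python
import numpy as np

def cim_lmmn_step(w, x, d, mu=0.01, delta=0.5, rho=5e-4, sigma=0.05):
    """One adaptive-filter update in the spirit of CIM-LMMN (illustrative sketch).

    w     : current tap-weight estimate (length-N channel estimate)
    x     : most recent N input samples (regressor vector)
    d     : desired/observed output sample
    mu    : step size
    delta : LMMN mixing parameter (delta = 1 -> LMS-like, delta = 0 -> LMF-like)
    rho   : zero-attraction strength contributed by the CIM penalty
    sigma : Gaussian kernel width of the correntropy-induced metric
    """
    e = d - np.dot(w, x)                               # a priori estimation error
    # LMMN part: mixture of the l2 (LMS) and l4 (LMF) error gradients
    w_new = w + mu * (delta * e + (1.0 - delta) * e**3) * x
    # CIM zero attractor: pulls small (non-dominant) taps toward zero, while
    # large taps are almost unaffected because of the Gaussian kernel
    w_new -= rho * w * np.exp(-w**2 / (2.0 * sigma**2))
    return w_new, e

# toy usage: identify a sparse 16-tap channel from noisy observations
rng = np.random.default_rng(0)
h = np.zeros(16); h[[2, 9]] = [0.8, -0.5]              # sparse "true" channel
w = np.zeros(16)
buf = np.zeros(16)
for n in range(5000):
    buf = np.roll(buf, 1); buf[0] = rng.standard_normal()
    d = np.dot(h, buf) + 0.01 * rng.standard_normal()
    w, _ = cim_lmmn_step(w, buf, d)
print(np.round(w, 2))                                   # dominant taps recovered, others near zero
```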
Figures: (1) convergence comparison of the proposed CIM-LMMN algorithm with previously reported sparse channel estimation algorithms; (2–4) channel estimation behavior of CIM-LMMN compared with the LMS/F algorithm for K = 1, 2, 4; (5–7) compared with the LMS algorithm for K = 1, 2, 4; (8–10) compared with the LMF algorithm for K = 1, 2, 4; (11) tracking behavior of the proposed CIM-LMMN algorithm.
Article
A Novel Sequence-Based Feature for the Identification of DNA-Binding Sites in Proteins Using Jensen–Shannon Divergence
by Truong Khanh Linh Dang, Cornelia Meckbach, Rebecca Tacke, Stephan Waack and Mehmet Gültas
Entropy 2016, 18(10), 379; https://doi.org/10.3390/e18100379 - 24 Oct 2016
Cited by 5 | Viewed by 6977
Abstract
Knowledge of protein-DNA interactions is essential to fully understand the molecular activities of life. Many research groups have developed structure- or sequence-based tools to predict the DNA-binding residues in proteins. Structure-based methods usually achieve good results, but require knowledge of the 3D structure of the protein, while sequence-based methods can be applied to proteins in high throughput, but require good features. In this study, we present a new information-theoretic feature derived from the Jensen–Shannon divergence (JSD) between the amino acid distribution of a site and the background distribution of non-binding sites. The new feature measures how much a given site differs from a non-binding site, and is therefore informative for detecting binding sites in proteins. We conduct the study with a five-fold cross-validation on 263 proteins using the Random Forest classifier. We evaluate the usefulness of our new features by combining them with popular existing features such as the position-specific scoring matrix (PSSM), orthogonal binary vector (OBV), and secondary structure (SS). Adding our features significantly boosts the performance of the Random Forest classifier, with a clear increase in sensitivity and Matthews correlation coefficient (MCC). Full article
(This article belongs to the Special Issue Entropy on Biosignals and Intelligent Systems)
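The central quantity in the abstract, the Jensen–Shannon divergence between a site's amino-acid distribution and a background distribution, can be computed as below. This is a generic sketch; how the paper estimates the two distributions (and its exact pseudo-count handling) is not reproduced here.

```python
import numpy as np

def jensen_shannon_divergence(p, q, base=2.0):
    """JSD(p, q) = H(m) - (H(p) + H(q)) / 2  with  m = (p + q) / 2."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)

    def entropy(r):
        r = r[r > 0]                       # 0 * log 0 is treated as 0
        return -np.sum(r * np.log(r)) / np.log(base)

    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

# toy example over a 20-letter amino-acid alphabet:
# a sharply peaked site distribution versus a flat non-binding background
site = np.full(20, 0.5); site[0] = 10.0    # hypothetical pseudo-counts at one site
background = np.ones(20)                   # uniform background
print(jensen_shannon_divergence(site, background))   # larger value -> more "unusual" site
```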
Figure 1: DNA-binding sites in the proto-oncogenic transcription factor MYC–MAX protein complex (PDB entry 1NKP). Green spheres mark binding sites in both proteins detected by the RF classifier using either the existing features (f_PSSM, f_OBV, f_SS) alone or combined with the new features; purple spheres mark additional binding sites found only when the new features are included; yellow spheres mark three binding sites in MYC and one in MAX that the classifier could not identify.
Article
Second Law Analysis of Nanofluid Flow within a Circular Minichannel Considering Nanoparticle Migration
by Mehdi Bahiraei and Navid Cheraghi Kazerooni
Entropy 2016, 18(10), 378; https://doi.org/10.3390/e18100378 - 21 Oct 2016
Cited by 3 | Viewed by 4901
Abstract
In the current research, entropy generation in water–alumina nanofluid flow is studied in a circular minichannel in the laminar regime under constant wall heat flux, in order to evaluate the irreversibilities arising from friction and heat transfer. To this end, simulations are carried out considering particle migration effects. Due to particle migration, the nanoparticles take on a non-uniform distribution over the pipe cross-section, with higher concentration in the central region. This concentration non-uniformity increases with the mean concentration, particle size, and Reynolds number. The rates of entropy generation are evaluated both locally and globally (integrated). The results show that particle migration changes the thermal and frictional entropy generation rates significantly, particularly at high Reynolds numbers, large concentrations, and coarser particles. Hence, this phenomenon should be considered in energy-related analyses of nanofluids. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
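The two irreversibility sources mentioned in the abstract are commonly expressed through the textbook local entropy generation rates, a thermal term driven by temperature gradients and a frictional term driven by viscous dissipation, with the Bejan number giving the thermal fraction. The sketch below shows those standard expressions for a simple axisymmetric pipe flow; it is not the paper's full mixture model with particle migration, and the numbers in the example are arbitrary.

```python
import numpy as np

def local_entropy_generation(k, mu, T, dT_dr, dT_dz, du_dr):
    """Local volumetric entropy generation rates [W/(m^3 K)] for pipe flow.

    Thermal part:    k * |grad T|^2 / T^2
    Frictional part: mu * (du/dr)^2 / T   (dominant viscous term in a
                     fully developed axisymmetric flow)
    """
    s_thermal = k * (dT_dr**2 + dT_dz**2) / T**2
    s_friction = mu * du_dr**2 / T
    return s_thermal, s_friction

def bejan_number(s_thermal, s_friction):
    """Fraction of the irreversibility caused by heat transfer."""
    return s_thermal / (s_thermal + s_friction)

# toy values for a water-like fluid near the tube wall (all illustrative)
s_t, s_f = local_entropy_generation(k=0.62, mu=8.9e-4, T=300.0,
                                    dT_dr=2.0e3, dT_dz=50.0, du_dr=400.0)
print(s_t, s_f, bejan_number(s_t, s_f))
```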
Figures 1–34: validation of the Nusselt number against published data for pure water; concentration distributions over the tube cross-section for different mean concentrations, Reynolds numbers, and nanoparticle sizes; thermal and frictional entropy generation rates, together with the underlying thermal conductivity, temperature gradient, viscosity, velocity, and velocity gradient profiles, for different concentrations, Reynolds numbers, and particle sizes (typically at Re = 2000, φ_m = 5%, d_p = 90 nm); total entropy generation rates and the local Bejan number; and comparisons of the thermal, frictional, and total entropy generation rates between the uniform and non-uniform (particle migration) models, including the effect of adding 90 nm particles at 5% concentration to the base fluid at Re = 200 and Re = 2000.
Article
Isothermal Oxidation of Aluminized Coatings on High-Entropy Alloys
by Che-Wei Tsai, Kuen-Cheng Sung, Kazuki Kasai and Hideyuki Murakami
Entropy 2016, 18(10), 376; https://doi.org/10.3390/e18100376 - 20 Oct 2016
Cited by 6 | Viewed by 5193
Abstract
The isothermal oxidation resistance of the Al0.2Co1.5CrFeNi1.5Ti0.3 high-entropy alloy is analyzed and the microstructural evolution of the oxide layer is studied. The limited aluminum content, about 3.6 at %, leads to a non-continuous alumina layer. The bare alloy is therefore insufficient for severe environments, forming only a chromium oxide scale about 10 μm thick after 360 h at 1173 K. Thus, aluminized high-entropy alloys (HEAs) are further prepared by an industrial pack cementation process at 1273 K and 1323 K. The aluminized coating reaches 50 μm after 5 h at 1273 K, and its growth is controlled by the diffusion of aluminum. The interdiffusion zone reveals two regions: a Ti-, Co-, Ni-rich area and an Fe-, Cr-rich area. The oxidation resistance of the aluminized HEA improves markedly, with the coating sustained at 1173 K and 1273 K for 441 h without any spallation. The alumina at the surface and the stable interface contribute to the performance of this Al0.2Co1.5CrFeNi1.5Ti0.3 alloy. Full article
(This article belongs to the Special Issue High-Entropy Alloys and High-Entropy-Related Materials)
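As a side note on the "high-entropy" designation, the ideal configurational entropy of mixing of the nominal Al0.2Co1.5CrFeNi1.5Ti0.3 composition can be checked with the standard formula ΔS_mix = −R Σ x_i ln x_i. This small calculation is an illustration added here, not part of the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# molar ratios from the nominal composition Al0.2 Co1.5 Cr1 Fe1 Ni1.5 Ti0.3
ratios = {"Al": 0.2, "Co": 1.5, "Cr": 1.0, "Fe": 1.0, "Ni": 1.5, "Ti": 0.3}
total = sum(ratios.values())
fractions = {el: r / total for el, r in ratios.items()}

# ideal configurational entropy of mixing: dS_mix = -R * sum(x_i * ln x_i)
dS_mix = -R * sum(x * math.log(x) for x in fractions.values())
print(f"dS_mix = {dS_mix:.2f} J/(mol K) = {dS_mix / R:.2f} R")
```

For this composition the value comes out near 1.6 R, consistent with the usual rule of thumb of at least 1.5 R for high-entropy alloys.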
Figures 1–8: surface morphology and weight-gain curves for isothermal oxidation of the Al0.2Co1.5CrFeNi1.5Ti0.3 alloy at 1173 K and 1273 K; X-ray diffraction curves and SEM-EDS maps of samples oxidized at 1173 K for various times; thicknesses of the oxidation and interdiffusion layers after the isothermal test; microstructure and compositional profiles of the aluminized layer formed at 1273 K and 1323 K; surface morphology, cross-sectional EDS maps, and X-ray curve of the aluminized layer at 1273 K; and the isothermal oxidation resistance of the aluminized alloy at 1173 K, 1273 K and 1373 K.
Article
Non-Asymptotic Confidence Sets for Circular Means
by Thomas Hotz, Florian Kelma and Johannes Wieditz
Entropy 2016, 18(10), 375; https://doi.org/10.3390/e18100375 - 20 Oct 2016
Cited by 3 | Viewed by 4776
Abstract
The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)
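The circular mean defined in the abstract, the minimizer of the average squared Euclidean distance to the data on the unit circle, is simply the direction of the Euclidean mean of the corresponding unit vectors (and is undefined when that mean is zero). A minimal sketch of this definition follows; the Hoeffding-based confidence-set construction itself is not reproduced here.

```python
import numpy as np

def circular_mean(angles_rad):
    """Extrinsic circular mean: direction of the Euclidean mean of the unit
    vectors (cos a, sin a). Returns None if the mean vector is ~0, in which
    case every direction minimizes the average squared Euclidean distance."""
    z = np.mean(np.exp(1j * np.asarray(angles_rad)))
    if np.abs(z) < 1e-12:
        return None
    return float(np.angle(z))

# two points of equal mass at +-10 degrees -> mean direction ~0 degrees
print(np.degrees(circular_mean(np.radians([10.0, -10.0]))))
# data near 350 and 5 degrees wrap correctly around 0 (mean ~356.7 degrees)
print(np.degrees(circular_mean(np.radians([350.0, 355.0, 5.0]))) % 360)
```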
Figures 1–6: constructions for testing the hypotheses μ = S¹ (equivalently E Z = 0) and E Z = λζ with λ > 0; the critical Z̄_n for rejecting ζ, with δ_H bounding the angle between μ̂_n and any accepted ζ; two points of equal mass at ±10° and their Euclidean mean; three asymmetrically placed points with different masses and their Euclidean mean; and the ant data plotted at increasing radii (to resolve ties visually) together with the circular mean direction and the confidence sets C_H, C_V, and C_A.
Article
On the Virtual Cell Transmission in Ultra Dense Networks
by Xiaopeng Zhu, Jie Zeng, Xin Su, Chiyang Xiao, Jing Wang and Lianfen Huang
Entropy 2016, 18(10), 374; https://doi.org/10.3390/e18100374 - 20 Oct 2016
Cited by 11 | Viewed by 5057
Abstract
Ultra dense networks (UDN) are identified as one of the key enablers for 5G, since they can provide an ultra-high spectral reuse factor by exploiting proximal transmissions. By densifying the network infrastructure equipment, it becomes highly likely that each user will have one or more dedicated serving base station antennas, introducing the user-centric virtual cell paradigm. However, due to the irregular deployment of a large number of base station antennas, the interference environment becomes rather complex, introducing severe interference among different virtual cells. This paper focuses on the downlink transmission scheme in UDN, where a large number of users and base station antennas are uniformly spread over a certain area. An interference graph is first created based on the large-scale fading to describe the potential interference relationships among the virtual cells. Then, base station antennas and users in the virtual cells within the same maximally-connected component are grouped together and merged into one new virtual cell cluster, where users are jointly served via zero-forcing (ZF) beamforming. A multi-virtual-cell minimum mean square error (MMSE) precoding scheme is further proposed to mitigate the inter-cluster interference. Additionally, an interference alignment framework based on low-complexity virtual cell merging is proposed to eliminate the strong interference between different virtual cells. Simulation results show that the proposed interference graph-based virtual cell merging approach attains the average user spectral efficiency of the grouping scheme based on virtual cell overlapping with a smaller virtual cell size and reduced signal processing complexity. In addition, the proposed user-centric transmission scheme greatly outperforms the BS-centric transmission scheme (maximum ratio transmission (MRT)) in terms of both average and edge user spectral efficiency. Moreover, interference alignment based on low-complexity virtual cell merging achieves much better average user spectral efficiency than ZF and MRT precoding. Full article
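Within one merged virtual cell cluster, the abstract's users are jointly served via zero-forcing (ZF) beamforming. The standard ZF precoder, the right pseudo-inverse of the aggregate channel matrix with a sum-power normalization, is sketched below for a single cluster; the clustering, MMSE, and interference-alignment parts of the paper are not reproduced, and the dimensions in the example are arbitrary.

```python
import numpy as np

def zero_forcing_precoder(H, total_power=1.0):
    """ZF precoder for K users and M >= K base-station antennas.

    H : K x M complex channel matrix (row k = channel of user k)
    Returns an M x K precoding matrix W such that H @ W is diagonal
    (no inter-user interference inside the cluster) and ||W||_F^2 = total_power.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # right pseudo-inverse
    W *= np.sqrt(total_power) / np.linalg.norm(W)      # sum-power normalization
    return W

# toy cluster: K = 3 users, M = 6 remote base-station antennas
rng = np.random.default_rng(1)
H = (rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))   # ~diagonal: interference nulled within the cluster
```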
Figures 1–9: illustration of the downlink UDN based on virtual cell merging; illustration of the interference graph-based virtual cell merging; the interference alignment algorithm within one cluster; average user spectral efficiency of MRT, ZF and MVC-MMSE precoding under virtual cell overlapping (VCMO) and interference graph-based merging (VCMG) versus the initial virtual cell size N_0, the relative interference graph threshold φ, and the average and maximal cluster sizes (K = 50, ρ = 20 and ρ = 200); the cumulative distribution of user spectral efficiency for the different transmission schemes; and average user spectral efficiency versus P/σ² for different interference management algorithms (K = 51, ρ = 10, ρ = 20, D = N_0 = 2).
Correction
Correction: Jacobsen, C.S., et al. Continuous Variable Quantum Key Distribution with a Noisy Laser. Entropy 2015, 17, 4654–4663
by Christian S. Jacobsen, Tobias Gehring and Ulrik L. Andersen
Entropy 2016, 18(10), 373; https://doi.org/10.3390/e18100373 - 20 Oct 2016
Viewed by 3259
Figures 1–2: contour plots of the secure key generation rate versus preparation noise (in shot-noise units) and transmission T for reverse and direct reconciliation (reconciliation efficiency β = 95%, modulation variance 32 SNUs, channel excess noise 0.11), with dashed lines marking the minimal transmission at which a positive secret key rate is still possible; reverse reconciliation is quickly compromised by preparation noise, while direct reconciliation (whose rate reaches zero near 79% transmission with no preparation noise, due to the extra unit of vacuum from heterodyne detection) is robust to it. Also shown: measured data and theory curves for different levels of preparation noise under reverse and direct reconciliation post-processing.
Article
Point Information Gain and Multidimensional Data Analysis
by Renata Rychtáriková, Jan Korbel, Petr Macháček, Petr Císař, Jan Urban and Dalibor Štys
Entropy 2016, 18(10), 372; https://doi.org/10.3390/e18100372 - 19 Oct 2016
Cited by 14 | Viewed by 5946
Abstract
We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), to the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data using several example images and discuss further possible uses in other fields of data processing. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
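One plausible reading of the point information gain (PIG) is as the change in Rényi entropy when a single occurrence of the examined value is removed from the histogram. The sketch below follows that reading purely for illustration; the paper's precise definitions of PIG, PIE, and PIED, including the local-surrounding variants, should be taken from the article itself.

```python
import numpy as np

def renyi_entropy(counts, alpha):
    """Rényi entropy (natural log) of a histogram of non-negative counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(alpha, 1.0):                   # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

def point_information_gain(counts, value, alpha=0.99):
    """Illustrative PIG: entropy change when one occurrence of `value` is
    removed from the histogram (one way to quantify how much a single
    point contributes to the distribution)."""
    reduced = np.array(counts, dtype=float)
    if reduced[value] < 1:
        raise ValueError("value not present in the histogram")
    reduced[value] -= 1
    return renyi_entropy(reduced, alpha) - renyi_entropy(counts, alpha)

# toy 8-bin intensity histogram: removing a rare value changes the entropy more
hist = np.array([50, 30, 10, 5, 3, 1, 1, 0])
for v in (0, 5):
    print(v, point_information_gain(hist, v, alpha=0.99))
```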
Figures 1–6: Γ_α-transformations of the discretized Lévy, Cauchy, and Gauss distributions at α = 0.99 (the deviation from monotonicity for the Gauss distribution is due to digital rounding) and of the Lévy distribution at α = 0.5, 0.99, 1.5, 2.0, 2.5, 4.0; Γ_0.99-transformed information images and histograms of the texmos2.s512 test image computed from the whole image, from a cross around each pixel, and from squares of side 5, 15, and 29 px; information images of the 4.1.07 test image computed from the whole image, a cross, and circles of diameter 5, 17, and 30 px; and H_α and Ξ_α spectra for global information and different local surroundings of a unifractal (texmos2.s512) and a multifractal (wd950112) image at α between 0.1 and 4.0.
Article
Study on the Stability and Entropy Complexity of an Energy-Saving and Emission-Reduction Model with Two Delays
by Jing Wang and Yuling Wang
Entropy 2016, 18(10), 371; https://doi.org/10.3390/e18100371 - 19 Oct 2016
Cited by 5 | Viewed by 3878
Abstract
In this paper, we build an energy-saving and emission-reduction model with two delays. In this model, it is assumed that the interaction between energy saving and emission reduction, and that between carbon emissions and economic growth, are delayed. We examine the local stability and the existence of a Hopf bifurcation at the equilibrium point of the system. Employing system complexity theory, we also analyze the impact of the delays and the feedback control on the stability and entropy of the system from two aspects: a single delay and double delays. In the numerical simulation section, we test the theoretical analysis by means of bifurcation diagrams, largest Lyapunov exponent diagrams, attractors, time-domain plots, Poincaré section plots, power spectra, entropy diagrams, 3-D surface charts and 4-D graphs; the simulation results demonstrate that inappropriate changes of the delays and the feedback control will result in instability and fluctuation of carbon emissions. Finally, bifurcation control is achieved using the method of variable feedback control, and we conclude that the greater the value of the control parameter, the better the effect of the bifurcation control. The results will support the development of energy-saving and emission-reduction policies. Full article
(This article belongs to the Section Complexity)
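The qualitative mechanism described in the abstract, a stable equilibrium losing stability through a Hopf bifurcation once a delay exceeds a critical value, can be reproduced with any simple delayed feedback system. The sketch below integrates Hutchinson's delayed logistic equation with a fixed-step Euler scheme as a generic stand-in; it is not the paper's four-variable energy-saving and emission-reduction model, and r, the step size, and the delays are arbitrary illustration values.

```python
import numpy as np

def delayed_logistic(tau, r=1.0, x0=0.5, dt=0.01, t_end=200.0):
    """Euler integration of Hutchinson's equation x'(t) = r x(t) (1 - x(t - tau)).

    The equilibrium x = 1 is stable for small r*tau and starts to oscillate
    (Hopf bifurcation near r*tau = pi/2) once the delay is large enough.
    """
    n = int(t_end / dt)
    lag = int(round(tau / dt))
    x = np.full(n + lag, x0)                 # constant history on [-tau, 0]
    for k in range(lag, n + lag - 1):
        x[k + 1] = x[k] + dt * r * x[k] * (1.0 - x[k - lag])
    return x[lag:]

for tau in (1.0, 2.0):                       # below / above the critical delay ~pi/2
    tail = delayed_logistic(tau)[-2000:]     # last 20 time units
    print(f"tau = {tau}: late-time oscillation amplitude ~ {tail.max() - tail.min():.3f}")
```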
Figures 1–16: bifurcation diagrams and largest Lyapunov exponent plots showing the influence of τ_1 (with τ_2 = 0) and of τ_2 (with τ_1 = 0.4) on the stability of Equation (19); time-domain plots, EE attractors, frequency spectra, and Poincaré plots for delays below and above the critical values τ_10 = 0.4618 and τ_20 = 0.0622, with initial condition (x(0), y(0), z(0), u(0)) = (0.3, 0.5, 0.7, 0.9); entropy plots with respect to τ_1 and τ_2; and surface plots showing the influence of τ_1 or τ_2 together with the control parameters b_3 and b_4 on y.
Full article ">Figure 17
<p>The influence of <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>b</mi> <mn>3</mn> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>b</mi> <mn>4</mn> </msub> </mrow> </semantics> </math> on <math display="inline"> <semantics> <mrow> <mi>y</mi> </mrow> </semantics> </math> when <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.4</mn> </mrow> </semantics> </math>. (<b>a</b>,<b>b</b>) shown from different angles.</p>
Full article ">Figure 18
<p>The influence of <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> on <math display="inline"> <semantics> <mrow> <mi>x</mi> </mrow> </semantics> </math> when <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>∈</mo> <mo>[</mo> <mn>0.1</mn> <mo>,</mo> <mn>0.6</mn> <mo>]</mo> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>∈</mo> <mo>[</mo> <mn>0.01</mn> <mo>,</mo> <mn>0.1</mn> <mo>]</mo> </mrow> </semantics> </math>.</p>
Full article ">Figure 19
<p>The influence of <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> on entropy of Equation (19) when <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>∈</mo> <mo>[</mo> <mn>0.1</mn> <mo>,</mo> <mn>0.6</mn> <mo>]</mo> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>∈</mo> <mo>[</mo> <mn>0.01</mn> <mo>,</mo> <mn>0.1</mn> <mo>]</mo> </mrow> </semantics> </math>.</p>
Full article ">Figure 20
<p>The influence of <math display="inline"> <semantics> <mi>k</mi> </semantics> </math> on the stability of Equation (20) when <math display="inline"> <semantics> <mrow> <mo>(</mo> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mn>0.5</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics> </math>. (<b>a</b>) bifurcation diagram; (<b>b</b>) the largest Lyapunov exponent plot.</p>
Full article ">Figure 21
<p>Equation (20) is unstable when <math display="inline"> <semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.05</mn> <mo>&lt;</mo> <mn>0.1689</mn> </mrow> </semantics> </math> for <math display="inline"> <semantics> <mrow> <mo>(</mo> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mn>0.5</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics> </math>. (<b>a</b>) time-domain plot; (<b>b</b>) the EE attractor.</p>
Full article ">Figure 22
<p>Equation (20) is stable when <math display="inline"> <semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.2</mn> <mo>&gt;</mo> <mn>0.1689</mn> </mrow> </semantics> </math> for <math display="inline"> <semantics> <mrow> <mo>(</mo> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mn>0.5</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics> </math>. (<b>a</b>) time-domain plot; (<b>b</b>) the EE attractor.</p>
Full article ">
450 KiB  
Article
From Tools in Symplectic and Poisson Geometry to J.-M. Souriau’s Theories of Statistical Mechanics and Thermodynamics
by Charles-Michel Marle
Entropy 2016, 18(10), 370; https://doi.org/10.3390/e18100370 - 19 Oct 2016
Cited by 34 | Viewed by 5886
Abstract
I present in this paper some tools in symplectic and Poisson geometry in view of their applications in geometric mechanics and mathematical physics. After a short discussion of the Lagrangian and Hamiltonian formalisms, including the use of symmetry groups, and a presentation of Tulczyjew's isomorphisms (which explain some aspects of the relations between these formalisms), I explain the concept of the manifold of motions of a mechanical system and its use, due to J.-M. Souriau, in statistical mechanics and thermodynamics. The generalization of the notion of thermodynamic equilibrium, in which the one-dimensional group of time translations is replaced by a multi-dimensional, possibly non-commutative Lie group, is fully discussed, and examples of applications in physics are given. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)
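For readers who want the flavour of that generalization, the schematic below (in notation chosen for this listing, not taken from the paper) shows the form commonly given to Souriau's generalized Gibbs states: a Lie group G acts on the manifold of motions M with momentum map J, and an element β of its Lie algebra plays the role of a geometric temperature.
\[
  \rho_\beta(x) \;=\; \frac{1}{P(\beta)}\,\exp\!\bigl(-\langle J(x),\,\beta\rangle\bigr),
  \qquad
  P(\beta) \;=\; \int_M \exp\!\bigl(-\langle J(x),\,\beta\rangle\bigr)\,\mathrm{d}\lambda(x),
\]
where λ is the Liouville measure. When G is the one-parameter group of time translations, J reduces to the energy and β to 1/(k_B T), recovering the usual canonical equilibrium.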
415 KiB  
Article
Chemical Reactions Using a Non-Equilibrium Wigner Function Approach
by Ramón F. Álvarez-Estrada and Gabriel F. Calvo
Entropy 2016, 18(10), 369; https://doi.org/10.3390/e18100369 - 19 Oct 2016
Cited by 4 | Viewed by 4042
Abstract
A three-dimensional model of binary chemical reactions is studied. We consider an ab initio quantum two-particle system subjected to an attractive interaction potential and to a heat bath at thermal equilibrium at absolute temperature T > 0. Under the sole action of the attraction potential, the two particles can be either bound or unbound to each other. While at T = 0 there is no transition between the two states, such a transition is possible when T > 0 (due to the heat bath) and plays a key role as k_B T approaches the magnitude of the attractive potential. We focus on a quantum regime, typical of chemical reactions, such that: (a) the thermal wavelength is shorter than the range of the attractive potential (lower limit on T) and (b) (3/2) k_B T does not exceed the magnitude of the attractive potential (upper limit on T). In this regime, we extend several methods previously applied to analyze the time duration of DNA thermal denaturation. The two-particle system is then described by a non-equilibrium Wigner function. Under Assumptions (a) and (b), and for sufficiently long times, defined by a characteristic time scale D that is subsequently estimated, the general dissipationless non-equilibrium equation for the Wigner function is approximated by a Smoluchowski-like equation displaying dissipation and quantum effects. A comparison with the standard chemical kinetic equations is made. The time τ required for the two particles to transition from the bound state to unbound configurations is studied by means of the mean first passage time formalism. An approximate formula for τ, in terms of D and exhibiting the Arrhenius exponential factor, is obtained. Recombination processes are also briefly studied within our framework and compared with previous well-known methods. Full article
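Purely as a schematic of the Arrhenius structure mentioned above (the symbols E_b and τ_0 are placeholders introduced here, and the exact prefactor derived in the paper is not reproduced), the escape time from the bound state behaves as
\[
  \tau \;\sim\; \tau_0\,\exp\!\left(\frac{E_b}{k_B T}\right),
\]
with E_b the binding energy to be overcome thermally and a prefactor τ_0 controlled by the characteristic time scale D.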
566 KiB  
Article
A Hydrodynamic Model for Silicon Nanowires Based on the Maximum Entropy Principle
by Orazio Muscato and Tina Castiglione
Entropy 2016, 18(10), 368; https://doi.org/10.3390/e18100368 - 19 Oct 2016
Cited by 14 | Viewed by 4443
Abstract
Silicon nanowires (SiNW) are quasi-one-dimensional structures in which the electrons are spatially confined in two directions, and they are free to move along the axis of the wire. The spatial confinement is governed by the Schrödinger–Poisson system, which must be coupled to the transport in the free motion direction. For devices with the characteristic length of a few tens of nanometers, the transport of the electrons along the axis of the wire can be considered semiclassical, and it can be dealt with by the multi-sub-band Boltzmann transport equations (MBTE). By taking the moments of the MBTE, a hydrodynamic model has been formulated, where explicit closure relations for the fluxes and production terms (i.e., the moments on the collisional operator) are obtained by means of the maximum entropy principle of extended thermodynamics, including the scattering of electrons with phonons, impurities and surface roughness scattering. Numerical results are shown for a SiNW transistor. Full article
(This article belongs to the Special Issue Maximum Entropy Principle and Semiconductors)
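As a schematic of the moment construction (generic extended-thermodynamics notation assumed here, not the paper's exact definitions), for each sub-band ν one takes moments of the distribution function f_ν such as
\[
  n_\nu = \int f_\nu\,\mathrm{d}k,\qquad
  n_\nu V_\nu = \int v(k)\,f_\nu\,\mathrm{d}k,\qquad
  n_\nu W_\nu = \int \varepsilon_\nu(k)\,f_\nu\,\mathrm{d}k,
\]
and the higher-order fluxes and production terms are then evaluated with the distribution that maximizes the entropy subject to these moments, schematically f_ν^MEP ∝ exp(−λ_ν − λ_ν^W ε_ν(k) − λ_ν^V v(k)), the Lagrange multipliers being fixed by the constraints.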
Figures:
Figure 1: Schematic view of a SiNW. Electron transport is assumed to be one-dimensional in the x-direction.
Figure 2: SiNW band structure.
Figure 3: Cross-sections of a gate-all-around SiNW transistor.
Figure 4: Charge density and total potential along the cross-section at x = 48 nm and z = 0 nm, under a 1 V gate bias.
Figure 5: Linear density for the A and B valleys versus the simulation time.
Figure 6: The mean velocity (40) versus the simulation time, obtained with and without the surface roughness scattering mechanism.
Figure 7: The mean energy (41) versus the simulation time, obtained with and without the surface roughness scattering mechanism.
Figure 8: The mean energy flux (42) versus the simulation time, obtained with and without the surface roughness scattering mechanism.
2100 KiB  
Article
Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory
by Joshua Rodewald, John Colombi, Kyle Oyama and Alan Johnson
Entropy 2016, 18(10), 367; https://doi.org/10.3390/e18100367 - 18 Oct 2016
Cited by 6 | Viewed by 5540
Abstract
Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Fully understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the dynamics of that structure. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment. Full article
(This article belongs to the Special Issue Transfer Entropy II)
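The core computation behind this methodology, the transfer entropy between two nodes' production histories, can be sketched in a few lines. The following is an illustrative histogram-based estimator written for this listing rather than the authors' code; the series, the bin count, and the single-step lag are assumptions. It uses the standard definition T(X→Y) = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ].

# Illustrative sketch (not the paper's code): histogram-based transfer entropy
# between two discretized time series, e.g. simulated production histories.
import numpy as np
from collections import Counter

def discretize(series, n_bins=4):
    # Map a real-valued series onto integer symbols by equal-width binning.
    edges = np.linspace(series.min(), series.max(), n_bins + 1)[1:-1]
    return np.digitize(series, edges)

def transfer_entropy(x, y, n_bins=4):
    # Estimate T(X -> Y) in bits from two equally long 1-D arrays.
    xs, ys = discretize(np.asarray(x), n_bins), discretize(np.asarray(y), n_bins)
    triples = list(zip(ys[1:], ys[:-1], xs[:-1]))        # (y_{t+1}, y_t, x_t)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)     # (y_t, x_t)
    c_yy = Counter((y1, yt) for y1, yt, _ in triples)     # (y_{t+1}, y_t)
    c_y = Counter(yt for _, yt, _ in triples)             # y_t
    te = 0.0
    for (y1, yt, xt), c in c_xyz.items():
        p_future_given_both = c / c_yx[(yt, xt)]          # p(y_{t+1} | y_t, x_t)
        p_future_given_past = c_yy[(y1, yt)] / c_y[yt]    # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(p_future_given_both / p_future_given_past)
    return te

# Hypothetical check: node B copies node A with a one-step delay, so the
# estimated T(A -> B) should clearly exceed T(B -> A).
rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = np.roll(a, 1) + 0.1 * rng.normal(size=500)
print(transfer_entropy(a, b), transfer_entropy(b, a))

A link would typically be declared significant only when its estimate exceeds a threshold obtained, for instance, from surrogate (shuffled) data.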
Figures:
Figure 1: Conceptual static supply network structure.
Figure 2: Significant node links for static supply network weighted by transfer entropy values.
Figure 3: Conceptual dynamic supply network structure.
Figure 4: Conceptual dynamic supply network nodes' simulated production histories.
Figure 5: Significant node links for dynamic supply network weighted by transfer entropy values.
Figure 6: Local transfer entropy moving averages (15 time units) for significant links in dynamic supply network.
Figure 7: Real-world supply network nodes' production history data.
Figure 8: Significant node links for real-world supply network weighted by transfer entropy values.
Figure 9: Local transfer entropy moving averages (four time units) for all potential node E suppliers in real-world supply network.
4742 KiB  
Article
Intelligent Security IT System for Detecting Intruders Based on Received Signal Strength Indicators
by Yunsick Sung
Entropy 2016, 18(10), 366; https://doi.org/10.3390/e18100366 - 16 Oct 2016
Cited by 5 | Viewed by 5946
Abstract
Given that entropy-based IT technology has been applied in homes, office buildings and elsewhere for IT security systems, diverse kinds of intelligent services are currently provided. In particular, IT security systems have become more robust and varied. However, access control systems still depend on tags held by building entrants. Since tags can be obtained by intruders, an approach that counters the disadvantages of tags is required. For example, it is possible to track the movement of tags in intelligent buildings in order to detect intruders, so each tag owner can be judged by analyzing the movements of their tags. This paper proposes a security approach based on the received signal strength indicators (RSSIs) of beacon-based tags to detect intruders. The normal RSSI patterns of moving entrants are obtained and analyzed. Intruders can be detected when the measured RSSIs deviate from these normal RSSI patterns. In the experiments, one normal and one abnormal scenario are defined for collecting the RSSIs of a Bluetooth-based beacon in order to validate the proposed method. When the RSSIs of both scenarios are compared to the pre-collected RSSIs, the RSSIs of the abnormal scenario deviate about 61% more than those of the normal scenario. Therefore, intruders in buildings can be detected by considering RSSI differences. Full article
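As a minimal illustration of the comparison step described above, written for this listing rather than taken from the paper (the normalization, the moving-average window, and the alarm threshold are assumptions), a newly measured RSSI trace can be normalized, smoothed and scored against the learned normal pattern:

# Illustrative sketch only: score a measured RSSI trace against a learned
# normal pattern; window size and threshold below are assumed values.
import numpy as np

def normalize(rssi):
    # Scale a trace to [0, 1] so traces measured on different days are comparable.
    rssi = np.asarray(rssi, dtype=float)
    return (rssi - rssi.min()) / (rssi.max() - rssi.min() + 1e-12)

def smooth(rssi, window=5):
    # Moving-average filter to suppress measurement noise.
    return np.convolve(rssi, np.ones(window) / window, mode="valid")

def deviation(measured, normal_pattern):
    # Mean absolute difference between the processed trace and the learned pattern.
    m, p = smooth(normalize(measured)), smooth(normalize(normal_pattern))
    k = min(len(m), len(p))
    return float(np.mean(np.abs(m[:k] - p[:k])))

def is_intruder(measured, normal_pattern, threshold=0.2):
    # Flag the entrant as abnormal when the trace deviates too much from normal.
    return deviation(measured, normal_pattern) > threshold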
Figures:
Figure 1: Proposed received signal strength indicator (RSSI) process phases.
Figure 2: Learning and detection phases.
Figure 3: Molding stage.
Figure 4: The concepts of the four steps in the molding stage: (a) setting the boundary step; (b) selecting area step; (c) creating a mold step; (d) pruning the mold step.
Figure 5: Experimental office environment.
Figure 6: Measured RSSIs for learning. (a) The first subject's RSSIs; (b) the second subject's RSSIs.
Figure 7: Normalization results. (a) The measured RSSIs on the 1st day for a subject; (b) the measured RSSIs on the 2nd day for a subject.
Figure 8: Ordered counts of normalized tuples.
Figure 9: The results of the four steps in the molding stage: (a) boundary-setting step; (b) cell selection step; (c) mold-creation step; (d) mold-pruning step.
Figure 10: One normal scenario used for validation, in which Student A acts normally.
Figure 11: An abnormal scenario for validation, in which Student A moves differently.
Figure 12: The RSSI of a beacon when Student A acts normally.
Figure 13: The RSSI of a beacon when Student A moves differently.
Figure 14: Normalized and filtered normal RSSIs.
Figure 15: Normalized and filtered abnormal RSSIs.
Figure 16: Mold results: (a) normalized and filtered normal RSSIs; (b) normalized and filtered abnormal RSSIs.
Figure 17: Normalized and filtered RSSIs in normal scenarios compared to the RSSIs in the learning phase.
Figure 18: Normalized and filtered abnormal RSSIs in abnormal scenarios compared to the RSSIs in the learning phase.
984 KiB  
Article
Boltzmann Sampling by Degenerate Optical Parametric Oscillator Network for Structure-Based Virtual Screening
by Hiromasa Sakaguchi, Koji Ogata, Tetsu Isomura, Shoko Utsunomiya, Yoshihisa Yamamoto and Kazuyuki Aihara
Entropy 2016, 18(10), 365; https://doi.org/10.3390/e18100365 - 13 Oct 2016
Cited by 32 | Viewed by 8508
Abstract
A structure-based lead optimization procedure is an essential step in finding appropriate ligand molecules that bind to a target protein structure in order to identify drug candidates. This procedure takes a known structure of a protein-ligand complex as input, and compounds structurally similar to the query ligand are designed in consideration of all possible combinations of atomic species. This task is, however, computationally hard, since such combinatorial optimization problems belong to the nondeterministic polynomial-time hard (NP-hard) class. In this paper, we propose structure-based lead generation and optimization procedures by a degenerate optical parametric oscillator (DOPO) network. Results of numerical simulation demonstrate that the DOPO network efficiently identifies a set of appropriate ligand molecules according to the Boltzmann sampling law. Full article
(This article belongs to the Collection Quantum Information)
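For readers without access to the DOPO hardware, the Boltzmann statistics that the network is benchmarked against can be generated classically. The sketch below is illustrative only: it is a standard Metropolis sampler for an Ising Hamiltonian H(s) = −Σ_{i<j} J_ij s_i s_j, not a simulation of the coherent Ising machine, and the couplings J and temperature T are toy values.

# Illustrative classical stand-in (not the DOPO/CIM itself): Metropolis sampling
# converges to the Boltzmann distribution p(s) ∝ exp(-H(s)/T) over spin states.
import numpy as np

def ising_energy(spins, J):
    # H(s) = -sum_{i<j} J_ij s_i s_j, with J symmetric and zero on the diagonal.
    return -0.5 * spins @ J @ spins

def metropolis_energies(J, T=1.0, n_sweeps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=n)
    energies = []
    for _ in range(n_sweeps):
        for i in range(n):
            d_e = 2.0 * spins[i] * (J[i] @ spins)   # energy change if spin i flips
            if d_e <= 0 or rng.random() < np.exp(-d_e / T):
                spins[i] *= -1
        energies.append(ising_energy(spins, J))
    return np.array(energies)

# Toy problem: random symmetric couplings; the energy histogram collected over
# many sweeps can be compared against the exp(-E/T) weighting of the states.
rng = np.random.default_rng(1)
J = rng.normal(size=(12, 12)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
hist, edges = np.histogram(metropolis_energies(J), bins=20, density=True)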
Figures:
Figure 1: Coherent Ising machine (CIM) based on a time-division-multiplexed (TDM) pulsed degenerate optical parametric oscillator (DOPO) with measurement and feedback control. Both local oscillator (LO) pulses and feedback (FB) pulses are taken from the pump laser. Parametric gain is provided by a periodically-poled LiNbO3 (PPLN) waveguide device, and an optical ring cavity is formed by a fiber of ∼1 km length.
Figure 2: A 6-membered ring placed near the Ala-Asp-Ala tripeptide. (a) Initial structure; (b) benzene with the lowest energy; (c) pyridine with the 2nd lowest energy.
Figure 3: The success probability of satisfying the constraints for 1000 identical trials. The parameter p is the final pump rate for gradual pumping. The parameters of the Hamiltonian (Equation (10)) are set to A = 1 and C = 0 for all the results.
Figure 4: Histograms of the interaction energies of the final states of the CIM over 1000 runs. The blue bars are the simulation results and the red dots the estimated Boltzmann distribution. The green line at the bottom of each figure shows the number of states for the given Hamiltonian. All histograms are normalized to 1.
Figure 5: The histogram of finding the degenerate states on the same energy surface over 1000 runs. (a) E = [−3.854, −3.768); (b) E = [−4.626, −4.540).
833 KiB  
Article
Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
by Ryosuke Takahira, Kumiko Tanaka-Ishii and Łukasz Dębowski
Entropy 2016, 18(10), 364; https://doi.org/10.3390/e18100364 - 12 Oct 2016
Cited by 31 | Viewed by 9867
Abstract
One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis. Full article
(This article belongs to the Section Complexity)
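The extrapolation step can be sketched as a small curve fit. In the sketch below, the three-parameter form f(n) = h·exp(A·n^(β−1)) is an assumption consistent with the linear behaviour of ln r(n) against n^(β−1) reported in the figures, and the compression rates are made-up numbers standing in for the measured per-character encoding rates; the paper's exact ansatz and data are not reproduced here.

# Illustrative sketch (not the authors' code): extrapolate per-character
# compression rates r(n) to n -> infinity with a stretched-exponential ansatz.
# The form f(n) = h * exp(A * n**(beta - 1)) is an assumption for illustration.
import numpy as np
from scipy.optimize import curve_fit

def ansatz(n, h, A, beta):
    return h * np.exp(A * n ** (beta - 1.0))

# Hypothetical encoding rates (bits/character) measured on prefixes of length n.
n = np.array([1e4, 1e5, 1e6, 1e7, 1e8])
r = np.array([3.9, 3.1, 2.5, 2.1, 1.9])

params, _ = curve_fit(ansatz, n, r, p0=[1.0, 3.0, 0.9],
                      bounds=([0.0, 0.0, 0.01], [10.0, 20.0, 0.999]))
h, A, beta = params
print(f"extrapolated entropy rate h = {h:.2f} bits/char, exponent beta = {beta:.2f}")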
Figures:
Figure 1: Compression results for (a) a Bernoulli process (p = 0.5) and (b) the Wall Street Journal for Lempel-Ziv (LZ), PPM (Prediction by Partial Match), and Sequitur.
Figure 2: Encoding rates for the Wall Street Journal corpus (in English). Panel (a) is for the original data, whereas (b) is the average of the data 10-fold shuffled by documents. To these results we fit functions f1(n) and f3(n).
Figure 3: The values of error and h for all natural language data sets in Table 1 and the three ansatz functions f1(n), f2(n), and f3(n). Each data point corresponds to a distinct corpus or text (black: English; red: Chinese; blue: other languages). Squares are the fitting results for f1(n), triangles for f2(n), and circles for f3(n). The means and standard deviations of h and error are indicated next to the ovals, which show the range of one standard deviation (dotted for f1(n), dashed for f2(n), solid for f3(n)).
Figure 4: The values of β and h for all natural language data sets in Table 1 and the ansatz functions f1(n), f2(n), and f3(n), with the same plotting conventions as in Figure 3.
Figure 5: All large-scale natural language data (first block of Table 1) from a linear perspective for function f3(n). The axes are Y = ln r(n) and X = n^(β−1), where β = 0.884. Black points are English, red points are Chinese, and blue points are other languages. The two linear fit lines are for English (lower) and Chinese (upper).
Figure 6: Data from the third block of Table 1 from a linear perspective for function f3(n). The axes are X = n^(β−1) and Y = ln r(n), where β = 0.884 as in Figure 5. Black points are the English text, magenta points are its randomized versions, and blue points are the Bernoulli and Zipf processes.
Figure 7: Stability of the entropy rate estimates obtained with the ansatz function f3(n).
330 KiB  
Article
The Shell Collapsar—A Possible Alternative to Black Holes
by Trevor W. Marshall
Entropy 2016, 18(10), 363; https://doi.org/10.3390/e18100363 - 12 Oct 2016
Viewed by 5115
Abstract
This article argues that a consistent description is possible for gravitationally collapsed bodies, in which collapse stops before the object reaches its gravitational radius, the density reaching a maximum close to the surface and then decreasing towards the centre. The way towards such a description was indicated in the classic Oppenheimer-Snyder (OS) 1939 analysis of a dust star. The title of that article implied support for a black-hole solution, but the present article shows that the final OS density distribution accords with gravastar and other shell models. The parallel Oppenheimer-Volkoff (OV) study of 1939 used the equation of state for a neutron gas, but could consider only stationary solutions of the field equations. Recently we found that the OV equation of state permits solutions with minimal rather than maximal central density, and here we find a similar topology for the OS dust collapsar: a uniform dust-ball which starts with a large radius, and correspondingly small density, and collapses to a shell at the gravitational radius with density decreasing monotonically towards the centre. Though no longer considered central in black-hole theory, the OS dust model gave the first exact, time-dependent solution of the field equations. Regarded as a limiting case of OV, it indicates the possibility of neutron stars of unlimited mass with a similar shell topology. Progress in observational astronomy will distinguish this class of collapsars from black holes. Full article
Figures:
Figure 1: The trajectory of a particle projected inwards from the surface of an Oppenheimer-Snyder (OS) collapsar, with initial values r1/(2m) = 9.538, α = 2. The particle passes through r = 0 and then turns back towards the centre at R = 0.918, which corresponds to a value of r just inside the "horizon".
840 KiB  
Article
Measures of Difference and Significance in the Era of Computer Simulations, Meta-Analysis, and Big Data
by Reinout Heijungs, Patrik J.G. Henriksson and Jeroen B. Guinée
Entropy 2016, 18(10), 361; https://doi.org/10.3390/e18100361 - 9 Oct 2016
Cited by 18 | Viewed by 5965
Abstract
In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used to not only estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with the square root of the sample size, which provides a stimulus to probe large samples. In simulation models, the situation is entirely different. When probability distribution functions for model features are specified, the probability distribution function of the model output can be approached using numerical techniques, such as bootstrapping or Monte Carlo sampling. Given the computational power of most PCs today, the sample size can be increased almost without bounds. The result is that standard errors of parameters are vanishingly small, and that almost all significance tests will lead to a rejected null hypothesis. Clearly, another approach to statistical significance is needed. This paper analyzes the situation and connects the discussion to other domains in which the null hypothesis significance test (NHST) paradigm is challenged. In particular, the notions of effect size and Cohen’s d provide promising alternatives for the establishment of a new indicator of statistical significance. This indicator attempts to cover significance (precision) and effect size (relevance) in one measure. Although in the end more fundamental changes are called for, our approach has the attractiveness of requiring only a minimal change to the practice of statistics. The analysis is not only relevant for artificial samples, but also for present-day huge samples, associated with the availability of big data. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
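The central observation, that the t-statistic grows with the sample size while the effect size does not, is easy to reproduce numerically. The sketch below uses made-up data: two normal groups whose means differ by a small standardized effect of 0.2, with the two-sample p-value reported next to Cohen's d.

# Illustrative sketch: with ever larger (e.g. Monte Carlo) samples, the p-value of
# the t-test collapses while Cohen's d keeps reporting the same small effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (20, 2_000, 200_000):                # sample size per group
    a = rng.normal(5.0, 1.0, size=n)          # group A
    b = rng.normal(5.2, 1.0, size=n)          # group B: true standardized effect 0.2
    t_stat, p_value = stats.ttest_ind(a, b)
    s_pooled = np.sqrt(((n - 1) * a.var(ddof=1) + (n - 1) * b.var(ddof=1)) / (2 * n - 2))
    d = (b.mean() - a.mean()) / s_pooled      # Cohen's d is independent of n
    print(f"n = {n:>7}   p = {p_value:.3g}   Cohen's d = {d:.2f}")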
Graphical abstract and figures:
Figure 1: The absolute value of the T-statistic for different values of the sample size n when Ȳ_A = 5.0, Ȳ_B = 6.0, S_P = 1 (upper solid line) and when Ȳ_A = 5.0, Ȳ_B = 5.2, S_P = 1 (lower solid line), together with the upper critical value of the t-distribution (dashed line) at α = 0.05. The null hypothesis of equality of population means is rejected at 0.05 for n ≥ 9 when ΔȲ = 1.0, but when ΔȲ = 0.2, we need to push further and use n ≥ 194 to do the job.
Figure 2: Probability density function for Y_A ~ N(5.0, 1) (solid line) and for Y_B ~ N(5.2, 1) and Y_B ~ N(6.0, 1) (two dashed lines), corresponding to standardized effect sizes δ = 0.2 (small) and 1.0 (large).
Figure 3: Probability density functions of the carbon footprint of a Vietnamese aquaculture system of Pangasius catfish, obtained from two artificial samples: large-scale (solid line) and small-scale (dashed line).
3548 KiB  
Article
Metric for Estimating Congruity between Quantum Images
by Abdullah M. Iliyasu, Fei Yan and Kaoru Hirota
Entropy 2016, 18(10), 360; https://doi.org/10.3390/e18100360 - 9 Oct 2016
Cited by 23 | Viewed by 6002
Abstract
An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the "congruity" between two or more quantum images. The often confounding contrariety that distinguishes classical from quantum information processing makes the widely accepted peak signal-to-noise ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent to use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel-difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with the digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP. Full article
(This article belongs to the Collection Quantum Information)
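For reference, the classical baseline against which the QIFM is positioned can be computed in a few lines. The sketch below implements only the standard PSNR on a toy image pair; the QIFM itself involves quantum sub-circuits and non-destructive measurements and is not reproduced here.

# Illustrative sketch of the classical baseline only: peak signal-to-noise ratio
# (PSNR) between two greyscale images, the measure the QIFM is meant to replace.
import numpy as np

def psnr(reference, test, max_value=255.0):
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)    # mean squared pixel difference
    if mse == 0:
        return float("inf")                   # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: a random 8-bit image and a noisy copy of it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0.0, 5.0, size=img.shape), 0, 255)
print(f"PSNR = {psnr(img, noisy):.1f} dB")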
Figures:
Figure 1: Circuit structure for comparing similarity between two FRQI quantum images (figure adapted from [15,23]).
Figure 2: Generalised circuit structure for parallel comparison of similarity between FRQI quantum images (figure adapted from [15,23]).
Figure 3: (A) Notation for a single qubit projective measurement operation and (B) description of the ancilla-driven measurement operation (figures and explanations in the text are adapted from [2,14,24]).
Figure 4: Layout of proposed QIFM framework to assess fidelity between two (or more) quantum images.
Figure 5: Flowchart for executing the proposed QIFM framework to compare two (or more) quantum images.
Figure 6: QIFM sub-circuit to execute the Binary check operation (BCO) of the QIFM image metric.
Figure 7: QIFM sub-circuit to execute the Bit error rate operation (BO) of the QIFM image metric.
Figure 8: Dataset of images paired for the FPS analysis. (A) Lena; (B) Inverted Lena; (C) Blonde Lady; (D) Peppers; (E) Scarfed Lady; (F) Baboon; (G) Brunette Lady; (H) Cameraman; (I) Man; (J) Couple; (K) Aeroplane; (L) House; (M) Pentagon; (N) Fingerprint; (O) Bridge; (P) Trees.
Figure 9: Comparison between PSNR and QIFM for watermarked images. (A) PSAU watermark logo; (B) original Lena image; (C) original Blonde Lady image; (D) watermarked version of Lena image; (E) watermarked version of Blonde Lady image.
1090 KiB  
Article
Tolerance Redistributing of the Reassembly Dimensional Chain on Measure of Uncertainty
by Conghu Liu
Entropy 2016, 18(10), 348; https://doi.org/10.3390/e18100348 - 9 Oct 2016
Cited by 15 | Viewed by 4619
Abstract
How to use the limited precision of remanufactured parts to assemble higher-quality remanufactured products is a challenge for remanufacturing engineering under uncertainty. On the basis of an analysis of the uncertainty of remanufactured parts, this paper takes tolerance redistribution of the reassembly (remanufactured assembly) dimensional chain as its research object. An entropy model to measure the uncertainty of the assembly dimension is built, and we quantify the uncertainty gap between reassembly and assembly. Then, in order to ensure that the uncertainty of reassembly is not lower than that of assembly, a tolerance redistribution optimization model of the reassembly dimensional chain is proposed, based on a tolerance grading allocation method. Finally, this paper takes the remanufactured gearbox assembly dimension chain as an example. The redistribution optimization model saves 19.11% of the cost while preserving the assembly precision of remanufactured products. It provides new technical and theoretical support to expand the utilization rate of remanufactured parts and improve reassembly precision. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
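One simple way to make such an entropy measure concrete (an illustration chosen for this listing, not necessarily the paper's exact model) is to treat the closing dimension of a linear chain as approximately normal, with variance equal to the sum of the component variances, and to use its differential entropy:
\[
  \sigma_0^{2} \;=\; \sum_{i=1}^{m}\sigma_i^{2},
  \qquad
  H \;=\; \tfrac{1}{2}\,\ln\!\left(2\pi e\,\sigma_0^{2}\right),
\]
so that widening any component tolerance raises the entropy of the closing dimension, giving a single number with which the uncertainty of a reassembled chain can be compared to that of a new assembly.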
Figures:
Figure 1: The flow chart of the tolerance redistribution optimization model.
Figure 2: The remanufactured gearbox.
Figure 3: The remanufactured gearbox dimension chain graph.
Figure 4: Solver status window.
Figure 5: Pass rate in 2014 and 2015.
2043 KiB  
Article
Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties
by Eduardo Jonathan Torres-Herrera, Jonathan Karp, Marco Távora and Lea F. Santos
Entropy 2016, 18(10), 359; https://doi.org/10.3390/e18100359 - 8 Oct 2016
Cited by 38 | Viewed by 6761
Abstract
We study the static and dynamical properties of isolated many-body quantum systems and compare them with the results for full random matrices. In doing so, we link concepts from quantum information theory with those from quantum chaos. In particular, we relate the von Neumann entanglement entropy with the Shannon information entropy and discuss their relevance for the analysis of the degree of complexity of the eigenstates, the behavior of the system at different time scales and the conditions for thermalization. A main advantage of full random matrices is that they enable the derivation of analytical expressions that agree extremely well with the numerics and provide bounds for realistic many-body quantum systems. Full article
(This article belongs to the Special Issue Quantum Information 2016)
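The two entropies being compared can be evaluated for a full random matrix in a few lines of code. The sketch below is illustrative, with D_A = D_B = 32 chosen here for speed rather than matching the dimensions used in the paper: it draws a GOE matrix, takes an eigenstate from the middle of the spectrum, and compares its Shannon entropy and entanglement entropy with the random-state reference values ln(0.48 D) and ln(D_A) − 1/2.

# Illustrative numerical sketch (assumed sizes): entropies of GOE eigenstates.
import numpy as np

D_A, D_B = 32, 32          # bipartition chosen for illustration
D = D_A * D_B

rng = np.random.default_rng(0)
M = rng.normal(size=(D, D))
H = (M + M.T) / 2          # GOE full random matrix
_, vecs = np.linalg.eigh(H)

def shannon_entropy(c):
    # S_Sh = -sum_k |c_k|^2 ln |c_k|^2 over the components of an eigenvector.
    p = np.abs(c) ** 2
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entanglement_entropy(c):
    # von Neumann entropy of subsystem A via the Schmidt (SVD) decomposition.
    s = np.linalg.svd(c.reshape(D_A, D_B), compute_uv=False)
    lam = s ** 2
    lam = lam[lam > 1e-15]
    return -np.sum(lam * np.log(lam))

state = vecs[:, D // 2]    # an eigenstate from the middle of the spectrum
print(shannon_entropy(state), np.log(0.48 * D))         # GOE value ~ ln(0.48 D)
print(entanglement_entropy(state), np.log(D_A) - 0.5)   # random-state value ~ ln D_A - 1/2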
Figures:
Figure 1: We consider a Gaussian orthogonal ensemble (GOE) full random matrix. Top panels: level spacing distribution (a) and level number variance (b). Bottom panels: Shannon information entropy (c) and von Neumann entanglement entropy (d) for all eigenstates. Horizontal solid lines give ln(0.48 D) in (c) and ln(0.48 D_A) in (d). The horizontal dashed line in (d) corresponds to S_vN^rand = ln D_A − 1/2. The random numbers of the full random matrix are rescaled so that E ∼ 4. We choose D = 16!/(8!)^2 = 12870 and D_A = 2^8 = 256, in analogy with the matrix sizes used in Section 3.
Figure 2: We consider the initial state (10) evolving under a GOE full random matrix. Top panels: local density of states (LDOS) (a) and the survival probability (b). The symbols in (b) refer to Equation (14), and the dashed horizontal line is W̄_ini^GOE ∼ 3/D from Equation (26). Bottom panels: evolution of the Shannon entropy (c) and von Neumann entanglement entropy (d). Dashed lines correspond to the expression in Equation (19) with N_pc = ⟨exp[S_Sh(t)]⟩ in (c) and N_pc = ⟨exp[S_vN(t)]⟩ in (d); the dashed lines are indistinguishable from the numerical results (solid lines). Dot-dashed lines are Equation (20). Circles represent linear fits, S_Sh,vN = a_Sh,vN + b_Sh,vN t, with a_Sh = −1.65, a_vN = −0.87, b_Sh = 16.6, b_vN = 9.97. The random numbers of the full random matrix are rescaled so that E ∼ 4; σ_ini = 2. We choose D = 12870 and D_A = 256 as in Figure 1.
Figure 3: Top panels: density of states for the XXZ (a), defect (b) and next-nearest-neighbor (NNN) (c) models. Middle and bottom panels: level spacing distribution (d–f) and level number variance (g–i), respectively, for the same models in (a–c). Open boundaries, ε1 = 0.1, d = 0.9, Δ = 0.48, λ = 1, L = 16, S^z = 0.
Figure 4: Top panels: Shannon entropy in the site basis (black symbols) and in the mean-field basis (red symbols) for the XXZ (a), defect (b) and NNN (c) models. Bottom panels: normalized entanglement entropy (black symbols) and normalized Shannon entropy in the mean-field basis (red symbols) for the same models as in (a–c). Parameters for all panels are the same as in Figure 3.
Figure 5: Néel state under the XXZ model (a,b) and defect model (c,d). (a,c) Numerical results for the LDOS (shaded area) and Gaussian envelope (solid line) with σ_ini from Equation (34) and E_ini from Equation (35). (b,d) Numerical results for the survival probability (solid line), the analytical expression W_ini(t) = exp(−σ_ini^2 t^2) (dashed line) with σ_ini from Equation (34), and the saturation value (horizontal line) given by 1/PR_ini = Σ_α |C_ini^α|^4 [Equation (25)]. Parameters as in Figure 3.
Full article ">Figure 6
<p>Evolution of the Shannon entropy (top panels) and entanglement entropy (bottom panels) for the Néel state under the XXZ (<b>a</b>,<b>d</b>), defect (<b>b</b>,<b>e</b>) and NNN (<b>c</b>,<b>f</b>) models. Numerical results (solid lines) and fitted linear growth <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mrow> <mi>S</mi> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mi>N</mi> </mrow> </msub> <mo>=</mo> <msub> <mi>a</mi> <mrow> <mi>S</mi> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mi>N</mi> </mrow> </msub> <mo>+</mo> <msub> <mi>b</mi> <mrow> <mi>S</mi> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mi>N</mi> </mrow> </msub> <mi>t</mi> </mrow> </semantics> </math> (green dashed lines) with <math display="inline"> <semantics> <mrow> <msub> <mi>a</mi> <mrow> <mi>S</mi> <mi>h</mi> </mrow> </msub> <mo>=</mo> <mo>-</mo> <mn>1.04</mn> <mo>,</mo> <mo>-</mo> <mn>1.04</mn> <mo>,</mo> <mo>-</mo> <mn>1.31</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>b</mi> <mrow> <mi>S</mi> <mi>h</mi> </mrow> </msub> <mo>=</mo> <mn>8.67</mn> <mo>,</mo> <mn>8.67</mn> <mo>,</mo> <mn>9.47</mn> </mrow> </semantics> </math> from (<b>a</b>–<b>c</b>) and <math display="inline"> <semantics> <mrow> <msub> <mi>a</mi> <mrow> <mi>v</mi> <mi>N</mi> </mrow> </msub> <mo>=</mo> <mn>0.17</mn> <mo>,</mo> <mn>0.21</mn> <mo>,</mo> <mo>-</mo> <mn>0.06</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>b</mi> <mrow> <mi>v</mi> <mi>N</mi> </mrow> </msub> <mo>=</mo> <mn>0.43</mn> <mo>,</mo> <mn>0.30</mn> <mo>,</mo> <mn>0.80</mn> </mrow> </semantics> </math> from (<b>d</b>–<b>f</b>). Horizontal dashed lines indicate <math display="inline"> <semantics> <mrow> <msubsup> <mi>S</mi> <mrow> <mi>S</mi> <mi>h</mi> </mrow> <mrow> <mi>G</mi> <mi>O</mi> <mi>E</mi> </mrow> </msubsup> <mo>∼</mo> <mi>ln</mi> <mrow> <mo>(</mo> <mn>0.48</mn> <mi mathvariant="script">D</mi> <mo>)</mo> </mrow> </mrow> </semantics> </math> (top panels) and <math display="inline"> <semantics> <mrow> <msubsup> <mi>S</mi> <mrow> <mi>v</mi> <mi>N</mi> </mrow> <mrow> <mi>G</mi> <mi>O</mi> <mi>E</mi> </mrow> </msubsup> <mo>∼</mo> <mi>ln</mi> <mrow> <mo>(</mo> <mn>0.48</mn> <msub> <mi mathvariant="script">D</mi> <mi>A</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics> </math> (bottom panels). Parameters as in <a href="#entropy-18-00359-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 7
<p>Survival probability (<b>a</b>) and evolution of the Shannon entropy (<b>b</b>) for the Néel state under the NNN model with the parameters of <a href="#entropy-18-00359-f003" class="html-fig">Figure 3</a>, but <math display="inline"> <semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>22</mn> </mrow> </semantics> </math>. In (<b>a</b>), the numerical result is given by the solid line and the power law decay <math display="inline"> <semantics> <mrow> <mo>∝</mo> <msup> <mi>t</mi> <mrow> <mo>-</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics> </math> by the dashed line. In (<b>b</b>), the numerical result is given by the solid line; Equation (<a href="#FD20-entropy-18-00359" class="html-disp-formula">20</a>) by a dot-dashed line; and the linear increase, <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mrow> <mi>S</mi> <mi>h</mi> </mrow> </msub> <mo>∝</mo> <mi>t</mi> </mrow> </semantics> </math>, by the symbols.</p>
Full article ">
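">
The GOE reference values quoted in these captions, such as the mean Shannon entropy of eigenstates approaching $\ln(0.48\,\mathcal{D})$, can be checked numerically. Below is a minimal sketch (not code from the paper), assuming NumPy and a much smaller, purely illustrative matrix dimension than the $\mathcal{D}=12870$ used in the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000  # illustrative dimension; the figures above use D = 12870

# Sample a GOE matrix by symmetrising a real Gaussian matrix
A = rng.normal(size=(D, D))
H = (A + A.T) / 2.0

# Eigenvalues and eigenvectors (columns of V) in the original basis
E, V = np.linalg.eigh(H)

# Shannon information entropy of each eigenstate, S_Sh = -sum_k |c_k|^2 ln|c_k|^2,
# whose GOE average should be close to ln(0.48 * D)
p = V**2
S_sh = -(p * np.log(p + 1e-300)).sum(axis=0)  # tiny offset guards against log(0)
print(S_sh.mean(), np.log(0.48 * D))
```

For moderate dimensions the two printed numbers should already agree closely, which is the benchmark the horizontal lines in the panels above represent.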
1230 KiB  
Article
Ordering Quantiles through Confidence Statements
by Cassio P. De Campos, Carlos A. De B. Pereira, Paola M. V. Rancoita and Adriano Polpo
Entropy 2016, 18(10), 357; https://doi.org/10.3390/e18100357 - 8 Oct 2016
Cited by 1 | Viewed by 4538
Abstract
Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a smaller number of genes, so that other, more sophisticated experiments are applied only to the genes identified as important. A nonparametric method called Quor is designed to provide a confidence value for the order of arbitrary quantiles of different populations using independent samples. This confidence may provide insights about possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets are performed, and they show the advantages and disadvantages of the method. Quor makes no assumptions other than the independence of samples; thus, it might be a better option when the assumptions of other methods cannot be asserted. The software is publicly available on CRAN. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
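As a rough illustration of the kind of order-statistics reasoning the abstract describes (and not the actual Quor procedure, which is available on CRAN), exact binomial probabilities can be attached to the ordering of two quantiles from independent samples; the function name and the toy data below are hypothetical:

```python
import numpy as np
from scipy.stats import binom

def quantile_order_confidence(x, y, p=0.5):
    """Confidence that the p-quantile of the population behind x lies below that
    of the population behind y, built only from order statistics and exact
    binomial probabilities (independent samples, no distributional assumptions).
    Illustration only; taking the best data-chosen pair (k, m) is a heuristic."""
    x, y = np.sort(x), np.sort(y)
    nx, ny = len(x), len(y)
    best = 0.0
    for k in range(1, nx + 1):
        for m in range(1, ny + 1):
            if x[k - 1] < y[m - 1]:
                # P(q_x <= x_(k)) >= binom.cdf(k-1, nx, p)
                # P(q_y >= y_(m)) >= binom.sf(m-1, ny, p)
                conf = binom.cdf(k - 1, nx, p) * binom.sf(m - 1, ny, p)
                best = max(best, conf)
    return best

rng = np.random.default_rng(1)
print(quantile_order_confidence(rng.normal(0, 2, 40), rng.normal(1, 2, 40)))
```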
Show Figures

Figure 1

AUC comparison using mixtures of Gaussians: area under the ROC curve for different methods. Samples of the two groups come from mixtures of two Gaussians (all standard deviations are $\sigma=2$). The mixture of the first group has Gaussians with means $\mu_{1,1}=-\theta$ and $\mu_{1,2}=\theta$ and mixture weights 2/3 and 1/3. The mixture of the second group has weights 1/3 and 2/3, with Gaussians with means $\mu_{2,1}=-2\theta$ and $\mu_{2,2}=\theta$. The simulation is repeated 100 times with 2000 genes each, with the number of samples as indicated in the subfigures. The control scenario (no difference between the groups) is built with both groups generated from a Gaussian with $\mu=0$, $\sigma=2$.
Figure 2
AUC comparison using Gaussians: area under the ROC curve for different methods. Samples of the two groups come from Gaussians with standard deviation $\sigma=2$. The first group has a Gaussian with mean $\mu_1=-\theta$ and the second group has $\mu_2=\theta$. The simulation is repeated 100 times with 2000 genes each, with the number of samples as indicated in the subfigures. The control scenario (no difference between the groups) is built with both groups generated from a Gaussian with $\mu=0$, $\sigma=2$.
Figure 3
Ranking quality. Average zero-one loss of the univariate classifiers obtained with each of the selected variables according to the ranking method (we always select them according to such rank). The values are shown with respect to the target ranking; hence, zero means optimal and 0.15 means 15% worse than the target ranking, for instance. Values are means over 20 subsamples with half of the data for each gene and group. Bars show the standard deviation.
425 KiB  
Article
Entropy Cross-Efficiency Model for Decision Making Units with Interval Data
by Lupei Wang, Lei Li and Ningxi Hong
Entropy 2016, 18(10), 358; https://doi.org/10.3390/e18100358 - 1 Oct 2016
Cited by 19 | Viewed by 5498
Abstract
The cross-efficiency method, as a Data Envelopment Analysis (DEA) extension, calculates the cross efficiency of each decision making unit (DMU) using the weights of all decision making units (DMUs). The major advantage of the cross-efficiency method is that it can provide a complete ranking for all DMUs. In addition, the cross-efficiency method could eliminate unrealistic weight results. However, the existing cross-efficiency methods only evaluate the relative efficiencies of a set of DMUs with exact values of inputs and outputs. If the input or output data of DMUs are imprecise, such as the interval data, the existing methods fail to assess the efficiencies of these DMUs. To address this issue, we propose the introduction of Shannon entropy into the cross-efficiency method. In the proposed model, intervals of all cross-efficiency values are firstly obtained by the interval cross-efficiency method. Then, a distance entropy model is proposed to obtain the weights of interval efficiency. Finally, all alternatives are ranked by their relative Euclidean distance from the positive solution. Full article
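The weighting-and-ranking steps described above can be sketched generically: Shannon entropy of the normalized scores yields criterion weights, and alternatives are then ranked by their relative Euclidean distance from the positive ideal. The NumPy sketch below shows that generic scheme only; it is not the paper's exact distance entropy model, and the toy matrix of interval bounds is hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy weights for the columns (criteria) of a decision matrix X
    with positive entries: low-entropy (more diverse) columns get more weight."""
    P = X / X.sum(axis=0)                                   # column-wise proportions
    m = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)    # normalised entropies
    d = 1.0 - E                                             # degree of diversification
    return d / d.sum()

def rank_by_distance(X, w):
    """Rank alternatives by relative Euclidean closeness to the positive ideal."""
    V = X * w
    pos, neg = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - pos, axis=1)
    d_neg = np.linalg.norm(V - neg, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

# Toy example: 5 DMUs described by lower/upper bounds of interval cross-efficiencies.
X = np.array([[0.62, 0.80], [0.55, 0.91], [0.70, 0.75], [0.48, 0.88], [0.66, 0.84]])
w = entropy_weights(X)
order, score = rank_by_distance(X, w)
print(w, order)
```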
Show Figures

Figure 1

The Euclidean distances of 35 DMUs.
2218 KiB  
Article
Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks
by Khundrakpam Johnson Singh, Khelchandra Thongam and Tanmay De
Entropy 2016, 18(10), 350; https://doi.org/10.3390/e18100350 - 1 Oct 2016
Cited by 69 | Viewed by 13268
Abstract
A distributed denial-of-service (DDoS) attack is one of the major threats to web servers. The rapid increase of DDoS attacks on the Internet has clearly pointed out the limitations of current intrusion detection systems and intrusion prevention systems (IDS/IPS), mostly caused by application-layer DDoS attacks. Within this context, the objective of the paper is to detect a DDoS attack using a multilayer perceptron (MLP) classification algorithm with a genetic algorithm (GA) as the learning algorithm. In this work, we analyzed the standard EPA-HTTP (Environmental Protection Agency-Hypertext Transfer Protocol) dataset and selected the parameters that are used as input to the classifier model for differentiating the attack from the normal profile. The selected parameters are the HTTP GET request count, entropy, and variance for every connection. The proposed model can provide a better accuracy of 98.31%, sensitivity of 0.9962, and specificity of 0.0561 when compared to other traditional classification models. Full article
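The three per-connection features named in the abstract (HTTP GET count, entropy, and variance) can be extracted in a few lines. The abstract does not say exactly which distributions the entropy and variance are taken over, so the choices below (per-URL request frequencies and counts) and the log format are assumptions made only for illustration:

```python
import math
from collections import Counter
from statistics import pvariance

def connection_features(requests):
    """HTTP GET count, Shannon entropy of the per-URL request frequencies,
    and variance of the per-URL request counts for one connection/source.
    The feature definitions are assumed, not taken from the paper."""
    gets = [r["url"] for r in requests if r["method"] == "GET"]
    count = len(gets)
    if count == 0:
        return 0, 0.0, 0.0
    freq = Counter(gets)
    probs = [c / count for c in freq.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    variance = pvariance(list(freq.values()))
    return count, entropy, variance

# Hypothetical log records for one client
log = [{"method": "GET", "url": "/index.html"},
       {"method": "GET", "url": "/index.html"},
       {"method": "GET", "url": "/login"},
       {"method": "POST", "url": "/login"}]
print(connection_features(log))
```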
Show Figures

Figure 1

Attack scenario.
Figure 2
Structure of the multilayer perceptron (MLP) network.
Figure 3
Weight of the hidden neuron and output neuron.
Figure 4
Validation performance graph for training the dataset.
Figure 5
Comparison of receiver operating characteristic (ROC) curve of MLP-genetic algorithm (GA) with (a) radial basis function (RBF) network, (b) naive Bayes, (c) random forest, and (d) multilayer perceptron.
Figure 6
Central processing unit (CPU) resource utilization during the attack period.
Figure 7
Hypertext transfer protocol (HTTP) count for the incoming traffic.
Figure 8
Mean entropy per IP address.
Figure 9
Variance of the entropy per IP address.
Figure 10
Accuracy curve for the proposed method.
4063 KiB  
Article
Analysis of Entropy Generation in Mixed Convective Peristaltic Flow of Nanofluid
by Tasawar Hayat, Sadaf Nawaz, Ahmed Alsaedi and Maimona Rafiq
Entropy 2016, 18(10), 355; https://doi.org/10.3390/e18100355 - 30 Sep 2016
Cited by 18 | Viewed by 6209
Abstract
This article examines entropy generation in the peristaltic transport of a nanofluid in a channel with flexible walls. Single-walled carbon nanotubes (SWCNT) and multi-walled carbon nanotubes (MWCNT) with water as the base fluid are utilized in this study. Mixed convection is also considered in the present analysis, and the viscous dissipation effect is included. Moreover, slip conditions are imposed for both velocity and temperature at the boundaries. The analysis is carried out under the long-wavelength and small-Reynolds-number assumptions. A two-phase model for nanofluids is employed, and the nonlinear system of equations is solved for small Grashof number. Velocity and temperature are examined for different parameters via graphs, and streamlines are constructed to analyze trapping. Results show that the axial velocity and temperature of the nanofluid decrease when the nanoparticle volume fraction is enhanced. Moreover, the wall elastance parameter increases the axial velocity and temperature, whereas a decrease in both quantities is noticed for the damping coefficient. A decrease in entropy generation and Bejan number is observed for increasing values of the nanoparticle volume fraction. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
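For readers unfamiliar with the two quantities compared in the abstract, the entropy generation number and the Bejan number combine in a standard way; the sketch below only encodes those definitions, and the numbers in the example are illustrative, not taken from the paper:

```python
def entropy_generation(ns_heat, ns_friction):
    """Total entropy generation number and Bejan number:
    Ns = Ns_heat + Ns_friction,  Be = Ns_heat / Ns.
    Be close to 1 means heat-transfer irreversibility dominates;
    Be close to 0 means fluid-friction irreversibility dominates."""
    ns = ns_heat + ns_friction
    return ns, ns_heat / ns

# Illustrative contributions only (not values from the paper)
print(entropy_generation(0.8, 0.2))
```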
Show Figures

Figure 1

Flow Geometry.
Figure 2
$\varphi$ versus $u$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 3
$\beta$ versus $u$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\varphi=0.15$, $\gamma=0.01$.
Figure 4
$Gr$ versus $u$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $\varphi=0.15$, $\beta=0.01$, $\gamma=0.01$.
Figure 5
$E_1$, $E_2$, $E_3$ versus $u$ when $\varphi=0.15$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 6
$\varphi$ versus $\theta$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 7
$\gamma$ versus $\theta$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\beta=0.01$, $\varphi=0.15$.
Figure 8
$Gr$ versus $\theta$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $\varphi=0.15$, $\beta=0.01$, $\gamma=0.01$.
Figure 9
$E_1$, $E_2$, $E_3$ versus $\theta$ when $\varphi=0.15$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br=3.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 10
$\varphi$ versus $Ns$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br\Lambda^{-1}=1.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 11
$Gr$ versus $Ns$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br\Lambda^{-1}=1.0$, $\varphi=0.15$, $\beta=0.01$, $\gamma=0.01$.
Figure 12
$Br\Lambda^{-1}$ versus $Ns$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $\varphi=0.15$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 13
$E_1$, $E_2$, $E_3$ versus $Ns$ when $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $\varphi=0.15$, $Br\Lambda^{-1}=1$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 14
$\varphi$ versus $Be$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br\Lambda^{-1}=1.0$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 15
$Gr$ versus $Be$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br\Lambda^{-1}=1.0$, $\varphi=0.15$, $\beta=0.01$, $\gamma=0.01$.
Figure 16
$Br\Lambda^{-1}$ versus $Be$ when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $\varphi=0.15$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 17
$E_1$, $E_2$, $E_3$ versus $Be$ when $t=0.1$, $x=0.2$, $\varepsilon=0.2$, $Br\Lambda^{-1}=1.0$, $\varphi=0.15$, $Gr=3.0$, $\beta=0.01$, $\gamma=0.01$.
Figure 18
$\psi$ versus $\beta$ for SWCNT when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0$, $Br=3.0$, $Gr=3.0$, $\varepsilon=0.1$, $\gamma=0.01$, $\varphi=0.2$: (a) $\beta=0.01$; and (b) $\beta=0.03$.
Figure 19
$\psi$ versus $\beta$ for MWCNT when $E_1=0.02$, $E_2=0.01$, $E_3=0.01$, $t=0$, $Br=3.0$, $Gr=3.0$, $\varepsilon=0.1$, $\gamma=0.01$, $\varphi=0.2$: (a) $\beta=0.01$; and (b) $\beta=0.03$.
Figure 20
$\psi$ versus $E_1$, $E_2$, $E_3$ for SWCNT when $t=0$, $Br=3.0$, $Gr=3.0$, $\varepsilon=0.1$, $\beta=0.01$, $\gamma=0.01$, $\varphi=0.1$: (a) $E_1=0.01$, $E_2=0.03$, $E_3=0.01$; (b) $E_1=0.06$, $E_2=0.03$, $E_3=0.01$; (c) $E_1=0.01$, $E_2=0.07$, $E_3=0.01$; and (d) $E_1=0.01$, $E_2=0.03$, $E_3=0.02$.
Figure 21
$\psi$ versus $E_1$, $E_2$, $E_3$ for MWCNT when $t=0$, $Br=3.0$, $Gr=3.0$, $\varepsilon=0.1$, $\beta=0.01$, $\gamma=0.01$, $\varphi=0.1$: (a) $E_1=0.01$, $E_2=0.03$, $E_3=0.01$; (b) $E_1=0.06$, $E_2=0.03$, $E_3=0.01$; (c) $E_1=0.01$, $E_2=0.07$, $E_3=0.01$; and (d) $E_1=0.01$, $E_2=0.03$, $E_3=0.02$.
11845 KiB  
Article
Exergetic Analysis of a Novel Solar Cooling System for Combined Cycle Power Plants
by Francesco Calise, Luigi Libertini and Maria Vicidomini
Entropy 2016, 18(10), 356; https://doi.org/10.3390/e18100356 - 29 Sep 2016
Cited by 11 | Viewed by 5616
Abstract
This paper presents a detailed exergetic analysis of a novel high-temperature Solar Assisted Combined Cycle (SACC) power plant. The system includes a solar field consisting of innovative high-temperature flat-plate evacuated solar thermal collectors, a double-stage LiBr-H2O absorption chiller, pumps, heat exchangers, storage tanks, mixers, diverters, controllers and a simple single-pressure Combined Cycle (CC) power plant. Here, a high-temperature solar cooling system is coupled with a conventional combined cycle in order to pre-cool the gas turbine inlet air and thus enhance system efficiency and electrical capacity. The system is analyzed from an exergetic point of view, on the basis of an energy-economic model presented in a recent work, whose main results show that the SACC exhibits a higher electrical production and efficiency with respect to the conventional CC. The system performance is evaluated by a dynamic simulation in which detailed simulation models are implemented for all the components included in the system. In addition, for all the components and for the system as a whole, energy and exergy balances are implemented in order to calculate the magnitude of the irreversibilities within the system. Exergy analysis is used to assess exergy destructions and exergetic efficiencies; these parameters quantify the magnitude of the irreversibilities in the system and identify their sources. Exergetic efficiencies and exergy destructions are dynamically calculated for the 1-year operation of the system. Similarly, exergetic results are also integrated on weekly and yearly bases in order to evaluate the corresponding irreversibilities. The results showed that the components of the Joule cycle (combustor, turbine and compressor) are the major sources of irreversibilities. The overall exergetic efficiency of the system was around 48%. The average weekly solar collector exergetic efficiency ranged from 6.5% to 14.5%, increasing significantly during the summer season. Conversely, the absorption chiller exergy efficiency varies from 7.7% to 20.2%, being higher during the winter season. The combustor exergy efficiency is stably close to 68%, whereas the exergy efficiencies of the remaining components are higher than 80%. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
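Editor's note: the abstract reports component-level exergy destructions and exergetic efficiencies. The short Python sketch below illustrates the standard fuel-product bookkeeping those quantities rest on (E_D = E_F - E_P and efficiency = E_P / E_F); it is not the authors' dynamic simulation model, and the component names and numbers in the demo dictionary are hypothetical placeholders.

# Minimal fuel-product exergy bookkeeping sketch; demo values are illustrative only.
def exergy_balance(components):
    """components: dict name -> (exergy_fuel_kW, exergy_product_kW)."""
    report = {}
    for name, (e_fuel, e_product) in components.items():
        destruction = e_fuel - e_product      # E_D = E_F - E_P
        efficiency = e_product / e_fuel       # exergetic efficiency
        report[name] = {"E_D_kW": destruction, "efficiency": efficiency}
    return report

if __name__ == "__main__":
    demo = {                                  # hypothetical steady-state snapshot
        "combustor":   (100_000.0, 68_000.0),
        "gas_turbine":  (60_000.0, 54_000.0),
        "compressor":   (30_000.0, 27_000.0),
    }
    for name, r in exergy_balance(demo).items():
        print(f"{name:11s}  E_D = {r['E_D_kW']:9.1f} kW   eps = {r['efficiency']:.2%}")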
Show Figures
Figure 1: System layout of SACC (1).
Figure 2: System layout of SACC (2).
Figure 3: Reference CC and SACC power and efficiency (week 32).
Figure 4: Reference Rankine and Joule cycle power (week 32).
Figure 5: AC and GT outlet temperatures for SACC and reference CC (week 32).
Figure 6: ST inlet temperatures and GT flow rate for SACC and reference CC (week 32).
Figure 7: Solar cooling subsystem main temperatures (week 32).
Figure 8: Destroyed exergy and exergetic efficiency (SACC vs. reference CC, week 32).
Figure 9: Destroyed exergy for Joule and Rankine subsystems (SACC vs. reference CC, week 32).
Figure 10: Exergy efficiency for the SACC, Joule and Rankine subsystems (week 32).
Figure 11: Exergy destructions, Rankine subsystem components (week 32).
Figure 12: Exergy efficiencies, Rankine subsystem components (week 32).
Figure 13: Exergy destructions, Joule subsystem components (week 32).
Figure 14: Exergy efficiencies, Joule subsystem components (week 32).
Figure 15: Exergy flows, solar collector (week 32).
Figure 16: Exergy destructions, solar cooling subsystem components (week 32).
Figure 17: Exergy efficiencies, solar cooling subsystem components (week 32).
Figure 18: Weekly exergy destructions.
Figure 19: Weekly exergy fuels.
Figure 20: Weekly exergy products.
Figure 21: Weekly exergy destructions, Rankine subsystem.
Figure 22: Weekly exergy destructions, Joule subsystem.
Figure 23: Weekly exergy destructions, solar cooling subsystem.
Figure 24: Weekly exergy efficiency.
Figure 25: Weekly exergy efficiency (reference CC, SACC and proposed CC systems).
Figure 26: Hourly inlet air temperature before and after entering cooling coil unit.
378 KiB  
Article
A Langevin Canonical Approach to the Study of Quantum Stochastic Resonance in Chiral Molecules
by Germán Rojas-Lorenzo, Helen Clara Peñate-Rodríguez, Anais Dorta-Urra, Pedro Bargueño and Salvador Miret-Artés
Entropy 2016, 18(10), 354; https://doi.org/10.3390/e18100354 - 29 Sep 2016
Cited by 1 | Viewed by 4434
Abstract
A Langevin canonical framework for a chiral two-level system coupled to a bath of harmonic oscillators is used, within a coupling scheme different from the well-known spin-boson model, to study quantum stochastic resonance in chiral molecules. This process refers to the amplification of the response to an external periodic signal at a certain value of the noise strength, and is a cooperative effect of friction, noise and periodic driving occurring in a bistable system. Within this stochastic dynamics, in the Markovian regime and with Ohmic friction, the competition between tunneling and the parity-violating energy difference present in this type of chiral system plays a fundamental role. This mechanism is finally proposed as a way to observe the so-far elusive parity-violating energy difference in chiral molecules. Full article
(This article belongs to the Section Statistical Physics)
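Editor's note: to make the stochastic-resonance idea in the abstract concrete, the sketch below simulates a classical overdamped bistable Langevin analogue, dx = (x - x^3 + A cos(Wt)) dt + sqrt(2D) dW, and extracts the response at the driving frequency as a function of the noise strength D. This is only a classical stand-in for the paper's quantum Langevin canonical equations; all parameter values (A, W, dt, step counts) are illustrative assumptions.

# Classical-analogue sketch of a stochastic-resonance scan (assumed model, not the paper's).
import numpy as np

def spectral_amplitude(D, A=0.1, W=0.05, dt=0.01, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    x = np.empty(n_steps)
    x[0] = -1.0                                          # start in the left well
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps - 1)
    for i in range(n_steps - 1):
        drift = x[i] - x[i]**3 + A * np.cos(W * t[i])    # bistable force + periodic driving
        x[i + 1] = x[i] + drift * dt + noise[i]
    # Fourier component of the trajectory at the driving frequency W
    return 2.0 * abs(np.mean(x * np.exp(-1j * W * t)))

if __name__ == "__main__":
    for D in (0.05, 0.1, 0.2, 0.4, 0.8):                 # noise strengths to scan
        print(f"D = {D:4.2f}   response at driving frequency: {spectral_amplitude(D):.3f}")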
Show Figures
Figure 1: Power amplitude η_1 of the fundamental frequency in the power spectrum as a function of the frequency of the external driving force (a) and the temperature (b). The results are obtained for ε = 1.2, δ = 1.0, and γ = 0.1.
Figure 2: Suppression of the second term, P_1, of the power spectrum for (a) a symmetric (unbiased) and (b) an asymmetric (biased) potential. Notice that ε_0 ≡ ε.
Figure 3: Average population differences, ⟨z_0⟩, as a function of the frequency of the external driving force (a) and as a function of the temperature (b). The results are obtained for ε = 1.2, δ = 1.0, and γ = 0.1.
Figure 4: Power amplitude η_1(ω) of the fundamental frequency in the power spectrum when propagations start far from the thermodynamical equilibrium values (a) and when they start closer to the thermodynamical equilibrium values (b). The results are obtained for ε = 1.2, ε_1 = 0.5, δ = 1.0, and γ = 0.1.
Figure 5: Heat capacity as a function of the frequency of the external bias for different temperatures (a); in (b), the small-frequency region is expanded. In the calculations ε = 1.2, ε_1 = 0.5, δ = 1.0, and γ = 0.1.
13147 KiB  
Review
Generalized Thermodynamic Optimization for Iron and Steel Production Processes: Theoretical Exploration and Application Cases
by Lingen Chen, Huijun Feng and Zhihui Xie
Entropy 2016, 18(10), 353; https://doi.org/10.3390/e18100353 - 29 Sep 2016
Cited by 128 | Viewed by 16876
Abstract
Combining modern branches of thermodynamics, including finite time thermodynamics (or entropy generation minimization), constructal theory and entransy theory, with metallurgical process engineering, this paper provides a new exploration of generalized thermodynamic optimization theory for iron and steel production processes. The theoretical core is to thermodynamically optimize the performance of elemental packages, working procedure modules, functional subsystems and the whole iron and steel production process under real finite-resource and/or finite-size constraints with various irreversibilities, with the aims of saving energy, decreasing consumption, reducing emissions and increasing yield, and of achieving comprehensive coordination among the material flow, energy flow and environment of the hierarchical process systems. A series of application cases of the theory is reviewed. The work offers a new thermodynamic perspective on iron and steel production processes and can also provide guidelines for other process industries. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
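Editor's note: as a generic illustration of the finite-size constraints that entropy generation minimization works with (not a result taken from the review), the sketch below evaluates the entropy generation rate of a counterflow waste-heat-recovery exchanger as its size (NTU) grows. Stream heat-capacity rates and temperatures are hypothetical, and constant specific heats with no pressure drop are assumed.

# Illustrative entropy-generation-vs-size scan for a counterflow heat exchanger (assumed data).
import math

def counterflow_entropy_generation(ntu, c_hot, c_cold, t_hot_in, t_cold_in):
    c_min, c_max = min(c_hot, c_cold), max(c_hot, c_cold)
    cr = c_min / c_max
    if abs(1.0 - cr) < 1e-12:
        eff = ntu / (1.0 + ntu)                            # balanced-stream limit
    else:
        e = math.exp(-ntu * (1.0 - cr))
        eff = (1.0 - e) / (1.0 - cr * e)                   # effectiveness-NTU relation
    q = eff * c_min * (t_hot_in - t_cold_in)               # recovered heat, kW
    t_hot_out = t_hot_in - q / c_hot
    t_cold_out = t_cold_in + q / c_cold
    s_gen = (c_hot * math.log(t_hot_out / t_hot_in)
             + c_cold * math.log(t_cold_out / t_cold_in))  # kW/K, always >= 0
    return q, s_gen

if __name__ == "__main__":
    for ntu in (0.5, 1.0, 2.0, 4.0, 8.0):
        q, s_gen = counterflow_entropy_generation(ntu, c_hot=50.0, c_cold=60.0,
                                                  t_hot_in=650.0, t_cold_in=300.0)
        print(f"NTU = {ntu:3.1f}   Q = {q:7.1f} kW   S_gen = {s_gen:6.3f} kW/K")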
Show Figures
Figure 1: Schematic diagram of iron and steel production process.
Figure 2: Energy flow of coking procedure.
Figure 3: Effect of final moisture content on the energy consumption per ton product.
Figure 4: Energy value balance of the sintering process.
Figure 5: Effect of coke on the minimum energy value of sinter.
Figure 6: Two-dimensional unsteady model of continuous cooling process of sintered ore.
Figure 7: Effect of heat transfer ratio on temperature of waste gas.
Figure 8: Schematic diagram of sinter cooling process.
Figure 9: Effect of the porosity on the optimization result.
Figure 10: Flow and heat transfer model in vertical tank.
Figure 11: Effects of layer height on the field synergy number.
Figure 12: Heat transfer model of a blast furnace wall.
Figure 13: Physical model of blast furnace iron-making elemental package.
Figure 14: Physical model of blast furnace iron-making procedure.
Figure 15: Optimal cost distribution for a blast furnace iron-making process.
Figure 16: Physical model of blast furnace iron-making procedure.
Figure 17: The schematic diagram of the open simple Brayton power plant model.
Figure 18: Optimal cost distribution and useful energy distribution.
Figure 19: Physical model for a converter steel-making process.
Figure 20: Effect of Si content on the optimal results.
Figure 21: Physical model for a converter steel-making procedure.
Figure 22: Effect of steel slag basicity on the optimal results.
Figure 23: Schematic diagram of thin slab continuous casting and rolling procedures.
Figure 24: Effect of water flow distribution in the secondary cooling zone on the final rolling temperature and final cooling temperature.
Figure 25: Schematic diagram of slab continuous casting process.
Figure 26: Temperature distributions of the slab for initial and optimal schedules.
Figure 27: Model of a reheating furnace wall with multi-layer insulation structures.
Figure 28: Comparisons of the optimal results based on different optimization objectives.
Figure 29: Schematic diagram of a strip laminar cooling process.
Figure 30: Effect of cooling mode on the complex function.
Figure 31: Flow chart of hot blast stove flue gas sensible heat recovery and utilization.
Figure 32: Flow chart for the sintering waste heat utilization.
Figure 33: Model of tubular plug flow reactor.
Figure 34: Effect of input temperature on the reacting rate versus piling catalyst mass.
Figure 35: System layout of an air Brayton cycle driven by waste heat of blast furnace slag.
Figure 36: System layout of an open simple Brayton cycle driven by residual energy of converter gas.
Figure 37: Disc-shaped model of solid-gas reactor.
Figure 38: Schematic diagram of a one-stage air-cooling thermoelectric power generator device driven by waste water.
Figure 39: Waste heat recovery net work in iron and steel factory.
Figure 40: Input-output relationship of iron-making system.
Figure 41: Diagram of energy flow for electric arc furnace steel-making process.
Figure 42: Causal loop diagram of ISPP.
Figure 43: Response characteristic of the global iron-flow network to returned iron-flow of rolling procedure.
Figure 44: Dissection of energy consumption structure for ISPP.
Figure 45: Schematic diagram of the ISPP for constructal optimization.
346 KiB  
Article
Propositions for Confidence Interval in Systematic Sampling on Real Line
by Mehmet Niyazi Çankaya
Entropy 2016, 18(10), 352; https://doi.org/10.3390/e18100352 - 28 Sep 2016
Cited by 2 | Viewed by 4752
Abstract
Systematic sampling is used as a method to obtain quantitative results from tissues and radiological images. Systematic sampling on the real line ( R ) is a very attractive method for practitioners working with biomedical imaging. In systematic sampling on R , the measurement function ( M F ) is obtained by slicing the three-dimensional object systematically into equidistant sections. The covariogram model currently used in variance approximation is tested for different measurement functions within a class in order to assess its performance in estimating the variance of systematic sampling on R . An exact calculation method is proposed for the constant λ ( q , N ) of the confidence interval in systematic sampling, and the exact value of this constant is examined for the different measurement functions as well. The simulations show that the proposed M F should be used to check the performance of both the variance approximation and the constant λ ( q , N ) . Synthetic data can support the results obtained from real data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
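Editor's note: the sketch below illustrates what "performance of the variance approximation" means in practice: a Cavalieri-type systematic-sampling estimator on R with a uniform random start, whose variance is estimated by Monte Carlo for a chosen measurement function. The measurement function and sampling periods are illustrative placeholders, not the MF classes or covariogram formulas studied in the paper.

# Monte Carlo check of systematic sampling on R (illustrative MF and periods).
import numpy as np

def measurement_function(x):
    """Hypothetical smooth MF with bounded support [0, 1]."""
    return np.where((x >= 0.0) & (x <= 1.0), np.sin(np.pi * x) ** 2, 0.0)

def cavalieri_estimate(f, period, start):
    """Systematic sampling on R: period * sum of f(start + k*period)."""
    k = np.arange(-int(2 / period), int(2 / period) + 1)
    return period * np.sum(f(start + k * period))

def monte_carlo_variance(f, period, n_rep=20_000, seed=1):
    rng = np.random.default_rng(seed)
    starts = rng.uniform(0.0, period, n_rep)          # uniform random start in [0, T)
    estimates = np.array([cavalieri_estimate(f, period, u) for u in starts])
    return estimates.mean(), estimates.var(ddof=1)

if __name__ == "__main__":
    true_q = 0.5                                       # integral of sin(pi x)^2 over [0, 1]
    for period in (0.30, 0.20, 0.10, 0.05):
        mean, var = monte_carlo_variance(measurement_function, period)
        print(f"T = {period:4.2f}  mean = {mean:.5f}  (true {true_q})  Var = {var:.2e}")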
Show Figures
Figure 1: Measurement functions with parameter q. (a) Measurement function (MF) of Equation (8); (b) MF of Equation (9).
Figure 2: Measurement functions without parameter q. (a) MF of Equation (10); (b) MF of Equation (11).
Figure 3: Area functions for each brain.
8854 KiB  
Article
Influence of the Aqueous Environment on Protein Structure—A Plausible Hypothesis Concerning the Mechanism of Amyloidogenesis
by Irena Roterman, Mateusz Banach, Barbara Kalinowska and Leszek Konieczny
Entropy 2016, 18(10), 351; https://doi.org/10.3390/e18100351 - 28 Sep 2016
Cited by 17 | Viewed by 6384
Abstract
The aqueous environment is a pervasive factor which, in many ways, determines the protein folding process and consequently the activity of proteins. Proteins are unable to perform their function unless immersed in water (membrane proteins are excluded from this statement). Tertiary conformational stabilization is dependent on the presence of internal force fields (nonbonding interactions between atoms), as well as an external force field generated by water. The hitherto unknown structuralization of water as the aqueous environment may be elucidated by analyzing its effects on protein structure and function. Our study is based on the fuzzy oil drop model, a mechanism which describes the formation of a hydrophobic core and attempts to explain the emergence of amyloid-like fibrils. A set of proteins which vary with respect to their fuzzy oil drop status (including titin, transthyretin and a prion protein) has been selected for in-depth analysis to suggest a plausible mechanism of amyloidogenesis. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
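Editor's note: the abstract and the caption of Figure 4 below describe comparing an observed hydrophobicity density profile O against a theoretical Gaussian profile T and a uniform reference R, with a relative distance below 0.5 indicating a well-formed hydrophobic core. The sketch that follows is a minimal illustration of that comparison, assuming the divergence in question is Kullback-Leibler; the observed profile is synthetic, not taken from titin, transthyretin or the prion protein analyzed in the paper.

# Minimal O/T/R comparison sketch (assumed Kullback-Leibler divergence; synthetic profile).
import numpy as np

def kl_divergence(p, q):
    """D(p || q) for discrete distributions with matching, strictly positive support."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

def relative_distance(observed, theoretical):
    uniform = np.full_like(observed, 1.0 / len(observed))
    d_t = kl_divergence(observed, theoretical)        # distance from the Gaussian ideal T
    d_r = kl_divergence(observed, uniform)            # distance from the uniform reference R
    return d_t / (d_t + d_r)                          # RD in [0, 1]; < 0.5 means closer to T

if __name__ == "__main__":
    n = 60                                            # residues along one axis
    x = np.linspace(-3.0, 3.0, n)
    theoretical = np.exp(-0.5 * x**2)                 # 1D Gaussian ideal (T)
    rng = np.random.default_rng(7)
    observed = theoretical * rng.uniform(0.6, 1.4, n) # noisy, core-like profile (O)
    print(f"RD = {relative_distance(observed, theoretical):.3f}")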
Show Figures
Figure 1: Comparison between the classical discrete oil drop model (A,C) and the fuzzy oil drop model (B,D). Circles represent positions of hydrophobic (dark) and hydrophilic (white) residues. The charts represent the assumed distribution of hydrophobicity density in each model. The figure intentionally resembles the one presented in [44] so as to visualize the continuity and evolution of theoretical hydrophobic core models.
Figure 2: Graphical representation of the encapsulation of the protein molecule with a 3D Gaussian. (A) two-dimensional Gaussian forms plotted along the horizontal (X-axis) and vertical (Y-axis) axes. The volume of the capsule (drop) is determined by its σ coefficients. Since σ_x > σ_y, the molecule is stretched along the X axis. The boundary of the 3D capsule is given by the so-called three-sigma rule for each axis independently (x̄ ± 3σ_x, ȳ ± 3σ_y, z̄ ± 3σ_z); (B) protein molecule encapsulated in an ellipsoid. Changes in coloring (from gray to purple) represent increasing hydrophobicity density.
Figure 3: Hydrophobicity density distribution according to the discrete (A,C) and continuous models (B,D). The placement of residues may not correspond to their intrinsic hydrophobicity. Here, white circles (hydrophilic residues) are localized in the central part of the protein body (left). The right-hand diagram presents the continuous distribution, with shades of grey indicating varying hydrophobicity. Diagrams B and D present the hydrophobicity distribution in the discrete and continuous models respectively: T (blue), expected; O (brown), observed.
Figure 4: Graphical representation of fuzzy oil drop model parameters reduced to a single dimension for simplicity. The leftmost figure (A) presents the theorized Gaussian distribution (T, blue) while the chart on the right corresponds to the uniform distribution (R, green) (C). The actual (observed, red) hydrophobicity density distribution (B) in the target protein is shown in the center, while the corresponding value of RD(R) (R denotes a reference to the R distribution), which is below 0.5, is marked on the horizontal axis with a red triangle (D). According to the fuzzy oil drop model this protein contains a well-defined hydrophobic core. For the purpose of analysis of selected secondary folds, the reference (R) is replaced by a distribution matching the intrinsic hydrophobicity of each residue in a given fragment. The observed distribution (G) is then compared to the expected one (F) as well as to the "intrinsic" distribution (H). The red triangle on the axis (E) marks a point above 0.5; this means that distribution G more closely approximates the "intrinsic" distribution.
Figure 5: Hydrophobicity density distribution profile in 1TIT. (A) theoretical (T, blue) and observed (O, red) hydrophobicity density distribution; (B) correlation between T and O distributions in 1TIT (correlation coefficient = 0.661).
Figure 6: Hydrophobicity density distribution profiles for successive β-strands present in titin. Residue numbers are listed for each fragment. Theoretical distribution, blue; intrinsic hydrophobicity, green; observed distribution, red. (A) β-fragment 19–25; (B) β-fragment 46–52; (C) β-fragment 54–61; (D) β-fragment 82–87.
Figure 7: T and O distributions in transthyretin. (A,C) Hydrophobicity profile in chain A: in monomer (A) and in dimer (C), with T (blue) and O (red); (B,D) correlation between T and O values for each residue, calculated for chain A. Correlation coefficients are 0.592 for the monomer (B) and 0.353 for the dimer (D).
Figure 8: 3D presentation of transthyretin. (A) monomer unit; (B) dimer. Red, fragments recognized as irregular (versus the theoretical distribution); dark blue, highly accordant fragments; cyan, second chain in the dimer. Data has been derived from Table 3 and Table 4.
Figure 9: T, O and H hydrophobicity density distribution profiles for individual β-strands in transthyretin. Numbers indicate which residues form part of the selected fragment. Distribution: T, blue; O, red; H, green. (A) β-fragment 11–19; (B) β-fragment 28–36; (C) β-fragment 40–49; (D) β-fragment 67–73; (E) β-fragment 74–81; (F) β-fragment 88–97; (G) β-fragment 104–112; (H) β-fragment 115–123.
Figure 10: Theoretical (T, blue) and observed (O, red) hydrophobicity density distribution profiles for the A chain of the analyzed protein. The AmL fragment comprises residues 151 through 365 (inclusive).
Figure 11: Hydrophobicity density distribution profiles for the C chain: theoretical (T, blue) and observed (O, red). Green vertical lines distinguish residues engaged in P-P interaction.
Figure 12: Hydrophobicity density distribution profiles for successive layers of the β-system in AmL: T, blue; O, red; H, green. According to the theoretical distribution, a single, prominent peak should be observed in the central part of each fragment. In fact, many fragments exhibit multiple peaks as well as increased hydrophobicity in their terminal sections, resulting in poor solubility. (A) fragment 151–158; (B) fragment 166–178; (C) fragment 181–192; (D) fragment 196–208; (E) fragment 211–220; (F) fragment 225–236; (G) fragment 239–248; (H) fragment 251–262; (I) fragment 266–275; (J) fragment 280–291; (K) fragment 295–305; (L) fragment 309–320; (M) fragment 326–337; (N) fragment 342–352; (O) fragment 355–367.
Figure 13: Properties of the 225–236 fragment. (A) Distribution profile: theoretical (T, blue) and observed (O, red) hydrophobicity density distribution for the fragment at 225–236. The green line represents a distribution given by the intrinsic hydrophobicity of each residue; (B) correlation between theoretical and observed distributions (correlation coefficient = −0.657); (C) correlation between intrinsic hydrophobicity and observed hydrophobicity (correlation coefficient = 0.616).
Figure 14: 3D structure of 2ZU0. (A) gray sections distinguish chains C and D and fragments of chains A and B not belonging to AmL. The AmL fragment is colored according to the hydrophobicity of each residue; (B) the AmL fragment in a space-filling representation colored according to residual hydrophobicity. It is evident that hydrophobic residues (red) do not form any organized system (seemingly random distribution). The red-white scale is derived from PyMol, with hydrophobicity taken as the coloring criterion [56].
Figure 15: Hydrophobicity density distribution in peptides which are considered for therapeutic use as amyloid solubility promoters (only intrinsic hydrophobicity is presented). (A) sequence AEVVFT; (B) sequence TAVVTN.
Figure 16: Intrinsic (green), theoretical (blue) and observed (red) hydrophobicity density distributions for fragments at 131–140 (A) and 142–160 (B) in 1B10. These fragments are regarded as candidates for amyloidogenic conformational changes given the similarity between their hydrophobicity density distribution profiles and those observed for the corresponding fragments in AmL. Both fragments have been identified by comparing the correlation coefficients for individual fragments of 1B10 against AmL (Table 7 and Table 8). Note the presence of a left-handed helix (131–140) which occupies a highly energetic zone on the Ramachandran plot.
Figure 17: 3D structure of the prion protein with the fragment at 131–140 marked in red. This fragment was identified as susceptible to conformational changes on the basis of its similarity with AmL in the context of hydrophobicity density distribution (Table 7 and Table 8).
Figure 18: Hypothetical mechanism of structural changes in transthyretin leading to formation of an AmL-like structure. The sandwich structure in transthyretin (upper row) exhibits greater distance between sheets, along with greater differences in their mutual orientation, when compared to AmL (lower row). The two rightmost structures visualize different angles between β-fragments in β-sheets. 3D structures are compared with Gaussian plots. The arrows at the top and on the left visualize external forces (elongation and stretching respectively). Black lines on the rightmost Gauss curve and in the bottom left-hand image visualize the distribution of hydrophobicity as observed in AmL, showing how it diverges from theoretical predictions.
Figure 19: Hydrophobicity density distributions: (A) theoretical distribution (with a central peak), blue line; observed distribution (somewhat divergent from the theoretical model but retaining the overall shape), red line; (B) theoretical distribution, dotted line; hypothetical distorted distributions exhibiting several local peaks, blue and red lines.