
Article

Complexity Synchronization of Organ Networks

1 Department of Research and Innovation, North Carolina State University, Raleigh, NC 27606, USA
2 Center for Nonlinear Science, University of North Texas, Denton, TX 76203, USA
3 US Combat Capabilities Command, Army Research Laboratory, Aberdeen Proving Ground, MD 21005, USA
4 Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
* Author to whom correspondence should be addressed.
Entropy 2023, 25(10), 1393; https://doi.org/10.3390/e25101393
Submission received: 27 August 2023 / Revised: 13 September 2023 / Accepted: 19 September 2023 / Published: 28 September 2023
(This article belongs to the Special Issue Fractional Calculus and Fractional Dynamics)
Figure 1. (Left) panel: The schematic depicts the three time series from the ONs of interest here: the brain, heart and lungs. Note that the three typical time series share no obvious common features. Also be aware that information is exchanged simultaneously among all three as well as pairwise between the three ONs. Top: ten seconds of one channel of EEG time series; bottom left: ten seconds of respiration time series; bottom right: ten seconds of ECG time series; all three datasets are measured simultaneously. (Right) panel: The corresponding diffusion entropy analysis was used to process the diffusion random walks constructed from the three datasets depicted in the left panel (see Appendix A.1 for details or Mahmoodi et al. [5]). The entropy $\Delta S(w)$ is plotted versus the log of the time w as predicted in Appendix A.1 by Equation (A2) for a scaling probability density function, such that the three slopes between the dashed vertical lines yield the scaling indices for the corresponding time series. The slope is the measure of temporal complexity of the time series given by $\delta_j$; see Table 1. From Mahmoodi et al. [5] with permission.
Figure 2. (Left) panel: Light gray curves are the scaling indices $\delta_j$, j = 1, …, 64 obtained by processing the 64 time series from the EEG channels, and the black curve is the average over the 64 scaling indices at each point in time. The red and blue curves are the scaling indices obtained by processing the time series of the respiration and ECG channels, respectively. Modified diffusion entropy analysis processing was performed on each channel time series with a stripe size of 0.01 for the ECG and respiratory data and 0.1 for the EEG data, using the jumping ahead rule, on data windows of one-minute length advanced in 20 s steps. The data were simultaneously collected while the participant was conducting the Go-NoGo shooting task (for details, see [5]). (Right) panel: The corresponding pairwise cross-correlation coefficients (CCs) calculated among the depicted EEG channels, ECG and respiration scaling coefficients, with all calculated values of the three correlation coefficients falling within the interval 0.70 < CC < 0.73.
Figure A1. A schematic of the steps for the processing of time series using the technique of modified diffusion entropy analysis. Panel (a): The blue curve is the heart rate signal, which is projected onto the interval [0, 1] and then divided into stripes of size 0.1, magnified in the inset. Note that sharply peaked features in the ECG have a cluster of events in (b), whereas a sloping feature has well-spaced events; see the inset for a visual verification of this explanation. The horizontal lines define the stripes. Panel (b): The events (represented as distinct separated unit-amplitude pulses) are extracted from the passage of the continuous blue curve from one stripe to another. Panel (c): The diffusion trajectory made by the cumulative summation of the events of panel (b). The vertical lines show a selected set of windows with a size of 100 that slice the diffusion trajectory. Panel (d): The partitioned trajectories of panel (c), shifted to initiate each trajectory from a common origin and terminated after a time w, the length of the window. Panel (e): The histogram of the positions of the trajectories at the end of the windows (to create this histogram, we used 60 s of data and a stripe size of 0.01). Taken from [5] with permission.

Abstract

The transdisciplinary nature of science as a whole became evident as the necessity for the complex nature of phenomena to explain the social and life sciences, along with the physical sciences, blossomed into complexity theory and, most recently, into complexity synchronization. This science motif is based on the scaling arising from the 1/f-variability in complex dynamic networks and the need for a network of networks to exchange information internally during intra-network dynamics and externally during inter-network dynamics. The measure of complexity adopted herein is the multifractal dimension of the crucial event time series generated by an organ network, and the difference in the multifractal dimensions of two organ networks quantifies the relative complexity between interacting complex networks. Information flows from dynamic networks at a higher level of complexity to those at lower levels of complexity, as summarized in the ‘complexity matching effect’, and the flow is maximally efficient when the complexities are equal. Herein, we use the scaling of empirical datasets from the brain, cardiovascular and respiratory networks to support the hypothesis that complexity synchronization occurs between scaling indices or equivalently with the matching of the time dependencies of the networks’ multifractal dimensions.

1. Introduction

The physical concept of synchronization is nearly four centuries old, whereas the idea of complexity being sufficiently broad to constitute a science in its own right is less than four decades old [1]. In this paper, we follow in the tradition of interdisciplinary studies and propose conjoining two distinctly different empirical constructs into a single concept, that being complexity synchronization, with the intention of learning something new. It is remarkable that complexity synchronization does, in fact, define a new phenomenon, which in turn provides fresh insight into the health, disease and rehabilitation of living networks. Complexity science herein produces a simple rule that underpins the complexity of living networks, and how this underpinning is achieved constitutes the theme of this paper:
Traditional science seeks direct causal relations between elements in the universe, whereas complexity theory drops down a level to explain the rules that govern the interactions between lower-order elements that in the aggregate create emergent properties in higher-level systems.
[1]
In computer science, the concept of a distributed shared-memory network describes several computers that share a memory area; because the variability in speed among the computers leaves no global clock with which to order activities, network synchronization is introduced to maintain order [2]. The utility of this theoretical concept has been demonstrated in many natural systems as well. In the social sciences, synchronization has been identified as an empirical mechanism that coordinates activities between events within and between networks, but as networks become more complex so too does the concept of network synchronization. This is particularly true of the amazingly complex living network structure of the human body and the need to coordinate activities from the microscopic time scales of the chemical reactions within the neural networks constituting the brain, to the mesoscopic time scales of the cardiac and respiratory networks, to the macroscopic time scales of circadian rhythms. For our purposes, we capture the complexity of living networks in our discussion by using the more focused and less controversial term organ network (ON).
The data processing approach revealing this new kind of synchronization is based on time series consisting of random discrete events whose statistics are of the renewal type and which enable the detection and quantification of synchrony among ONs operating on different time scales and not necessarily in stationary regimes [3]. The discrete events in such time series have been named crucial events [4] because they determine the efficiency of the information exchange between these complex ONs, and for a large class of complex networks, not necessarily only ONs, crucial events determine network failures—from heart attacks to stock market crashes.
Crucial event time series are generated by inverse power-law waiting-time probability density functions $\psi(t) \propto t^{-\mu}$ with an inverse power-law index in the domain $1 < \mu < 3$. Asymptotically, the generated crucial event time series describes an ergodic process for the inverse power-law index $\mu$ in the range $2 < \mu < 3$, with a finite average waiting time. Ergodic is the technical name for statistical processes for which averages taken over long time series are equal to those taken over probability density functions. Most of the complexity arising in many-body physics is understood using the ‘ergodic hypothesis’ dating back to Boltzmann.
A crucial event time series having the waiting-time inverse power-law index in the range 1 < μ < 2 results in an infinite average waiting time and is therefore non-ergodic. One of the simpler ways to determine whether a time series is non-ergodic is by noting whether a measurement at two times depends not on the time difference (stationary, ergodic) but rather on the two time points separately (non-stationary, non-ergodic). Consequently, in this range of μ < 2 , most of the mathematical infrastructure developed using the traditional many-body theory of physics cannot be transferred and new methods must be sought.
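To make the ergodic/non-ergodic distinction concrete, the following minimal sketch (a Python illustration of ours, not part of the original analysis) samples waiting times from an inverse power-law density $\psi(t) \propto t^{-\mu}$ and tracks the running sample mean; the cutoff $t_0 = 1$, sample sizes and seed are illustrative assumptions.

```python
import numpy as np

def powerlaw_waiting_times(mu, size, t0=1.0, seed=None):
    """Sample waiting times from psi(t) ~ t**(-mu), t >= t0 (Pareto form).
    The mean waiting time is finite only for mu > 2."""
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    return t0 * u ** (-1.0 / (mu - 1.0))   # inverse-CDF sampling

n = 200_000
for mu in (2.5, 1.5):   # ergodic (2 < mu < 3) versus non-ergodic (1 < mu < 2) regime
    t = powerlaw_waiting_times(mu, n, seed=1)
    running_mean = np.cumsum(t) / np.arange(1, n + 1)
    print(f"mu = {mu}: running mean at 10^3, 10^4, 10^5 samples ->",
          running_mean[999], running_mean[9_999], running_mean[99_999])
# For mu = 2.5 the running mean settles near t0*(mu - 1)/(mu - 2) = 3;
# for mu = 1.5 it keeps drifting upward, reflecting the infinite mean waiting time.
```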
Herein, we show through the data processing of empirical time series that physiologic ONs generate crucial event time series, that is, the events have statistically independent time intervals and are therefore of the renewal type. In this paper, we focus on the empirical complexity of electroencephalographic (EEG) data being multifractal, as are the respiratory and cardiovascular time series, and establish that the three multifractal scalings are synchronous [3,5]. This remarkable synchrony among the three ONs’ time series is the empirical evidence for the existence of complexity synchronization, as well as its fundamental importance in coordinating the functions of various ONs for the healthy operation of the human body.
The multifractal behavior of these three time series has previously been identified using the pairwise correlation of time series to identify a synchronizing mechanism [6]. Note that the synchrony between two time series is not the same as the synchrony of the scaling parameters which occurs for criticality matching, the latter being a locking of the scaling indices in time and not necessarily a locking of the time series themselves. However, complexity synchronization does not require this lower-order synchrony in order for it to be the mechanism whereby body organs effectively communicate among themselves and thereby function as a cohesive whole.
The change in fractal dimensions, as determined by the different scaling indices of the time series, indicates the changing complexity of the ONs as various physiological functions are performed. For example, information is readily transported within overlapping memory areas of the heterogeneously complex brain, and at any point in time a given region of the brain may be able to receive information from, or transmit information to, another physiological ON, depending on their function and instantaneous relative complexities. This ever-changing hierarchy of complexities is revealed herein by the way in which the multifractal nature of each of these three interacting ONs influences the others over time, as we subsequently show.
Complexity is one of those concepts that although often used in multiple disciplinary contexts eludes unique formal definition. So, to avoid becoming embroiled in a semantic debate, we herein adopt a working definition for complexity that appears to be more than satisfactory for describing ON-generated crucial event time series.

1.1. Working Definition of Complexity

A signal $X(t)$ generated by an ON is given by a time sequence of crucial events, whose probability density function for the time interval between such events is an inverse power law [7]. The time series scales with a scaling index $\delta$ if for a given parameter $\lambda$ we can write the homogeneous scaling relation $X(\lambda t) = \lambda^{\delta} X(t)$. It is readily determined that a signal’s level of complexity, as measured by the fractal dimension D, increases for the crucial event time series as the inverse power-law index $\mu$ increases. Consequently, the fractal dimension of a crucial event time series is given by the relation $D = 2 - \delta$ and increases with increasing complexity [8]. Table 1 records the scaling in the power spectral density, the probability density function and the dynamic variable denoting the time series, as well as the relations among the various scaling indices $\mu$, $\beta$ and $\delta$ defining the scaling properties of the crucial event time series.
It was hypothesized [7] and later proven [9,10] that the information flow between interacting networks depends on their relative complexity, then called the ‘complexity matching effect’. In a fashion analogous to the flow of energy following a negative energy gradient, the flow of information follows an information (negative entropy) gradient. Consequently, the information exchanged between two such interacting fractal ONs is maximally efficient when the two complexity measures given by their respective fractal dimensions are equal [7].
We show herein that at each instant of time a local complexity of the brain (as measured by the fractal dimension of a local EEG channel time series) is either tracked or driven by the complexity of the respiratory and cardiovascular ONs, with the relative complexity of these and other physiologic ONs being task dependent. Information is readily transported within the heterogeneously complex brain, as described above. This changing hierarchy of the local complexity is revealed herein by the way in which the multifractal nature of each of these three interacting ONs influences the other two ONs over time.

1.2. Multiple Measures of Complexity

The previous paragraph might give the mistaken impression that, because the power spectrum for a crucial event time series has a unique value for its inverse power-law index $\beta$, as does the scaling index $\delta$, the inverse power-law index for the waiting-time probability density function $\mu$ shares this property of uniqueness. It does not. There are, in fact, at least two ways to measure this last index.
The first way is to use the relation given in Table 1 and to assume that at a given time $t_j$ the trajectory $X(t)$ has the value $X_j$. How much time do we wait before the trajectory takes on this value again? This recrossing probability density function is the waiting-time probability density function given in the table, which we will denote by replacing the generic inverse power-law index for the waiting-time probability density function $\mu$ by $\mu_D$. From the other parameter relations in Table 1, it is clear that
$\mu_D = 2 - \delta$, (1)
where the scaling index is that given in the table. The crossing and recrossing of the trajectory at the fixed value of the diffusion process $X_j$ has been shown to be a renewal process [11]. To establish a connection between the waiting-time index $\mu_D$ and the scaling probability density function index given in the Methods Section, we address the specific cases of super-diffusion and sub-diffusion.
It is important to stress, in the field of fractal dynamics [8], the relation between the fractal dimension D and the Hurst exponent H, $D = 2 - H$, which Mandelbrot and Van Ness [12], using the fractional calculus, interpreted as a dynamic fractal process they named fractional Brownian motion (FBM). This suggests interpreting $\mu_D$ to be a fractal dimension by interpreting H as the scaling index $\delta$, as is often done, thereby yielding the fractal dimension
$D = 2 - \delta$, (2)
thereby establishing a general connection between the fractal dimension and scaling, $D = \mu_D$. The argument given here draws our attention to the field of complexity research, through which we realize that FBM presents a singularity. Recall that Equation (2) was obtained using algebra alone from the relations recorded in Table 1, without the insight provided here.
The second way to obtain the inverse power-law index for the waiting-time probability density function is to examine the situation for the extremes of anomalous diffusion, those being super-diffusion ($\delta > 0.5$) and sub-diffusion ($\delta < 0.5$). The super-diffusion case is addressed using the crucial events described by the inverse power-law index denoted by $\mu_S$ replacing the generic index $\mu$ in Table 1. Using Equation (1), with the left side of the equation interpreted as the fractal dimension and the scaling index $\delta$ on the right side expressed in terms of $\mu_S < 2$, yields $\mu_D = 2 - 1/[\mu_S - 1]$, which after rearranging the terms yields
$\mu_S = 1 + 1/[2 - \mu_D]$. (3)
For the condition specified by this equation to be satisfied, the signal driving the diffusion process is assumed to take on either the value +1 or −1 between consecutive crucial events by means of an equal-probability coin toss.
The case of sub-diffusion with $\delta < 0.5$ is addressed, following Failla et al. [11], by assuming $1 < \mu_S < 2$. The fluctuating driver of the diffusion process is assumed to vanish between consecutive crucial events and to take on the value of either +1 or −1 with equal probability at the time a crucial event occurs. This yields the relation $\mu_D = 2.5 - \mu_S/2$ so that, again, rearranging the terms gives us
$\mu_S = 5 - 2\mu_D$. (4)
It is important to note the increasing interest in the emergence of $\mu_S$, heralding non-ergodic behavioral dynamics along with criticality in the discussion of scale-free cortical dynamics [13].
The Self-Organized Temporal Criticality model spontaneously generates temporal complexity by means of the criticality of a network’s dynamics. The global fluctuations around the mean are characterized by an inverse power-law probability density function. Of particular interest to us is the monitoring of the times at which the fluctuations cross the origin, giving rise to the first passage time power-law index [14] $\mu_D = 1.3$. Inserting this value into Equation (4) yields $\mu_S = 2.4$, which explains why the inverse power-law index lies in the interval $2 < \mu_S < 3$ and the crucial event time series is therefore ergodic as well as being of the renewal type. It is important to stress that the well-known model of criticality and complexity given by Vicsek et al. [15] and characterized by Vanni et al. [16] yields essentially the same results.
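The algebra relating the two measurements of the waiting-time index can be collected into a few lines of code. The sketch below simply encodes Equations (1), (3) and (4) as stated above and reproduces the numeric example $\mu_D = 1.3 \to \mu_S = 2.4$; the function names are ours, introduced only for illustration.

```python
def mu_D_from_delta(delta):
    """Equation (1): recrossing (first-passage) index from the scaling index delta."""
    return 2.0 - delta

def mu_S_superdiffusion(mu_D):
    """Equation (3): crucial-event index for the super-diffusive construction."""
    return 1.0 + 1.0 / (2.0 - mu_D)

def mu_S_subdiffusion(mu_D):
    """Equation (4): crucial-event index for the sub-diffusive construction."""
    return 5.0 - 2.0 * mu_D

# Numeric check of the Self-Organized Temporal Criticality example quoted above:
print(mu_S_subdiffusion(1.3))   # 2.4, inside the ergodic interval 2 < mu_S < 3
```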
In summary, the interpretation of $\mu_D$ as a fractal dimension is always correct. The interpretation of the fractal dimension of the time series, D, as the first-passage-time index $\mu_D$ of a renewal process does not apply to fractional Brownian motion. We stress that it is not easy to assess whether a fractal process also satisfies the renewal condition, thereby emphasizing the significance of the results found for the triad of empirical time series studied herein.

1.3. Brief History of Synchronization

The first recorded articulation of the concept of synchronization, separate and distinct from that of complexity, was provided in a letter by the Dutch physicist Huygens in 1665. He observed what he called ‘the sympathy of two clocks’, wherein, despite independent initial conditions, the pendulums of two clocks hanging from the same beam synchronized in anti-phase within thirty minutes. At this time, 22 years before Newton’s Principia, a consistent vocabulary of mechanical forces with which to understand the synchronization phenomenon did not yet exist. The concept of synchronization has since evolved from the physical similarity of two oscillatory time series to more abstract measures of similarity based on recent advances in the nonlinear dynamics of many-body networks. Thus, today’s notion of synchronization differs from Huygens’ original concept introduced nearly four centuries ago.
At the turn of this century, Strogatz chronicled in his excellent book SYNC [17] the evolution of the synchronization concept, from which we freely draw with attribution. In the 1950s, the mathematician Norbert Wiener [18] identified the interaction of a spectrum of frequencies measured from the human brain using EEG time series as being the basis of human consciousness. However, although largely correct, his intuition did not anticipate the way in which the application of mathematics would be used. That distinction is credited to Art Winfree [19], a mathematical biologist who in the 1960s identified the fundamental nature of the nonlinear interactions of oscillators. Importantly, he showed that critical dynamics produced transitions from the disordered random behavior of microscopic degrees of freedom to highly ordered macroscopic degrees of freedom undergoing synchronous motion. He was thus able to identify dynamic self-organization as the mechanism underlying biomedical synchronization as in circadian rhythm, the entrainment of the pacemaker cells in the sinoatrial node of the beating heart, and elsewhere in the body’s physiological ONs. Strogatz credits Winfree with explaining that the resulting synchronization produced an alignment in time as distinct from the spatial alignment previously observed in physical phase transitions, e.g., the transition of a material from its fluid to its gas or solid phase.
A simplified oscillator model of self-organization in time was devised by Kuramoto [20] in the 1970s, which included the insights of both Wiener and Winfree, but with a symmetric interaction among the oscillator modes. The symmetry assumption enabled Kuramoto to obtain analytic solutions and thereby be the first to determine that a population of entities, from fireflies to brain cells, must have ‘sufficiently similar properties’ to synchronize their complex dynamics. While the individual oscillators in the Kuramoto model are regular, the emergence of global synchronization is independent of whether the individual oscillators are regular or stochastic.
The term ‘normal synchronization’ refers to the entrainment of the emergent dynamic global variables of two or more interacting networks. Consequently, this would include the critical dynamics of many-body phase transitions in the taxonomy of the expanding definition of synchronization. The individual dynamic networks in the process of chaos synchronization are chaotic and surprisingly do synchronize with other such networks while simultaneously maintaining the chaotic dynamics they had in isolation.
It was thought for a long time that chaos was incompatible with synchronization, that is until Pecora and Carroll [21] decided to apply chaos theory to encrypting messages in a chaotic signal for the purpose of communications. The ‘sufficiently similar properties’ are the fractal manifolds (attractors) of the sender and receiver. The chaotic fluctuations mask the message of the sender, which is retrievable using the deterministic dynamics of the second chaos generator identical to the first in the receiver. This strategy of driving a computer simulation of a receiver (a system with a strange attractor solution) with a chaotic signal transmitted from a duplicate of itself was indeed sufficient to coax the two into synchrony. Note that this kind of synchrony is quite different from the instantaneous time tracking of two deterministic trajectories, and both these forms of synchrony are separate and distinct from complexity synchronization. It also provides a rationale for complexity synchronization for two or more interacting ONs, that being the exchange of fractal information with the fractal dimension providing the coding.
Examples of this form of chaotic synchronization appear in chemical oscillations by means of a Belousov–Zhabotinsky reaction [22]; heat relaxation oscillator [23]; chaotic systems [24]; and control for chaotic systems [25]. The forerunner of these applications is discussed in the excellent text on the universality of synchronization in nonlinear science [26].

1.4. Rehabilitation and Complexity

We have defined complexity in terms of the fractal dimension of crucial event time series having a form of temporal complexity, and we have elsewhere proven that perfect synchronization results from interactions between two complex networks, with the more complex network restoring the temporal complexity of the less complex network [27]. Quite generally, this restoration of the fractal dimension can be interpreted as a form of rehabilitation [4], an example of which is given by the therapeutic effect of arm-in-arm walking. Almurad et al. [28] demonstrated that if an aged patient walks in close harmony with a young companion, the ‘complexity matching effect’ results in the restoration of complexity in the gait of the elderly. Mahmoodi et al. [29] proved that scaling synchronization is a consequence of the fact that a crucial event time series has a $\mu$ index in the interval $2 < \mu < 3$, using the modified diffusion entropy analysis data processing technique (see Section 2 Methods). However, in the ‘complexity matching effect’, the levels of complexity of the interacting ONs are often out of balance in a healthy individual, and the ON with the greater complexity drives the ON with the complexity deficit. The driver perturbs the index of the driven to a higher level, and when the complexity of the driven becomes equal to that of the driver, the maximal transfer of information occurs, and the two are in synchrony, with their fractal dimensions becoming stable and equal [7,28].
This notion of matching in the ‘complexity matching effect’ has developed into the idea of management, resulting in the ‘Principle of Complexity Management’ [29], to include the influence of one ON on the other in the sense of an ensemble average for non-ergodic time series. On the other hand, complexity synchronization is realized through the scaling of single time series and occurs when the interaction between the two ONs is strong enough that transfers of information between the two change the driven ON’s statistics induced by the driver. But, surprisingly, the inverse power-law index of the driver is changed as well as that of the driven. Consequently, the scaling indices of the two ONs dance around a value which is between that of the driver and that of the driven [27], indicating that the two ONs have equal strength in transferring information among ONs in the more general case of healthy networks of organ networks (NoONs). This coordination of time series is observed experimentally among the triad of ONs [5], as subsequently discussed (in Section 3 on Results).

2. Methods

Let us consider a useful way to characterize how the brain exchanges information with the two other major physiological ONs depicted in Figure 1. The three ONs whose time series are explicitly considered are the brain, heart and lungs ONs. Even a casual view of the typical ten-second time series shown along with a cartoon of the ON generating it reveals that they share no apparent features in common, much less the existence of any synchrony with one another. The brain’s EEG looks like a random signal; the heart’s ECG gives the impression of a two-state periodic oscillator, a normal sinus rhythm; and the normal breathing time-series pattern resembles the kind of nonlinear water wave observed heading in toward the shore and eventually crashing on the beach. Yet Mahmoodi et al. [5] showed that when these three time series are simultaneously measured, they in fact possess a remarkably new kind of synchrony in their normal healthy interactive state.
To demonstrate this new kind of synchronization, let us consider the ten-second time series for each of the three simultaneous measurements of the brain, heart and lungs depicted in Figure 1. The three time series depicted therein are denoted by $X_j(t)$, as are the 63 other channels in the EEG measurement that are not shown in the figure. The subscript on the variable in this exemplar therefore denotes the output from channel j (=1, 2, 3), and the variable scales when the time t is multiplied by a constant $\lambda$, resulting in $X_j(\lambda t) = \lambda^{\delta_j} X_j(t)$. Here, $\delta_j$ is the scaling index for channel j. Note that the scaling index is independent of time when the time series is a monofractal and is related to the fractal dimension of the neural network in the brain generating the time series in the vicinity of channel j. It is the scaling index $\delta_j$ of the empirical channel j crucial event time series, or equivalently the fractal dimension of the channel j time series $D_j = 2 - \delta_j$, that the modified diffusion entropy analysis technique enables us to find; see Table 1 for the many relations among the four parameters characterizing the time series.
The left panel of Figure 1 depicts the three time series to be processed using the modified diffusion entropy analysis technique. The results of this analysis of the time series on the left are depicted in the panel on the right, which anticipates the theoretical findings detailed in Appendix A.1. The empirical probability density function is obtained using the histograms from the diffusion argument and from which the Shannon/Wiener (SW) entropy for the three time series is constructed. Graphing the diffusion entropy versus the logarithm of the time yields a straight line with a positive slope. The results of the data processing indicate the existence of a deep structural relation common to these three very different looking time series.
The structural relation revealed by the modified diffusion entropy analysis data processing strongly suggests that the above homogeneous scaling be replaced with its average-value form, $\langle X_j(\lambda t)\rangle = \lambda^{\delta_j} \langle X_j(t)\rangle$, which means that the scaling is a property of the probability density function and not of the individual trajectories. In the next section, we examine the implications of the scaling probability density, one of which is that the empirical slopes in Figure 1 are, in fact, given by the three scaling indices $\delta_j$, j = 1, 2, 3. So, let us now examine the source of this remarkable result.

Modified Diffusion Entropy Analysis (MDEA)

MDEA measures the complexity of the diffusion trajectory made by turning the empirical crucial event time series into a diffusion process. To avoid an unnecessary duplication of effort, we define the steps in the MDEA operating on a single heartbeat dataset but one significantly longer than the example just given. MDEA was applied to the post-processed continuous data from all 64 EEG channels, the electrocardiogram channel and the respiration channel of one participant in one session of neurofeedback training; for a more detailed description of the experimental protocol underlying the empirical dataset, see [5] as well as Appendix A.1.
For a stochastic process, the scaling equality holds for average values and is interpreted in terms of a scaling probability density function, which can be written as [30]
$P(x,\tau) = \frac{1}{\tau^{\delta}}\, F\!\left(\frac{x}{\tau^{\delta}}\right)$, (5)
where P ( x , τ ) d x is the probability that the random diffusion variate X ( τ ) is in the interval ( x , x + d x ) at time τ . In Appendix A.2, we show that the scaling probability density function is the general solution to a simple fractional kinetic equation [31,32], that is, a fractional equation in both time (having intrinsic memory) with an index α [4,30]
$D_{\tau}^{\alpha}[P(\tau)] = -\lambda^{\alpha} P(\tau)$, (6)
and space (long-range inhomogeneity) with an index β . Equation (6) is a typical linear fractional rate equation with a solution given by the Mittag-Leffler function. The mathematical details from the fractional calculus are not of concern to us here; we note, however, that at early times the Mittag-Leffler function has the form of a stretched exponential, and asymptotically it becomes an inverse power law in time with an index α . We also note that the series expression for the Mittag-Leffler function has the analytic form of an exponential for α = 1 . Thus, the further α is from 1 ( α < 1 ), the slower the decay of the memory, which is to say the longer the intrinsic memory of the dynamic process reaches back in time.
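For readers who want to see the interpolation between the stretched-exponential and inverse power-law regimes numerically, the following sketch evaluates the one-parameter Mittag-Leffler series $E_\alpha(z) = \sum_k z^k/\Gamma(\alpha k + 1)$ for the solution of Equation (6); the values $\alpha = 0.8$ and $\lambda = 1$ are illustrative assumptions, and the naive truncated series is adequate only for the modest arguments used here.

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, n_terms=120):
    """Naive truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1);
    adequate for modest |z|, not for large arguments."""
    k = np.arange(n_terms)
    return float(np.sum(np.power(z, k) / gamma(alpha * k + 1.0)))

alpha, lam = 0.8, 1.0   # illustrative values (alpha < 1 implies slowly decaying memory)
for tau in (0.1, 0.5, 1.0, 2.0, 4.0):
    ml = mittag_leffler(-(lam * tau) ** alpha, alpha)               # solution of Eq. (6)
    stretched = np.exp(-(lam * tau) ** alpha / gamma(1.0 + alpha))  # early-time form
    tail = (lam * tau) ** (-alpha) / gamma(1.0 - alpha)             # asymptotic power law
    print(f"tau = {tau:3.1f}: ML = {ml:.4f}, stretched exp = {stretched:.4f}, power law = {tail:.4f}")
```

At small times the Mittag-Leffler values track the stretched exponential, while at longer times they approach the inverse power-law form, consistent with the behavior described above.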
The fractional equation which has the scaling probability density function as its renormalization group solution is a fractional kinetic equation, as we discuss in Appendix A.2 [30]. The scaling index $\delta$ is shown in Equation (A9) to be the ratio of the fractional derivative index in time, $\alpha$, to the fractional derivative index in space, $\beta$, which is to say $\delta = \alpha/\beta$. The case $(\alpha, \beta) = (1, 2)$ corresponds to simple diffusion, with $\delta = 1/2$ and the fractal dimension D = 1.5.
The scaling probability density function F ( . ) is unknown in general; however, for δ = 0.5 , it is Gaussian in the scaled variable x / t 0.5 and the process is diffusive. If the probability density function is Gaussian but δ 0.5 , the process is said to describe a form of anomalous diffusion called fractional Brownian motion by Mandelbrot and Van Ness [12], who first described it using fractional calculus. We note that fractional Brownian motion events are not of the renewal type because there is a long-term memory in the generating process, and therefore such a process cannot contain crucial events. Consequently, the more interesting case is when the unknown function is not Gaussian, but the statistics are of the renewal type and therefore cannot be fractional Brownian motion but can be either crucial events or non-crucial events, both of which can be of the renewal type, e.g., a Poisson process consists of non-crucial events that are of the renewal type.
You may have noticed that because crucial event time series are renewal they cannot have memory in the sense that fractional Brownian motion has memory, but these two kinds of memory can be distinguished by separately shuffling the two time series. A time series with a scaling index $\delta > 0.5$ and normal memory, such as fractional Brownian motion, when shuffled, will yield a scaling index of $\delta_{shuffled} = 1/2$, whereas a time series consisting solely of crucial events with a scaling index $\delta > 0.5$ will, when shuffled, not change its scaling index: $\delta_{shuffled} = \delta$. This counter-intuitive result was named ‘memory-beyond-memory’ by its discoverers Allegrini et al. [33] and is explained in varying levels of detail by West and Grigolini [4] and Bohara et al. [34], among others, but its existence has not been universally embraced by the theoretical community who study such things.
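The shuffling argument can be illustrated with synthetic surrogates. In the sketch below (our construction, not the pipeline of [5]), a renewal ±1 driver with power-law waiting times stands in for a crucial-event series, and 1/f-filtered Gaussian noise stands in for FBM-like increments; the scaling is estimated crudely from the growth of the variance of the integrated signal rather than from the full (M)DEA, so the numbers are only indicative.

```python
import numpy as np

def variance_scaling_H(increments, windows=(10, 30, 100, 300, 1000)):
    """Crude scaling estimate: Var[X(t+w) - X(t)] ~ w**(2H) for X = cumsum(increments)."""
    x = np.cumsum(increments)
    w = np.asarray(windows)
    v = np.array([np.var(x[wi:] - x[:-wi]) for wi in w])
    return 0.5 * np.polyfit(np.log(w), np.log(v), 1)[0]

rng = np.random.default_rng(7)

# (a) Renewal (crucial-event-like) driver: blocks of +/-1 whose lengths are power-law distributed.
mu, n_events = 2.3, 20_000
waits = np.ceil(rng.random(n_events) ** (-1.0 / (mu - 1.0))).astype(int)
signs = rng.choice([-1.0, 1.0], n_events)
renewal = np.repeat(signs, waits)
renewal_shuffled = np.repeat(signs, rng.permutation(waits))     # shuffle the waiting times

# (b) Long-range correlated (FBM-like) increments obtained by 1/f spectral filtering.
n = renewal.size
f = np.fft.rfftfreq(n)
f[0] = f[1]                                                     # avoid division by zero at f = 0
correlated = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * f ** -0.4, n)
correlated_shuffled = rng.permutation(correlated)               # shuffle the increments

for name, series in [("renewal", renewal), ("renewal, shuffled", renewal_shuffled),
                     ("correlated", correlated), ("correlated, shuffled", correlated_shuffled)]:
    print(f"{name:22s} H ~ {variance_scaling_H(series):.2f}")
# The renewal series keeps its anomalous exponent (H > 0.5) after its waiting times are
# shuffled, whereas the correlated series falls back to ordinary diffusion, H ~ 0.5.
```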
Thus, if the empirical probability density has the scaling form given by Equation (5), a graph of the diffusion entropy $\Delta S(\tau)$ versus the log of time ($\ln \tau$) makes it reasonable to interpret the slope of the empirical curve in Figure 1 as the scaling index in Equation (A2) for the SW entropy. The three scaling indices obtained for the EEG, respiration and ECG from these simultaneously measured ten-second datasets also make it reasonable to interpret the brain as the most complex, the heart as the least complex and the lungs as having a complexity between those of the other two members of the interacting triad. We emphasize that this ordering is not universal but does suggest that during this short time interval the part of the brain generating this signal was driving the other two ONs.
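A bare-bones rendering of the stripe-based diffusion entropy calculation may help fix ideas. The sketch below follows the steps summarized in Figure A1 (projection onto [0, 1], stripes, events at stripe crossings, cumulative-sum diffusion trajectory, Shannon entropy of the displacement histogram versus ln w); it omits refinements used in [5], such as the jumping-ahead rule, and the synthetic input signal, stripe size and window choices are placeholders rather than the authors’ settings.

```python
import numpy as np

def mdea_delta(signal, stripe_size=0.01, windows=None):
    """Minimal sketch of stripe-based diffusion entropy analysis.

    1. Project the signal onto [0, 1] and partition that interval into stripes.
    2. Mark an event whenever the signal moves into a different stripe.
    3. Build the diffusion trajectory as the cumulative sum of the event train.
    4. For each window length w, histogram the displacements over w, compute the
       Shannon entropy S(w), and fit S(w) against ln w; the slope estimates delta.
    """
    s = np.asarray(signal, dtype=float)
    s = (s - s.min()) / (s.max() - s.min())                 # step 1: project onto [0, 1]
    stripes = np.floor(s / stripe_size).astype(int)
    events = np.zeros(s.size)
    events[1:] = (np.diff(stripes) != 0).astype(float)      # step 2: stripe crossings
    x = np.cumsum(events)                                   # step 3: diffusion trajectory

    if windows is None:
        windows = np.unique(np.logspace(1, np.log10(s.size // 10), 20).astype(int))
    entropies = []
    for w in windows:
        disp = x[w:] - x[:-w]                               # displacements over window w
        counts = np.bincount((disp - disp.min()).astype(int))   # unit bins (integer event counts)
        p = counts[counts > 0] / disp.size
        entropies.append(-np.sum(p * np.log(p)))            # step 4: Shannon entropy S(w)
    delta = np.polyfit(np.log(windows), entropies, 1)[0]
    return delta, np.asarray(windows), np.asarray(entropies)

# Toy usage: a synthetic random-walk signal stands in for an empirical ECG/EEG/respiration record.
rng = np.random.default_rng(11)
delta, w, S = mdea_delta(np.cumsum(rng.standard_normal(50_000)), stripe_size=0.01)
print(f"estimated scaling index delta ~ {delta:.2f}")
```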
The results obtained for the three short datasets, indicating that each of the time series has a constant fractal dimension, entail that these time series are generated by fractal scaling processes. If that were the end of the story it would still be a valuable result, but it would be a very restricted one because it would not allow for the ONs to adapt to new situations, which is an obvious capability of healthy ONs. The monofractal behavior of the three ONs observed for the ten-second time interval is due to intra-ON interactions that do not change their fractal behavior in this time interval, because the ONs have not been alerted to do so by any inter-ON information exchange and therefore do not change their fractal dimensions. In the next section, we apply the modified diffusion entropy analysis to a significantly longer total time dataset for this triad of simultaneously measured time series and find that they are substantially richer in information; this longer total time reveals the ONs’ true nature, which is multifractal. In this way, each of the 66 ONs generates a separate multifractal but does so in a coordinated way under the influences of the other 65 ONs.
So, let us now examine how the monofractal behaviors of these 66 ONs are modified over longer times by the information exchange during their mutual interactions. In Appendix A.1, the modified diffusion entropy analysis data processing technique for multifractal crucial event time series is briefly reviewed using the electrocardiogram time series from the triad of measurement types as an exemplar. The results of this analysis applied to the 66 simultaneously measured time series are depicted in the figure in Section 3.

3. Results

In this section, we present a new way to characterize how the brain exchanges information with two other major physiological ONs, those being the respiratory and cardiovascular ONs, with the results of their interactions portrayed in Figure 2. This figure depicts a quasi-periodic time dependence of the scaling indices $\delta_j$, j = 1, 2, …, 66 for the processed datasets, as discussed in Section 2 on Methods, from each of the 64 channels of a standard EEG, along with those from the cardiovascular and respiratory ONs that were simultaneously measured. Note that the time scale is such that if we randomly selected a point along the time axis and magnified the time series in the vicinity of that point, we would obtain something similar to the three constant fractal dimensions in Figure 1, but not necessarily as closely aligned as the results depicted therein. It is clear from the left panel of Figure 2 that the quasi-periodic behaviors of the scaling indices from the EEG channels are in synchrony with those from the cardiovascular and respiratory ONs.
Figure 2 affords a clear answer to the following question: How does complexity synchronization occur in scaled metrics from empirical datasets of brain, cardiovascular and respiratory networks? There are 64 different EEG channels, corresponding to the gray curves. For each of them, using stripes of a proper size, it was possible to find the scaling $\delta_j$, j = 1, 2, …, 64 in a sufficiently small bin of time $\Delta t$ to define an ‘instantaneous’ value of $\delta_j(t)$, j = 1, 2, …, 64. This by itself is a significant benefit of using modified diffusion entropy analysis. The same method of analysis was applied to the respiration (red curve) and ECG (blue curve) time series. While the interaction between the brain and the physiological ONs of the body has a number of conjectured forms in the physiology literature, Figure 2 firmly establishes that the complexities of these different physiological processes, as measured by their respective multifractal dimensions, are synchronized.
The visual impression of the synchrony of the processed datasets in this last figure is borne out by the cross-correlation coefficients of the three scaling index types recorded in the right-side panel falling in the narrow interval [0.70, 0.73]. This synchrony of the multifractal behavior is a clear manifestation of the complexity synchronization phenomenon, which is not a strict deterministic phenomenon but a statistical regularity.
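To connect the figure to the processing pipeline, the sketch below (reusing the mdea_delta helper from the sketch in Section 2) computes an ‘instantaneous’ scaling index on one-minute windows advanced in 20 s steps, as described in the Figure 2 caption, and then the pairwise Pearson cross-correlation coefficients of the resulting $\delta(t)$ curves. The sampling rate, the synthetic channel arrays and the recording length are placeholders standing in for the empirical EEG/ECG/respiration recordings and their preprocessing.

```python
import numpy as np
# Assumes mdea_delta() from the earlier sketch is in scope.

def delta_timeseries(signal, fs, stripe_size, win_s=60.0, step_s=20.0):
    """Instantaneous scaling index delta(t): MDEA on one-minute windows in 20 s steps."""
    win, step = int(win_s * fs), int(step_s * fs)
    deltas = []
    for start in range(0, signal.size - win + 1, step):
        d, _, _ = mdea_delta(signal[start:start + win], stripe_size=stripe_size)
        deltas.append(d)
    return np.array(deltas)

fs = 250.0                                             # placeholder sampling rate (Hz)
rng = np.random.default_rng(5)
eeg = np.cumsum(rng.standard_normal(int(600 * fs)))    # synthetic stand-ins for the recordings
ecg = np.cumsum(rng.standard_normal(int(600 * fs)))
resp = np.cumsum(rng.standard_normal(int(600 * fs)))

d_eeg = delta_timeseries(eeg, fs, stripe_size=0.1)     # stripe sizes quoted in the Figure 2 caption
d_ecg = delta_timeseries(ecg, fs, stripe_size=0.01)
d_resp = delta_timeseries(resp, fs, stripe_size=0.01)

# Pairwise cross-correlation coefficients of the scaling-index curves (cf. the right panel of Figure 2).
print(np.corrcoef(np.vstack([d_eeg, d_ecg, d_resp])).round(2))
```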
Note that it is the scaling indices that are changing over time in Figure 2, thereby indicating the coordinated multifractal behavior of the time series from the brain, ECG and respiratory ONs. The neuroscientist Buzsaki [35] commented that transient coupling between various parts of the brain supports an information transfer to, and from, other ONs. We draw this to the readers’ attention because the scaling results shown in the figure support this conjecture. But a word of caution is appropriate here in that the synchrony observed in Figure 2 for the scaling indices does not necessarily have anything to do with the synchronous behavior observed from the central moment correlation properties of the time series in the insightful paper on the ‘complexity matching effect’ by Delignières et al. [36].
The quasi-periodic nature of the scaling parameter depicted in Figure 2 provides insight into the way the dynamic information from the brain, heart and lungs is exchanged during their mutual interactions. The gray curves in this panel depict the instantaneous scaling index for each of the 64 EEG channels, which is compared with the scaling index for the cardiovascular network (blue curve), the scaling index of the respiratory network (red curve) and the ‘scaling index for the brain’ obtained by averaging over the 64 channels of the EEG (black curve). This figure indicates that all the ONs (or 66 network channels) have dramatic changes in complexity over time, a direct consequence of their inter-ON and intra-ON interactions. This time dependence of the scaling indices means that the fractal dimensions become multifractal dimensions with quasi-periodic time dependencies. To properly interpret the behavior depicted in Figure 2 requires that we answer the following question: What is a scaling parameter and what does it entail regarding the underlying dynamic network? This is a question we partially answer in the Discussion Section.

4. Discussion

Healthy ONs hosted by a healthy living network have equal strength in transferring information among the ONs in a NoON, but give rise to time series whose fractal fluctuations contain control information that guides both the internal behavior and the external information exchange of these complex ONs within the NoON. In fact, the health of the human body is determined by the multifractal dimensions or, alternatively, by the scaling indices $\delta_j(t)$. The scaling index value of 1 is the ideal condition for the health of the human body. It is a singular condition corresponding to the largest possible scaling and has been used in the analysis of heart rate variability (HRV) time series as a diagnostic indicator separating patients who have congestive heart failure from those who are healthy [34,37]. More generally, the brain may also explore the condition $\mu < 2$ and the condition $\mu > 2$, with a decrease in the scaling value.
It bears repeating that Figure 2 provides a clear description of how complexity synchronization occurs in the empirical datasets from the brain, cardiovascular and respiratory networks using modified diffusion entropy analysis. One of the significant benefits of using this data processing technique is that it reveals the crucial event character of these 66 channels or ONs. As mentioned, how information is exchanged among the channels of the brain and the ONs of the body is presently a matter of scientific conjecture; however, Figure 2 proves that the complexities of these 66 different channels are synchronized. It is this synchrony of the multifractality of the interacting ONs that enables the efficient exchange of coded information among the ONs within the human body.
It is the fractal statistics of physiological fluctuations that determine the spatial properties of the tree-like branching structures of the human lung, arterial and venous systems and other ramified structures [38]. Statistical fractals also determine the waiting-time distribution of the time intervals between successive beats of the human heart [4,33,39,40], in respiration [37], in dyadic conversation [41], in the human nervous system [42,43], in the dynamics of the brain [44,45], in the walking rehabilitation of the elderly [28,46], in motor control [36,47] and in interpersonal coordination [48,49], to name just a few. But it is worth quoting Buzsaki on the fractal nature of the brain [50]:
No matter what fraction of the brain web we are investigating, neuronal loops are the principle organization at nearly all levels. A physicist would call this multilevel, self-similar organization a fractal of loops.
The fractal paradigm is captured by the statistics of the scaling probability density function and is a consequence of the fact that the scaling probability density function is the solution to a fractional kinetic equation, as sketched out in Appendix A.2. The dominant characteristics of fractal statistics are spatial (x) inhomogeneity, temporal (t) intermittency and the replacement of the dynamic variable $X(t)$ by the phase-space description $(x; t)$. In the phase space, the scaling of the dynamic variable is replaced by a scaling probability density function of the form given by Equation (5), which is true quite generally for ON statistical phenomena [51]. The first moment of $X(t)$, using the scaling probability density function, recaptures the homogeneous scaling form of the dynamic variable, $\langle X(\lambda t)\rangle = \lambda^{\delta} \langle X(t)\rangle$, whose solution has the same power-law time dependence discussed in connection with the scaling relations recorded in Table 1. Such processes have monofractal statistical behavior.
So, what does it mean when we obtain a multifractal statistical process, which is to say a time-dependent scaling parameter $\delta(t)$? The short answer is that the statistics are given by the scaling probability density function but with the constant scaling index replaced by the time-dependent scaling index. The longer answer is, well, longer, because it must provide an understanding of the time dependence.
Lloyd et al. [52] argue in their review that biological systems are homeodynamic (or homeorhetic) as a manifestation of an ON’s ability to self-organize at behavioral bifurcation points where an ON loses stability and restabilizes in a new state. As a result of this self-organization, ONs display complex behaviors with a spectrum of emergent characteristics, including bistable switches, thresholds, mutual entrainment, as well as periodic behavior. These processes may proceed on different time scales, from very rapid processes at the molecular level to the enormously long time scales of evolutionary change; see, for example, Stephen Jay Gould’s long discussion of punctuated equilibrium theory in his 1400-page book The Structure of Evolutionary Theory [53]. It is apparently the dynamic self-organization under homeorhetic conditions that makes possible the organized complexity of life. Given the changeable behavior of the underlying complexity of NoONs, it is not surprising, and is indeed to be expected, that the statistics are multifractal rather than monofractal in living networks.
The identification of fractal statistics was a major step away from the signal-plus-noise model that had dominated the engineer’s view of the disruptive role of fluctuations in complex phenomena. The scaling behavior of biomedical time series entails that the fractal fluctuations are not normally disruptive but are rich in information that is exchanged in the interactions among ONs. The strength of the fractal paradigm lies in the fact that no single scale or frequency carries the signal, but rather pieces of the signal are encoded across a spectrum of scales. In this way, when noise does disrupt the signal, the repetitive nature of the fractal scaling ensures that, although the signal may be weakened, the information will not be totally lost. One way the resilience of a fractal structure to both internal and external normally disruptive fluctuations was understood involved using a fractal scaling model of the airways within mammalian lungs [54,55]. These mathematical results prompted the adoption of the interpretation that fractal structures are preadapted to such disruption, meaning that a fractal structure already possessed the scale being presented by the disrupter, or a scale reasonably close to it. However, the modern term ‘resilience’ is more neutral with respect to a causal mechanism than is the more descriptive term ‘preadapted’.
However, even this generalization of the engineering paradigm was shown to be too restrictive to properly describe the richness in the statistics of physiologic time series. Most if not all physiologic time series are found to be characterized by time-dependent scaling parameters and therefore belong to the broader class of complex multifractal processes. The time series from the heart, lungs and brain give some indication of the reasonableness of this interpretation. The scaling indices for the brain, heart and lungs have ranges of variation (min:max) given by $\Delta\delta$ = 0.267 (brain), 0.122 (heart) and 0.121 (lungs). This increased flexibility in the range of variation of the brain’s multifractal scaling index may well be a reflection of the brain’s multi-task functionality, given the fact that the brain is itself a living NoON.
The health of the living NoONs that comprise the human body is determined by the scaling properties of the various ONs. It is the fractal scaling that determines how well the overall harmony is maintained, because the ideal health condition of the body is represented by δ = 1 . This singular condition corresponds to the largest value of the scaling index. The brain may also explore the conditions μ < 2 and μ > 2 with a decrease in the value of the scaling index δ . Consequently, it was recognized that disease and injury are described by the loss of variability (complexity) [54,56], and for that reason, the strategies used for combating disease/injury are being critically re-examined. For example, experiments show a preference in the response of physiologic ONs to 1/f-signals over that of white noise, indicating a sensitivity of these ONs to fractal scaling control [42,44,57]. In the more general rehabilitation context, the strategy determining how we develop life support equipment is another important example of the need for re-examination. The tradition in applying life support strategies is to supply blood at the average rate of the beating heart, to ventilate the lungs at their average rate and so on for the other ONs necessary for sustaining life [54,58].
It is clear from Figure 2 that the condition δ > 0.5 is always true, namely, all the physiological activities of the body adopt a super-diffusion approach of different intensity. The parameter μ referring to the time distance between two consecutive crucial events remains smaller than μ = 3 , which is the border with the region μ > 3 corresponding to the ordinary statistical behavior of thermodynamic equilibrium. The condition μ > 3 entails Gaussian statistics for the time series, thereby losing the renewal statistics of the empirical time series. This loss of crucial event time-series status entails that the ON undergoing such a transition is either diseased or has been damaged by an external cause.
We must consider the way nature has resolved the difficult problems of providing robust methods for ONs to exchange information with one another, with information flowing from the more complex to the less complex network [7]. Then, that knowledge is applied to the least invasive kind of intervention necessary for recovery. The way ONs exchange information provides guidance on how medical devices ought to intervene to facilitate recovery from illness/injury through rehabilitation. The least invasive method of rehabilitation is one that uses an ON’s own strategies to establish the road from illness or injury back to health. The lungs respond best to the natural driving of fractal bio-ventilators and the heart to the fractal cardiopulmonary bypass bio-pumps, each driven by the appropriate spectrum of fractal bio-frequencies; see the experiments successfully carried out as well as the clinical successes of Mutch and colleagues [58,59,60].

5. Conclusions

We conclude that an ON’s emergent time series, whose fractal properties are determined by its scaling index, not by its detailed microscopic dynamics, determines the health of that ON and ultimately of the human body. This scaling codifies the success of that ON in carrying out its function. Moreover, the index quantifies the information shared with the other ONs within the NoONs. We draw this conclusion from what we have learned by processing the interacting time series from the triad of the brain, heart and lungs, whose overall health is determined by the information shared among the three over time [5]. Given this result, what can we further conclude about the universality of complexity synchronization?
A possible mechanism for the quasi-periodic complexity synchronization among the time series generated by the brain, heart and lungs was suggested by reading Buzsaki’s 2006 book. The thalamus, being a hub in cortical–thalamic network interactions, serves as an integration center through which ‘reciprocal causality’ exists among various brain regions and likely among the brain and various ONs [50]:
‘The thalamus is a large collection of relay nuclei, a kind of customs and border patrol agency. These nuclei are the only source of information for the neocortex about the body and the surrounding physical world … The principal mechanism of the cortical-thalamic-cortical flow of activity is self-sustained oscillations …’
The meaning derived from this quote is that the integration center is inherently oscillatory, so that the interaction of multiple networks is coordinated and oscillatory. The above insight was taken from Cycle 7 of Buzsaki’s Rhythms of the Brain in his discussion of resting and sleep states devoid of external sensory stimulation or motor activity, but we think it readily generalizes to waking and task states, although such states would be constrained or influenced by sensory input and motor output.
The statistical analysis of the 66 empirical ON time series does, in fact, support the conclusion that ONs generate crucial event time series, a condition necessary for complexity synchronization. Moreover, we may also conclude that such crucial event time series have fractal statistics, which also facilitates complexity synchronization in multi-ON interactions. However, the fractal nature of these time series is not constant, which is to say they are not monofractal but change with the vagaries of the interactions of one ON with another, because the other ONs are the environment in NoONs. These multifractal time series are produced by the internal dynamics of the individual ONs and can be described using Self-Organized Temporal Criticality models. The critical self-organization generated by an ON’s internal dynamics produces in time [27] what the Self-Organized Criticality model produces in space [61]. Consequently, physiologic phenomena are always multifractal, and their spectral width is a measure of the state of health of that network [51,62,63,64] and consequently of the overall health of the individual.
ONs use fractal statistics to preadapt their dynamics to potentially disruptive perturbations [54], whereas multifractality generalizes that adaptability to the breakup of classical trajectories into fractal trajectories with the onset of chaos [32]. This kind of adaptation enables going beyond what Taleb labeled ‘antifragile’ behavior [65]. The antifragility concept describes how things, in our case an ON time series, gain from disruption rather than being weakened by it. The increase in uncertainty that antifragility promotes, allowing an ON to become stronger, i.e., to increase its complexity, in the face of disruption and adversity, whether produced inside or outside the ON, is precisely what the width of the multifractal spectrum measures.
A remarkable aspect of multifractality is that it is not merely a consequence of the critical dynamics of ONs, for which Self-Organized Temporal Criticality would be a reasonable driver. The scaling behavior of these three physiologic time series is invisible to most data processing techniques, and so, therefore, are the crucial events. The hidden interdependence lies above the level of the time-series scaling generated by the interactions of the three ONs, namely the heart, lungs and brain. It is only after the modified diffusion entropy analysis processing of the time series that the complexity synchronization mechanism tying the three ON time series together is revealed.
Complexity synchronization is a newly identified evolutionary mechanism ‘Nature devised’ to enable NoONs to continue performing their global functions by incorporating the complex dynamic feedback from the host NoON into each guest ON’s individual dynamics. The multifractal dimension indicates how information is encoded within an ON time series, an encoding that guarantees a proper response regardless of the complexity of the healthy host state. This time-dependent fractal-dimensional encoding ensures efficient communication across multiple interacting ONs. The quasi-periodic oscillations are each statistically disrupted by distinct inverse power-law temporal frequency perturbations.
In summary, we conclude that a time series generated by the critical dynamics of a healthy ON is a homogeneous random fractal, with independent time intervals, and is consequently a crucial event time series. Furthermore, such a time series is characterized by a self-similar scaling index δ, giving rise to a fractal dimension μ_D = 2 − δ that directly measures the complexity of the ON time series. The full complexity of an ON connected within a NoON is captured by the time dependence of the scaling index δ(t), resulting in a multifractal dimension of the interacting ON time series. We stress that the observed quasi-periodicity of the three time series is necessary to carry out the distinct task in which the triad of ONs is engaged, and we expect even richer time dependencies to be revealed as a variety of tasks of differing levels of difficulty are carried out under controlled conditions, and perhaps greater coordination under conditions of complete rest or sleep.
We hypothesize that complexity synchronization is the mechanism necessary to coordinate the multiple time dependencies of the many interacting ONs composing a NoON under changing conditions. Consequently, the degree of disruption of an ON’s time series produced by illness or injury may be quantified by the degree to which the level of complexity synchronization among the ONs deviates from its value during normal operation in a healthy NoON.
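As one concrete way such a deviation might be quantified, the minimal sketch below computes the pairwise cross-correlation coefficient between two windowed scaling-index sequences δ(t) of the kind plotted in Figure 2; the function name and the synthetic inputs are our own illustrative assumptions, not the analysis code released with [5].

import numpy as np

def complexity_synchronization_cc(delta_a, delta_b):
    # Pearson cross-correlation between two sequences of time-dependent
    # scaling indices; values near 1 indicate strong complexity synchronization.
    return np.corrcoef(np.asarray(delta_a, float), np.asarray(delta_b, float))[0, 1]

# Synthetic delta(t) sequences: a slow common quasi-periodic modulation plus
# small independent fluctuations, loosely mimicking the curves of Figure 2 (left).
rng = np.random.default_rng(1)
t = np.arange(200)
common = 0.75 + 0.10 * np.sin(2 * np.pi * t / 60)
delta_heart = common + 0.02 * rng.standard_normal(t.size)
delta_lungs = common + 0.02 * rng.standard_normal(t.size)
print(f"CC = {complexity_synchronization_cc(delta_heart, delta_lungs):.2f}")

In this picture, a persistent drop of the cross-correlation below its healthy baseline would flag a loss of complexity synchronization.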

Author Contributions

Conceptualization, K.M., S.E.K. and B.J.W.; Methodology, P.G.; Formal analysis, B.J.W. and K.M.; Investigation, K.M.; Data curation, S.E.K.; Writing—original draft, B.J.W.; Writing—review & editing, P.G. and P.J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by US Army Research Laboratory cooperative agreement W911NF-16-2-0008.

Data Availability Statement

All codes and the datasets are available at https://github.com/Korosh137/MDEA.git (accessed on 22 August 2023) or upon request to K.M.

Acknowledgments

The authors would like to extend their sincere appreciation to the US Army Research Laboratory for support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Modified Diffusion Entropy Analysis (MDEA)

In the MDEA technique, the first step is to project the data of each channel onto the interval [0, 1] by normalizing each time series over the total time interval of the dataset, so that the processed time series can be directly compared. The data profile over the unit interval is then divided into parallel stripes (Figure A1a, ECG data). (The inset in the figure displays what the events would look like for a stripe size of 0.1. Note that a slowly varying feature of the data trace produces relatively few events, well spaced in time, whereas a sharply peaked feature produces a large number of closely spaced events.) Next, the events are extracted by defining them as unit-amplitude pulses whenever the signal at that time is in a different stripe from its previous value (Figure A1b), and zero if it remains in the same stripe. Using the time series of extracted events, we create a diffusion trajectory (Figure A1c), i.e., the cumulative sum of the events in Figure A1b. The statistics of a single diffusion trajectory (blue curve in Figure A1c) are determined by selecting a window size w and partitioning the diffusion trajectory into many pieces, each starting from an event. Because all segments are initiated at an event, they can be shifted to start from a common origin (Figure A1d). Finally, since the events are statistically independent, we evaluate the histogram of the ensemble of trajectory positions at a given time (Figure A1e).
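For concreteness, a minimal sketch of this stripe-based event extraction and of the construction of the diffusion trajectory is given below. It follows the steps just described, but the min–max projection onto [0, 1], the function names and the default stripe size are our own illustrative assumptions rather than the exact code released with [5].

import numpy as np

def extract_events(signal, stripe_size=0.01):
    # Project the signal onto [0, 1], assign each sample to a stripe, and mark
    # an event (1) whenever the sample falls in a different stripe than the
    # previous one; otherwise record 0.
    s = np.asarray(signal, dtype=float)
    s = (s - s.min()) / (s.max() - s.min())
    stripe_index = np.floor(s / stripe_size).astype(int)
    events = np.zeros(s.size, dtype=int)
    events[1:] = (np.diff(stripe_index) != 0).astype(int)
    return events

def diffusion_trajectory(events):
    # Cumulative sum of the event series, i.e., the random walk of Figure A1c.
    return np.cumsum(events)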
To make the statistics of the single time-series diffusion trajectory correspond to those computed in the MDEA processing of the data, we pick a window size τ and slice the diffusion trajectory into many pieces, each of length w = τ and each starting from an event (panel (d)). By shifting all the slices to start from a common origin, we evaluate the distribution of trajectories at time τ (panel (e)). Denoting the probability density function for window size τ as P(x, τ), we define the SW entropy as follows:
\[ S(\tau) = -\int \mathrm{d}x\, P(x,\tau)\,\log_{2} P(x,\tau) \tag{A1} \]
Here, P(x, τ) is the probability density function corresponding to window size τ, and the SW entropy defines the diffusion entropy as the information contained in the time series. Using the scaling probability density function, and without knowing the function F(·), the deviation of the SW entropy from the reference value S_ref determined by that unknown function is
\[ \Delta S(\tau) \equiv S(\tau) - S_{\mathrm{ref}} = \delta \ln \tau \tag{A2} \]
Consequently, a graph of the SW entropy for an empirical process versus the logarithm of the time yields a curve whose slope we interpret to be the scaling index δ.
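A matching sketch of the entropy step follows: windows of length w are anchored on events and shifted to a common origin, the histogram of end positions estimates P(x, w), and the entropy is regressed against ln w so that the fitted slope is the scaling index δ. The uniform binning, the natural logarithm and the least-squares fit are simplifying assumptions of this illustration, not a reproduction of the released MDEA code.

import numpy as np

def dea_scaling_index(events, window_sizes, n_bins=50):
    # Diffusion entropy analysis on an event series: for each window size w,
    # collect the walk increments over windows starting at events, histogram
    # the end positions, compute the (differential) entropy S(w), and fit
    # S(w) against ln(w); the slope is the scaling index delta.
    walk = np.cumsum(events)
    starts = np.flatnonzero(events)
    used_w, entropies = [], []
    for w in window_sizes:
        valid = starts[starts + w < walk.size]
        if valid.size < 2:
            continue
        x = walk[valid + w] - walk[valid]
        density, edges = np.histogram(x, bins=n_bins, density=True)
        width = edges[1] - edges[0]
        p = density[density > 0]
        entropies.append(-np.sum(p * np.log(p)) * width)
        used_w.append(w)
    delta, _ = np.polyfit(np.log(used_w), entropies, 1)
    return delta

Applying such a routine to the event series of each channel over successive one-minute windows, as described for Figure 2, would yield the time-dependent index δ(t).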
Figure A1. A schematic of the steps in the processing of time series using the technique of modified diffusion entropy analysis. Panel (a): The blue curve is the heart rate signal, which is projected onto the interval [0, 1] and then divided into stripes of size 0.1, magnified in the inset. Note that sharply peaked features in the ECG produce a cluster of events in (b), whereas a sloping feature produces well-spaced events; see the inset for a visual verification of this explanation. The horizontal lines define the stripes. Panel (b): The events (represented as distinct, separated unit-amplitude pulses) are extracted from the passage of the continuous blue curve from one stripe to another. Panel (c): The diffusion trajectory made by the cumulative summation of the events of panel (b). The vertical lines show a selected set of windows of size 100 that slice the diffusion trajectory. Panel (d): The partitioned trajectories of panel (c), shifted to initiate each trajectory from a common origin and to terminate it after a time w, the length of the window. Panel (e): The histogram of the positions of the trajectories at the end of the windows (to create this histogram we used 60 s of data and a stripe size of 0.01). Taken from [5] with permission.

Appendix A.2. Fractional Kinetic Equation

In this Appendix, a sketch of the formal solution to a fractional equation of evolution for the scaling probability density function is presented employing renormalization group theory. The equation to be solved was first derived using the continuous-time random-walk model of Montroll and Weiss (MW), who generalized the random-walk process to continuous time. West and Grigolini [66] outline this approach in which the time interval for successive steps is given by an inverse power law in time and the step length is an inverse power law in space. Consequently, the fractional equation describing the space-time evolution for the probability density function is determined to be [66]
\[ D_t^{\alpha}\!\left[P(x,t)\right] = K_{\beta}\, D_{|x|}^{\beta}\!\left[P(x,t)\right] \tag{A3} \]
where the Caputo fractional derivative in time is operating on the left side of the equation and the Riesz–Feller fractional derivative in space is operating on the right side of the equation. This fractional kinetic equation is solved as an initial value problem for the probability density function, where P 0 ( x ) = P ( x , t = 0 ) .
Zaslavsky [31,32] applied the renormalization group transformation R to the network dynamics such that the scaling properties of the incremental changes are
\[ R\,\delta x = \lambda_x\,\delta x, \qquad R\,\delta t = \lambda_T\,\delta t, \tag{A4} \]
which apply, after some averaging, to a restricted space-time domain, and the scaling parameters are ( λ x , λ T ).
He continues with the observation that the fractional kinetic Equation (A3) given above is invariant under the renormalization group transformation:
\[ R\!\left\{ D_t^{\alpha}\!\left[P(x,t)\right] \right\} = K_{\beta}\, R\!\left\{ D_{|x|}^{\beta}\!\left[P(x,t)\right] \right\}, \tag{A5} \]
which implies that the transformed fractional equation satisfies the scaling behavior:
\[ \lambda_T^{\alpha}\, D_t^{\alpha}\!\left[P(x,t)\right] = \lambda_x^{\beta}\, K_{\beta}\, D_{|x|}^{\beta}\!\left[P(x,t)\right]. \tag{A6} \]
The lowest-order renormalization group solution is obtained by equating the renormalization parameters raised to their respective powers: λ_T^α = λ_x^β. The solution to the renormalized fractional equation is given in terms of the Fourier transform of the probability density function, i.e., the characteristic function ϕ(k, t), expressed in terms of the Mittag-Leffler function E_α(·) to be
\[ \phi(k,t) = E_{\alpha}\!\left(-K_{\beta}\,|k|^{\beta}\,t^{\alpha}\right). \tag{A7} \]
Consequently, taking the inverse Fourier transform of this characteristic function and expressing the Mittag-Leffler function as a series results in, after some algebra [30,32], the scaling solution for the probability density function:
\[ P\!\left(\lambda_T^{\alpha/\beta}\,x,\ \lambda_T\,t\right) = P(x,t)\,/\,\lambda_T^{\alpha/\beta}. \tag{A8} \]
Selecting the ratio of the scaling parameters to be
\[ \delta = \alpha/\beta, \tag{A9} \]
and
\[ \lambda_T = 1/t, \tag{A10} \]
enables us to rewrite Equation (A8) in the form of the scaling probability density function given by Equation (5). Thus, the solution to the simplest fractional kinetic equation yields the scaling probability density function used, along with the MDEA data processing technique, to interpret the datasets from the brain, heart and lungs in the text as crucial event time series with synchronized levels of complexity as measured by their respective multifractal dimensions.
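As a brief consistency check of our own (not taken from [30,32]), writing the scaling probability density function in the standard form P(x,t) = F(x/t^δ)/t^δ and inserting the choices (A9) and (A10) reproduces Equation (A8) identically:

\[
P\!\left(\lambda_T^{\alpha/\beta}x,\ \lambda_T t\right)
= P\!\left(\frac{x}{t^{\delta}},\,1\right)
= F\!\left(\frac{x}{t^{\delta}}\right)
= t^{\delta}\,P(x,t)
= \frac{P(x,t)}{\lambda_T^{\alpha/\beta}},
\qquad \delta=\frac{\alpha}{\beta},\ \ \lambda_T=\frac{1}{t}.
\]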

References

  1. Phelan, S.E. What is complexity science, really? Emerg. A J. Complex. Issues Organ. Manag. 2001, 3, 120–136. [Google Scholar] [CrossRef]
  2. Braginton, P. Taxonomy of Synchronization and Barrier as a Basic Mechanism for Building Other Synchronization from It. Master’s Thesis, California State University, San Bernardino, CA, USA, 2003. [Google Scholar]
  3. Mahmoodi, K.; Kerick, S.E.; Grigolini, P.; Franaszczuk, P.J.; West, B.J. Temporal complexity measure of reaction time series: Operational versus event time. Brain Behav. 2023, 13, e3069. [Google Scholar] [CrossRef]
  4. West, B.J.; Grigolini, P. Crucial Events: Why Are Catastrophes Never Expected? World Scientific: Singapore, 2021. [Google Scholar]
  5. Mahmoodi, K.; Kerick, S.E.; Grigolini, P.; Franaszczuk, P.J.; West, B.J. Complexity synchronization: A measure of interaction between the brain, heart and lungs. Sci. Rep. 2023, 13, 11433. [Google Scholar] [CrossRef]
  6. Bartsch, R.P.; Ivanov, P.C. Coexisting forms of coupling and phase-transitions in physiological networks. In Proceedings of the Nonlinear Dynamics of Electronic Systems: 22nd International Conference, NDES 2014, Albena, Bulgaria, 4–6 July 2014; Proceedings 22. Springer: Berlin/Heidelberg, Germany, 2014; pp. 270–287. [Google Scholar]
  7. West, B.J.; Geneston, E.L.; Grigolini, P. Maximizing information exchange between complex networks. Phys. Rep. 2008, 468, 1–99. [Google Scholar] [CrossRef]
  8. Feder, J. Fractals; Plenum Press: New York, NY, USA, 1988; see Figure 9.5. [Google Scholar]
  9. Aquino, G.; Bologna, M.; West, B.J.; Grigolini, P. Beyond the death of linear response theory: Criticality of the 1/f-noise condition. Phys. Rev. Lett. 2010, 105, 040601. [Google Scholar] [CrossRef]
  10. Aquino, G.; Bologna, M.; Grigolini, P.; West, B.J. Transmission of information between complex systems: 1/f resonance. Phys. Rev. E 2011, 83, 051130. [Google Scholar] [CrossRef] [PubMed]
  11. Failla, R.; Grigolini, P.; Ignaccolo, M.; Schwettmann, A. Random growth of interfaces as a subordination process. Phys. Rev. E 2004, 70, 010101. [Google Scholar] [CrossRef]
  12. Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  13. Jones, S.A.; Barfield, J.H.; Norman, V.K.; Shew, W.L. Scale-free behavioral dynamics directly linked with scale-free cortical dynamics. eLife 2023, 12, e79950. [Google Scholar] [CrossRef] [PubMed]
  14. Mahmoodi, K.; West, B.J.; Grigolini, P. Self-organized complex networks: Individual versus global rules. Front. Physiol. Fractal Physiol. 2017, 8, 1–11. [Google Scholar]
  15. Vicsek, T.; Czirok, A.; Ben-Jacob, E.; Cohen, I.; Shochet, O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 1995, 75, 1226. [Google Scholar] [CrossRef] [PubMed]
  16. Vanni, F.; Lukovic, M.; Grigolini, P. Criticality and transmission of information in a swarm of cooperative units. Phys. Rev. Lett. 2011, 107, 078103. [Google Scholar] [CrossRef]
  17. Strogatz, S.H. SYNC: How Order Emerges from Chaos in the Universe; Hyperion Books: New York, NY, USA, 2004. [Google Scholar]
  18. Wiener, N. Nonlinear Problems in Random Theory; MIT Press: Cambridge, MA, USA, 1962. [Google Scholar]
  19. Winfree, A.T. The Geometry of Biological Time; Springer: Berlin/Heidelberg, Germany, 1980; Volume 2. [Google Scholar]
  20. Kuramoto, Y. Chemical Oscillations, Waves, and Turbulence; Springer: Berlin, Germany, 1984. [Google Scholar]
  21. Pecora, L.M.; Carroll, T.L. Synchronization in chaotic systems. Phys. Rev. Lett. 1990, 64, 821. [Google Scholar] [CrossRef]
  22. Kinoshita, S. Pattern Formations and Oscillatory Phenomena; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  23. Gálvez-de León, C.E.; Ríos, A.; Cuellar, K.G.; Escalante, B.A.; Rodríguez-González, J. Is the isolated heart a relaxation-oscillator? In Feedback Control for Personalized Medicine; Elsevier: New York, NY, USA, 2022; pp. 203–218. [Google Scholar]
  24. Tamba, V.K.; Kengne, R.; Kingni, S.T.; Fotsin, H.B. A four-dimensional chaotic system with one or without equilibrium points: Dynamical analysis and its application to text encryption. In Recent Advances in Chaotic Systems and Synchronization; Elsevier: New York, NY, USA, 2019; pp. 277–300. [Google Scholar]
  25. Alimi, M.; Rhif, A.; Rebai, A.; Vaidyanathan, S.; Azar, A.T. Optimal adaptive backstepping control for chaos synchronization of nonlinear dynamical systems. In Backstepping Control of Nonlinear Dynamical Systems; Elsevier: New York, NY, USA, 2021; pp. 291–345. [Google Scholar]
  26. Pikovsky, A.; Rosenblum, M.; Kurths, J. Synchronization: A universal concept in nonlinear science. Am. J. Phys. 2002, 70, 655. [Google Scholar] [CrossRef]
  27. Mahmoodi, K.; West, B.J.; Grigolini, P. On the dynamical foundation of multifractality. Phys. A Stat. Mech. Its Appl. 2017, 551, 124038. [Google Scholar] [CrossRef]
  28. Almurad, Z.M.; Roume, C.; Delignières, D. Complexity matching in side-by-side walking. Hum. Mov. Sci. 2017, 54, 125–136. [Google Scholar] [CrossRef] [PubMed]
  29. Mahmoodi, K.; West, B.J.; Grigolini, P. Complex periodicity and synchronization. Front. Physiol. 2020, 1198. [Google Scholar] [CrossRef]
  30. West, B.J. Fractional Calculus View of Complexity: Tomorrow’s Science; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  31. Saichev, A.I.; Zaslavsky, G.M. Fractional kinetic equation: Solutions and applications. Chaos 1997, 7, 753–764. [Google Scholar] [CrossRef] [PubMed]
  32. Zaslavsky, G.M. Chaos, fractional kinetics, and anomalous transport. Phys. Rep. 2002, 371, 461. [Google Scholar] [CrossRef]
  33. Allegrini, P.; Grigolini, P.; Hamilton, P.; Palatella, L.; Raffaelli, G. Memory beyond memory in heart beating, a sign of a healthy physiological condition. Phys. Rev. E 2002, 65, 041926. [Google Scholar] [CrossRef]
  34. Bohara, G.; West, B.J.; Grigolini, P. Bridging waves and crucial events in the dynamics of the brain. Front. Physiol. 2018, 9, 1174. [Google Scholar] [CrossRef] [PubMed]
  35. Buzsaki, G. The Brain from Inside Out; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  36. Delignières, D.; Almurad, Z.M.; Roume, C.; Marmelat, V. Multifractal signatures of complexity matching. Exp. Brain Res. 2016, 234, 2773–2785. [Google Scholar] [CrossRef]
  37. Bohara, G.; Lambert, D.; West, B.J.; Grigolini, P. Crucial events, randomness, and multifractality. Phys. Rev. E 2017, 96, 062216. [Google Scholar] [CrossRef]
  38. Meakin, P. Fractals, Scaling and Growth Far from Equilibrium; Cambridge University Press: Cambridge, UK, 1998; Volume 5. [Google Scholar]
  39. Humeau, A.; Buard, B.; Mahé, G.; Chapeau-Blondeau, F.; Rousseau, D.; Abraham, P. Multifractal analysis of heart rate variability and laser Doppler flowmetry fluctuations: Comparison of results from different numerical methods. Phys. Med. Biol. 2010, 55, 6279. [Google Scholar] [CrossRef]
  40. Ivanov, P.C.; Amaral, L.A.N.; Goldberger, A.L.; Havlin, S.; Rosenblum, M.G.; Struzik, Z.R.; Stanley, H.E. Multifractality in human heartbeat dynamics. Nature 1999, 399, 461–465. [Google Scholar] [CrossRef]
  41. Abney, D.H.; Paxton, A.; Dale, R.; Kello, C.T. Complexity matching in dyadic conversation. J. Exp. Psychol. Gen. 2014, 143, 2304. [Google Scholar] [CrossRef] [PubMed]
  42. Correll, J. 1/f noise and effort on implicit measures of bias. J. Personal. Soc. Psychol. 2008, 94, 48. [Google Scholar] [CrossRef]
  43. Delignières, D.; Lemoine, L.; Torre, K. Time intervals production in tapping and oscillatory motion. Hum. Mov. Sci. 2004, 23, 87–103. [Google Scholar] [CrossRef]
  44. Allegrini, P.; Menicucci, D.; Bedini, R.; Fronzoni, L.; Gemignani, A.; Grigolini, P.; West, B.J.; Paradisi, P. Spontaneous brain activity as a source of ideal 1/f noise. Phys. Rev. E 2009, 80, 061914. [Google Scholar] [CrossRef]
  45. Kello, C.T.; Beltz, B.C.; Holden, J.G.; Van Orden, G.C. The emergent coordination of cognitive function. J. Exp. Psychol. Gen. 2007, 136, 551. [Google Scholar] [CrossRef] [PubMed]
  46. Almurad, Z.M.; Roume, C.; Blain, H.; Delignières, D. Complexity matching: Restoring the complexity of locomotion in older people through arm-in-arm walking. Front. Physiol. 2018, 9, 1766. [Google Scholar] [CrossRef]
  47. Coey, C.A.; Washburn, A.; Hassebrock, J.; Richardson, M.J. Complexity matching effects in bimanual and interpersonal syncopated finger tapping. Neurosci. Lett. 2016, 616, 204–210. [Google Scholar] [CrossRef]
  48. Fine, J.M.; Likens, A.D.; Amazeen, E.L.; Amazeen, P.G. Emergent complexity matching in interpersonal coordination: Local dynamics and global variability. J. Exp. Psychol. Hum. Percept. Perform. 2015, 41, 723. [Google Scholar] [CrossRef] [PubMed]
  49. Marmelat, V.; Delignières, D. Strong anticipation: Complexity matching in interpersonal coordination. Exp. Brain Res. 2012, 222, 137–148. [Google Scholar] [CrossRef]
  50. Buzsaki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  51. West, B.J. The fractal tapestry of life 2: Entailment of fractional oncology by physiology networks. Front. Netw. Physiol. 2022, 2, 845495. [Google Scholar] [CrossRef]
  52. Lloyd, D.; Aon, M.A.; Cortassa, S. Why homeodynamics, not homeostasis? Sci. World J. 2001, 1, 133–145. [Google Scholar] [CrossRef] [PubMed]
  53. Gould, S. The Structure of Evolutionary Theory; Harvard University Press: Cambridge, MA, USA, 2002. [Google Scholar]
  54. West, B.J. Where Medicine Went Wrong: Rediscovering the Path to Complexity; World Scientific: Singapore, 2006; Volume 11. [Google Scholar]
  55. West, B.J. Fractal Physiology and Chaos in Medicine; World Scientific: Singapore, 2013; Volume 16. [Google Scholar]
  56. West, B.J. A mathematics for medicine: The network effect. Front. Physiol. 2014, 5, 456. [Google Scholar] [CrossRef]
  57. Yu, Y.; Romero, R.; Lee, T.S. Preference of sensory neural coding for 1/f-signals. Phys. Rev. Lett. 2005, 94, 108103. [Google Scholar] [CrossRef]
  58. Mutch, W.A.C.; Lefevre, G.R. Health, small-worlds, fractals and complex networks: An emerging field. Med. Sci. Monit. 2003, 9, MT55–MT59. [Google Scholar]
  59. Mutch, W.A.C.; Harm, S.; Lefevre, G.; Graham, M.R.; Girling, L.G.; Kowalski, S.E. Biologically variable ventilation increases arterial oxygenation over that seen with positive end-expiratory pressure alone in a porcine model of acute respiratory distress syndrome. Crit. Care Med. 2000, 28, 2457–2464. [Google Scholar] [CrossRef]
  60. West, B.J.; Mutch, W.A.C. On the Fractal Language of Medicine. 2023; under review. [Google Scholar]
  61. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality: An explanation of 1/f noise. Phys. Rev. Lett. 1987, 59, 381–384. [Google Scholar] [CrossRef]
  62. West, B.J.; Latka, M.; Glaubic-Latka, M.; Latka, D. Multifractality of cerebral blood flow. Phys. A Stat. Mech. Its Appl. 2003, 318, 453–460. [Google Scholar] [CrossRef]
  63. West, B.J. The Fractal Tapestry of Life 3: Multifractals Entail the Fractional Calculus. Fractal Fract. 2022, 6, 225. [Google Scholar] [CrossRef]
  64. Ivanov, P.C.; Rosenblum, M.G.; Peng, C.K.; Mietus, J.; Havlin, S.; Stanley, H.E.; Goldberger, A.L. Scaling behaviour of heartbeat intervals obtained by wavelet-based time-series analysis. Nature 1996, 383, 323–327. [Google Scholar] [CrossRef] [PubMed]
  65. Arney, C. Antifragile: Things that gain from disorder. Math. Comput. Educ. 2013, 47, 238. [Google Scholar]
  66. West, B.; Grigolini, P. Complex Webs: Anticipating the Improbable; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
Figure 1. (Left) panel: The schematic depicts the three time series from the ONs of interest here, the brain, heart and lungs. Note that the three typical time series share no obvious common features. Also be aware that the information is exchanged simultaneously among all three as well as pairwise between the three ONs. Top is ten seconds of one channel of EEG time series; bottom left is ten seconds of respiration time series; bottom right is ten seconds of ECG time series; all three datasets are measured simultaneously. (Right) panel: The corresponding diffusion entropy analysis was used to process the diffusion random walks constructed from the three datasets depicted in the left panel (see Appendix A.1 for details or Mahmoodi et al. [5]). The entropy Δ S ( w ) is plotted versus the log of the time w as predicted in Appendix A.1 by Equation (A2) for a scaling probability density function, such that the three slopes between the dashed vertical lines yield the scaling indices for the corresponding time series. The slope is the measure of temporal complexity of the time series given by δ j , see Table 1. From Mahmoodi et al. [5] with permission.
Figure 2. (Left) Panel: Light gray curves are the scaling indices δ j , j = 1 , , 64 obtained by processing the 64 time series from the EEG channels and the black curve is the average over the 64 scaling indices at each point in time. The red and blue curves are the scaling indices obtained by processing the time series of the respiration and ECG channels, respectively. Modified diffusion entropy analysis processing was performed on each channel time series with stripe size of 0.01 for the ECG and respiratory data and 0.1 for the EEG data, using the jumping ahead rule, and on data windows of one-minute length of data (windows in increments of 20 s steps), respectively. The data were simultaneously collected while the participant was conducting the Go-NoGo shooting task (for details, see [5]). (Right) Panel: The corresponding pairwise cross-correlation coefficients (CCs) are calculated among the depicted EEGs channels, ECG and respiration scaling coefficients, with all calculated values of the three correlation coefficients falling within the interval 0.70 < CC < 0.73.
Table 1. This table makes easy reference to the scaling index δ from the above homogeneous scaling relation for the scaled variable X ( t ) , relating it to the inverse power-law spectral density S p (f) index β through the waiting-time probability density function ψ (t) index μ in the two asymptotic regimes. The value μ = 2 is the boundary between the underlying process having a finite ( μ > 2 ) or an infinite ( μ < 2 ) average waiting time and is also the point at which β = 1 where the process is that of true 1 / f -noise. Consequently, β and μ are interchangeable measures of complexity. For an ergodic time series such as that determined by the waiting-time inverse power-law index, μ increases with decreasing scaling index δ and the complexity decreases. From [5] with permission.
Scaled Functions                      Parameter Relations      Parameter Range
waiting-time PDF   ψ(t) ∝ t^(−μ)      —                        1 ⩽ μ ⩽ 3
power spectrum     S(f) ∝ f^(−β)      μ = 3 − β                —
scale variable     X(t) ∝ t^(δ)       μ = 1 + δ                1 ⩽ μ ⩽ 2 (non-ergodic)
                                      μ = 1 + 1/δ              2 ⩽ μ ⩽ 3 (ergodic)
                                      δ = 0.5                  μ ≥ 3
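For convenience, the relations collected in Table 1 can be wrapped in a small utility that converts a measured scaling index δ into the waiting-time index μ and the spectral index β; the function names and the regime flag below are our own illustrative choices, written directly from the table rather than taken from the analysis code of [5].

def mu_from_delta(delta, ergodic=True):
    # Table 1: ergodic regime (2 <= mu <= 3) has mu = 1 + 1/delta;
    # non-ergodic regime (1 <= mu <= 2) has mu = 1 + delta.
    return 1.0 + 1.0 / delta if ergodic else 1.0 + delta

def beta_from_mu(mu):
    # Table 1: the spectral index of S(f) ~ 1/f**beta satisfies mu = 3 - beta.
    return 3.0 - mu

# Example: an ergodic series with delta = 0.75 gives mu ~ 2.33 and beta ~ 0.67;
# the 1/f-noise condition mu = 2, beta = 1 is reached only as delta approaches 1.
mu = mu_from_delta(0.75, ergodic=True)
print(mu, beta_from_mu(mu))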