
Article

Hierarchical Wilson–Cowan Models and Connection Matrices

by
W. A. Zúñiga-Galindo
and
B. A. Zambrano-Luna
*,†
School of Mathematical & Statistical Sciences, University of Texas Rio Grande Valley, One West University Blvd., Brownsville, TX 78520, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2023, 25(6), 949; https://doi.org/10.3390/e25060949
Submission received: 13 May 2023 / Revised: 9 June 2023 / Accepted: 14 June 2023 / Published: 16 June 2023
(This article belongs to the Special Issue New Trends in Theoretical and Mathematical Physics)
Figure 1. The rooted tree associated with the group $\mathbb{Z}_2 / 2^3 \mathbb{Z}_2$. The elements of $\mathbb{Z}_2 / 2^3 \mathbb{Z}_2$ have the form $i = i_0 + i_1 2 + i_2 2^2$, $i_0, i_1, i_2 \in \{0, 1\}$. The distance satisfies $-\log_2 \left| i - j \right|_2 =$ level of the first common ancestor of $i$, $j$.
Figure 2. Heat map of the function $\phi(x)$; see (18). Here, $\phi(0) = \phi(1) = \phi(7) = 1$ is white; $\phi(2) = -1$ is black; and $\phi(x) = 0$ is red for $x \neq 0, 1, 7, 2$.
Figure 3. An approximation of $E(x, t)$. We take $Q = 0$ and $\delta = 5$. The time axis goes from 0 to 100 with a step of 0.05. The figure shows the response of the network to a brief localized stimulus (the pulse given in (19)). The response is also a pulse. This result is consistent with the numerical results in [2] (Section 2.2.1, Figure 3).
Figure 4. An approximation of $E(x, t)$. We take $Q = 0$ and $\delta = 100$. The time axis goes from 0 to 200 with a step of 0.05. The figure shows the response of the network to a maintained stimulus (see (19)). The response is a pulse train. This result is consistent with the numerical results in [2] (Section 2.2.5, Figure 7).
Figure 5. An approximation of $E(x, t)$. We take $Q = -30$ and $\delta = 100$. The time axis goes from 0 to 100 with a step of 0.05. The figure shows the response of the network to a maintained stimulus (see (19) and (20)). The response is a pulse train in space and time. This result is consistent with the numerical results in [2] (Section 2.2.7, Figure 9).
Figure 6. An approximation of $\widetilde{h}_E(x, t)$ and $E(x, t)$. We take $h_I(x, t) \equiv 0$, $p = 3$, and $l = 6$; the kernels $w_{AB}$ are as in Simulation 1, and $h_E(x, t)$ is as in (21). The time axis goes from 0 to 60 with a step of 0.05. The first panel shows the stimulus, and the second panel shows the response of the network.
Figure 7. An approximation of $h_E(x, t)$ and $E(x, t)$. We take $h_I(x, t) \equiv 0$, $p = 3$, and $l = 6$; the kernels $w_{AB}$ are as in Simulation 1, and $h_E(x, t)$ is as in (22). The time axis goes from 0 to 60 with a step of 0.05. The first panel shows the stimulus, and the second panel shows the response of the network.
Figure 8. The left matrix is the connection matrix of the cat cortex. The right matrix corresponds to a discretization of the kernel $w_{EE}$ used in Simulation 1.
Figure 9. Three $p$-adic approximations of the connection matrix of the cat cortex. We take $p = 2$ and $l = 6$. The first approximation uses $r = 0$; the second, $r = 3$; and the last, $r = 5$.
Figure 10. We use $p = 2$ and $l = 6$, and the time axis goes from 0 to 150 with a step of 0.05. The left image uses $r = 0$; the right one uses $r = 3$; and the central one uses $r = 5$.

Abstract:
This work aims to study the interplay between the Wilson–Cowan model and connection matrices. These matrices describe cortical neural wiring, while Wilson–Cowan equations provide a dynamical description of neural interaction. We formulate Wilson–Cowan equations on locally compact Abelian groups. We show that the Cauchy problem is well posed. We then select a type of group that allows us to incorporate the experimental information provided by the connection matrices. We argue that the classical Wilson–Cowan model is incompatible with the small-world property. A necessary condition to have this property is that the Wilson–Cowan equations be formulated on a compact group. We propose a p-adic version of the Wilson–Cowan model, a hierarchical version in which the neurons are organized into an infinite rooted tree. We present several numerical simulations showing that the p-adic version matches the predictions of the classical version in relevant experiments. The p-adic version allows the incorporation of the connection matrices into the Wilson–Cowan model. We present several numerical simulations using a neural network model that incorporates a p-adic approximation of the connection matrix of the cat cortex.

1. Introduction

This work explores the interplay among Wilson–Cowan models, connection matrices, and non-Archimedean models of complex systems.
The Wilson–Cowan model describes the evolution of excitatory and inhibitory activity in a synaptically coupled neuronal network. The model is given by the following system of non-linear integro-differential evolution equations:
$$
\begin{cases}
\tau \dfrac{\partial E(x, t)}{\partial t} = -E(x, t) + \left( 1 - r_E E(x, t) \right) S_E\!\left( w_{EE}(x) \ast E(x, t) - w_{EI}(x) \ast I(x, t) + h_E(x, t) \right), \\[2ex]
\tau \dfrac{\partial I(x, t)}{\partial t} = -I(x, t) + \left( 1 - r_I I(x, t) \right) S_I\!\left( w_{IE}(x) \ast E(x, t) - w_{II}(x) \ast I(x, t) + h_I(x, t) \right),
\end{cases}
$$
where $E(x, t)$ is a temporal coarse-grained variable describing the proportion of excitatory neurons firing per unit of time at position $x \in \mathbb{R}$ at instant $t \in \mathbb{R}_+$. Similarly, the variable $I(x, t)$ represents the activity of the inhibitory population of neurons. The main parameters of the model are the strengths of the connections among the subtypes of populations ($w_{EE}$, $w_{IE}$, $w_{EI}$, and $w_{II}$) and the strengths of the inputs to each subpopulation ($h_E(x, t)$ and $h_I(x, t)$). This model generates a diversity of dynamical behaviors that are representative of activity observed in the brain, such as multistability, oscillations, traveling waves, and spatial patterns; see, e.g., [1,2,3] and the references therein.
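The dynamics described by the system above can be sketched numerically on a real interval by discretizing the convolutions as matrix–vector products and integrating with forward Euler. The following minimal Python sketch is ours: all grid sizes, kernel widths, and sigmoid parameters are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

N, L = 200, 10.0                 # grid points and spatial half-extent (hypothetical)
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
tau, rE, rI = 1.0, 1.0, 1.0      # time constant and refractory parameters

def S(v, a=1.0, theta=2.0):
    # Bounded Lipschitz sigmoid, shifted so that S(0) = 0.
    return 1.0 / (1.0 + np.exp(-a * (v - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def gaussian_kernel(sigma):
    # Discretized Gaussian convolution kernel as an N x N matrix.
    w = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma ** 2))
    return w * dx / (sigma * np.sqrt(2.0 * np.pi))

wEE, wEI, wIE, wII = (gaussian_kernel(s) for s in (1.0, 1.5, 1.5, 2.0))

E, I = np.zeros(N), np.zeros(N)
hE, hI = np.exp(-x ** 2), np.zeros(N)   # localized excitatory stimulus
dt = 0.05
for _ in range(2000):                    # forward Euler in time
    FE = S(wEE @ E - wEI @ I + hE)
    FI = S(wIE @ E - wII @ I + hI)
    E = E + (dt / tau) * (-E + (1.0 - rE * E) * FE)
    I = I + (dt / tau) * (-I + (1.0 - rI * I) * FI)
```

The sigmoid is shifted so that $S(0) = 0$, matching the normalization imposed on $S_E$ and $S_I$ in Section 2.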
We formulate the Wilson–Cowan model on locally compact Abelian topological groups. The classical model corresponds to the group $(\mathbb{R}, +)$. In this framework, using classical techniques on semilinear evolution equations (see, e.g., [4,5]), we show that the corresponding Cauchy problem is locally well posed and, if $r_E = r_I = 0$, globally well posed; see Theorem 1. This last condition corresponds to the case of two coupled perceptrons.
Nowadays, there is a large amount of experimental data about the connection matrices of the cerebral cortex of invertebrates and mammals. Based on these data, several researchers have hypothesized that cortical neural networks are arranged in fractal or self-similar patterns and have the small-world property; see, e.g., [6,7,8,9,10,11,12,13,14,15,16,17,18,19] and the references therein. Connection matrices provide a static view of neural connections.
The investigation of the relationships between the Wilson–Cowan model and connection matrices is quite natural, since the model was proposed to explain the cortical dynamics, while the matrices describe the functional geometry of the cortex. We initiate this study here.
A network having the small-world property necessarily has long-range interactions; see Section 3. In the Wilson–Cowan model, the kernels ($w_{EE}$, $w_{IE}$, $w_{EI}$, and $w_{II}$) describing the neural interactions are Gaussian in nature, so only short-range interactions may occur. For practical purposes, these kernels have compact support. On the other hand, the Wilson–Cowan model on a general group requires that the kernels be integrable; see Section 2. We argue that $G$ must be compact to satisfy the small-world property. Under this condition, any continuous kernel is integrable. Wilson and Cowan formulated their model on the group $(\mathbb{R}, +)$. The only compact subgroup of this group is the trivial one. The small-world property is, therefore, incompatible with the classical Wilson–Cowan model.
It is worth noting that the absence of non-trivial compact subgroups in $(\mathbb{R}, +)$ is a consequence of the Archimedean axiom (the absolute value is not bounded on the integers). Therefore, to avoid this problem, we can replace $\mathbb{R}$ with a non-Archimedean field, i.e., a field where the Archimedean axiom does not hold. We selected the field of $p$-adic numbers. This field has infinitely many compact subgroups, namely the balls centered at the origin. We selected the unit ball, the ring of $p$-adic integers $\mathbb{Z}_p$. The $p$-adic integers are organized in an infinite rooted tree. We used this hierarchical structure as the topology for our $p$-adic version of the Wilson–Cowan model. In principle, we could use other groups, such as the classical compact groups, to replace $(\mathbb{R}, +)$, but it is also essential to have a rigorous study of the discretization of the model. For the group $\mathbb{Z}_p$, this task can be performed using standard approximation techniques for evolutionary equations; see, e.g., [5] (Section 5.4).
The p-adic Wilson–Cowan model admits good discretizations. Each discretization corresponds to a system of non-linear integro-differential equations on a finite rooted tree. We show that the solution of the Cauchy problem of this discrete system provides a good approximation to the solution of the Cauchy problem of the p-adic Wilson–Cowan model; see Theorem 2.
We provide extensive numerical simulations of $p$-adic Wilson–Cowan models. In Section 5, we present three numerical simulations showing that the $p$-adic models provide an explanation similar to that of the numerical experiments presented in [2]. In these experiments, the kernels ($w_{EE}$, $w_{IE}$, $w_{EI}$, and $w_{II}$) were chosen to have properties similar to those of the kernels used in [2]. In Section 6, we consider the problem of how to integrate the connection matrices into the $p$-adic Wilson–Cowan model. This fundamental scientific task aims to use the vast amount of data on maps of neural connections to understand the dynamics of the cerebral cortex of invertebrates and mammals. We show that the connection matrix of the cat cortex can be well approximated by a $p$-adic kernel $K_r(x, y)$. We then replace the excitatory–excitatory term $w_{EE} \ast E$ with $\int_{\mathbb{Z}_p} K_r(x, y) E(y, t)\, dy$ but keep the other kernels as in Simulation 1 presented in Section 5. The response of this network is entirely different from that given in Simulation 1. For the same stimulus, the response of the last network exhibits very complex patterns, while the response of the network presented in Simulation 1 is simpler.
$p$-Adic analysis has proven to be the right tool for constructing a wide variety of models of complex hierarchical systems; see, e.g., [20,21,22,23,24,25,26,27,28] and the references therein. Many of these models involve abstract evolution equations of the type $\partial_t u + A u = F(u)$. In these models, the discretization of the operator $A$ is an ultrametric matrix $A_l = \left[ a_{ij} \right]_{i, j \in G_l}$, where $G_l$ is a finite rooted tree with $l$ levels and $p^l$ branches; here, $p$ is a fixed prime number (see the numerical simulations in [27,28]). Locally, connection matrices look very similar to the matrices $A_l$. The problem of approximating large connection matrices with ultrametric matrices is an open problem.

2. An Abstract Version of the Wilson–Cowan Equations

In this section, we formulate the Wilson–Cowan model on locally compact topological groups and study the well-posedness of the Cauchy problem attached to these equations.

2.1. Wilson–Cowan Equations on Locally Compact Abelian Topological Groups

Let $(G, +)$ be a locally compact Abelian topological group, and let $d\mu$ be a fixed Haar measure on $(G, +)$. The basic example is $(\mathbb{R}^N, +)$, the $N$-dimensional Euclidean space considered as an additive group. In this case, $d\mu$ is the Lebesgue measure of $\mathbb{R}^N$.
Let $L^\infty(G)$ be the $\mathbb{R}$-vector space of functions $f : G \to \mathbb{R}$ satisfying
$$\left\| f \right\|_\infty = \sup_{x \in G \setminus A} \left| f(x) \right| < \infty,$$
where $A$ is a subset of $G$ of measure zero. Let $L^1(G)$ be the $\mathbb{R}$-vector space of functions $f : G \to \mathbb{R}$ satisfying
$$\left\| f \right\|_1 = \int_G \left| f(x) \right| d\mu < \infty.$$
For a fixed $w \in L^1(G)$, the mapping
$$
\begin{aligned}
L^\infty(G) &\to L^\infty(G) \\
f(x) &\mapsto (w \ast f)(x) = \int_G w(x - y) f(y)\, d\mu(y)
\end{aligned}
$$
is a well-defined, bounded linear operator satisfying
$$\left\| w \ast f \right\|_\infty \le \left\| w \right\|_1 \left\| f \right\|_\infty.$$
Remark 1.
(i) We recall that $f : \mathbb{R} \to \mathbb{R}$ is called a Lipschitz function if there is a positive constant $L(f)$ such that $\left| f(x) - f(y) \right| \le L(f) \left| x - y \right|$ for all $x$ and $y$.
(ii) Given Banach spaces $X$ and $Y$, we denote by $C(X, Y)$ the space of continuous functions from $X$ to $Y$.
(iii) If $Y = \mathbb{R}$, we use the simplified notation $C(X)$.
We fix two bounded Lipschitz functions $S_E$ and $S_I$ satisfying
$$S_E(0) = S_I(0) = 0.$$
We also fix $w_{EE}, w_{IE}, w_{EI}, w_{II} \in L^1(G)$ and $h_E(x, t), h_I(x, t) \in C\left( [0, \infty), L^\infty(G) \right)$.
The Wilson–Cowan model on G is given by the following system of non-linear integro-differential evolution equations:
$$
\begin{cases}
\tau \dfrac{\partial E(x, t)}{\partial t} = -E(x, t) + \left( 1 - r_E E(x, t) \right) S_E\!\left( w_{EE}(x) \ast E(x, t) - w_{EI}(x) \ast I(x, t) + h_E(x, t) \right), \\[2ex]
\tau \dfrac{\partial I(x, t)}{\partial t} = -I(x, t) + \left( 1 - r_I I(x, t) \right) S_I\!\left( w_{IE}(x) \ast E(x, t) - w_{II}(x) \ast I(x, t) + h_I(x, t) \right),
\end{cases}
$$
where $\ast$ denotes the convolution in the space variables, and $r_E, r_I \in \mathbb{R}$.
The space $X := L^\infty(G) \times L^\infty(G)$ endowed with the norm
$$\left\| \left( f_1, f_2 \right) \right\| = \max\left( \left\| f_1 \right\|_\infty, \left\| f_2 \right\|_\infty \right)$$
is a real Banach space.
Given $f = \left( f_1, f_2 \right) \in X$ and $P(x), Q(x) \in L^\infty(G)$, we set
$$F_E(f) = S_E\!\left( w_{EE}(x) \ast f_1(x) - w_{EI}(x) \ast f_2(x) + P(x) \right)$$
and
$$F_I(f) = S_I\!\left( w_{IE}(x) \ast f_1(x) - w_{II}(x) \ast f_2(x) + Q(x) \right).$$
We also set
$$
\begin{aligned}
X &\to X \\
f &\mapsto H(f),
\end{aligned}
$$
where $H(f) = \left( H_E(f), H_I(f) \right)$ and
$$H_E(f) = \left( 1 - r_E f_1 \right) F_E(f), \qquad H_I(f) = \left( 1 - r_I f_2 \right) F_I(f).$$
Remark 2.
We say that $H$ is Lipschitz continuous (or globally Lipschitz) if there is a constant $L(H)$ such that $\left\| H(f) - H(g) \right\| \le L(H) \left\| f - g \right\|$ for all $f, g \in X$. We also say that $H$ is locally Lipschitz continuous (or locally Lipschitz) if for every $h \in X$ there exists a ball $B_R(h) = \left\{ f \in X ;\ \left\| f - h \right\| < R \right\}$ such that $\left\| H(f) - H(g) \right\| \le L(R, h) \left\| f - g \right\|$ for all $f, g \in B_R(h)$. Since $X$ is a vector space, without loss of generality, we can assume that $h = 0$.
Lemma 1.
We use the above notation. If $r_I \neq 0$ or $r_E \neq 0$, then $H : X \to X$ is a well-defined, locally Lipschitz mapping. If $r_I = r_E = 0$, then $H : X \to X$ is a well-defined, globally Lipschitz mapping.
Proof. 
We first notice that for $f, g \in X$, using that $S_E$ is Lipschitz,
$$
\begin{aligned}
\left| F_E(f) - F_E(g) \right|(x) &\le L(S_E) \left| w_{EE}(x) \ast \left( f_1(x) - g_1(x) \right) - w_{EI}(x) \ast \left( f_2(x) - g_2(x) \right) \right| \\
&\le L(S_E) \left( \left\| w_{EE} \right\|_1 \left\| f_1 - g_1 \right\|_\infty + \left\| w_{EI} \right\|_1 \left\| f_2 - g_2 \right\|_\infty \right) \\
&\le L(S_E) \max\left( \left\| w_{EE} \right\|_1, \left\| w_{EI} \right\|_1 \right) \left\| f - g \right\|,
\end{aligned}
$$
which implies that
$$\left\| F_E(f) - F_E(g) \right\|_\infty \le L(F_E) \left\| f - g \right\|, \tag{4}$$
where $L(F_E) = L(S_E) \max\left( \left\| w_{EE} \right\|_1, \left\| w_{EI} \right\|_1 \right)$. Similarly,
$$\left\| F_I(f) - F_I(g) \right\|_\infty \le L(F_I) \left\| f - g \right\|, \tag{5}$$
where $L(F_I) = L(S_I) \max\left( \left\| w_{IE} \right\|_1, \left\| w_{II} \right\|_1 \right)$.
Now, using estimation (4) and the fact that $\left\| F_E(f) \right\|_\infty \le \left\| S_E \right\|_\infty$,
$$
\begin{aligned}
\left\| H_E(f) - H_E(g) \right\|_\infty &= \left\| \left( 1 - r_E f_1 \right) F_E(f) - \left( 1 - r_E g_1 \right) F_E(g) \right\|_\infty \\
&= \left\| \left( 1 - r_E f_1 \right) \left( F_E(f) - F_E(g) \right) - r_E F_E(g) \left( f_1 - g_1 \right) \right\|_\infty \\
&\le \left( 1 + \left| r_E \right| \left\| f_1 \right\|_\infty \right) \left\| F_E(f) - F_E(g) \right\|_\infty + \left| r_E \right| \left\| F_E(g) \right\|_\infty \left\| f_1 - g_1 \right\|_\infty \\
&\le \left[ \left( 1 + \left| r_E \right| \left\| f_1 \right\|_\infty \right) L(F_E) + \left| r_E \right| \left\| S_E \right\|_\infty \right] \left\| f - g \right\|.
\end{aligned}
$$
With a similar reasoning, using estimation (5), one obtains
$$\left\| H_I(f) - H_I(g) \right\|_\infty \le \left[ \left( 1 + \left| r_I \right| \left\| f_2 \right\|_\infty \right) L(F_I) + \left| r_I \right| \left\| S_I \right\|_\infty \right] \left\| f - g \right\|,$$
and consequently,
$$\left\| H(f) - H(g) \right\| = \max\left( \left\| H_E(f) - H_E(g) \right\|_\infty, \left\| H_I(f) - H_I(g) \right\|_\infty \right) \le \left[ A \left( 1 + B \left\| f \right\| \right) + C \right] \left\| f - g \right\|, \tag{6}$$
where
$$A := \max\left( L(F_E), L(F_I) \right), \quad B := \max\left( \left| r_E \right|, \left| r_I \right| \right), \quad C := \max\left( \left| r_E \right| \left\| S_E \right\|_\infty, \left| r_I \right| \left\| S_I \right\|_\infty \right).$$
In the case $r_E = r_I = 0$, estimation (6) takes the form
$$\left\| H(f) - H(g) \right\| \le A \left\| f - g \right\|. \tag{7}$$
This, in turn, implies that for $f \in X$,
$$\left\| H(f) \right\| \le \left\| H(f) - H(0) \right\| + \left\| H(0) \right\| \le A \left\| f \right\| + \left\| \left( F_E(0), F_I(0) \right) \right\| \le A \left\| f \right\| + \max\left( \left\| S_E \right\|_\infty, \left\| S_I \right\|_\infty \right) < \infty. \tag{8}$$
Then, estimations (7) and (8) imply that $H$ is a well-defined, globally Lipschitz mapping.
We now consider the case $r_I \neq 0$ or $r_E \neq 0$. Let us take $f, g \in B_R(0)$ for some $R > 0$. Then $\left\| f \right\| < R$, and estimation (6) takes the form
$$\left\| H(f) - H(g) \right\| \le \left[ A \left( 1 + B R \right) + C \right] \left\| f - g \right\| =: C_R \left\| f - g \right\|, \quad \text{for } f, g \in B_R(0). \tag{9}$$
This implies that
$$\left\| H(f) \right\| \le \left\| H(f) - H(0) \right\| + \left\| H(0) \right\| \le C_R \left\| f \right\| + \max\left( \left\| S_E \right\|_\infty, \left\| S_I \right\|_\infty \right) < \infty.$$
Then, the restriction of $H$ to $B_R(0)$ is a well-defined Lipschitz mapping. □
The estimations given in Lemma 1 are still valid for functions depending continuously on a parameter $t$. More precisely, let us take $T > 0$ and $f_i \in C\left( [0, T], U \right)$, for $i = 1, 2$, where $U \subset L^\infty(G)$ is an open subset. We assume that
$$f_i\left( [0, T] \right) \subset U, \quad \text{for } i = 1, 2.$$
We use the notation $f_i = f_i(\cdot, t)$, where $t \in [0, T]$ and $i = 1, 2$. We replace $P(x)$ with $h_E(x, t)$ and $Q(x)$ with $h_I(x, t)$, where $h_E(x, t), h_I(x, t) \in C\left( [0, \infty), L^\infty(G) \right)$. We denote the corresponding mapping $H(f)$ by $H(f, s)$. We also set $X_{U,T} := [0, T] \times U$.
Lemma 2.
With the above notation, the following assertions hold:
(i) 
The mapping $H : X_{U,T} \times X_{U,T} \to X$ is continuous, and for each $t \in [0, T]$ and each $h \in U$, there exist $R > 0$ and $L < \infty$ such that
$$\left\| H(f, s) - H(g, s) \right\| \le L \left\| f - g \right\| \quad \text{for } f, g \in B_R(h), \ s \in [0, t].$$
(ii) 
For $t \in [0, T]$ and $f \in U \times U$,
$$\int_0^t \left\| H(f, s) \right\| ds < \infty.$$
Proof. 
(i) This follows from Lemma 1. (ii) By estimations (8) and (9), $\left\| H(f, s) \right\|$ is bounded by a positive constant $C$ depending on $R$; then,
$$\int_0^t \left\| H(f, s) \right\| ds \le C T < \infty.$$
 □

2.2. The Cauchy Problem

With the above notation, the Cauchy problem for the abstract Wilson–Cowan system takes the following form:
$$
\begin{cases}
\tau \dfrac{\partial}{\partial t} \begin{bmatrix} E(x, t) \\ I(x, t) \end{bmatrix} + \begin{bmatrix} E(x, t) \\ I(x, t) \end{bmatrix} = H\!\begin{bmatrix} E(x, t) \\ I(x, t) \end{bmatrix}, & x \in G, \ t \ge 0, \\[2ex]
\begin{bmatrix} E(x, 0) \\ I(x, 0) \end{bmatrix} = \begin{bmatrix} E_0(x) \\ I_0(x) \end{bmatrix} \in X.
\end{cases} \tag{10}
$$
Theorem 1.
(i) There exists $T_0 \in (0, T]$, depending on $\left( E_0(x), I_0(x) \right) \in X$, such that Cauchy problem (10) has a unique solution $\left( E(x, t), I(x, t) \right)$ in $C^1\left( [0, T_0), X \right)$.
(ii) The solution satisfies
$$E(x, t) = e^{-t/\tau} E_0(x) + \int_0^t e^{-(t - s)/\tau} \left( 1 - r_E E(x, s) \right) S_E\!\left( w_{EE}(x) \ast E(x, s) - w_{EI}(x) \ast I(x, s) + h_E(x, s) \right) ds, \tag{11}$$
$$I(x, t) = e^{-t/\tau} I_0(x) + \int_0^t e^{-(t - s)/\tau} \left( 1 - r_I I(x, s) \right) S_I\!\left( w_{IE}(x) \ast E(x, s) - w_{II}(x) \ast I(x, s) + h_I(x, s) \right) ds, \tag{12}$$
for $t \in [0, T_0)$ and $x \in G$.
(iii) If $r_I = r_E = 0$, then $T_0 = \infty$ for any $\left( E_0(x), I_0(x) \right) \in X$, and
$$\left\| E(\cdot, t) \right\|_\infty \le \left\| E_0 \right\|_\infty + \tau \left\| S_E \right\|_\infty \quad \text{and} \quad \left\| I(\cdot, t) \right\|_\infty \le \left\| I_0 \right\|_\infty + \tau \left\| S_I \right\|_\infty. \tag{13}$$
(iv) The solution $\left( E(x, t), I(x, t) \right) \in C^1\left( [0, T_0), X \right)$ depends continuously on the initial value.
Proof. 
(i)–(iii) By Lemma 2-(i) and [5] (Lemma 5.2.1 and Theorem 5.1.2), for each $\left( E_0(x), I_0(x) \right) \in X$, there exists a unique $\left( E(x, t), I(x, t) \right) \in C\left( [0, T_0), X \right)$ that satisfies (11) and (12). By Lemma 2-(i) and [5] (Corollary 4.7.5), $\left( E(x, t), I(x, t) \right) \in C^1\left( [0, T_0), X \right)$, and it satisfies (10). By [4] (Theorem 4.3.4) (see also [5] (Theorem 5.2.6)), either $T_0 = \infty$, or $T_0 < \infty$ and $\lim_{t \to T_0^-} \left\| \left( E(t), I(t) \right) \right\| = \infty$. In the case $r_I = r_E = 0$, by using
$$\left\| \int_0^t e^{-(t - s)/\tau} S_E\!\left( w_{EE}(x) \ast E(x, s) - w_{EI}(x) \ast I(x, s) + h_E(x, s) \right) ds \right\|_\infty \le \left\| S_E \right\|_\infty \int_0^t e^{-(t - s)/\tau}\, ds < \tau \left\| S_E \right\|_\infty$$
and
$$\left\| \int_0^t e^{-(t - s)/\tau} S_I\!\left( w_{IE}(x) \ast E(x, s) - w_{II}(x) \ast I(x, s) + h_I(x, s) \right) ds \right\|_\infty < \tau \left\| S_I \right\|_\infty,$$
one shows (13), which implies that $T_0 = \infty$.
(iv) This follows from [5] (Lemma 5.2.1 and Theorem 5.2.4). □

3. Small-World Property and Wilson–Cowan Models

After formulating the Wilson–Cowan model on locally compact Abelian groups, our next step is to find the groups for which the model is compatible with the description of the cortical networks given by connection matrices. From now on, we take $r_I = r_E = 0$; in this case, the Wilson–Cowan equations describe two coupled perceptrons.

3.1. Compactness and Small-World Networks

The original Wilson–Cowan model is formulated on $(\mathbb{R}, +)$. The kernels $w_{AB}$, $A, B \in \{E, I\}$, which control the connections among neurons, are supposed to be radial functions of the form
$$e^{-C_{AB} \left| x - y \right|} \quad \text{or} \quad e^{-D_{AB} \left| x - y \right|^2}, \tag{14}$$
where $C_{AB}$ and $D_{AB}$ are positive constants. Since $\mathbb{R}$ is unbounded, hypothesis (14) implies that only short-range interactions among neurons occur. The strength of the connections produced by kernels of type (14) is negligible outside of a compact set; then, for practical purposes, interactions among groups of neurons only occur at small distances.
Nowadays, it is widely accepted that the brain is a small-world network; see, e.g., [8,9,10,11] and the references therein. Small-worldness is believed to be a crucial aspect of efficient brain organization that confers significant advantages in signal processing; furthermore, small-world organization is deemed essential for healthy brain function (see, e.g., [10] and the references therein). A small-world network has a topology that produces short paths across the whole network, i.e., given two nodes, there is a short path between them (the "six degrees of separation" phenomenon). In turn, this implies the existence of long-range interactions in the network. The compatibility of the Wilson–Cowan model with the small-world property requires a non-negligible interaction between any two groups of neurons, i.e., $w_{AB}(x) > \varepsilon > 0$ for any $x \in G$ and for $A, B \in \{E, I\}$, where the constant $\varepsilon > 0$ is independent of $x$. By Theorem 1, it is reasonable to expect that the kernels $w_{AB}$, $A, B \in \{E, I\}$, are integrable; then, necessarily, $G$ must be compact.
Finally, we mention that $(\mathbb{R}^N, +)$ does not have non-trivial compact subgroups. Indeed, if $x_0 \neq 0$, then $\langle x_0 \rangle = \left\{ n x_0 ;\ n \in \mathbb{Z} \right\}$ is a non-compact subgroup of $(\mathbb{R}^N, +)$, because $\left\{ \left| n \right| ;\ n \in \mathbb{Z} \right\}$ is not bounded. This last assertion is equivalent to the Archimedean axiom of the real numbers. In conclusion, the compatibility between the Wilson–Cowan model and the small-world property requires changing $(\mathbb{R}, +)$ to a compact Abelian group. The simplest solution is to replace $(\mathbb{R}, | \cdot |)$ with a non-Archimedean field $(\mathbb{F}, | \cdot |_{\mathbb{F}})$, where the norm satisfies
$$\left| x + y \right|_{\mathbb{F}} \le \max\left( \left| x \right|_{\mathbb{F}}, \left| y \right|_{\mathbb{F}} \right).$$

3.2. Neuron Geometry and Discreteness

Nowadays, there are extensive databases of neuronal wiring diagrams (connection matrices) of the cerebral cortex of invertebrates and mammals. The connection matrices are adjacency matrices of weighted directed graphs, where the vertices represent neurons, regions of a cortex, or neuron populations. These matrices correspond to the kernels $w_{AB}$, $A, B \in \{E, I\}$; it then seems natural to consider discrete Wilson–Cowan models [2,3] (Chapter 2). We argue that two difficulties appear. First, since the connection matrices may be extremely large, studying the corresponding Wilson–Cowan equations is only possible via numerical simulations. Second, it seems that the discrete Wilson–Cowan model is not a good approximation of the continuous Wilson–Cowan model; see [3] (page 57). Wilson–Cowan equations can be formally discretized by replacing integrals with finite sums. However, these discrete models are relevant only when they are good approximations of continuous models. Finally, we want to mention that O. Sporns has proposed the hypothesis that cortical connections are arranged in hierarchical self-similar patterns [8].

4. p -Adic Wilson–Cowan Models

The previous section shows that the classical Wilson–Cowan model can be formulated on a large class of topological groups. This formulation does not use any information about the geometry of the neural interactions, which is encoded in the geometry of the group $G$. The next step is to incorporate the connection matrices into the Wilson–Cowan model, which requires selecting a specific group. In this section, we propose the $p$-adic Wilson–Cowan models, where $G$ is the ring of $p$-adic integers $\mathbb{Z}_p$.

4.1. The p-Adic Integers

This section reviews some basic results of p-adic analysis required in this article. For a detailed exposition on p-adic analysis, the reader may consult [29,30,31,32]. For a quick review of p-adic analysis, the reader may consult [33].
From now on, $p$ denotes a fixed prime number. The ring of $p$-adic integers $\mathbb{Z}_p$ is defined as the completion of the ring of integers $\mathbb{Z}$ with respect to the $p$-adic norm $| \cdot |_p$, which is defined as
$$\left| x \right|_p =
\begin{cases}
0 & \text{if } x = 0, \\
p^{-\gamma} & \text{if } x = p^\gamma a,
\end{cases} \tag{15}$$
where $a$ is an integer coprime to $p$. The integer $\gamma = ord_p(x) := ord(x)$, with $ord(0) := +\infty$, is called the $p$-adic order of $x$.
Any non-zero $p$-adic integer $x$ has a unique expansion of the form
$$x = x_k p^k + x_{k+1} p^{k+1} + \cdots,$$
with $x_k \neq 0$, where $k$ is a non-negative integer and the digits $x_j$ belong to the set $\{ 0, 1, \ldots, p - 1 \}$. There are natural operations, sum and multiplication, on $p$-adic integers; see, e.g., [34]. Norm (15) extends to $\mathbb{Z}_p$ as $\left| x \right|_p = p^{-k}$ for a non-zero $p$-adic integer $x$.
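The $p$-adic order, the norm, and the digit expansion are easy to compute for ordinary integers, which sit inside $\mathbb{Z}_p$. A minimal Python sketch (the helper names are ours):

```python
from fractions import Fraction

def ord_p(x: int, p: int):
    """p-adic order: the exponent of the largest power of p dividing x;
    ord(0) = +infinity by convention."""
    if x == 0:
        return float("inf")
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

def norm_p(x: int, p: int) -> Fraction:
    """p-adic norm |x|_p = p**(-ord_p(x)); |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p ** ord_p(x, p))

def digits_p(x: int, p: int, n: int):
    """First n digits x_0, x_1, ... of the p-adic expansion of x >= 0."""
    ds = []
    for _ in range(n):
        ds.append(x % p)
        x //= p
    return ds
```

For example, $12 = 2^2 \cdot 3$, so $ord_2(12) = 2$ and $\left| 12 \right|_2 = 1/4$; the norm also satisfies the ultrametric inequality $\left| x + y \right|_p \le \max\left( \left| x \right|_p, \left| y \right|_p \right)$.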
The metric space $\left( \mathbb{Z}_p, | \cdot |_p \right)$ is a complete ultrametric space. Ultrametric means that $\left| x + y \right|_p \le \max\left( \left| x \right|_p, \left| y \right|_p \right)$. As a topological space, $\mathbb{Z}_p$ is homeomorphic to a Cantor-like subset of the real line; see, e.g., [29,30,35].
For $r \in \mathbb{N}$, let us denote by $B_r(a) = \left\{ x \in \mathbb{Z}_p ;\ \left| x - a \right|_p \le p^{-r} \right\}$ the ball of radius $p^{-r}$ with center at $a \in \mathbb{Z}_p$, and take $B_r(0) := B_r$. The ball $B_0$ equals $\mathbb{Z}_p$, the ring of $p$-adic integers. We use $\Omega\left( p^r \left| x - a \right|_p \right)$ to denote the characteristic function of the ball $B_r(a)$. Given two balls in $\mathbb{Z}_p$, either they are disjoint or one is contained in the other. The balls are compact subsets; thus, $\left( \mathbb{Z}_p, | \cdot |_p \right)$ is a compact topological space.

Tree-like Structures

The set of $p$-adic integers modulo $p^l$, $l \geq 1$, consists of all integers of the form $i = i_0 + i_1 p + \cdots + i_{l-1}p^{l-1}$. These numbers form a complete set of representatives of the elements of the additive group $G_l = \mathbb{Z}_p/p^l\mathbb{Z}_p$, which is isomorphic to the group of integers modulo $p^l$, $\mathbb{Z}/p^l\mathbb{Z}$ (written in base $p$). By restricting $|\cdot|_p$ to $G_l$, it becomes a normed space, and $|G_l|_p = \{0, p^{-(l-1)}, \ldots, p^{-1}, 1\}$. With the metric induced by $|\cdot|_p$, $G_l$ becomes a finite ultrametric space. In addition, $G_l$ can be identified with the set of branches (vertices at the top level) of a rooted tree with $l+1$ levels and $p^l$ branches. By definition, the tree's root is the only vertex at level $0$. There are exactly $p$ vertices at level $1$, which correspond to the possible values of the digit $i_0$ in the $p$-adic expansion of $i$. Each of these vertices is connected to the root by a non-directed edge. At level $k$, with $2 \leq k \leq l$, there are exactly $p^k$ vertices, and each vertex corresponds to a truncated expansion of $i$ of the form $i_0 + \cdots + i_{k-1}p^{k-1}$. The vertex corresponding to $i_0 + \cdots + i_{k-1}p^{k-1}$ is connected to the vertex corresponding to $i_0 + \cdots + i_{k-2}p^{k-2}$ at level $k-1$ if and only if $(i_0 + \cdots + i_{k-1}p^{k-1}) - (i_0 + \cdots + i_{k-2}p^{k-2})$ is divisible by $p^{k-1}$. See Figure 1. The balls $B_r(a) = a + p^r\mathbb{Z}_p$ are infinite rooted trees.
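The tree picture can be made concrete with a few lines of code. The sketch below (ours; the function names are hypothetical) computes the base-$p$ digits of a representative of $G_l$ and the level of the first common ancestor of two branches, which, as in the caption of Figure 1, is the number of shared low-order digits.

```python
def digits(i: int, p: int, l: int) -> tuple:
    """Base-p digits (i_0, ..., i_{l-1}) of a representative i of G_l."""
    return tuple((i // p**k) % p for k in range(l))

def ancestor_level(i: int, j: int, p: int, l: int) -> int:
    """Level of the first common ancestor of branches i and j in the rooted tree:
    the number of leading (low-order) base-p digits they share."""
    di, dj = digits(i, p, l), digits(j, p, l)
    level = 0
    while level < l and di[level] == dj[level]:
        level += 1
    return level
```

For $p = 2$, $l = 3$, the branches $0 = (0,0,0)$ and $4 = (0,0,1)$ share their first two digits, so their first common ancestor sits at level $2$, in agreement with $-\log_2|0-4|_2 = 2$.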

4.2. The Haar Measure

Since $(\mathbb{Z}_p, +)$ is a compact topological group, there exists a Haar measure $dx$, which is invariant under translations, i.e., $d(x+a) = dx$ [36]. If we normalize this measure by the condition $\int_{\mathbb{Z}_p} dx = 1$, then $dx$ is unique. It follows immediately that
$$\int_{B_r(a)} dx = \int_{a+p^r\mathbb{Z}_p} dx = \int_{p^r\mathbb{Z}_p} dy = p^{-r}, \quad r \in \mathbb{N}.$$
On a few occasions, we use the two-dimensional Haar measure $dx\,dy$ of the additive group $(\mathbb{Z}_p \times \mathbb{Z}_p, +)$, normalized by the condition $\int_{\mathbb{Z}_p}\int_{\mathbb{Z}_p} dx\,dy = 1$. For a quick review of integration in the $p$-adic framework, the reader may consult [33] and the references therein.

4.3. The Bruhat–Schwartz Space in the Unit Ball

A real-valued function $\varphi$ defined on $\mathbb{Z}_p$ is called a Bruhat–Schwartz function (or a test function) if, for any $x \in \mathbb{Z}_p$, there exists an integer $l \in \mathbb{N}$ such that
$$\varphi(x + x') = \varphi(x) \quad \text{for any } x' \in B_l. \tag{16}$$
The $\mathbb{R}$-vector space of Bruhat–Schwartz functions supported in the unit ball is denoted by $D(\mathbb{Z}_p)$. For $\varphi \in D(\mathbb{Z}_p)$, the smallest number $l = l(\varphi)$ satisfying (16) is called the exponent of local constancy (or the parameter of constancy) of $\varphi$. Any function $\varphi$ in $D(\mathbb{Z}_p)$ can be written as
$$\varphi(x) = \sum_{j=1}^{M} \varphi(\tilde{x}_j)\, \Omega(p^{r_j}|x - \tilde{x}_j|_p),$$
where the $\tilde{x}_j$, $j = 1, \ldots, M$, are points in $\mathbb{Z}_p$; the $r_j$, $j = 1, \ldots, M$, are non-negative integers; and $\Omega(p^{r_j}|x - \tilde{x}_j|_p)$ denotes the characteristic function of the ball $B_{r_j}(\tilde{x}_j) = \tilde{x}_j + p^{r_j}\mathbb{Z}_p$.
We denote by $D_l(\mathbb{Z}_p)$ the $\mathbb{R}$-vector space of all test functions of the form
$$\varphi(x) = \sum_{i \in G_l} \varphi(i)\, \Omega(p^l|x - i|_p), \quad \varphi(i) \in \mathbb{R},$$
where $i = i_0 + i_1 p + \cdots + i_{l-1}p^{l-1} \in G_l = \mathbb{Z}_p/p^l\mathbb{Z}_p$, $l \geq 1$. Notice that $\varphi$ is supported on $\mathbb{Z}_p$ and that $D(\mathbb{Z}_p) = \bigcup_{l \in \mathbb{N}} D_l(\mathbb{Z}_p)$.
The space $D_l(\mathbb{Z}_p)$ is a finite-dimensional vector space spanned by the basis
$$\left\{\Omega(p^l|x - i|_p)\right\}_{i \in G_l}.$$
By identifying $\varphi \in D_l(\mathbb{Z}_p)$ with the column vector $(\varphi(i))_{i \in G_l} \in \mathbb{R}^{\#G_l}$, we get that $D_l(\mathbb{Z}_p)$ is isomorphic to $\mathbb{R}^{\#G_l}$ endowed with the norm
$$\left\|(\varphi(i))_{i \in G_l}\right\| = \max_{i \in G_l} |\varphi(i)|.$$
Furthermore,
$$D_l(\mathbb{Z}_p) \hookrightarrow D_{l+1}(\mathbb{Z}_p) \hookrightarrow D(\mathbb{Z}_p),$$
where ↪ denotes continuous embedding.
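Because the balls $B_l(i) = i + p^l\mathbb{Z}_p$, $i \in G_l$, partition $\mathbb{Z}_p$, a function in $D_l(\mathbb{Z}_p)$ is fully determined by its coefficient vector, and evaluating it at an integer point reduces to a residue computation. A small sketch of ours (the function name is hypothetical):

```python
def eval_Dl(coeffs, x: int, p: int, l: int) -> float:
    """Evaluate phi = sum_{i in G_l} coeffs[i] * Omega(p^l |x - i|_p) at an integer x.
    Omega(p^l |x - i|_p) = 1 iff x is congruent to i mod p^l, so exactly one term survives."""
    return coeffs[x % p**l]
```

For example, with $p = 2$, $l = 3$, the point $x = 11$ lies in the ball centered at $i = 3$, since $11 \equiv 3 \pmod 8$.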

4.4. The p-Adic Version and Discrete Version of the Wilson–Cowan Models

The p-adic Wilson–Cowan model is obtained by taking G = Z p and d μ = d x in (10).
On the other hand, $\|f\|_1 \leq \|f\|_\infty$, and
$$L^1(\mathbb{Z}_p) \supset L^\infty(\mathbb{Z}_p) \supset C(\mathbb{Z}_p) \supset D(\mathbb{Z}_p),$$
where $C(\mathbb{Z}_p)$ denotes the $\mathbb{R}$-space of continuous functions on $\mathbb{Z}_p$ endowed with the norm $\|\cdot\|_\infty$. Furthermore, $D(\mathbb{Z}_p)$ is dense in $L^1(\mathbb{Z}_p)$ [30] (Proposition 4.3.3); consequently, it is also dense in $L^\infty(\mathbb{Z}_p)$ and $C(\mathbb{Z}_p)$.
For the sake of simplicity, we assume that $w^{EE}, w^{IE}, w^{EI}, w^{II} \in C(\mathbb{Z}_p)$ and $h^E(x,t), h^I(x,t) \in C([0,\infty), C(\mathbb{Z}_p))$. Theorem 1 is still valid under these hypotheses. We use the theory of approximation of evolution equations to construct good discretizations of the $p$-adic Wilson–Cowan system; see, e.g., [5] (Section 5.4). This theory requires the following hypotheses.
(A) (a) $\mathcal{X} = C(\mathbb{Z}_p) \times C(\mathbb{Z}_p)$ and $\mathcal{X}_l = D_l(\mathbb{Z}_p) \times D_l(\mathbb{Z}_p)$, $l \geq 1$, endowed with the norm $\|f\| = \|(f^1, f^2)\| = \max\{\|f^1\|_\infty, \|f^2\|_\infty\}$, are Banach spaces. It is relevant to mention that $\mathcal{X}_l$ is a subspace of $\mathcal{X}$ and that $\mathcal{X}_l$ is a subspace of $\mathcal{X}_{l+1}$.
(b) The operator
$$P_l: \mathcal{X} \to \mathcal{X}_l, \qquad f(x) \mapsto P_l f(x) = \sum_{i \in G_l} f(i)\, \Omega(p^l|x - i|_p),$$
is linear and bounded, i.e., $P_l \in B(\mathcal{X}, \mathcal{X}_l)$, and $\|P_l f\| \leq \|f\|$ for every $f \in \mathcal{X}$.
(c) We set $\mathbf{1}_l: \mathcal{X}_l \to \mathcal{X}$ to be the identity operator. Then, $\mathbf{1}_l \in B(\mathcal{X}_l, \mathcal{X})$, and $\|\mathbf{1}_l f\| = \|f\|$ for every $f \in \mathcal{X}_l$.
(d) $P_l \mathbf{1}_l f = f$, for $l \geq 1$, $f \in \mathcal{X}_l$.
(B, C) The Wilson–Cowan system, see (10), involves the operator $-\frac{1}{\tau}\mathbf{1}$, where $\mathbf{1} \in B(\mathcal{X}, \mathcal{X})$ is the identity operator. As an approximation, we use its restriction $-\frac{1}{\tau}\mathbf{1} \in B(\mathcal{X}_l, \mathcal{X}_l)$, for every $l \geq 1$. Furthermore,
$$\lim_{l \to \infty} \|P_l f - f\| = 0, \quad \text{for every } f \in \mathcal{X}$$
(see [37] (Lemma 1)).
(D) For $t \in (0, \infty)$, $\frac{1}{\tau}H(s, f): [0, t] \times \mathcal{X} \to \mathcal{X}$ is continuous and such that, for some $L < \infty$,
$$\left\|\tfrac{1}{\tau}H(s, f) - \tfrac{1}{\tau}H(s, g)\right\| \leq L\|f - g\|,$$
for $0 \leq s \leq t$ and $f, g \in \mathcal{X}$. This assertion is a consequence of the fact that $H: \mathcal{X} \to \mathcal{X}$ is well defined and globally Lipschitz; see Lemma 1.
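Hypothesis (A)(b) says that $P_l$ acts by sampling: the coefficient of $\Omega(p^l|x-i|_p)$ in $P_l f$ is just $f(i)$. A minimal sketch of ours (the function name is our own):

```python
def project_Pl(f, p: int, l: int) -> list:
    """Coefficient vector (f(i))_{i in G_l} of P_l f = sum_i f(i) Omega(p^l |x - i|_p),
    obtained by sampling f at the integer representatives 0, 1, ..., p^l - 1 of G_l."""
    return [f(i) for i in range(p**l)]
```

Applying this to a function that is already locally constant on balls of radius $p^{-l}$ simply recovers its coefficients, which is the content of hypothesis (d).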
We use the notation $E(t) = E(\cdot, t)$, $I(t) = I(\cdot, t) \in C^1([0, T], \mathcal{X})$ and, for the approximations, $E^l(t) = E^l(\cdot, t)$, $I^l(t) = I^l(\cdot, t) \in C^1([0, T], \mathcal{X}_l)$. The space discretization of the $p$-adic Wilson–Cowan system (10) is
$$\partial_t\begin{bmatrix} E^l(t)\\ I^l(t) \end{bmatrix} + \frac{1}{\tau}\begin{bmatrix} E^l(t)\\ I^l(t) \end{bmatrix} = \frac{1}{\tau}P_l H\begin{bmatrix} E^l(t)\\ I^l(t) \end{bmatrix}, \qquad \begin{bmatrix} E^l(0)\\ I^l(0) \end{bmatrix} = P_l\begin{bmatrix} E_0(x)\\ I_0(x) \end{bmatrix} \in \mathcal{X}_l. \tag{17}$$
The next step is to obtain an explicit expression for the space discretization given in (17). We need the following formulae.
Remark 3.
Let us take
$$w(x) = \sum_{j \in G_l} w(j)\, \Omega(p^l|x - j|_p), \qquad \phi(y) = \sum_{i \in G_l} \phi(i)\, \Omega(p^l|y - i|_p) \in D_l(\mathbb{Z}_p).$$
Then,
$$(w * \phi)(x) = \int_{\mathbb{Z}_p} w(x - y)\phi(y)\, dy = \sum_{k \in G_l}\Big(p^{-l}\sum_{i \in G_l} w(k - i)\phi(i)\Big)\Omega(p^l|x - k|_p) \in D_l(\mathbb{Z}_p).$$
Indeed,
$$(w * \phi)(x) = \sum_{j \in G_l}\sum_{i \in G_l} w(j)\phi(i)\int_{\mathbb{Z}_p}\Omega(p^l|x - y - j|_p)\,\Omega(p^l|y - i|_p)\, dy.$$
By changing variables as $z = y - i$, $dz = dy$, in the integral,
$$(w * \phi)(x) = \sum_{j \in G_l}\sum_{i \in G_l} w(j)\phi(i)\int_{\mathbb{Z}_p}\Omega(p^l|x - z - (i + j)|_p)\,\Omega(p^l|z|_p)\, dz = \sum_{j \in G_l}\sum_{i \in G_l} w(j)\phi(i)\int_{p^l\mathbb{Z}_p}\Omega(p^l|x - z - (i + j)|_p)\, dz.$$
Now, by taking $k = i + j$ and using the fact that $G_l$ is an additive group,
$$(w * \phi)(x) = \sum_{k \in G_l}\sum_{i \in G_l} w(k - i)\phi(i)\int_{p^l\mathbb{Z}_p}\Omega(p^l|x - z - k|_p)\, dz = \sum_{k \in G_l}\Big(p^{-l}\sum_{i \in G_l} w(k - i)\phi(i)\Big)\Omega(p^l|x - k|_p).$$
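Identifying $G_l$ with $\{0, 1, \ldots, p^l - 1\}$, the subtraction $k - i$ is integer subtraction modulo $p^l$, so the formula in Remark 3 is a normalized cyclic convolution of coefficient vectors. A direct sketch of ours:

```python
import numpy as np

def gl_convolve(w, phi, p: int, l: int) -> np.ndarray:
    """(w * phi)(k) = p^{-l} * sum_{i in G_l} w(k - i) phi(i), with indices mod p^l."""
    n = p**l
    w, phi = np.asarray(w, float), np.asarray(phi, float)
    return np.array([sum(w[(k - i) % n] * phi[i] for i in range(n)) / n
                     for k in range(n)])
```

For a constant function $\phi \equiv c$, each coefficient of $w * \phi$ equals $c$ times the mean of the coefficients of $w$, which is a quick sanity check.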
Remark 4.
Let us take $S: \mathbb{R} \to \mathbb{R}$. Then,
$$S\Big(\sum_{i \in G_l}\phi(i)\,\Omega(p^l|y - i|_p)\Big) = \sum_{i \in G_l} S(\phi(i))\,\Omega(p^l|y - i|_p).$$
This formula follows from the fact that the supports of the functions $\Omega(p^l|y - i|_p)$, $i \in G_l$, are disjoint and their union is $\mathbb{Z}_p$, so every $y$ belongs to exactly one of them.
The space discretization of the integro-differential equation in (17) is obtained by computing the term $P_l H\big(E^l(t), I^l(t)\big)$ using Remarks 3 and 4. We use the notation
$$w_l^{AB} = (w^{AB}(i))_{i \in G_l}, \quad \text{for } A, B \in \{E, I\},$$
$$E_l(t) = (E(i, t))_{i \in G_l}, \qquad I_l(t) = (I(i, t))_{i \in G_l},$$
$$h_l^A(t) = (h^A(i, t))_{i \in G_l}, \quad \text{for } A \in \{E, I\},$$
and, for $\phi_l = (\phi(i))_{i \in G_l}$, $\theta_l = (\theta(i))_{i \in G_l}$,
$$\phi_l * \theta_l = \Big(p^{-l}\sum_{k \in G_l}\phi(i - k)\,\theta(k)\Big)_{i \in G_l}.$$
With this notation, the announced discretization takes the following form:
$$\begin{cases} \tau\dfrac{\partial E_l(t)}{\partial t} = -E_l(t) + S^E\big(w_l^{EE} * E_l(t) - w_l^{EI} * I_l(t) + h_l^E(t)\big),\\[2mm] \tau\dfrac{\partial I_l(t)}{\partial t} = -I_l(t) + S^I\big(w_l^{IE} * E_l(t) - w_l^{II} * I_l(t) + h_l^I(t)\big). \end{cases}$$
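One possible time discretization of the system above is an explicit Euler step; the sketch below is our illustration (function names and step sizes are ours), and the FFT implements the cyclic convolution on $G_l$, with the division by $n = p^l$ supplying the $p^{-l}$ normalization.

```python
import numpy as np

def wc_euler_step(E, I, wEE, wEI, wIE, wII, hE, hI, SE, SI, tau, dt):
    """One explicit Euler step of the discretized Wilson-Cowan system:
    tau dE/dt = -E + S^E(wEE*E - wEI*I + hE), and similarly for I.
    All arguments indexed by G_l are NumPy arrays of length p^l."""
    n = len(E)
    # Normalized cyclic convolution on G_l via the convolution theorem.
    conv = lambda w, f: np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(f))) / n
    dE = (-E + SE(conv(wEE, E) - conv(wEI, I) + hE)) / tau
    dI = (-I + SI(conv(wIE, E) - conv(wII, I) + hI)) / tau
    return E + dt * dE, I + dt * dI
```

With the firing-rate functions set to zero, each step contracts the state by the factor $1 - dt/\tau$, matching the linear decay term of the system.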
Theorem 2.
Let us take $r_I = r_E = 0$, $(E_0(x), I_0(x)) \in \mathcal{X}$, and $T_0 \in (0, \infty)$. Let $(E(t), I(t)) \in C^1([0, T_0], \mathcal{X})$ be the solution given by (11) and (12) in Theorem 1. Let $(E^l(t), I^l(t))$ be the solution of the Cauchy problem (17). Then,
$$\lim_{l \to \infty}\; \sup_{0 \leq t \leq T_0}\left\|\begin{bmatrix} E^l(t)\\ I^l(t)\end{bmatrix} - \begin{bmatrix} E(t)\\ I(t)\end{bmatrix}\right\| = 0.$$
Proof. 
We first notice that Theorem 1 is valid for the Cauchy problem (17); more precisely, this problem has a unique solution $(E^l(t), I^l(t))$ in $C^1([0, T_0], \mathcal{X}_l)$ satisfying properties akin to the ones stated in Theorem 1. Since $\mathcal{X}_l$ is a subspace of $\mathcal{X}$, by applying Theorem 1 to the Cauchy problem (17), we obtain the existence of a unique solution $(E^l(t), I^l(t))$ in $C^1([0, T_0], \mathcal{X})$ satisfying the properties announced in Theorem 1. To show that this solution belongs to $C^1([0, T_0], \mathcal{X}_l)$, we use [5] (Theorem 5.2.2). For similar reasoning, the reader may consult Remark 2 and the proof of Theorem 1 in [27]. The proof of the theorem follows from hypotheses A, B, C, and D according to [5] (Theorem 5.4.7). For similar reasoning, the reader may consult the proof of Theorem 4 in [27]. □

5. Numerical Simulations

We use heat maps to visualize approximations of the solutions of the $p$-adic discrete Wilson–Cowan Equations (17). The vertical axis gives the position, which is a truncated $p$-adic number. These numbers correspond to the vertices at the top level of a rooted tree, i.e., $G_l$; see Figure 1. For convenience, we include a representation of this tree. The heat maps' colors represent the solutions' values at a particular neuron. For instance, let us take $p = 2$, $l = 4$, and
$$\phi(x) = \Omega(2^4|x|_2) - \Omega(2^4|x - 2|_2) + \Omega(2^4|x - 1|_2) + \Omega(2^4|x - 7|_2). \tag{18}$$
The corresponding heat map is shown in Figure 2. If the function depends on two variables, say, $\phi(x, t)$, where $x \in \mathbb{Z}_p$ and $t \in \mathbb{R}$, the corresponding heat map color represents the value of $\phi(x, t)$ at time $t$ and neuron $x$.
We take $\tau = 10$, $r_I = r_E = 1$, $p = 3$, and $l = 6$; then,
$$w^{AB}(x) = b_{AB}\exp(\sigma_{AB}) - b_{AB}\exp(\sigma_{AB}|x|_p), \quad \text{for } A, B \in \{E, I\},$$
and
$$S^A(z) = \frac{1}{1 + \exp(-v_A(z - \theta_A))} - \frac{1}{1 + \exp(v_A\theta_A)}, \quad \text{for } z \in \mathbb{R},\ A \in \{E, I\}.$$
The kernel $w^{AB}(x)$ is a decreasing function of $|x|_p$; thus, close neurons interact strongly. $S^A(z)$ is a sigmoid function satisfying $S^A(0) = 0$.
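As we read the (garbled typeset) formulas above, the kernel and firing-rate function can be sketched as follows; this is our illustration, not code from the paper, and the function names are hypothetical.

```python
import numpy as np

def w_AB(x_norm, b, sigma):
    """Connection kernel w^{AB}(x) = b*e^{sigma} - b*e^{sigma |x|_p}:
    largest at |x|_p = 0 and decreasing to 0 at |x|_p = 1."""
    return b * np.exp(sigma) - b * np.exp(sigma * x_norm)

def S_A(z, v, theta):
    """Firing-rate sigmoid S^A(z) = 1/(1+e^{-v(z-theta)}) - 1/(1+e^{v*theta}),
    shifted so that S^A(0) = 0."""
    return 1.0 / (1.0 + np.exp(-v * (z - theta))) - 1.0 / (1.0 + np.exp(v * theta))
```

Both stated properties are easy to verify numerically: the kernel decreases as $|x|_p$ grows, and the sigmoid vanishes at the origin.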

5.1. Numerical Simulation 1

The purpose of this experiment is to show the response of the $p$-adic Wilson–Cowan network to a short pulse and to a constant stimulus; see Figure 3, Figure 4 and Figure 5. Our results are consistent with the results obtained by Wilson and Cowan in [2] (Sections 2.2.1–2.2.5). The pulses are
$$h^E(x, t) = 3.7\,\Omega(p^2|x - 4|_p)\,1_{[0,\delta]}(t), \quad \text{for } x \in \mathbb{Z}_p,\ t \in [0, \delta], \tag{19}$$
$$h^I(x, t) = Q\,\Omega(|x - 4|_p)\,1_{[0,\delta]}(t), \quad \text{for } x \in \mathbb{Z}_p,\ t \in [0, \delta], \tag{20}$$
where $1_{[0,\delta]}(t)$ is the characteristic function of the time interval $[0, \delta]$, $\delta > 0$. We use the following parameters: $v_E = 2.75$, $v_I = 0.3$, $b_{EE} = 1.5$, $\sigma_{EE} = 4$, $b_{II} = 1.8$, $\sigma_{II} = 3$, $\theta_E = 9$, $\theta_I = 17$, $b_{IE} = 1.35$, $\sigma_{IE} = 6$, $b_{EI} = 1.35$, and $\sigma_{EI} = 6$.

5.2. Numerical Simulation 2

In [2] (Section 3.3.1), Wilson and Cowan applied their model to spatial hysteresis in a one-dimensional tissue. In the experiment being modeled, a human subject was exposed to a binocular stimulus, and the authors represented the two stimuli by sharply peaked Gaussian distributions. The two stimuli were symmetrically moved apart by a small increment and re-summed, and the network response was allowed to reach equilibrium.
Initially, the two peaks (stimuli) were very close; the network response consisted of a single pulse (peak) (see [2] (Section 3.3.1, Figure 13A)). Then, the peaks separated from each other (i.e., the disparity between the two stimuli increased). The network response was a pulse in the middle of the binocular stimulus until a critical disparity was reached. At this stimulus disparity, the single pulse (peak) decayed rapidly to zero, and twin response pulses formed at the locations of the now rather widely separated stimuli; see [2] (Section 3.3.1, Figure 13B).
Following this, the stimuli were gradually moved together again in the same form until they essentially consisted of one peak. However, the network response consisted of two pulses; see [2] (Section 3.3.1, Figure 13C).
The classical Wilson–Cowan model and our $p$-adic version can predict the results of this experiment. We use the function
$$\widetilde{h}^E(x, t) = e^{-(30(0.5 - m(x)) - 0.5t)^2} + e^{-(30(0.5 - m(x)) + 0.5t)^2} \tag{21}$$
to model the stimuli in the case where the peaks do not move together, and
$$h^E(x, t) = \widetilde{h}^E(x, t)\,1_{[0,18]}(t) + \widetilde{h}^E(x, 36 - t)\,1_{[18,36]}(t) \tag{22}$$
to model the stimuli in the case where the peaks gradually move together. The function $m: \mathbb{Z}_p \to \mathbb{R}$ is the Monna map; see [38].
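The Monna map sends $\sum_j i_j p^j$ to $\sum_j i_j p^{-j-1}$, mapping $\mathbb{Z}_p$ onto the unit interval by reversing digits. Restricted to the representatives of $G_l$, it can be sketched as follows (our illustration):

```python
def monna(i: int, p: int, l: int) -> float:
    """Monna map on G_l: i = sum_j i_j p^j  ->  sum_j i_j p^(-j-1) in [0, 1)."""
    return sum(((i // p**j) % p) * float(p) ** (-(j + 1)) for j in range(l))
```

For example, for $p = 3$, $m(1) = 1/3$, and for $p = 2$, $m(4) = m(2^2) = 2^{-3} = 0.125$; this is how a truncated $p$-adic position is plotted against a real axis.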
Figure 6 shows the stimuli (see (21)) and the network response when the stimulus peaks are gradually separated. The network response begins with a single pulse. When a critical disparity threshold is reached, the response becomes a twin pulse, which is the prediction of the classical Wilson–Cowan model; see [2] (Section 3.3.1, Figure 13A,B).
Figure 7 depicts the stimuli and the network response in the instance where the stimulus peaks gradually split and finally move together. The network response at the end of the experiment consists of twin pulses. This finding is consistent with that of the classical Wilson–Cowan model [2] (Section 3.3.1, Figure 13C).

6. p-Adic Kernels and Connection Matrices

There have been significant theoretical and experimental developments in comprehending the wiring diagrams (connection matrices) of the cerebral cortex of invertebrates and mammals over the last thirty years; see, for example, [6,7,8,9,10,11,12,13,14,15,16,17,18,19] and the references therein. The topology of cortical neural networks is described by connection matrices. Building dynamic models from experimental data recorded in connection matrices is a very relevant problem.
We argue that our $p$-adic Wilson–Cowan model provides meaningful dynamics on networks whose topology comes from a connection matrix. Figure 8 depicts the connection matrix of the cat cortex (see, e.g., [7,8,9,10,11,12,13,14]) and the matrix of the kernel $w^{EE}$ used in Simulation 1. The $p$-adic methods are relevant only if the connection matrices can be approximated very well by matrices coming from discretizations of $p$-adic kernels. This is an open problem. Here, we show that such an approximation is feasible for the cat cortex connection matrix.
Given an arbitrary matrix $A$, by adding zero entries, we may assume that its size is $p^k \times p^k$, where $p$ is a suitable prime number. We assume that $A = (a_{ij})_{i,j \in G_k}$, where $G_k$ is the ring of integers modulo $p^k$ endowed with the $p$-adic topology, as above. This hypothesis means that the connection matrices have an ultrametric nature; this type of matrix appears in connection with complex systems, such as spin glasses; see [23] (Section 4.2) and the references therein. Given an integer $r$ satisfying $0 \leq r \leq k$, the reduction mod $p^r$ map is defined as $i_0 + i_1 p + \cdots + i_{k-1}p^{k-1} \mapsto i_0 + i_1 p + \cdots + i_{r-1}p^{r-1}$. We now define
$$\Pi_r: G_k \times G_k \to G_r \times G_r, \qquad (i, j) \mapsto (i \bmod p^r,\ j \bmod p^r).$$
The map $\Pi_r^{-1}$ induces a block decomposition of the matrix $A$ into $p^{2r}$ blocks of size $p^{k-r} \times p^{k-r}$. Given $(a, b) \in G_r \times G_r$, the corresponding block is $A_{a,b} = (a_{ij})_{(i,j) \in \Pi_r^{-1}(a,b)}$. Now, we attach to $(a, b) \in G_r \times G_r$ the function
$$\phi_{a,b}(x, y) = \sum_{l \in G_r}\sum_{m \in G_r}\phi_{a,b}(l, m)\,\Omega(p^r|x - l|_p)\,\Omega(p^r|y - m|_p) \in D_r(\mathbb{Z}_p \times \mathbb{Z}_p),$$
and identify $\phi_{a,b}(x, y)$ with the matrix $(\phi_{a,b}(l, m))_{l,m \in G_r}$. By using the correspondence
$$(\phi_{a,b}(l, m))_{l,m \in G_r} \longleftrightarrow A_{a,b},$$
we approximate the matrix $A$ with a kernel $K_r(x, y)$, which is locally translation invariant. More precisely, for each $(a, b) \in G_r \times G_r$, $K_r(x, y) = \phi_{a,b}(x - y)$ for all $x \in a + p^r\mathbb{Z}_p$ and $y \in b + p^r\mathbb{Z}_p$. Notice that if $r = k$, the matrix attached to $K_r(x, y)$ is $A$. See Figure 9. This procedure allows us to incorporate experimental data from connection matrices into our $p$-adic Wilson–Cowan model.
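Reading the blocks as the fibers of $\Pi_r$, they can be extracted directly; the sketch below (ours, with a hypothetical helper name) collects the entries $a_{ij}$ with $i \equiv a$ and $j \equiv b \pmod{p^r}$, i.e., one block of the decomposition.

```python
import numpy as np

def block(A, a: int, b: int, p: int, r: int, k: int) -> np.ndarray:
    """Block A_{a,b} of a p^k x p^k matrix A indexed by G_k:
    the submatrix on rows i ≡ a (mod p^r) and columns j ≡ b (mod p^r)."""
    rows = [i for i in range(p**k) if i % p**r == a]
    cols = [j for j in range(p**k) if j % p**r == b]
    return np.asarray(A)[np.ix_(rows, cols)]
```

Note that the blocks are contiguous in the $p$-adic (tree) ordering of indices, not in the usual integer ordering, and together they partition the entries of $A$.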
By using the above procedure, we replace the excitatory–excitatory interaction term $w^{EE} * E$ with $\int_{\mathbb{Z}_p}K_r(x, y)E(y)\,dy$ but keep the other kernels as in Simulation 1. For the stimuli, we use $h^E = 3.5\,\Omega(p^2|x - 1|_p)$, with $p = 2$ and $l = 6$, and $h^I(x) = 30$. In Figure 9, we show three different approximations of the cat cortex connection matrix using $p$-adic kernels. The black area in the right matrix in Figure 9 (which corresponds to zero entries) comes from the process of adjusting the size of the original matrix to $2^6 \times 2^6$.
The corresponding $p$-adic network responses are shown in Figure 10 for different values of $r$. In the case $r = 0$, the interaction among neurons is short range, while in the case $r = 5$, there is long-range interaction. The response in the case $r = 0$ is similar to the one presented in Simulation 1; see Figure 5. When the connection matrix gets closer to the cat cortex matrix (see Figure 9), i.e., when the matrix allows more long-range connections, the response of the network exhibits more complex patterns (see Figure 10).

7. Final Discussion

The Wilson–Cowan model describes interactions between populations of excitatory and inhibitory neurons. This model constitutes a relevant mathematical tool for understanding cortical tissue functionality. On the other hand, in the last twenty-five years, there has been tremendous experimental progress in understanding the neuronal wiring of the cerebral cortex in invertebrates and mammals. Employing different experimental techniques, the wiring patterns can be described by connection matrices. Such a matrix is just an adjacency matrix of a directed graph whose nodes represent neurons, groups of neurons, or portions of the cerebral cortex, and whose oriented, weighted edges represent the strength of the connections between two groups of neurons. This work explores the interplay between the classical Wilson–Cowan model and connection matrices.
Nowadays, it is widely accepted that the networks in the cerebral cortex of mammals have the small-world property, which means that a non-negligible interaction exists between any two groups of neurons in the network. The classical Wilson–Cowan model is not compatible with the small-world property. We show that the original Wilson–Cowan model can be formulated on any topological group in a large class and that the Cauchy problem for the underlying equations of the model is well posed. We give an argument showing that the small-world property requires the group to be compact; consequently, the classical model should be discarded. In practical terms, the classical Wilson–Cowan model cannot incorporate the experimental information contained in connection matrices. We propose a $p$-adic Wilson–Cowan model, where the neurons are organized in an infinite rooted tree. We present numerical experiments showing that this model can explain several phenomena, similarly to the classical model. The new model can incorporate experimental information coming from connection matrices.

Author Contributions

Investigation, W.A.Z.-G. and B.A.Z.-L.; Writing—original draft, W.A.Z.-G. and B.A.Z.-L.; Writing—review & editing, W.A.Z.-G. and B.A.Z.-L. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was partially supported by the Lokenath Debnath Endowed Professorship.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wilson, H.R.; Cowan, J.D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 1972, 12, 1–24.
  2. Wilson, H.R.; Cowan, J.D. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13, 55–80.
  3. Coombes, S.; beim Graben, P.; Potthast, R.; Wright, J. (Eds.) Neural Fields: Theory and Applications; Springer: Heidelberg, Germany, 2014.
  4. Cazenave, T.; Haraux, A. An Introduction to Semilinear Evolution Equations; Oxford University Press: Oxford, UK, 1998.
  5. Miklavčič, M. Applied Functional Analysis and Partial Differential Equations; World Scientific Publishing Co., Inc.: River Edge, NJ, USA, 1998.
  6. Sporns, O.; Tononi, G.; Edelman, G.M. Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb. Cortex 2000, 10, 127–141.
  7. Scannell, J.W.; Burns, G.A.; Hilgetag, C.C.; O'Neil, M.A.; Young, M.P. The connectional organization of the cortico-thalamic system of the cat. Cereb. Cortex 1999, 9, 277–299.
  8. Sporns, O. Small-world connectivity, motif composition, and complexity of fractal neuronal connections. Biosystems 2006, 85, 55–64.
  9. Sporns, O.; Honey, C.J. Small worlds inside big brains. Proc. Natl. Acad. Sci. USA 2006, 103, 19219–19220.
  10. Hilgetag, C.C.; Goulas, A. Is the brain really a small-world network? Brain Struct. Funct. 2016, 221, 2361–2366.
  11. Muldoon, S.F.; Bridgeford, E.W.; Bassett, D.S. Small-World Propensity and Weighted Brain Networks. Sci. Rep. 2016, 6, 22057.
  12. Bassett, D.S.; Bullmore, E.T. Small-World Brain Networks Revisited. Neuroscientist 2017, 23, 499–516.
  13. Akiki, T.J.; Abdallah, C.G. Determining the Hierarchical Architecture of the Human Brain Using Subject-Level Clustering of Functional Networks. Sci. Rep. 2019, 9, 19290.
  14. Scannell, J.W.; Blakemore, C.; Young, M.P. Analysis of connectivity in the cat cerebral cortex. J. Neurosci. 1995, 15, 1463–1483.
  15. Fornito, A.; Zalesky, A.; Bullmore, E. Connectivity Matrices and Brain Graphs. In Fundamentals of Brain Network Analysis; Academic Press: Cambridge, MA, USA, 2016; pp. 89–113.
  16. Sporns, O. Networks of the Brain; Penguin Random House LLC: New York, NY, USA, 2016.
  17. Swanson, L.W.; Hahn, J.D.; Sporns, O. Organizing principles for the cerebral cortex network of commissural and association connections. Proc. Natl. Acad. Sci. USA 2017, 114, E9692–E9701.
  18. Demirtaş, M.; Burt, J.B.; Helmer, M.; Ji, J.L.; Adkinson, B.D.; Glasser, M.F.; Van Essen, D.C.; Sotiropoulos, S.N.; Anticevic, A.; Murray, J.D. Hierarchical Heterogeneity across Human Cortex Shapes Large-Scale Neural Dynamics. Neuron 2019, 101, 1181–1194.
  19. Škoch, A.; Řehák Bučková, B.; Mareš, J.; Tintěra, J.; Sanda, P.; Jajcay, L.; Horáček, J.; Španiel, F.; Hlinka, J. Human brain structural connectivity matrices-ready for modeling. Sci. Data 2022, 9, 486.
  20. Avetisov, V.A.; Bikulov, A.K.; Osipov, V.A. p-Adic description of characteristic relaxation in complex systems. J. Phys. A 2003, 36, 4239–4246.
  21. Avetisov, V.A.; Bikulov, A.H.; Kozyrev, S.V.; Osipov, V.A. p-Adic models of ultrametric diffusion constrained by hierarchical energy landscapes. J. Phys. A 2002, 35, 177–189.
  22. Parisi, G.; Sourlas, N. p-Adic numbers and replica symmetry breaking. Eur. Phys. J. B 2000, 14, 535–542.
  23. Khrennikov, A.; Kozyrev, S.; Zúñiga-Galindo, W.A. Ultrametric Pseudodifferential Equations and Applications; Encyclopedia of Mathematics and Its Applications 168; Cambridge University Press: Cambridge, UK, 2018.
  24. Zúñiga-Galindo, W.A. Eigen's paradox and the quasispecies model in a non-Archimedean framework. Phys. A Stat. Mech. Its Appl. 2022, 602, 127648.
  25. Zúñiga-Galindo, W.A. Ultrametric diffusion, rugged energy landscapes, and transition networks. Phys. A Stat. Mech. Its Appl. 2022, 597, 127221.
  26. Zúñiga-Galindo, W.A. Reaction-diffusion equations on complex networks and Turing patterns, via p-adic analysis. J. Math. Anal. Appl. 2020, 491, 124239.
  27. Zambrano-Luna, B.A.; Zúñiga-Galindo, W.A. p-Adic cellular neural networks. J. Nonlinear Math. Phys. 2023, 30, 34–70.
  28. Zambrano-Luna, B.A.; Zúñiga-Galindo, W.A. p-Adic cellular neural networks: Applications to image processing. Phys. D Nonlinear Phenom. 2023, 446, 133668.
  29. Vladimirov, V.S.; Volovich, I.V.; Zelenov, E.I. p-Adic Analysis and Mathematical Physics; World Scientific: Singapore, 1994.
  30. Albeverio, S.; Khrennikov, A.; Shelkovich, V.M. Theory of p-Adic Distributions: Linear and Nonlinear Models; Cambridge University Press: Cambridge, UK, 2010.
  31. Kochubei, A.N. Pseudo-Differential Equations and Stochastics over Non-Archimedean Fields; Marcel Dekker: New York, NY, USA, 2001.
  32. Taibleson, M.H. Fourier Analysis on Local Fields; Princeton University Press: Princeton, NJ, USA, 1975.
  33. Bocardo-Gaspar, M.; García-Compeán, H.; Zúñiga-Galindo, W.A. Regularization of p-adic string amplitudes, and multivariate local zeta functions. Lett. Math. Phys. 2019, 109, 1167–1204.
  34. Koblitz, N. p-Adic Numbers, p-Adic Analysis, and Zeta-Functions; Graduate Texts in Mathematics No. 58; Springer: New York, NY, USA, 1984.
  35. Chistyakov, D.V. Fractal geometry of images of continuous embeddings of p-adic numbers and solenoids into Euclidean spaces. Theor. Math. Phys. 1996, 109, 1495–1507.
  36. Halmos, P. Measure Theory; D. Van Nostrand Company Inc.: New York, NY, USA, 1950.
  37. Zúñiga-Galindo, W.A. Non-Archimedean Reaction-Ultradiffusion Equations and Complex Hierarchic Systems. Nonlinearity 2018, 31, 2590–2616.
  38. Monna, A.F. Sur une transformation simple des nombres p-adiques en nombres réels. Indag. Math. 1952, 14, 1–9.
Figure 1. The rooted tree associated with the group $\mathbb{Z}_2/2^3\mathbb{Z}_2$. The elements of $\mathbb{Z}_2/2^3\mathbb{Z}_2$ have the form $i = i_0 + i_1 2 + i_2 2^2$, $i_0, i_1, i_2 \in \{0, 1\}$. The distance satisfies $-\log_2|i - j|_2 =$ level of the first common ancestor of $i$, $j$.
Figure 2. Heat map of the function $\phi(x)$; see (18). Here, $\phi(0) = \phi(1) = \phi(7) = 1$ is white; $\phi(2) = -1$ is black; and $\phi(x) = 0$ is red for $x \notin \{0, 1, 7, 2\}$.
Figure 3. An approximation of $E(x, t)$. We take $Q = 0$ and $\delta = 5$. The time axis goes from 0 to 100 with a step of 0.05. The figure shows the response of the network to a brief localized stimulus (the pulse given in (19)). The response is also a pulse. This result is consistent with the numerical results in [2] (Section 2.2.1, Figure 3).
Figure 4. An approximation of $E(x, t)$. We take $Q = 0$ and $\delta = 100$. The time axis goes from 0 to 200 with a step of 0.05. The figure shows the response of the network to a maintained stimulus (see (19)). The response is a pulse train. This result is consistent with the numerical results in [2] (Section 2.2.5, Figure 7).
Figure 5. An approximation of $E(x, t)$. We take $Q = 30$ and $\delta = 100$. The time axis goes from 0 to 100 with a step of 0.05. The figure shows the response of the network to a maintained stimulus (see (19) and (20)). The response is a pulse train in space and time. This result is consistent with the numerical results in [2] (Section 2.2.7, Figure 9).
Figure 6. An approximation of $\widetilde{h}^E(x, t)$ and $E(x, t)$. We take $h^I(x, t) \equiv 0$, $p = 3$, and $l = 6$; the kernels $w^{AB}$ are as in Simulation 1, and $h^E(x, t)$ is as in (21). The time axis goes from 0 to 60 with a step of 0.05. The first figure shows the stimuli, and the second figure shows the response of the network.
Figure 7. An approximation of $h^E(x, t)$ and $E(x, t)$. We take $h^I(x, t) \equiv 0$, $p = 3$, and $l = 6$; the kernels $w^{AB}$ are as in Simulation 1, and $h^E(x, t)$ is as in (22). The time axis goes from 0 to 60 with a step of 0.05. The first figure shows the stimuli, and the second figure shows the response of the network.
Figure 8. The left matrix is the connection matrix of the cat cortex. The right matrix corresponds to a discretization of the kernel $w^{EE}$ used in Simulation 1.
Figure 9. Three $p$-adic approximations of the connection matrix of the cat cortex. We take $p = 2$ and $l = 6$. The first approximation uses $r = 0$; the second, $r = 3$; and the last, $r = 5$.
Figure 10. We use $p = 2$ and $l = 6$, and the time axis goes from 0 to 150 with a step of 0.05. The left image uses $r = 0$; the right one uses $r = 3$; and the central one uses $r = 5$.

Share and Cite

MDPI and ACS Style

Zúñiga-Galindo, W.A.; Zambrano-Luna, B.A. Hierarchical Wilson–Cowan Models and Connection Matrices. Entropy 2023, 25, 949. https://doi.org/10.3390/e25060949
