Abstract
Semantic memory is characterized by a hierarchical organization of concepts based on shared properties. However, this aspect is insufficiently dealt with in recent neurocomputational models. Moreover, in many cognitive problems that exploit semantic memory, gamma-band synchronization can be relevant in favoring information processing and feature binding. In this work, we propose an attractor network model of semantic memory. Each computational unit, coding for a different feature, is described with a neural mass circuit oscillating in the gamma range. The model is trained with an original nonsymmetric Hebb rule based on a presynaptic gating mechanism. After training, the network creates a taxonomy of categories, distinguishes between subordinate and superordinate concepts, and discriminates between salient and marginal features. Examples are provided concerning a fourteen-animal taxonomy, including several subcategories. A sensitivity analysis reveals the robustness of the network but also points out conditions leading to confusion among categories, similar to those observed in dreaming and in some neurological disorders. Finally, the analysis emphasizes the role of fast GABAergic interneurons and of the inhibitory-excitatory balance in allowing the correct synchronization of features. The model represents an original attempt to deal with a hierarchical organization of objects in semantic memory and correlated patterns, while exploiting gamma-band synchronization to favor neural processing. The same ideas, introduced in a more sophisticated multilayer network, can deepen our knowledge of semantic memory organization in the brain. They can also open new perspectives for the quantitative analysis of neurological disorders connected with distorted semantics.
Introduction
The capacity to recognize objects and recover their contents is an essential aspect of our cognitive system, involving the so-called “semantic memory.” The common idea is that the representation of objects in semantic memory consists of a list of features that describe an object’s fundamental aspects in a context-independent and impersonal manner [1,2,3]. These features are acquired with time, reflecting a long experience in the course of life.
Given the importance of semantics in our cognition and the enormous consequences of its damage in daily life, it becomes crucial to understand the neural mechanisms involved and to formulate mechanistic explanations. Neurocomputational models, inspired by brain functioning, can play a relevant role in this domain, proposing possible solutions for neurological problems, emphasizing putative mechanisms and circuits, and suggesting testable predictions to validate or reject hypotheses. Furthermore, neural networks inspired by biology can represent innovative tools in artificial intelligence and machine learning, devising new technological solutions to old problems.
Indeed, the development of neurocomputational models of semantic memory has a long tradition, dating back to the early nineties.
Of particular relevance, Rumelhart et al. [4] and Rogers and McClelland [5, 6] used a feedforward schema trained with backpropagation, with the primary objective of investigating which representation can develop in the hidden units of a multilayer network. This approach was further extended by Rogers et al., as described below.
Conversely, other models, starting from the early nineties, were based on attractor dynamics, in which the object representation is stored as an equilibrium point of a recurrent network. These provided essential insights into semantic aphasia and associative priming [7,8,9,10]. Specifically, Cree et al. [9] found that distinctive features play a more significant role in activating a concept than shared features. O’Connor, Cree, and McRae [11] demonstrated that a single layer of feature nodes can represent both subordinate and superordinate concepts without needing any a priori hierarchical organization. However, these models trained synapses using the recurrent backpropagation-through-time algorithm.
Other authors, conversely, used auto-associative networks trained with the Hebb rule, as in the classic well-known Hopfield model [12]. McRae et al. [13] used an attractor network trained with the Hebb rule to investigate the role of feature correlation in the organization of semantic memory and explained several semantic priming effects. Kawamoto [14] demonstrated that attractor basins are crucial to understanding priming, deep dyslexia, and ambiguity resolution. Miikkulainen and Silberman et al. [15, 16] developed a model consisting of two self-organizing maps. Siekmeier and Hoffman [17] used the Hebb rule in a Hopfield network to compare semantic priming in normal subjects and schizophrenic patients. However, classic Hebb rules produce symmetrical synapses and have limitations in discovering a hierarchical organization of concepts.
More recent advanced models of semantic memory introduced a multilayer topology inspired by cortical structure and function. One of the purposes of these models is to reconcile divergent hypotheses in the literature, such as the presence of category-specific vs. category-general semantic areas (i.e., a sensory-functional vs. a distributed representation). In particular, Rogers et al. developed a series of multilayer models, implementing a distributed-plus-hub theory of semantic memory, assuming that concepts reflect both hub and spoke representations and their interactions. This neurocomputational model accounts for several pieces of neuroimaging and patient evidence [18] and indicates how the semantic representation changes dynamically with stimulus processing [19]. It is worth noting that these networks too are trained with a variant of backpropagation adapted to recurrent networks to minimize squared errors.
However, a second multilayer semantic model, which exploits the Hebb rule, was developed by Garagnani et al. in a series of papers during the last decade [20,21,22]. Based on neuroanatomical studies, the network includes twelve cortical areas and their within-area and between-area connectivity. In particular, the model mimics the function of primary and secondary sensorimotor regions of the frontal, temporal, and occipital cortex, along with a connector hub. Hebbian mechanisms for synaptic modification are used to study the emergence of associations among word forms (phonological/lexical) and the object’s semantic content. The results show that cell assemblies are formed after learning, reflecting the semantic content, and explain the presence of both category-specific and more general semantic processes. A version of the model with spiking neurons also analyzes high-frequency synchronous oscillations during word processing [23, 24]. Interestingly, a recent version [25] also analyzes differences between concrete and abstract concepts, ascribing the formation of concrete concepts to the presence of shared features. Conversely, abstract concepts are explained by family resemblance among instances.
Despite the significant value of the last models, a few relevant aspects of semantic memory still need to be dealt with. First, concepts in nature exhibit a strong correlation among their features, which can lead to a hierarchical organization among concepts. Nevertheless, Hebbian mechanisms have difficulty finding these organizations (see McRae et al. [13] for an excellent critical analysis). Furthermore, several authors have argued that not all features are equally important in representing the concepts [9]. In particular, feature listing tasks [10, 26, 27] show that not all features within an object play the same role. While some are salient and quickly come to mind when thinking of an object, others are marginal and rarely evoked. The Hebbian learning procedures used in most previous models neglect these two essential aspects of semantics, using orthogonal object representations and assuming that the total object content (i.e., all features) is experienced at any step, neglecting the probabilistic aspects of our experience and differences in feature saliency.
A few more recent studies investigated how the Hebb rule could be modified in attractor networks to deal with correlated patterns and, more generally, to improve storage capacity. To overcome the limitations of attractor networks in the presence of correlated patterns, Blumenfeld et al. [28] introduced a learning rule in which synaptic weight changes are facilitated by the novelty (i.e., the difference between the present input and stored memories) and demonstrated that this rule allows memory representations of correlated stimuli depending on the learning order. Tang et al. [29] extended the previous work using a more plausible network and examining the role of saliency weights (i.e., the patterns are stored with a variable saliency factor). Results show that saliency weights can markedly affect the memory representations, allowing flexibility of the resulting attractors. Kropff and Treves [30] introduced a Hebbian rule in which the presynaptic threshold reflects the neuron popularity and showed that this rule can store and retrieve sets of correlated patterns. It is worth noting that this rule requires the extraction of statistical properties; hence, it is inappropriate for one-shot learning. Boboeva et al. [31] studied the capacity of a Potts network (i.e., a variant of the Hopfield network in which neurons have multiple possible discrete states) in the presence of correlated data. They showed that correlation reduces the storage capacity. However, when the storage capacity is exceeded, the network can maintain the gross core semantic components, and only fine details are compromised. Finally, Pereira and Brunel [32] used a variant of the Hebb rule in which the presynaptic and postsynaptic terms are described through nonlinear functions of neuron activity, and these functions are empirically derived from data.
Although not directly mentioning correlated patterns, the authors showed that this rule, with sigmoidal dependence on pre- and postsynaptic firing rates, almost maximizes the number of stored patterns.
Extending previous studies, the present work investigates the role of Hebbian learning in forming a semantic memory. Still, it introduces new aspects: a further analysis of correlation among patterns, according to a hierarchical category structure, and a probabilistic experience so that features in an object are perceived with a given (higher or lower) probability at each presentation. To deal with these aspects, we propose a new version of the Hebb rule that produces a nonsymmetric pattern of synapses, and we show that this rule automatically generates a distinction between marginal and relevant features and category representations based on shared and distinctive features.
To test this rule, we use a simplified auto-associative network with only one layer of units, as in older models of semantic memory. We know that this is a substantial simplification of reality and that a multilayer network is necessary to simulate the real neural processing circuits in the brain. The simplification is justified by the possibility of presenting results and synapse changes in a much more straightforward way, putting in evidence the virtues and limitations of the proposed Hebb rule within a simple auto-associative net. In the last section, we discuss how the present model can be extended in future work to fit novel, more complete models (e.g., by Garagnani et al.) or, more generally, with multilayer deep neural networks.
Finally, brain rhythms are known to play a significant role in cognition. Slow gamma oscillations of neural activity (in the 30–40 Hz range) have been frequently observed during various memory tasks involving working, episodic, and semantic aspects [33,34,35,36,37,38,39].
Despite the presence of some controversies in the recent literature (see [40, 41] for cons), an influential theory suggests a role for the gamma rhythm in binding conceptual information [38, 42, 43]. Notably, Tallon-Baudry et al. [33, 34] suggest that neural discharges in the gamma band play a fundamental role in binding activities in areas involved in an object representation, merging bottom-up (i.e., sensory) and top-down (i.e., memory and past experience) information in a coherent entity. This idea is supported by the observation that stimuli for which subjects have a long-term memory representation lead to significantly larger gamma responses than unknown stimuli [44]. Indeed, gamma-band activity accompanies many cognitive functions, like attention [45,46,47], arousal [48], object recognition [33, 34], and language perception [49]. Hence, Herrmann et al. suggested that in these tasks, the gamma band response realizes a match of sensory information with memory contents, a mechanism named by the authors “match and utilization model” [44, 50]. Gamma rhythms are observed in the hippocampus during episodic memory retrieval (often linked with a slower theta oscillation) [51,52,53] and in the prefrontal cortex during working memory tasks [35, 54], all conditions where a relationship with semantics can be postulated [55].
Furthermore, a compromised link between semantic organization and brain oscillations has been observed in several neurological conditions. Some of them, like Alzheimer’s disease or semantic dementia, are characterized by a progressive loss of object recognition, possibly involving the theta-gamma code [27, 56, 57]. Others, like schizophrenia, are characterized by an illogical use of concepts and a possible involvement of alterations in the gamma rhythm [58,59,60].
Since oscillatory aspects are of value, activity in each computational unit in our model is simulated through a neural mass model developed by the authors in recent years [61], in which brain rhythms arise from feedback interactions among local excitatory and inhibitory populations. Specifically, parameters are assigned so that each unit oscillates in the gamma band. Neural masses are a valuable alternative way to simulate oscillations compared with spiking neurons, assuming that a single unit describes the behavior of an entire population of neurons coding for the same aspect. This formalism is more oriented to analyzing local field potentials or cortical activity reconstructed in regions of interest from high-density scalp EEG [62]. A taxonomy of various animals (mammals and birds), including several subcategories and salient-plus-marginal features, illustrates the model's main virtues and limitations in representing semantic objects. A sensitivity analysis on some parameters involving the Hebb rule or the gamma rhythm generation is finally performed to test the robustness of the network, suggest some testable predictions, and unmask conditions leading to pathological behavior.
Method
The Object Taxonomy
To illustrate the model behavior, we use a simple taxonomy including fourteen animals divided into two categories, “mammals” and “birds,” with different marginal and salient features. Moreover, each category is further subdivided into several subcategories, which can also intersect. These are illustrated in the diagram in Fig. 1, while a list of all features is presented in Table 1. It is worth noting that some features (like “it eats”) are common to all the animals and are used to generate the category “animal.” Other features (like “it has four legs” for mammals or “it has two legs” for birds) are used to generate the subcategories “mammals” and “birds” in the taxonomy. Other features (like “it eats grass” for herbivores or “it lives in nature” for wild animals) generate additional subcategories, which can also be partially superimposed. Finally, the individual animals have some distinctive salient and marginal features not shared with the others (like “it barks” for the dog). In our model, the difference between salient and marginal features depends on the probability of feature presentation during training (see “Training the Network”): features presented quite rarely during training automatically become marginal and are not spontaneously evoked when thinking about the concept (see [26]). The few animals simulated in this work are just examples to illustrate the model's potential.
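As a concrete illustration, the taxonomy and the feature-presentation probabilities can be sketched as a simple data structure. This is only a schematic Python rendering under the probabilities reported in “Training the Network” (0.8 for salient features, 0.3 for marginal ones); the feature names are a small illustrative subset, and the marginal feature “it wags its tail” is a hypothetical example, not taken from Table 1.

```python
SALIENT, MARGINAL = 0.8, 0.3  # presentation probabilities during training

# Shared features define categories and subcategories; distinctive
# features characterize individual animals. (Illustrative subset only.)
animal = {"it eats": SALIENT, "it sleeps": SALIENT, "it drinks": SALIENT}
mammal = {**animal, "it has four legs": SALIENT}
domestic = {"it is a pet": SALIENT}

taxonomy = {
    "dog": {**mammal, **domestic, "it barks": SALIENT,
            "it wags its tail": MARGINAL},   # hypothetical marginal feature
    "cat": {**mammal, **domestic, "it is independent": SALIENT,
            "it has whiskers": MARGINAL},
}
```

A structure of this kind makes explicit that category membership is encoded only implicitly, through feature sharing, with no a priori hierarchical links.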
In the model, each feature in Table 1 is represented through a single neural mass model, implementing the interaction between pyramidal neurons and excitatory-inhibitory local interneurons, as illustrated below. We assume that whenever a feature is present in the perception of the external world or is spontaneously evoked in the mind, the pyramidal neurons oscillate in the gamma band, coding for the feature’s presence in the object representation.
The Neural Mass Model
The model of each computational unit, coding for an individual feature, is realized through the feedback connection among a population of pyramidal neurons, a population of excitatory interneurons (both with glutamatergic synapses), a population of GABAergic inhibitory interneurons with slow synapse dynamics, and a population of GABAergic inhibitory interneurons with fast synapse dynamics (see the upper panel in Fig. 2). As traditionally done in neural mass models (see [61, 63, 64]), the output of each population represents the spike density of a group of neurons of the same kind, which share similar inputs and exhibit similar global dynamics. More details on the construction of individual computational units can be found in previous papers of the authors [61, 65, 66] and in Supplementary Materials, where all equations and parameter numerical values can be found.
The basic idea of the semantic model is that, after training, the individual computational units, each coding a feature, are linked together in an auto-associative network to implement the conceptual knowledge of the object. As is well known, long-range synapses in the brain are realized by axons of pyramidal glutamatergic neurons [67]. However, as demonstrated in “Results” and discussed in the final section, a balance between excitation and inhibition is needed in the network connectivity to obtain robust synchronization in the gamma band. In particular, using only excitatory links can spread excessive activity across the auto-associative network, causing some units to reach saturation. Inhibitory long-range connections are necessary to control the net’s global level of gamma oscillations. Accordingly, we assumed that each unit can send both long-range connections from pyramidal presynaptic to pyramidal postsynaptic neurons in other units (thus realizing an excitatory “pyramidal → pyramidal” link) and long-range connections from presynaptic pyramidal to postsynaptic fast-inhibitory interneurons in other units (thus realizing a bi-synaptic inhibitory link, “pyramidal → fast inhibitory → pyramidal”). In the following, the first connections will be generically named “excitatory” and represented with the connection strength Wex. In contrast, the second connections will be called “inhibitory” and defined with the connection strength Win. Furthermore, we included the presence of a pure delay between the units. A qualitative example of the connectivity between two units is shown in the lower panels of Fig. 2.
We assume that excitatory and inhibitory (bi-synaptic) connections among units are trained with an original Hebbian rule through a learning phase, during which several features of an object are provided as input with a given probability.
Training the Network
As commented in “Introduction,” we need nonsymmetric connections among units after training to simulate hierarchical concepts, reflecting the differences between shared and distinctive features and between salient and marginal features. In particular, shared features must not send a connection to distinctive features (for instance, the feature “it eats” must not excite the feature “it barks” since not all animals bark). Conversely, a distinctive feature must send a synapse to the shared features of its object (for instance, the feature “it barks” must send connections, among the others, to “it is a pet” typical of a domestic animal, “it has four legs” typical of a mammal, and also to “it eats”). Similarly, marginal features must not receive synapses from salient features since they are not spontaneously produced but must send synapses to salient features since they contribute to object recognition.
We used a Hebb rule with different roles and thresholds for the presynaptic and postsynaptic neurons to realize this asymmetrical connectivity.
Both the presynaptic and postsynaptic activities (normalized to the maximum, to work in a 0–1 range) are averaged on a 30 ms interval (approximately equal to the period of the gamma rhythm) and compared with a (different) threshold. The synapse change is proportional to the product of these differences multiplied by a learning factor. Furthermore, to make synapses asymmetrical, we adopted a “presynaptic” ON/OFF switch, i.e., the Hebb rule holds only if the presynaptic activity is above the threshold. Otherwise, no synaptic change occurs. In other words, the presynaptic population must be sufficiently active to have a synapse change. In this condition, the synapse change is a reinforcement or a weakening, depending on whether the postsynaptic population activity is above or below the threshold.
The overall model is reported in Supplementary Materials; however, the individual equations for the synapse changes are presented below.
(i) The connection change from the presynaptic pyramidal to the postsynaptic pyramidal population is computed as follows:

$$\Delta {W}_{ij}^{ex}=\gamma \left({\overline{z} }_{pi}-{\theta }_{post}\right){\left({\overline{z} }_{pj}-{\theta }_{pre}\right)}^{+}\left({W}_{max}-{W}_{ij}^{ex}\right) \quad (1)$$

where \({W}_{ij}^{ex}\) denotes an excitatory pyramidal-pyramidal connection from the presynaptic pyramidal population in unit j to the postsynaptic pyramidal population in unit i, \({\overline{z} }_{pi}\) and \({\overline{z} }_{pj}\) are the activities of the postsynaptic and presynaptic pyramidal populations, respectively, normalized to the maximum and averaged over the previous 30 ms, \({\theta }_{post}\) and \({\theta }_{pre}\) are the postsynaptic and presynaptic thresholds, \(\gamma\) is the learning factor, and the operation \({(\cdot )}^{+}\) denotes the “positive part” function (i.e., \({(u)}^{+}=u\) if u is positive, \({(u)}^{+}=0\) otherwise). The last term in Eq. (1) signifies that the learning rate progressively decreases as the synapse approaches a maximum saturation level (\({W}_{max}\)).
(ii) The connection change from the presynaptic pyramidal to the postsynaptic fast-inhibitory population is computed analogously:

$$\Delta {W}_{ij}^{in}={\gamma }^{in}\left({\overline{z} }_{fi}-{\theta }_{post}^{in}\right){\left({\overline{z} }_{pj}-{\theta }_{pre}^{in}\right)}^{+}\left({W}_{max}^{in}-{W}_{ij}^{in}\right) \quad (2)$$

The meaning of symbols in Eq. (2) is similar to Eq. (1), but now \({\overline{z} }_{fi}\) represents the normalized activity of the fast-inhibitory population in the postsynaptic unit i, averaged over the previous 30 ms. We used different learning rates and threshold values when training the two kinds of connections (hence the superscript in in Eq. (2)). As shown in “Results,” this will be essential to build a connectivity network that respects the constraints delineated above.
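In code, both learning rules share the same presynaptically gated form. The sketch below is a schematic NumPy rendering of Eqs. (1) and (2), not the paper's implementation: the threshold, learning-rate, and saturation values are placeholders, the saturation factor is applied to both reinforcement and weakening as a simplifying assumption, and weights are clipped at zero (also an assumption, since the paper describes weakening but never negative synapses).

```python
import numpy as np

def hebb_update(W, z_pre, z_post, theta_pre, theta_post, gamma, W_max):
    """Presynaptically gated Hebb rule (sketch of Eqs. (1)-(2)).

    W[i, j] links presynaptic unit j to postsynaptic unit i.
    z_pre, z_post: activities normalized to [0, 1] and averaged over
    the preceding ~30 ms (one gamma cycle).
    The positive-part gate on the presynaptic term means no change
    occurs unless the presynaptic population exceeds its threshold;
    the sign of the change then follows the postsynaptic term
    (reinforcement above threshold, weakening below).
    """
    pre_gate = np.maximum(z_pre - theta_pre, 0.0)   # ( . )+ ON/OFF gate
    post_term = z_post - theta_post                 # may be positive or negative
    dW = gamma * np.outer(post_term, pre_gate) * (W_max - W)
    return np.clip(W + dW, 0.0, W_max)              # keep weights in [0, W_max]
```

With this gate, a silent presynaptic population leaves its outgoing weights untouched, while an active presynaptic population paired with a silent postsynaptic one produces weakening, which is the mechanism that makes the final connectivity asymmetrical.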
Finally, a further normalization has been adopted for the sum of synapses entering a given unit, i.e., we assumed that this sum cannot exceed a given saturation level. This constraint reproduces a physiological limitation in overall neurotransmitter availability and helps avoid excessive excitation (see Supplementary Materials for more details). Interestingly, this normalization comes into play only for synapses that enter units encoding shared properties, i.e., representing categories. Indeed, without normalization, the latter (being shared by all animals) would receive too many synaptic inputs, reaching a saturation value and losing the oscillation in the gamma rhythm, thus resulting in an erroneous training phase.
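The normalization step can be sketched as a proportional rescaling of each unit's incoming weights whenever their sum exceeds the cap. The cap value `S_max` below is a hypothetical placeholder for the saturation level given in the Supplementary Materials.

```python
import numpy as np

def normalize_incoming(W, S_max):
    """Cap the total synaptic weight entering each unit (row sums of W).

    If the weights converging on unit i sum to more than S_max, all of
    its incoming synapses are scaled down proportionally; units below
    the cap are left untouched. S_max is an illustrative parameter.
    """
    row_sums = W.sum(axis=1, keepdims=True)
    scale = np.where(row_sums > S_max,
                     S_max / np.maximum(row_sums, 1e-12),  # shrink heavy rows
                     1.0)                                   # leave light rows
    return W * scale
```

Because shared (category) features receive synapses from many members, their rows are the ones that typically hit the cap, which matches the observation in the text that only category units engage this constraint.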
Due to Eqs. (1) and (2), nonsymmetric connections are created in the network after training. By way of example, let us assume that a distinctive feature and a shared feature are excited together during the presentation of a given object and so oscillate in the gamma range (for instance, the distinctive feature “it barks” and the shared feature “it eats” may be both active during the presentation of the object “dog”). As a consequence of the Hebb rule, a connection is reinforced between them in both directions, as shown in the upper panel of Fig. 3. Let us now assume that, at a subsequent moment, the shared feature (“it eats”) occurs during the presentation of another animal (for instance, the cat), so that the distinctive feature “it barks” is not excited (middle panel). As a consequence of the presynaptic rule, the strength of the connection from the presynaptic distinctive feature (“it barks”) to the postsynaptic shared feature (“it eats”) does not change (since the presynaptic population is silent). Conversely, the synapse from the shared to the distinctive feature (from “it eats” to “it barks”) is weakened: in fact, the presynaptic population is now active, and the postsynaptic population is inhibited. Since the shared feature occurs in many animals, prolonged training causes the suppression of synapses from shared to distinctive features. In contrast, connections from the distinctive features to the shared features are preserved (bottom panel).
Similar reasoning can also be applied to what concerns the synapses linking a salient and a marginal feature within a given object. Connections from marginal (less frequent) to salient (more frequent) features are preserved whenever a marginal feature is absent, whereas synapses from salient to marginal features disappear. The final result, however, depends on the particular choice of the postsynaptic and presynaptic thresholds and the probabilities given to the individual features during training (see also [68, 69]).
Training is performed by providing all objects (in this example, the 14 animals) once within a random permutation: this represents just one epoch of training. During each object presentation, a feature is given as input with a given probability. Specifically, if the feature is experienced, the input to the unit is assigned a value as high as 500, allowing the unit to oscillate with a gamma rhythm without reaching saturation (see Fig. S1 in Supplementary Materials for one example). If the feature is absent, the input is zero, and before training the unit is silent (but it can oscillate after training if it receives sufficient excitatory input from other co-active units). In this work, we used a frequency of occurrence as high as 80% for all salient features and just 30% for marginal features (all shared features are assumed salient), i.e., we consider a stationary condition for the external environment (see Table A3 in Supplementary Materials for some examples of inputs concerning the dog). Training with a non-stationary environment (with varying probabilities for the features) can be the subject of a future study. Six hundred epochs were used so that the synapses reached a nearly stable final value (with only slight residual changes). During each simulation, the model is run for 0.35 s, and the Hebb rule is applied at each integration step (hence, with a very small learning rate, see Supplementary Materials) during the last 0.25 s, when the initial transient is exhausted. This time agrees with the common opinion that an object is recognized in semantic memory within 150–200 ms (see [22]).
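A toy rate-based training loop can illustrate how the presynaptic gate produces the asymmetry described above. This is not the paper's neural mass simulation: activities are reduced to constant window-averaged values, features are presented deterministically rather than sampled with probabilities 0.8/0.3, and all numerical parameters are illustrative.

```python
import random
import numpy as np

# Feature 0 ("it eats") is shared by both objects; feature 1 ("it barks")
# is distinctive of the dog and feature 2 ("it meows") of the cat.
OBJECTS = {"dog": [0, 1], "cat": [0, 2]}
N = 3
theta_pre = theta_post = 0.5   # illustrative thresholds
gamma, W_max = 0.1, 1.0        # illustrative learning rate / saturation

W = np.zeros((N, N))           # W[i, j]: presynaptic j -> postsynaptic i
rng = random.Random(0)
names = list(OBJECTS)
for epoch in range(100):
    rng.shuffle(names)                      # one random permutation per epoch
    for obj in names:
        z = np.zeros(N)
        z[OBJECTS[obj]] = 0.9               # mean gamma-band activity
        pre = np.maximum(z - theta_pre, 0)  # presynaptic ON/OFF gate
        post = z - theta_post               # sign decides reinforce vs weaken
        W = np.clip(W + gamma * np.outer(post, pre) * (W_max - W), 0, W_max)

# After training, "it barks" -> "it eats" survives, while
# "it eats" -> "it barks" has been repeatedly weakened by cat presentations.
```

Even in this caricature, the connection from the distinctive to the shared feature grows toward saturation, whereas the reverse connection is driven back to zero whenever the shared feature appears without the distinctive one, reproducing the logic of Fig. 3.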
Results
Pattern of Synapses
Figure 4 shows an example of the excitatory synapses obtained after learning, using the basic parameter values reported in Tables A1 and A2 of the Supplementary Materials for the Hebb rule (learning factors and thresholds in Eqs. (1) and (2)). These parameters were assigned empirically after a few attempts to achieve a correct behavior. The results of the sensitivity analysis will be illustrated below.
For example, Fig. 4a shows the excitatory synapses (type Wex) entering into all the distinctive features that characterize the dog from all the remaining network features. According to the probability used (see Table 1), the first five features of the dog are salient, and the last three are marginal. Interestingly, the five salient features receive strong synapses from all the other salient features of the dog and weaker synapses from the three marginal features. Hence, they are automatically evoked. Conversely, the marginal features do not receive significant synapses and hence cannot be automatically produced. Notably, the distinctive features of the dog do not receive excitatory synapses either from the distinctive features of the other animals or from the shared features of subcategories (including domestic animals, since there is another domestic animal, i.e., the cat) or from the shared features of mammals, birds, and animals.
The inhibitory synapses (type Win) entering into the eight distinctive features of the dog are shown in Fig. S2 of Supplementary Materials. These synapses are similar to the excitatory ones but with smaller strengths. We can observe that the salient distinctive features of the dog also receive inhibitory synapses from the shared features of mammals (positions 58–60). This is not a problem: in fact, it is of value that shared features send a slight inhibition to distinctive features to avoid their appearance in the presence of information from other animals (for instance, to prevent the feature “it barks,” through the production of the shared features of mammals, from erroneously activating the distinctive features of the cat).
Similar patterns of synapses can be found for the other distinctive features in the network (both concerning mammals and birds) and are not shown for brevity.
Figure 4b shows the synapses entering into the three shared features of the mammals and the three shared features of the birds. These features receive strong synapses from the shared features of the same category (either mammals or birds), the features of the corresponding subcategories (for instance, the features of “volatile” for the category “birds,” the feature of the category “herbivorous” for the category “mammals”), and also from the distinctive features of all the individual members of the category: this implements a correct logic for the categories. No cross interference between categories occurs (i.e., the distinctive features of birds do not send synapses to mammals’ shared features and mammals’ distinctive features do not send synapses to the shared features of birds). Moreover, as shown in Fig. 4c, the three features shared by all animals (“it eats,” “it sleeps,” and “it drinks”) receive synapses from all features in the network and hence are always evoked.
Finally, to complete the analysis of subcategories, Fig. 4d shows the excitatory synapses entering into one shared feature of the category “farm,” one shared feature of the subcategory “herbivorous,” and a shared feature of the subcategory “volatile.” All other features denoting subcategories behave similarly and are not shown for brevity. In all cases, the feature receives synapses only from members of the same category. Interestingly, the feature “it flies” does not receive synapses from the general features of the bird, nor from features of the rooster and the hen (which do not fly). The last aspect, of course, is quite fragile and will be discussed at the end.
Basic Simulation Results
First, if one unit receives no input but only white noise, it works in the bottom portion of the sigmoidal relationship; hence, its spike density is close to zero, and the noise effect is strongly attenuated. We used an input as high as 500 to produce an oscillatory behavior in an individual isolated unit, even without other synaptic inputs. Examples are shown in Fig. S1 in Supplementary Materials.
Figure 5 shows an example of network activity in response to a distinctive salient property of the cat (i.e., “it is independent”). The upper panel shows the initial 300 ms of simulation, and the bottom panel zooms on the same simulation, showing just two gamma cycles. Some aspects are relevant. Thanks to the presence of auto-associative connections among the units, all four distinctive salient features of the cat, together with the two shared features of domestic animals, the three shared features of the mammals, and the three shared features of all animals oscillate in the gamma band and become synchronized within 100–110 ms. The distinctive marginal features of the cat are silent, and, of course, all the distinctive features of other animals and the shared features of other categories are silent, too.
Figure 6 shows the response to a marginal feature of the cat (“it has whiskers”). There are some differences compared with the case of a salient feature. Now, the network requires a longer time (about 170 ms) to synchronize the cat’s salient and shared features. Moreover, the stimulated unit coding for the marginal feature oscillates with the gamma rhythm but out of phase with the other salient features of the cat.
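A rough way to quantify the synchronization times reported for Figs. 5 and 6 is to compare the peak times of two oscillatory traces. The sketch below runs on synthetic 35 Hz signals (not model output) and defines the “time to synchronize” as the first peak after which the two traces stay phase-aligned within a tolerance; the phase-offset profile and all values here are illustrative assumptions.

```python
import math

def peak_times(x, dt):
    """Times of local maxima -- a crude phase marker for an oscillatory trace."""
    return [i*dt for i in range(1, len(x) - 1) if x[i-1] < x[i] >= x[i+1]]

def sync_time(a, b, dt, tol):
    """First peak time of `a` after which every remaining peak of `a`
    has a peak of `b` within `tol` seconds."""
    pa, pb = peak_times(a, dt), peak_times(b, dt)
    for k, t in enumerate(pa):
        if all(min(abs(t2 - u) for u in pb) < tol for t2 in pa[k:]):
            return t
    return None

# Synthetic 35 Hz traces: the second starts in anti-phase and locks onto
# the first over the initial 100 ms (loosely mimicking Fig. 5)
dt, f = 1e-3, 35.0
times = [i*dt for i in range(300)]
a = [math.sin(2*math.pi*f*t) for t in times]
b = [math.sin(2*math.pi*f*t + math.pi*max(0.0, 1.0 - t/0.1)) for t in times]
st = sync_time(a, b, dt, tol=4e-3)
print(st)
```

The estimated synchronization time lands near 100 ms for this synthetic pair, i.e., in the range reported for a salient cue.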
A similar behavior (production of all salient distinctive features, no production of marginal features, production of all shared features in its categories, including the category “animal”) can be observed by exciting one salient distinctive feature or one marginal feature of any animal. An example of a bird’s salient distinctive feature is shown in Fig. S3 of the Supplementary Materials.
Figure 7 shows some additional examples. They concern (i) a salient feature of the zebra (“it has stripes”). The zebra is a complex example since it exhibits two shared features of the herbivorous subcategory and two shared features of wild animals, together with the shared features of mammals and animals. The simulation shows that the network can recover all these features, recognizing that the zebra is an animal with the characteristics of a mammal, a herbivore, and a wild animal, and further includes the additional distinctive features. (ii) A shared feature of the herbivorous subcategory (“it eats grass”). The network correctly recovers the two shared features of this subcategory and, since all herbivores are mammals, recovers the three features of mammals and, of course, the three features of animals. (iii) A shared feature of dangerous animals (“it has big claws”). Interestingly, since all dangerous animals in our structure are also wild animals and mammals, the network recovers the features of dangerous and wild animals, mammals, and the generic features of all animals. (iv–vi) The network response to a shared feature of mammals (“it has four legs”), a shared feature of birds (“it has wings”), and a feature shared by all animals (“it eats”). In all cases, the network recovers the entire salient content of the given category (i.e., the three features of mammals + the three features of animals in the first case, the three features of birds + the three features of animals in the second case, and only the three general features of animals in the last case) without recovering distinctive features of individual members.
Similar examples can be obtained concerning all subcategories in the model. In no case are distinctive features of individual members or shared features of other subcategories incorrectly recovered.
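The retrieval logic demonstrated in Figs. 5–8 can be summarized as a rule over the category hierarchy: a cue evokes the shared features of its own category and of all ancestor categories, plus, for a member-level cue, that member's salient distinctive features. The mini-taxonomy below is a hypothetical stand-in (a few hand-picked features, not the trained network), used only to make the rule explicit:

```python
# Hypothetical mini-taxonomy: category -> (parent, shared features)
TAXONOMY = {
    "animal":      (None,     ["it eats", "it sleeps", "it drinks"]),
    "mammal":      ("animal", ["it has four legs"]),
    "herbivorous": ("mammal", ["it eats grass"]),
    "bird":        ("animal", ["it has wings"]),
}
# member -> (category, salient distinctive features)
MEMBERS = {"zebra": ("herbivorous", ["it has stripes"]),
           "owl":   ("bird", ["it lives at night"])}

def shared_chain(cat):
    """Shared features of a category and of all its ancestors."""
    out = []
    while cat is not None:
        parent, feats = TAXONOMY[cat]
        out += feats
        cat = parent
    return out

def recover(cue):
    """Features evoked by a single cue, per the behavior in Figs. 5-8."""
    for member, (cat, distinctive) in MEMBERS.items():
        if cue in distinctive:            # distinctive salient cue
            return sorted(distinctive + shared_chain(cat))
    for cat, (parent, feats) in TAXONOMY.items():
        if cue in feats:                  # shared (category-level) cue
            return sorted(shared_chain(cat))
    return []

print(recover("it eats grass"))
```

A shared cue climbs only the category chain (no distinctive features of members), while a distinctive cue additionally restores the member's own salient features, exactly the asymmetry described in the text.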
Finally, an interesting example is given in Fig. 8, which shows what occurs in response to the feature “it has spots,” shared by the cow and the giraffe. It is worth noting that both animals are herbivores, but only the first is a “farm animal,” and only the second is a “wild animal.” The top panel shows the model response. The network has difficulty finding the correct response in this challenging condition. After a long time (almost 300 ms), the network restores the three properties of the mammals, the three properties of the animals, and just one property of the herbivorous subcategory. The other property is lacking. However, a slight increase in the strength of all excitatory synapses (+30% in all synapses Wex, bottom panel) is sufficient to obtain a prompt and correct response. Similar behavior is obtained by simulating other properties shared by two animals without generating a category, i.e., “it lives in herds” (common to the zebra and the lion) and “it pecks” (common to the rooster and the hen).
The possibility of increasing synaptic strength under challenging conditions that demand greater attention can be related to neurotransmitter changes (especially acetylcholine) [70].
Sensitivity Analysis
The previous simulations show that the model, with the parameters assigned as in Tables A1 and A2 of Supplementary Materials, can recover all salient features of an object from a single external cue, synchronizing all activities in the gamma range. Moreover, the model can distinguish correctly between categories, creating a taxonomy based on shared vs. distinctive features.
In the following, we will test the robustness of the model by changing some fundamental parameters.
First, we analyzed the effect of the pure delay among the units. In all previous simulations, a delay of 25 ms was used. The simulations show that a decrease of the delay below 20 ms causes poor synchronization (see the upper panel in Fig. 9); the synchronization remains high if the delay is 20–25 ms and then progressively deteriorates (bottom panel in Fig. 9).
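The dependence of synchronization on the inter-unit delay can be illustrated with a pair of delay-coupled phase oscillators, a standard Kuramoto-style reduction rather than the paper's neural mass units. For a ~33 Hz rhythm, a 25 ms delay supports in-phase locking, while a 10 ms delay drives the pair toward anti-phase, qualitatively mirroring the behavior in Fig. 9 (the coupling strength and all other parameters here are assumptions):

```python
import math

def delayed_pair(tau, omega=2*math.pi*33.0, K=30.0, dt=1e-4, T=2.0):
    """Two delay-coupled phase oscillators; returns the final phase
    difference wrapped to (-pi, pi]."""
    d, n = int(round(tau/dt)), int(round(T/dt))
    p1 = [omega*i*dt for i in range(d + 1)]        # arbitrary initial history
    p2 = [omega*i*dt + 1.0 for i in range(d + 1)]  # 1 rad initial offset
    for _ in range(n):
        a, b = p1[-1], p2[-1]
        a_del, b_del = p1[-1-d], p2[-1-d]          # states one delay ago
        p1.append(a + dt*(omega + K*math.sin(b_del - a)))
        p2.append(b + dt*(omega + K*math.sin(a_del - b)))
    return (p1[-1] - p2[-1] + math.pi) % (2*math.pi) - math.pi

print(round(delayed_pair(0.025), 3))   # 25 ms delay: near in-phase
print(round(delayed_pair(0.010), 3))   # 10 ms delay: near anti-phase
```

In this phase reduction, the stability of the in-phase solution depends on the delay relative to the oscillation period, which is one plausible reading of why the full model synchronizes well only for delays of 20–25 ms.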
Since alterations in GABAergic interneurons occur in neurological disorders such as schizophrenia, we also analyzed the effect of changing internal parameters involving these interneurons. Decreasing parameter Cpf, which represents the inhibitory effect of fast GABAergic interneurons on pyramidal neurons, or decreasing Cfp, which represents the excitatory effect of pyramidal neurons on GABAergic fast interneurons, provokes the interruption of the gamma rhythm after a few cycles, with some (or all) oscillating units entering saturation. Decreasing Cff, which represents the self-inhibitory loop of fast interneurons, to zero causes poor synchronization and makes it difficult for distinctive features to emerge.
A further sensitivity analysis concerns the parameters in the Hebb learning rules (Eqs. (1) and (2)). In particular, we systematically modified the thresholds and learning rates in Eqs. (1) and (2). The results can be summarized as follows: if the learning rate of the excitatory synapses is increased too much (\({\gamma }^{ex}\ge 0.00011\)) or the threshold for the postsynaptic term is decreased too much (\({\theta }_{post}^{ex}\le 0.20\)), the excitatory synapses grow too rapidly during training. In particular, the synapses from the shared to the distinctive features (which should fall to zero after learning) can remain significant, causing erroneous interference between different categories or between members of the same category. For instance, a shared feature of a category becomes able to evoke a shared feature of another category (by way of example, the shared feature “it eats” erroneously evokes the feature “it has four legs”; as a consequence, any bird evokes the features of mammals); or a feature of a category evokes the distinctive features of a member, causing confusion among members of the same category (the feature “it has four legs” evokes the feature “it purrs,” causing any mammal to evoke the properties of cats). An example of the first kind is illustrated in Fig. 10, where the upper panel shows the synapses entering the shared features of the mammals after training the network with a learning rate as high as 0.00011. Erroneously, the mammals’ features receive synapses from the animals’ general features. The bottom panel shows the result of a simulation, using these synapses, in response to the owl’s feature “it lives at night.” The three properties of the mammals are erroneously produced, together with all the correct properties of the owl, resulting in a “fantastic” owl with some mammalian characteristics.
The same kind of error occurs if the learning rate of the inhibitory synapses is excessively decreased (\({\gamma }^{in}\le 0.000012\)) or the postsynaptic threshold of the inhibitory synapses is excessively increased (\({\theta }_{post}^{in}\ge 0.075\)). Both changes result in a slower increase of the inhibitory synapses during training, causing excessive excitation from shared to distinctive features and confusion among members of the same class.
Finally, if the learning rate of the inhibitory synapses is increased too much (\({\gamma }^{in}\ge 0.00006\)) or the learning rate of the excitatory synapses is too low (\({\gamma }^{ex}\le 0.00004\)), the distinctive features of an animal fail to evoke some other features, resulting in incomplete object recovery.
In conclusion, the learning rate must remain relatively small for excitatory synapses, and the balance between excitation and inhibition must be controlled carefully to have the desired behavior, as in Figs. 4, 5, 6, 7, and 8.
However, even though the previous parameter ranges for correct behavior appear rather narrow, so that the model may seem not very robust, the interval of acceptable parameter values becomes wider if some parameters are changed together. For instance, the parameter \({\gamma }^{ex}\) can be increased further if the postsynaptic threshold \({\theta }_{post}^{ex}\) is also increased, resulting in a broader range. In fact, increasing the postsynaptic threshold produces a smaller value for the postsynaptic term in the Hebb rule (Eq. (1)) and, consequently, a weaker synaptic reinforcement, which an increase in the excitatory learning rate γex can balance.
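The compensation between the excitatory learning rate and the postsynaptic threshold can be seen directly from the form of a thresholded Hebb rule. The one-step update below is a schematic reconstruction (the exact Eq. (1), its thresholds, and all numeric values are assumptions for illustration):

```python
def hebb_increment(pre, post, gamma, theta_post, theta_pre=0.5):
    """One excitatory Hebbian step with presynaptic gating -- a schematic
    stand-in for Eq. (1); thresholds and values are assumed, not the paper's."""
    if pre < theta_pre:          # presynaptic gate closed: no change at all
        return 0.0
    return gamma * pre * (post - theta_post)

base   = hebb_increment(0.9, 0.8, gamma=1e-4,    theta_post=0.3)
higher = hebb_increment(0.9, 0.8, gamma=1e-4,    theta_post=0.4)  # smaller step
rebal  = hebb_increment(0.9, 0.8, gamma=1.25e-4, theta_post=0.4)  # restored
print(base, higher, rebal)
```

Raising the threshold shrinks the postsynaptic term and hence the increment; scaling the learning rate up by the same factor restores the original step size, which is why the two parameters trade off along a broader valid range.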
Implications for possible pathological cases will be discussed in the last section.
Discussion
The present work proposes a model of semantic organization based on a feature representation of objects, attractor dynamics, and gamma-band oscillations. Compared with the recent literature, the fundamental new aspect consists of using an original asymmetric Hebb rule to deal with correlated patterns and distinguish between superordinate and subordinate concepts. Moreover, training is performed in a probabilistic environment, where not all features are simultaneously presented, but some can be lacking at any iteration. As discussed below, these aspects are original and can lead to an enrichment of existing models.
Hebb Rule and Hierarchical Organization of Concepts
Our asymmetric Hebb rule, already partially exploited in previous works [68, 69], is based on different thresholds for presynaptic and postsynaptic activities. It has been further refined in the present work by including a presynaptic gating mechanism: a synaptic change (either potentiation or depotentiation) is implemented only if the presynaptic activity is above its threshold. This rule automatically allows the formation of categories based on shared properties and implements a distinction between salient and marginal features. An essential aspect of the present learning procedure is that the latter distinction depends on the probability of feature occurrence. For clarity, we used just two probabilities (80% and 30% for salient and marginal features, respectively). Of course, different values can occur in reality, making the final object representation and the pattern of synapses more varied than those shown in Fig. 4. Moreover, other aspects of learning not included in our training procedure can modify the final semantic representation. These may involve the emotional impact of an experience, which may affect the learning rate, γ, in Eqs. (1) and (2), and the dependence of feature occurrence on context [71] (so that certain features may frequently occur together with other features and tend to activate reciprocally). These aspects can be analyzed in future work.
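To illustrate how a gated asymmetric rule of this kind separates shared from distinctive features and salient from marginal ones, the sketch below trains a four-feature toy network on two objects, using the 80%/30% presentation probabilities from the text. The rule (potentiate or depotentiate only when the presynaptic unit is above threshold, with clipping to [0, Wmax]) is a schematic reconstruction of Eqs. (1)–(2); the learning rate, thresholds, and object set are all assumptions.

```python
import random

random.seed(1)

# Toy setup: feature 0 is shared by both objects; 1 and 2 are distinctive
# salient features (present with p = 0.8); 3 is a marginal feature of
# object A (p = 0.3)
OBJECTS = {"A": [0, 1, 3], "B": [0, 2]}
P = {0: 0.8, 1: 0.8, 2: 0.8, 3: 0.3}
N, GAMMA, TH_PRE, TH_POST, WMAX = 4, 0.02, 0.5, 0.5, 1.0
w = [[0.0] * N for _ in range(N)]            # w[post][pre]

for _ in range(5000):
    feats = OBJECTS[random.choice("AB")]
    x = [1.0 if (j in feats and random.random() < P[j]) else 0.0
         for j in range(N)]
    for post in range(N):
        for pre in range(N):
            if pre != post and x[pre] > TH_PRE:   # presynaptic gate open
                w[post][pre] += GAMMA * (x[post] - TH_POST)
                w[post][pre] = min(max(w[post][pre], 0.0), WMAX)

# distinctive->shared and marginal->salient are reinforced;
# shared->distinctive and salient->marginal are depotentiated
print(round(w[0][1], 2), round(w[1][0], 2),
      round(w[1][3], 2), round(w[3][1], 2))
```

Because the shared feature is also active when the other object is shown, its outgoing synapses toward distinctive features are depotentiated on those trials and drift to zero, while the reverse synapses are almost always potentiated; the same asymmetry demotes marginal features, reproducing in miniature the weight pattern of Fig. 4.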
Interestingly, in this work, we tested a hierarchical organization consisting of many subcategories nested one inside the other, with some subcategories partly superimposed (for instance, the category “herbivorous” contains the category “farm” and is partially superimposed on the category “wild”; the latter, in turn, includes the category “dangerous”). Furthermore, some isolated features are shared between a couple of animals without generating a specific category (for instance, “it has spots” (cow and giraffe), “it lives in herds” (zebra and lion), or “it pecks” (rooster and hen)). In all cases, after training, the model responds to a single feature by correctly restoring the salient features describing the specific category or member. Marginal features are never automatically restored (hence, they do not come to mind when thinking of an object), but, if given as input, they can restore all salient features of the object. Notably, in training the model, we never provided categories as input, only individual members (of course, containing distinctive and shared features). As shown in the simulations, categories emerge spontaneously after training. If a shared property is given as input (for instance, “it eats grass”), only the shared properties of that category are evoked (in that case, herbivorous, mammals, and animals).
An interesting example concerns the subcategory “volatile.” With the present values of the training parameters, a feature like “it flies” is not attributed to the rooster and the hen, i.e., the network can correctly distinguish between flying and non-flying birds. However, this distinction is quite fragile and depends on the number of flying birds (four in our taxonomy, i.e., two-thirds of cases) and non-flying birds (just two, i.e., one-third). Since learning is probabilistic, using a larger number of flying birds would lead to a different conclusion, i.e., that all birds can fly. This is understandable, since cases rarely occurring in a category (for instance, that whales are mammals) can probably be managed only using encyclopedic knowledge and not acquired from experience. In a previous work [69], we proposed that categories should have an increasing postsynaptic threshold to deal with rare cases. This can be tested again in future work.
Gamma-Band Synchronization
Since the synchronization of neuronal activities in the gamma band can potentially affect information transmission in the brain [72, 73], is ubiquitously present in the cortex [38, 74], and has been observed in many cognitive and memory tasks, such as object recognition [33, 34], working memory [35, 36], sensory processing [38, 43, 75], and attention control [45, 47], we deemed it valuable to test the semantic model in a gamma-oscillation regime. All units in the model (coding for different features) exhibit 30–35 Hz oscillatory activity when excited by an external input. To this aim, we used a neural mass model of interconnected populations (pyramidal neurons, excitatory interneurons, and inhibitory interneurons with slow and fast synaptic dynamics) arranged in feedback to simulate cortical column dynamics. Neural mass models describe the collective behavior of entire populations of neurons with just a few state variables [61, 63,64,65]. In particular, these models emphasize the pivotal role played by fast inhibitory interneurons in the development of fast (> 30 Hz) oscillatory behavior [61]. This approach is suitable for testing model behavior against mean field potentials or for simulating cortical activity reconstructed in an ROI from high-density scalp EEG measurements [62]. Finally, the gamma rhythm can be linked with other rhythms to analyze more complex dynamical scenarios (for instance, theta-gamma coupling during sequential memory [76]).
It is well known that the neocortex exhibits a six-layer structure and that these layers play different roles in sending and receiving information [77]. In the present simplified model, however, we do not use a six-layer arrangement but just four populations without a layer specification: the output emerges from the population of pyramidal neurons and enters into the pyramidal neurons and fast GABAergic interneurons of another unit, depending on previous training. A more complex arrangement of populations in six layers may be the subject of further improvements in the neural mass model.
Three aspects emerge from our training algorithm in an oscillatory regime. First, we assumed that the Hebb rule is based on the mean activity of neurons in a 30 ms time interval. In fact, before learning, the different units in an object representation oscillate out of phase due to noise; hence, their instantaneous activities are uncorrelated and cannot contribute to Hebbian synaptic potentiation using a threshold rule. The idea that the Hebb rule operates over a temporal interval has attracted much interest in the literature [78, 79]. Of course, our model does not use spiking neurons; hence, our temporal version of the Hebb rule refers to collective neuron behavior (i.e., a population spike density) rather than a precise temporal organization of spikes.
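The need for a time-averaged rule can be illustrated with two rectified gamma traces in anti-phase: their instantaneous product is zero (no Hebbian co-activity at any single moment), while both 30 ms window-averaged activities comfortably exceed a plausible threshold. The signals below are synthetic; nothing here comes from the model itself.

```python
import math

def windowed_mean(x, width):
    """Mean over the trailing `width` samples (the 30 ms average the rule uses)."""
    return sum(x[-width:]) / width

# Two rectified 35 Hz traces in anti-phase (the pre-learning condition in
# which co-active units oscillate out of phase)
dt, f, win = 1e-3, 35.0, 30                 # 1 ms steps, 30 ms window
times = [i * dt for i in range(300)]
u = [max(0.0, math.sin(2*math.pi*f*t)) for t in times]
v = [max(0.0, math.sin(2*math.pi*f*t + math.pi)) for t in times]

inst = u[-1] * v[-1]       # instantaneous co-activity: exactly zero
mu, mv = windowed_mean(u, win), windowed_mean(v, win)
print(inst, round(mu, 2), round(mv, 2))
```

Since at most one of the two rectified traces is positive at any instant, an instantaneous threshold rule would never potentiate the synapse between them; the windowed means, by contrast, both sit well above zero, so learning can proceed before the units phase-align.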
Second, to achieve good synchronization among units oscillating with a gamma rhythm, both excitatory and inhibitory (bi-synaptic) connectivity must be considered (see the bottom panels in Fig. 2). Strong excitatory connections (named Wex in the model) realize attractor dynamics and allow the recovery of the overall salient content from an initial cue; weaker inhibitory connections (Win) favor synchronization and prevent excessive excitation from spreading over the network. Of course, this result is not new. A role for inhibitory interneurons in gamma synchronization has been demonstrated in recent papers [80,81,82] and is a subject of much active research.
Third, we observed that synchronization is much more robust if a delay higher than 15 ms is included in the connectivity among units. Ideal delays for our model are 20–25 ms. Some recent results emphasize a positive role for delays. Petkoski and Jirsa [83], using a propagation velocity as high as 5 m/s, computed that mean intra- and interhemispheric time delays are 6.5 and 19.6 ms, respectively. If we assume a similar velocity, long-distance transmission, plus a further delay necessary to perform possible additional processing steps (such as more complex feature extraction in the visual pathway), our intervals are compatible with information exchange in the brain.
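The compatibility argument is simple arithmetic: delay = path length / conduction velocity, with millimeters divided by m/s conveniently yielding milliseconds. The tract lengths below are back-computed from the quoted delays at 5 m/s, and the extra 5 ms for additional processing is an arbitrary illustrative value:

```python
def transmission_delay_ms(path_length_mm, velocity_m_per_s=5.0):
    """Axonal conduction delay: mm / (m/s) = ms."""
    return path_length_mm / velocity_m_per_s

print(transmission_delay_ms(32.5))        # 6.5 ms  (intra-hemispheric mean)
print(transmission_delay_ms(98.0))        # 19.6 ms (inter-hemispheric mean)
print(transmission_delay_ms(98.0) + 5.0)  # with extra processing: ~25 ms
```

A long inter-hemispheric tract plus a few milliseconds of intermediate processing thus lands naturally in the 20–25 ms window the model prefers.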
Possible Training Errors
The present simulations also indicate possible blunders in the semantic network resulting from inadequate learning. Notably, in the case of excessive excitation vs. inhibition (for instance, an increase in parameter γex, a decrease in γin, or a decrease in parameter \({\theta }_{post}^{ex}\)), excitatory connections can erroneously be created from shared to distinctive features. These erroneous synapses are insufficiently depotentiated, thus leading to confusion between attributes of different categories (for instance, the production of an animal with characteristics of both mammals and birds) or between members of the same category (for example, a mammal that barks and meows at the same time). It is worth noting that this confounding logic can have some similarities with the form of paradoxical thinking occurring in psychiatric disorders (like schizophrenia) characterized by a distorted perception of reality [84]. We know this is just a preliminary result, but it can provide interesting indications for future work (see also “Applications to Neural Disorders” below).
Comparison with Recent Models
It is important to compare our model with the more recent and advanced models in the literature, particularly those developed by Rogers et al. [18, 19] and Garagnani et al. [20,21,22,23,24,25]. These models are based on a multilayer organization, in which different layers represent different brain regions involved in semantic processing, and connections among regions are neuroanatomically grounded. Our model is much simpler, based on a single attractor network (reminiscent of previous models developed in the nineties and early two-thousands [8,9,10, 13]). We adopted this choice since we aimed to investigate the potential of the Hebb rule in dealing with correlated patterns, hierarchical organization of concepts, and probabilistic learning. For this purpose, a single-layer auto-associative network provides a more straightforward and intuitive description of the results.
Despite this substantial simplification, we claim our results introduce some novel aspects. Indeed, the models by Rogers et al. are based on backpropagation. The models by Pulvermüller et al. are trained with a Hebb rule with different presynaptic and postsynaptic thresholds; still, the authors do not investigate the role of correlated patterns or probabilistic learning. Another difference, although of less value, is that recent versions of these models, devoted to gamma-band simulations [23,24,25], use spiking neurons, whereas we analyze population dynamics.
Briefly, our results can be helpful, in perspective, to enrich multilayer neurocomputational models, especially the models by Pulvermüller et al. based on Hebbian learning. Of course, in our model (as in similar attractor models), features are assumed as input, i.e., the model presumes a previous neural processing stream that extracts these features from external data. Moreover, while some features are unimodal, involving just one sensory modality (such as “it barks,” “it meows,” or “it purrs” for the auditory modality, or “it is gray” for the visual one), others involve many sensory modalities together (such as “it has fur,” which can include visual and tactile modalities, or “it is of various sizes”) or more complex abstract concepts (like “it is viviparous,” “it hibernates,” or “it is affectionate”). This opens the problem of where these features can be extracted and organized in the brain. A characteristic of deep neural networks is the capacity to extract more and more abstract features and to combine them to solve problems like object classification or signal decoding [85]. In this regard, the multilayer organization in the models by Pulvermüller et al. [23, 25], based on sensory modal (visual and auditory), motor, and multimodal areas (semantic hubs), can be enriched by a Hebbian learning procedure that reflects correlated patterns, a hierarchy of concepts, and probabilistic differences among features.
Applications to Neural Disorders
A potential application of the present study concerns pathological behavior. The problem is so complex and multifaceted that it deserves much future study; just two preliminary analyses have been presented here. The first involves the training procedure and points out that insufficient depotentiation of synapses during learning can produce an “illogic” behavior in the network, propagating excitation from shared features toward category members. Interestingly, some psychiatrists have suggested that both Aristotelian and non-Aristotelian logic can be implemented in the brain (related to conscious and unconscious modalities, respectively; see also the work by Matte Blanco, summarized in Rayner [86]) and that a non-Aristotelian logic or paleo-logic can be typical of schizophrenic or autistic subjects [84, 87] and could characterize dreaming. This idea is speculative but may represent a stimulus for investigating this fascinating domain. Second, since dysfunction in GABAergic interneurons has been hypothesized in several psychiatric disorders, such as schizophrenia, autism, and other neurological conditions [88, 89], we simulated the effect of changing internal parameters related to fast inhibitory interneurons. The results indicate that fast inhibitory interneurons are essential to sustain the gamma rhythm, and that a decrease in the auto-inhibitory loop of this population plays a fundamental role in jeopardizing synchronization. This opens an interesting perspective for further studies devoted to a deeper analysis of the relationships between GABAergic fast interneurons, gamma rhythms, and neurological disorders.
Testable Predictions
Testable predictions are experimentally challenging, since the model considers the integration of neural activity in the gamma band across distant brain regions. Therefore, only some significant lines are presented here.
The first kind of prediction involves the asymmetric Hebb rule proposed for feature representation. It can be tested using feature listing tasks after training subjects with new artificial objects (for instance, new “objects” consisting of visual, auditory, motor, or other amodal features presented with different probabilities). Features with a smaller probability should be neglected during subsequent feature listing tasks.
Regarding the gamma rhythm, responses revealing a higher level of object recognition and appropriate feature listing should be associated with higher gamma power than poor responses, a difference already suggested by Garagnani et al. using a spiking neurocomputational model [23]. Furthermore, objects able to evoke more features should present higher gamma power. Last, gamma power should be present in unimodal regions (auditory, visual), motor regions, or amodal associative areas, depending on the single object and the kind of features spontaneously evoked. A large literature on this subject already exists, often under the name of “embodied” or “grounded cognition” (see [1, 90] for a review).
Another prediction concerns the idea that marginal features, when perceived, can oscillate out of phase with salient features; after their presentation, they do not participate in attractor dynamics and so are no longer evoked if removed from the input (unpublished simulations), whereas salient features, after object recognition, continue to oscillate in synchrony. Furthermore, the model predicts that object recognition from marginal features requires more time. However, this prediction is difficult to test because feature extraction from sensory data requires additional time, which may differ depending on the kind of feature. The comparison should involve the same features in different individuals, with different saliency or marginality depending on their experience.
Finally, testable predictions may concern the pivotal role of fast inhibitory interneurons in producing gamma oscillations and favoring synchronization. Tests (some already discussed in the literature; see [91]) can involve a pharmacological reduction of GABAergic activity or brain stimulation, and their consequences on gamma rhythms and feature listing responses.
Conclusions
This study presents a neurocomputational model of semantic memory based on a feature representation of objects, Hebbian learning, and gamma-band synchronization. Compared with previous models, we suggest using a new version of the Hebb rule, joined with probabilistic learning, to deal with correlated patterns, hierarchical representations, and different feature saliency. Through simulations, we show that the network can distinguish between superordinate and subordinate categories without errors and assigns different roles to salient and marginal features based on their frequency in previous experience. Furthermore, the simulations indicate that rapid synchronization among neural populations can be realized in the gamma band through trained excitatory and inhibitory synapses, provided the delay is 20–25 ms. Additionally, alterations in some parameters concerning the learning rule or the GABAergic interneurons provide interesting, although preliminary, indications about the possible causes of semantic disorders characterized by confusion among categories or the recovery of a distorted reality. The model’s topological structure is simpler than that of recent sophisticated models implementing different cortical regions and their neuroanatomical connections. Still, in perspective, our results can be incorporated into multilayer networks, improving their capacity to realize a more consistent semantic description based on a hierarchical and probabilistic representation of reality.
Data Availability
After acceptance, the program and the datasets generated during the simulations will be available in the repository Modeldb https://modeldb.yale.edu.
References
Barsalou LW. Grounded cognition. Annu Rev Psychol. 2008;59:617–45.
Martin A. GRAPES—Grounding representations in action, perception, and emotion systems: how object properties and categories are represented in the human brain. Psychon Bull Rev. 2016;23:979–90.
Pulvermüller F. Semantic embodiment, disembodiment, or misembodiment? In search of meaning in modules and neuron circuits. Brain Lang. 2013;127:86–103.
Rumelhart DE. Brain style computation: learning and generalization. In: An introduction to neural and electronic networks. USA: Academic Press Professional, Inc.; 1990. p. 405–20.
Rogers TT, McClelland JL. Précis of semantic cognition: a parallel distributed processing approach. Behav Brain Sci. 2008;31:689–714.
Rogers TT, Lambon Ralph MA, Garrard P, Bozeat S, McClelland JL, Hodges JR, et al. Structure and deterioration of semantic memory: a neuropsychological and computational investigation. Psychol Rev. 2004;111:205–35.
Plaut DC, Shallice T. Deep dyslexia: a case study of connectionist neuropsychology. Cogn Neuropsychol. 1993;10:377–500.
Plaut DC. Double dissociation without modularity: evidence from connectionist neuropsychology. J Clin Exp Neuropsychol. 1995;17:291–321.
Cree GS, McNorgan C, McRae K. Distinctive features hold a privileged status in the computation of word meaning: implications for theories of semantic memory. J Exp Psychol Learn Mem Cogn. 2006;32:643–58.
McRae K, Cree GS, Seidenberg MS, McNorgan C. Semantic feature production norms for a large set of living and nonliving things. Behav Res Methods. 2005;37:547–59.
O’Connor CM, Cree GS, McRae K. Conceptual hierarchies in a flat attractor network: dynamics of learning and computations. Cogn Sci. 2009;33:665–708.
Hopfield JJ. Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci U S A. 1984;81:3088–92.
McRae K, de Sa VR, Seidenberg MS. On the nature and scope of featural representations of word meaning. J Exp Psychol Gen. 1997;126:99–130.
Kawamoto AH. Nonlinear dynamics in the resolution of lexical ambiguity: a parallel distributed processing account. J Mem Lang. 1993;32:474–516.
Miikkulainen R. Dyslexic and category-specific aphasic impairments in a self-organizing feature map model of the lexicon. Brain Lang. 1997;59:334–66.
Silberman Y, Bentin S, Miikkulainen R. Semantic boost on episodic associations: an empirically-based computational model. Cogn Sci. 2007;31:645–71.
Siekmeier PJ, Hoffman RE. Enhanced semantic priming in schizophrenia: a computer model based on excessive pruning of local connections in association cortex. Br J Psychiatry. 2002;180:345–50.
Chen L, Lambon Ralph MA, Rogers TT. A unified model of human semantic knowledge and its disorders. Nat Hum Behav. 2017;1:0039.
Rogers TT, Cox CR, Lu Q, Shimotake A, Kikuchi T, Kunieda T, et al. Evidence for a deep, distributed and dynamic code for animacy in human ventral anterior temporal cortex. eLife. 2021;10:e66276.
Garagnani M, Wennekers T, Pulvermüller F. Recruitment and consolidation of cell assemblies for words by way of Hebbian learning and competition in a multi-layer neural network. Cognit Comput. 2009;1:160–76.
Garagnani M, Pulvermüller F. Conceptual grounding of language in action and perception: a neurocomputational model of the emergence of category specificity and semantic hubs. Eur J Neurosci. 2016;43:721–37.
Tomasello R, Garagnani M, Wennekers T, Pulvermüller F. Brain connections of words, perceptions and actions: a neurobiological model of spatio-temporal semantic activation in the human cortex. Neuropsychologia. 2017;98:111–29.
Garagnani M, Lucchese G, Tomasello R, Wennekers T, Pulvermüller F. A spiking neurocomputational model of high-frequency oscillatory brain responses to words and pseudowords. Front Comput Neurosci. 2016;10:145.
Tomasello R, Garagnani M, Wennekers T, Pulvermüller F. A neurobiologically constrained cortex model of semantic grounding with spiking neurons and brain-like connectivity. Front Comput Neurosci. 2018;12:88.
Henningsen-Schomers MR, Pulvermüller F. Modelling concrete and abstract concepts using brain-constrained deep neural networks. Psychol Res. 2022;86:2533–59.
Catricalà E, Della Rosa PA, Ginex V, Mussetti Z, Plebani V, Cappa SF. An Italian battery for the assessment of semantic memory disorders. Neurol Sci. 2013;34:985–93.
Catricalà E, Della Rosa PA, Plebani V, Perani D, Garrard P, Cappa SF. Semantic feature degradation and naming performance. Evidence from neurodegenerative disorders. Brain Lang. 2015;147:58–65.
Blumenfeld B, Preminger S, Sagi D, Tsodyks M. Dynamics of memory representations in networks with novelty-facilitated synaptic plasticity. Neuron. 2006;52:383–94.
Tang H, Li H, Yan R. Memory dynamics in attractor networks with saliency weights. Neural Comput. 2010;22:1899–926.
Kropff E, Treves A. Uninformative memories will prevail: the storage of correlated representations and its consequences. HFSP J. 2007;1:249–62.
Boboeva V, Brasselet R, Treves A. The capacity for correlated semantic memories in the cortex. Entropy (Basel). 2018;20:824.
Pereira U, Brunel N. Attractor dynamics in networks with learning rules inferred from in vivo data. Neuron. 2018;99:227–238.e4.
Tallon-Baudry C, Bertrand O. Oscillatory gamma activity in humans and its role in object representation. Trends Cogn Sci. 1999;3:151–62.
Bertrand O, Tallon-Baudry C. Oscillatory gamma activity in humans: a possible role for object representation. Int J Psychophysiol. 2000;38:211–23.
Axmacher N, Henseler MM, Jensen O, Weinreich I, Elger CE, Fell J. Cross-frequency coupling supports multi-item working memory in the human hippocampus. Proc Natl Acad Sci U S A. 2010;107:3228–33.
Roux F, Uhlhaas PJ. Working memory and neural oscillations: α-γ versus θ-γ codes for distinct WM information? Trends Cogn Sci. 2014;18:16–25.
Fries P. Rhythms for cognition: communication through coherence. Neuron. 2015;88:220–35.
Fries P. Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annu Rev Neurosci. 2009;32:209–24.
Wang XJ. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev. 2010;90:1195–268.
Merker B. Cortical gamma oscillations: the functional key is activation, not cognition. Neurosci Biobehav Rev. 2013;37:401–17.
Ray S, Maunsell JHR. Do gamma oscillations play a role in cerebral cortex? Trends Cogn Sci. 2015;19:78–85.
Canolty RT, Ganguly K, Kennerley SW, Cadieu CF, Koepsell K, Wallis JD, et al. Oscillatory phase coupling coordinates anatomically dispersed functional cell assemblies. Proc Natl Acad Sci U S A. 2010;107:17356–61.
Başar-Eroglu C, Strüber D, Schürmann M, Stadler M, Başar E. Gamma-band responses in the brain: a short review of psychophysiological correlates and functional significance. Int J Psychophysiol. 1996;24:101–12.
Herrmann CS, Munk MHJ, Engel AK. Cognitive functions of gamma-band activity: memory match and utilization. Trends Cogn Sci. 2004;8:347–55.
Clayton MS, Yeung N, Cohen KR. The roles of cortical oscillations in sustained attention. Trends Cogn Sci. 2015;19:188–95.
Tiitinen H, Sinkkonen J, Reinikainen K, Alho K, Lavikainen J, Näätänen R. Selective attention enhances the auditory 40-Hz transient response in humans. Nature. 1993;364:59–60.
Sauseng P, Klimesch W, Gruber WR, Birbaumer N. Cross-frequency phase synchronization: a brain mechanism of memory matching and attention. Neuroimage. 2008;40:308–17.
Strüber D, Başar-Eroglu C, Hoff E, Stadler M. Reversal-rate dependent differences in the EEG gamma-band during multistable visual perception. Int J Psychophysiol. 2000;38:243–52.
Pulvermüller F, Lutzenberger W, Preissl H, Birbaumer N. Spectral responses in the gamma-band: physiological signs of higher cognitive processes? NeuroReport. 1995;6:2059–64.
Herrmann CS, Fründ I, Lenz D. Human gamma-band activity: a review on cognitive and behavioral correlates and network models. Neurosci Biobehav Rev. 2010;34:981–92.
Colgin LL, Denninger T, Fyhn M, Hafting T, Bonnevie T, Jensen O, et al. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature. 2009;462:353–7.
Lisman J. The theta/gamma discrete phase code occurring during the hippocampal phase precession may be a more general brain coding scheme. Hippocampus. 2005;15:913–22.
Balakrishnan S, Pearce RA. Spatiotemporal characteristics and pharmacological modulation of multiple gamma oscillations in the CA1 region of the hippocampus. Front Neural Circuits. 2015;8:150.
Alekseichuk I, Turi Z, Amador de Lara G, Antal A, Paulus W. Spatial working memory in humans depends on theta and high gamma synchronization in the prefrontal cortex. Curr Biol. 2016;26:1513–21.
Renoult L, Irish M, Moscovitch M, Rugg MD. From knowing to remembering: the semantic-episodic distinction. Trends Cogn Sci. 2019;23:1041–57.
Garrard P, Lambon Ralph MA, Patterson K, Pratt KH, Hodges JR. Semantic feature knowledge and picture naming in dementia of Alzheimer’s type: a new approach. Brain Lang. 2005;93:79–94.
Salmon DP, Butters N, Chan AS. The deterioration of semantic memory in Alzheimer’s disease. Can J Exp Psychol. 1999;53:108–17.
Barr MS, Rajji TK, Zomorrodi R, Radhu N, George TP, Blumberger DM, et al. Impaired theta-gamma coupling during working memory performance in schizophrenia. Schizophr Res. 2017;189:104–10.
Spencer KM, Nestor PG, Niznikiewicz MA, Salisbury DF, Shenton ME, McCarley RW. Abnormal neural synchrony in schizophrenia. J Neurosci. 2003;23:7407–11.
Senkowski D, Gallinat J. Dysfunctional prefrontal gamma-band oscillations reflect working memory and other cognitive deficits in schizophrenia. Biol Psychiatry. 2015;77:1010–9.
Ursino M, Cona F, Zavaglia M. The generation of rhythms within a cortical region: analysis of a neural mass model. Neuroimage. 2010;52:1080–94.
Ursino M, Ricci G, Astolfi L, Pichiorri F, Petti M, Magosso E. A novel method to assess motor cortex connectivity and event related desynchronization based on mass models. Brain Sci. 2021;11:1479.
Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73:357–66.
Wendling F, Ansari-Asl K, Bartolomei F, Senhadji L. From EEG signals to brain connectivity: a model-based evaluation of interdependence measures. J Neurosci Methods. 2009;183:9–18.
Cona F, Ursino M. A neural mass model of place cell activity: theta phase precession, replay and imagination of never experienced paths. J Comput Neurosci. 2015;38:105–27.
Cona F, Zavaglia M, Massimini M, Rosanova M, Ursino M. A neural mass model of interconnected regions simulates rhythm propagation observed via TMS-EEG. Neuroimage. 2011;57:1045–58.
Gerfen CR, Economo MN, Chandrashekar J. Long distance projections of cortical pyramidal neurons. J Neurosci Res. 2018;96:1467–75.
Ursino M, Cuppini C, Magosso E. A neural network for learning the meaning of objects and words from a featural representation. Neural Netw. 2015;63:234–53.
Ursino M, Cuppini C, Cappa SF, Catricalà E. A feature-based neurocomputational model of semantic memory. Cogn Neurodyn. 2018;12:525–47.
Decker AL, Duncan K. Acetylcholine and the complex interdependence of memory and attention. Curr Opin Behav Sci. 2020;32:21–8.
Yee E, Thompson-Schill SL. Putting concepts into context. Psychon Bull Rev. 2016;23:1015–27.
Bonnefond M, Kastner S, Jensen O. Communication between brain areas based on nested oscillations. eNeuro. 2017;4:ENEURO.0153-16.2017.
Reyner-Parra D, Huguet G. Phase-locking patterns underlying effective communication in exact firing rate models of neural networks. PLoS Comput Biol. 2022;18:e1009342.
Bosman CA, Lansink CS, Pennartz CMA. Functions of gamma-band synchronization in cognition: from single circuits to functional diversity across cortical and subcortical systems. Eur J Neurosci. 2014;39:1982–99.
Gray CM, Singer W. Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc Natl Acad Sci U S A. 1989;86:1698–702.
Ursino M, Cesaretti N, Pirazzini G. A model of working memory for encoding multiple items and ordered sequences exploiting the theta-gamma code. Cogn Neurodyn. 2023;17:489–521.
Thomson AM, Bannister AP. Interlaminar connections in the neocortex. Cereb Cortex. 2003;13:5–14.
Shulz DE, Jacob V. Spike-timing-dependent plasticity in the intact brain: counteracting spurious spike coincidences. Front Synaptic Neurosci. 2010;2:137.
Zappacosta S, Mannella F, Mirolli M, Baldassarre G. General differential Hebbian learning: capturing temporal relations between events in neural networks and the brain. PLoS Comput Biol. 2018;14:e1006227.
Salkoff DB, Zagha E, Yüzgeç Ö, McCormick DA. Synaptic mechanisms of tight spike synchrony at gamma frequency in cerebral cortex. J Neurosci. 2015;35:10236–51.
Hasenstaub A, Shu Y, Haider B, Kraushaar U, Duque A, McCormick DA. Inhibitory postsynaptic potentials carry synchronized frequency information in active cortical networks. Neuron. 2005;47:423–35.
Ursino M, Magosso E, Cuppini C. Recognition of abstract objects via neural oscillators: interaction among topological organization, associative memory and gamma band synchronization. IEEE Trans Neural Netw. 2009;20:316–35.
Petkoski S, Jirsa VK. Transmission time delays organize the brain network synchronization. Philos Trans A Math Phys Eng Sci. 2019;377:20180132.
Arieti S. Special logic of schizophrenic and other types of autistic thought. Psychiatry. 1948;11:325–38.
Serre T, Oliva A, Poggio T. A feedforward architecture accounts for rapid categorization. Proc Natl Acad Sci U S A. 2007;104:6424–9.
Rayner E. An introduction to Matte Blanco’s bi-logic and its uses. London: Routledge; 1995.
Arieti S. From schizophrenia to creativity. Am J Psychother. 1979;33:490–505.
Marín O. Interneuron dysfunction in psychiatric disorders. Nat Rev Neurosci. 2012;13:107–20.
Nakazawa K, Zsiros V, Jiang Z, Nakao K, Kolata S, Zhang S, et al. GABAergic interneuron origin of schizophrenia pathophysiology. Neuropharmacology. 2012;62:1574–83.
Martin A. The representation of object concepts in the brain. Annu Rev Psychol. 2007;58:25–45.
Sohal VS. Transforming discoveries about cortical microcircuits and gamma oscillations into new treatments for cognitive deficits in schizophrenia. Am J Psychiatry. 2022;179:267–76.
Funding
Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. Work supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006)—A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).
Author information
Contributions
MU: conceptualization, formal analysis, methodology, software, visualization, writing—original draft, writing—review and editing. GP: formal analysis, methodology, software, visualization, writing—review and editing.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Ursino, M., Pirazzini, G. Construction of a Hierarchical Organization in Semantic Memory: A Model Based on Neural Masses and Gamma-Band Synchronization. Cogn Comput 16, 326–347 (2024). https://doi.org/10.1007/s12559-023-10202-y