This was the objective of our special issue:

Arguments about the brain and how it works are endless. Despite some conflicting conjectures and theories that have existed for decades without resolution, we have made significant progress in creating brain-like computational systems to solve some important engineering problems. It would be a good idea to step back and examine where we are in terms of our understanding of the brain and potential problems with the brain-like AI systems that have been successful so far. For this special issue of Cognitive Computation, we invite thoughtful articles on some of the issues that we have failed to address and comprehend in our journey so far in understanding the brain…We plan to publish a collection of short articles on a variety of topics that could be asking new questions, proposing new theories, resolving conflicts between existing theories, and proposing new types of computational models that are brain-like.

We summarize here the ten articles accepted for this special issue. Nine of them point out features of the brain that are missing from our current generation of AI models, which are essentially based on deep learning concepts. One article is about using AI as a simulation tool to better understand the mechanisms of the brain.

Aspects or Features of the Brain That Are Missing in Our Current Generation of AI Models

Achler [1] argues that current deep learning-based AI and the brain are very different from each other. For example, the brain does not use the onerous rehearsal of (independently and identically distributed) data needed for trial-and-error adjustment of weights, a requirement that some neuro-theorists implicitly adopt when they use back-propagation-type algorithms without explicitly implementing the rehearsal mechanisms with neurons. The brain is capable of true “one-shot” learning and does not require nearly as much data, and there is neuroscience evidence for “regulatory feedback” connections from the outputs back to the inputs during recognition, connections that are absent in feedforward recognition networks. He argues that it is rehearsal-based learning that makes current AI inflexible, unlike humans, and that this inflexibility limits automated learning and robotic applications. Moreover, he points to scalable examples where the “regulatory feedback” method of recognition exhibits cognitive phenomena and is as powerful in function as current models, or more so. Thus, he concludes that there is little evidence for back-propagation-type mechanisms in the brain, and none for the scale of data storage and rehearsal that such mechanisms require.
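
To make the contrast with feedforward recognition concrete, the sketch below illustrates the general idea of recognition by regulatory feedback: output nodes iteratively suppress the input evidence they already account for, so recognition is an inference-time dynamic rather than the product of rehearsal-trained weights. This is a minimal sketch assuming a simple divisive-feedback update; the variable names and the exact update rule are illustrative and are not taken from Achler’s article.

```python
import numpy as np

def regulatory_feedback(x, W, n_iter=20, eps=1e-9):
    """Recognition by regulatory feedback (illustrative sketch).

    x : input activity vector, shape (n_inputs,)
    W : fixed binary association matrix, shape (n_outputs, n_inputs);
        W[j, i] = 1 if output j uses input i. No trained weights,
        so no rehearsal-based weight adjustment is involved.
    """
    y = np.ones(W.shape[0])        # start with uniform output activity
    n = W.sum(axis=1)              # number of inputs feeding each output
    for _ in range(n_iter):
        f = W.T @ y + eps          # feedback: how much each input is "explained"
        x_adj = x / f              # unexplained inputs are boosted
        y = (y / n) * (W @ x_adj)  # outputs re-weighted by unexplained evidence
    return y

# Toy example: two overlapping patterns over four input features.
W = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]], dtype=float)
print(regulatory_feedback(np.array([1.0, 1.0, 0.0, 0.0]), W))  # favors output 0
```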

Minai [2] argues that current AI technology, as found in autonomous driving systems and LLMs, cannot evolve to produce the more natural intelligence found in living things with a nervous system, from insects to humans. He points out various limitations of current deep learning-based machine learning (DL/ML) technology, including that such systems (1) are meant for specific tasks only; (2) require large amounts of data and computational resources to learn; (3) rely mostly on supervised learning; (4) require offline data storage; (5) lack out-of-sample generalization capability; (6) are limited in the symbol processing required for human capabilities of language learning, reasoning, and planning; (7) have no causal inference capability; (8) have no concept of meaning and are therefore shallow; and (9) have no built-in “internal motivation” and therefore cannot be autonomous. Thus, with current DL/ML technology, it would be almost impossible to build robots that can learn and perform a variety of tasks over time. He posits that the key enablers of natural intelligence in insects and animals are integrated embodiment, modularity, synergy, developmental learning, and evolution, and that these should be part of any AI framework whose goal is to replicate general intelligence.

Stratton [3] focuses on the real nature of computation inside the brain, which uses spiking signals, and summarizes the general way the brain functions: (1) real neurons communicate with spikes, where spike timing carries information; (2) networks self-organize through spike-timing-dependent plasticity (STDP), which identifies causal interactions between neurons and is readily explainable; (3) abundant feedback connections build predictive models; (4) oscillations in populations of neurons and quasi-chaotic dynamic state transitions continuously and dynamically reconfigure neural circuits according to the computational needs of the task at hand; (5) patterns of neural activity form representations of perceptions, actions, and internal brain states; (6) spike conduction delays, oscillations, and short-term plasticity innately represent time in the brain; and (7) dopamine directly modulates the gain of STDP for model-free reinforcement learning (other neuromodulators have equally important effects). Comparing the brain with current ANNs (which he calls AI systems), he states: (1) ANNs require large amounts of data, whereas the brain, using transfer learning, needs only a handful of extra samples to learn; (2) ANNs require enormous computational resources to train and operate, whereas neuromorphic hardware can efficiently implement spiking neural networks (SNNs); (3) ANNs require complete retraining to acquire new knowledge, whereas the brain learns continuously using transfer learning and specialized memory structures, in a learning process activated during down-time (sleep); (4) ANNs are brittle and vulnerable to adversarial attacks, whereas brains generalize well, are less brittle, and are more resistant to such attacks; and (5) ANNs can perform only the task they were trained for, whereas brains can learn new tasks and switch between tasks easily. He acknowledges that some AI systems have been developed that do not quite have these weaknesses. He thinks SNNs are perhaps the only way to build AI systems that can truly replicate the abilities of the brain.
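
As a concrete illustration of points (2) and (7) above, the snippet below implements the textbook exponential STDP window with a dopamine-like multiplicative gain. This is a generic sketch of standard STDP, not code from Stratton’s article; the amplitudes and time constants are arbitrary illustrative values.

```python
import numpy as np

def stdp_update(dt, dopamine=1.0, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair under exponential STDP.

    dt       : t_post - t_pre in ms; positive means pre fired before post.
    dopamine : multiplicative gain on plasticity, standing in for
               reward-modulated (three-factor) learning.
    """
    if dt > 0:   # pre before post: potentiate (pre likely helped cause post)
        dw = a_plus * np.exp(-dt / tau_plus)
    else:        # post before pre: depress
        dw = -a_minus * np.exp(dt / tau_minus)
    return dopamine * dw

print(stdp_update(5.0))                 # small potentiation
print(stdp_update(5.0, dopamine=3.0))   # same pairing, amplified by reward
print(stdp_update(-5.0))                # depression
```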

Thivierge et al. [4] argue that despite significant improvements over the years in the computational capabilities of ANNs, it is doubtful that the fundamental ANN paradigm, characterized by a de novo, statistical, and dynamically poor learning approach, can ever replicate the capacities of the human brain. They provide several examples of these weaknesses of ANNs. They propose a paradigm shift: redesigning ANNs so that they (1) incorporate “neural building blocks,” (2) are non-statistical, and (3) exhibit dynamically rich patterns of neural activity. To understand how neuronal building blocks are assembled and used by the brain, they suggest looking at research on the human cognitome and how it builds cognitive functions from neural assemblies. Understanding the cognitome would lead to the discovery and understanding of the brain’s approach to task decomposition. They also emphasize the need for “one-shot” (or “few-shot”) and hierarchical learning.

Depannemaecker et al. [5] also argue that ANNs and deep learning (DL) should not be treated as models of the brain, as some suggest they are. Their contention is that DL cannot reproduce certain neurophysiological phenomena, such as epileptic seizures. They go on to discuss what is expected of a model of the brain. They suggest that to fully replicate the brain’s various behavioral and physical properties, we need to build a plurality of models at different scales (molecular, cellular, tissue, anatomical regions) and levels (biological implementation, algorithmic, and computational, i.e., functional). They envision the coexistence of a collection of models, each serving a different purpose, and conclude that scientific pluralism is fundamental to constructing a theory and model of the brain.

Schilling et al. [6] take a critical look at the current state of deep reinforcement learning (DRL) and its limitations regarding learning: (1) its inability to scale easily to complex control problems and (2) its difficulty in dealing with unpredictable environments. On the scaling issue, they point out that DRL systems cannot handle large input and action spaces and cannot form abstractions unless they are given vast amounts of data to learn from. To resolve this problem, they propose a new DRL architecture that includes many of the important characteristics of biological systems, specifically of the sensorimotor control system. They discuss in depth the following features of biological sensorimotor systems: (1) at the sensory input level, the ability to integrate a multitude of sensory inputs and provide later processing stages with more abstract representations that summarize the large input space, thus reducing its dimensionality; and (2) the ability to operate in a decentralized fashion, with local control modules that partition the high-dimensional action space, inducing a form of hierarchy in which top-down signals modulate lower-level control modules and the complex control problem is broken down into manageable smaller modules (see the sketch below). Overall, they argue that reinforcement learning can benefit from these biological principles of organization and decomposition, which can speed up learning and make controllers more robust and adaptive.
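
The decomposition they describe can be pictured in a few lines of code: local controllers each own a low-dimensional slice of the full action space, while a top-down signal modulates rather than replaces their output. This is only a structural sketch; the class names, the modulation scheme, and the toy six-legged example are invented for illustration and are not the architecture proposed in the article.

```python
import numpy as np

class LocalController:
    """Controls one low-dimensional slice of the full action space."""
    def __init__(self, n_sense, n_act, rng):
        self.W = rng.normal(scale=0.1, size=(n_act, n_sense))

    def act(self, local_obs, top_down_gain):
        # The top-down signal modulates, rather than replaces, local control.
        return top_down_gain * np.tanh(self.W @ local_obs)

rng = np.random.default_rng(0)
# E.g., six legs, each a local module over 4 sensors and 3 joints:
legs = [LocalController(4, 3, rng) for _ in range(6)]

obs = rng.normal(size=(6, 4))                     # per-module sensory slices
gains = np.array([1.0, 1.0, 0.5, 0.5, 1.0, 1.0])  # top-down modulation

action = np.concatenate([leg.act(obs[i], gains[i]) for i, leg in enumerate(legs)])
print(action.shape)  # (18,): the full action space, assembled from local pieces
```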

Monaco and Hwang [7], like many others, emphasize that the current version of AI, as reflected in deep learning models, will never be able to produce the fundamental features of biological intelligence. They suggest an integrative approach to constructing brain-like computing models, one that includes the dynamical systems view of cognition and the distributed feedback mechanisms of perceptual control theory. Their article offers a good discussion of how AI, cognitive science, and computational neuroscience developed their respective modeling biases: technology-driven incrementalism, neurocentric cognitivism, and synaptocentric emergentism. It also discusses neurodynamical computing, perceptual control, the appropriate level of modeling for a nonreductive approach, and how meaning arises in biological systems.
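
The perceptual-control idea they draw on can be stated in a few lines: a control unit acts so as to keep its *perception* near a reference value, rather than to emit a prescribed output. The loop below is a minimal generic illustration of that principle; the toy “world,” the gain value, and the function names are invented for illustration and do not come from the article.

```python
def pct_unit(reference, perceive, act_gain=0.5, steps=50):
    """Minimal perceptual control loop: act to cancel perceptual error."""
    state = 0.0
    for _ in range(steps):
        perception = perceive(state)      # what the unit senses
        error = reference - perception    # discrepancy from the reference
        state += act_gain * error         # action nudges the world
    return state

# Toy world: perception is the state plus a constant disturbance.
disturbance = 2.0
final = pct_unit(reference=10.0, perceive=lambda s: s + disturbance)
print(final + disturbance)  # perception settles near the reference (10.0)
```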

Baladron and Hamker [8] propose the concept of hierarchical interacting cortico-basal ganglia loops to better explain cognitive functions. Computational neuroscience currently views these loops as segregated and operating in parallel to implement different cognitive functions, but recent evidence suggests that the loops interact. Based on this evidence, the authors propose building new kinds of neuro-computational models that reflect these interactions. Using one such model, they explain behavioral data linked to the ideomotor theory. In general, such models can better represent more complex behaviors.

Garcia-Aguilar [9] points out that current deep learning AI models have been applied to EEG data almost exclusively to diagnose epilepsy and predict seizures. He argues that EEG research is primarily about the brain-body relationship: the study of psychiatric conditions, psychological features, cognitive processes, and brain states. He also argues that EEG research is theory-driven, based on prior knowledge of brain anatomy and the physiological functioning of brain cells, and that this theory-driven approach guides experimental research and the interpretation of results. AI models, he argues, lack these theoretical underpinnings, and he hopes that future AI models for EEG will take these limitations into account.

Using AI to Understand Brain Mechanisms

Fernandez-Leon and Acosta [10] argue that we still do not have a complete understanding of how cognitive maps emerge in the brain, and that simulation using AI models could resolve the conflicts between competing theories and serve as a tool for discovering the brain’s underlying organizing principles. They focus on spatial coding and navigation in the brain, which rely on place cells in the hippocampus and grid cells in the entorhinal cortex. In this context, they review the concept of parallel maps but argue that hierarchical constructs built from such parallel maps could be a better organizing principle for the brain and should be explored further through AI modeling. Such an exercise, they argue, could perhaps lead to a universal theory of cognition.
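
For a flavor of the spatial codes being modeled, the snippet below generates the classic idealized grid-cell firing map as a rectified sum of three plane waves at 60° offsets, a standard construction in computational models of the entorhinal cortex. It is not taken from the article, and the scale and phase parameters are arbitrary.

```python
import numpy as np

def grid_cell_rate(pos, scale=0.3, phase=(0.0, 0.0), theta0=0.0):
    """Idealized grid-cell firing rate: sum of three cosines at 60 deg offsets."""
    rate = np.zeros(pos.shape[:-1])
    for k in range(3):
        theta = theta0 + k * np.pi / 3                 # 0, 60, 120 degrees
        wave_vec = (4 * np.pi / (np.sqrt(3) * scale)) * np.array(
            [np.cos(theta), np.sin(theta)])
        rate += np.cos((pos - np.array(phase)) @ wave_vec)
    return np.maximum(rate, 0)  # rectify to a nonnegative firing rate

# Evaluate over a 1 m x 1 m arena; the peaks form a hexagonal lattice.
xs = np.linspace(0, 1, 100)
grid = grid_cell_rate(np.stack(np.meshgrid(xs, xs), axis=-1))
print(grid.shape)  # (100, 100)
```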