
CN114372568B - Brain-like chip and electronic equipment - Google Patents

Brain-like chip and electronic equipment Download PDF

Info

Publication number
CN114372568B
CN114372568B (application CN202210277287.7A)
Authority
CN
China
Prior art keywords
chip
brain
neural network
core
pulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210277287.7A
Other languages
Chinese (zh)
Other versions
CN114372568A (en)
Inventor
乔宁
邢雁南
西克·萨迪克·尤艾尔阿明
图芭·代米尔吉
迪兰·理查德·缪尔
白鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shizhi Technology Co ltd
Original Assignee
Shenzhen Shizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shizhi Technology Co ltd filed Critical Shenzhen Shizhi Technology Co ltd
Priority to CN202210277287.7A priority Critical patent/CN114372568B/en
Publication of CN114372568A publication Critical patent/CN114372568A/en
Application granted granted Critical
Publication of CN114372568B publication Critical patent/CN114372568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a brain-like chip and electronic equipment. Traditional artificial-intelligence chips face the dilemma of the Moore wall, the memory wall and the energy-efficiency wall; although many existing brain-like chips markedly improve computing energy efficiency, they still suffer from low integration density and limited energy efficiency, and cannot substantially meet the cost and power-consumption requirements of edge computing. To solve the problems of cost, power consumption and single-task capability, the invention integrates several types of dedicated spiking neural network cores in one chip; a network can invoke dedicated cores of different resource scales and characteristics, which not only markedly improves power consumption and cost, but also improves network performance through multi-network parallel processing of multi-modal information and cooperative decision-making. Thanks to its multitask parallel-processing capability, the disclosed brain-like chip is closer to a biological brain, and this more intelligent, low-cost and energy-efficient brain-like chip makes the intelligent interconnection of everything possible. The invention is applicable to the fields of brain-like chips and AIoT.

Description

Brain-like chip and electronic equipment
Technical Field
The present invention relates to a brain-like chip and an electronic device, and more particularly, to a multi-modal brain-like chip with multiple dedicated neural network kernels and an electronic device.
Background
The three most important indexes of an Artificial Intelligence (AI) chip are cost, power consumption and performance. However, the cost and power consumption of conventional AI chips based on the Artificial Neural Network (ANN) are currently relatively high, and such chips are difficult to adapt to the requirements of edge computing, so people have begun to turn their attention to brain-inspired neuromorphic computing.
Neuromorphic (also called brain-like) computing is a new computing architecture developed in recent years. By running a Spiking Neural Network (SNN) on neuromorphic hardware, commonly known as a brain-like chip, real-time inference on input signals can be achieved.
IBM's TrueNorth is a typical representative of brain-like chips and belongs to the distributed-memory architecture. By means of the crossbar structure, any two neurons in one core can be physically connected through a synapse circuit, as shown in fig. 1, which enables high flexibility in the design of the spiking neural network. However, the hardware redundancy of such a chip is very high, and a large hardware resource overhead is needed to guarantee this flexibility, so the power consumption and silicon cost of the chip are high, especially when the spiking neural network is large. As a compromise, the structure shown in fig. 1 can be used as one core of a chip and arranged in a routing fabric, with routing communication replacing crossbar direct connection; this reduces the excessive hardware redundancy to some extent, which is why brain-like chips are mostly multi-core. A typical architecture is shown in fig. 2, where multiple computing cores are distributed in a routing system.
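The all-to-all connectivity (and its cost) of a crossbar core can be illustrated in software. The sketch below is purely hypothetical, not taken from any cited chip: a dense weight matrix lets any input axon drive any neuron, so arbitrary connectivity fits, but storage grows with axons times neurons even when the network is sparse.

```python
# Minimal sketch of one crossbar core (hypothetical sizes and weights).
# A dense weight matrix connects every input axon to every neuron, so
# any connectivity pattern fits, but silicon cost grows as
# O(axons x neurons) even for sparse networks.
def crossbar_step(weights, input_spikes, potentials, threshold):
    """One timestep: integrate spikes through the crossbar, fire, reset."""
    out = []
    for j, v in enumerate(potentials):
        # each neuron j sums the weights of all axons that spiked
        v += sum(weights[i][j] for i in input_spikes)
        if v >= threshold:
            out.append(j)
            v = 0          # reset after firing
        potentials[j] = v
    return out, potentials

# 4 axons x 3 neurons; axons 0 and 2 both drive neuron 1 strongly
w = [[0, 2, 0],
     [1, 0, 0],
     [0, 2, 0],
     [0, 0, 1]]
v = [0, 0, 0]
fired, v = crossbar_step(w, input_spikes=[0, 2], potentials=v, threshold=3)
print(fired)  # neuron 1 crosses threshold: [1]
```

Note that the 4x3 matrix stores twelve weights to realize three actual connections, which is the redundancy the surrounding text criticizes.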
However, the redundancy of this solution is still too high; referring to fig. 3, the neuron integration density of such chips is usually difficult to push beyond 3k/mm². One technical approach to increasing neuron integration density and reducing silicon cost is to rely on advanced process nodes, as in the Loihi-2 chip, but this does not fundamentally solve the problem. In addition, known brain-like chips can only deploy a single spiking neural network and thus address only a single type of task, so the function of the chip is limited.
The invention aims to disclose a brain-like chip and electronic equipment that can deploy multiple spiking neural networks simultaneously and run them efficiently (high energy-efficiency ratio, low silicon cost, high neuron integration density).
Disclosure of Invention
In order to solve or alleviate some or all of the technical problems, the invention is realized by the following technical scheme:
a brain-like chip, comprising N cores, wherein N is a positive integer; the N cores comprise at least two types of impulse neural network cores; the brain chip also comprises a routing module; the routing module is configured to establish communication connections between the N cores.
In a certain class of embodiments, the at least two types of spiking neural network cores comprise spiking convolutional neural network cores.
In some class of embodiments, the at least two types of spiking neural network cores are two or more of a spiking convolutional neural network core, a spiking recurrent neural network core, and a generic spiking neural network core.
In certain class of embodiments, the universal spiking neural network core has a smaller resource size than the spiking convolutional neural network core or the spiking recurrent neural network core.
In certain class of embodiments, the spiking convolutional neural network core is a hardware circuit dedicated to executing a convolutional neural network.
In some class of embodiments, the spiking recurrent neural network core is a hardware circuit dedicated to executing a recurrent neural network.
In certain embodiments, the recurrent neural network is a reservoir computing network or a long short-term memory network.
In certain class of embodiments, the generic spiking neural network core is a hardware circuit based on a crossbar structure.
In one class of embodiments, the brain-like chip further comprises an interface module, the interface module comprising: a digital-to-pulse conversion module or an analog-to-pulse conversion module.
In certain class of embodiments, the brain-like chip further comprises a preprocessing module used to perform one or more of noise filtering, image segmentation, and splitting-and-normalization functions.
In certain classes of embodiments, the brain-like chip is used to process one or more of visual, auditory, tactile, olfactory, inertial sensor signals.
In certain class of embodiments, the core in the brain-like chip is implemented as a digital circuit, a digital-to-analog hybrid circuit, or a circuit based on a phase-change or resistance-change material.
In certain class of embodiments, the resistive-switching-material-based circuit refers to a memristor-based circuit.
In certain embodiments, the brain-like chip is implemented as a synchronous circuit, an asynchronous circuit, or a hybrid of synchronous and asynchronous circuits.
In one class of embodiments, the sensor is integrated with the interface module and the N cores in the same chip.
In certain types of embodiments, cores of the same type have different neuron or/and synapse resources, or cores of the same type have different fan-in or/and fan-out capabilities.
In some embodiments, the routing module is a mesh routing, a tree routing, or a hybrid mesh and tree routing.
In some class of embodiments, at least two spiking neural networks are deployed in a chip.
In certain embodiments, the decision is coordinated according to the output results of multiple networks.
In a certain class of embodiments, at least one spiking neural network deployed in the brain-like chip invokes at least two types of spiking neural network cores.
In a class of embodiments, the brain-like chip includes a learning engine. The learning engine comprises one or more of a learning engine dedicated to the spiking convolutional neural network cores, a learning engine dedicated to the spiking recurrent neural network cores, and a learning engine dedicated to the generic spiking neural network cores.
In a certain class of embodiments, the brain-like chip includes a configuration parameter updating module, which is configured to download configuration parameters such as synaptic weights from a network.
In a certain class of embodiments, the brain-like chip is a 3D packaged chip or a 3D integrated circuit.
In certain embodiments, the core routers form a tree structure with the chip routers and the mesh routers, and the mesh routers are arranged in a two-dimensional mesh belonging to one layer of a three-dimensional mesh.
An electronic device, wherein the electronic device is provided with the brain-like chip described in any one of the above items, the brain-like chip is used to process environmental signals, and the electronic device makes a corresponding response based on the output result of the brain-like chip.
Some or all embodiments of the invention have the following beneficial technical effects:
1. For the first time, endows a brain-like chip with multi-modal information processing capability;
2. For the first time, endows a brain-like chip with multi-network parallel inference capability, realizing single-chip multi-task processing;
3. Makes multi-network cooperative decision-making possible. The networks running in parallel may receive different types of information, such as voice and vision, vision and smell, or hearing and touch; after the combined information is processed in parallel by the multiple networks, a cooperative decision can be made, reducing false triggers and yielding a more accurate inference result.
4. High-efficiency information processing capability. Because multiple dedicated cores are included, the speed and efficiency of on-chip inference are extremely high.
5. Hardware support for more flexible network models. In network design, different dedicated cores can be invoked to construct more complex and refined spiking neural network models, and the generic core provides a more flexible option.
Further advantages will be further described in the preferred embodiments.
The technical solutions/features summarized above are detailed in the detailed description, so their scopes may not be exactly identical. The technical features disclosed in this section, together with those disclosed in the subsequent detailed description and in parts of the drawings not explicitly described in the specification, disclose further aspects in any mutually reasonable combination.
The technical solution formed by combining technical features disclosed at any position of the invention is intended to support generalization of the technical solution, amendment of the patent document, and disclosure of the technical solution.
Drawings
FIG. 1 is a diagram of a distributed memory chip architecture connected by an interdigitated structure;
FIG. 2 is a distribution diagram of routes and cores;
FIG. 3 is a logarithmic quadrant graph of power density and neuron integration for a conventional brain-like chip;
FIG. 4 is a schematic diagram of a brain chip architecture according to the present disclosure;
FIG. 5 is a detailed diagram of a type of brain chip architecture disclosed in the present invention;
FIG. 6 is a schematic diagram of a reservoir computing core;
FIG. 7 is a schematic diagram of two different networks deployed on a chip in a configuration;
FIG. 8 compares silicon area and power consumption when a spiking neural network of mouse-brain neural scale is constructed on known brain-like chips.
Detailed Description
Since the various alternatives cannot be exhaustively described, the following clearly and completely describes the main points of the technical solutions in the embodiments of the present invention with reference to the drawings. Other technical solutions and details not disclosed below can generally be achieved by conventional means in the art and are not described in detail herein.
Unless defined otherwise, a "/" at any position in the present disclosure means a logical "or". Ordinal words such as "first" and "second" at any position of the invention are used merely as distinguishing labels and imply neither an absolute order in time or space nor that terms carrying such ordinals are necessarily different.
The present invention may be described in terms of various elements combined into various embodiments, which may in turn be combined into various methods and articles of manufacture. In the present invention, even where a point is described only when introducing a method/product scheme, the corresponding product/method scheme is meant to explicitly include that technical feature.
When a step, module or feature is described or included at any position of the present invention, its existence is not implied to be exclusive; those skilled in the art can obtain other embodiments by other technical means based on the technical solutions disclosed herein. Based on the points described in the embodiments, those skilled in the art can substitute, delete, add, combine, or reorder some technical features to obtain a technical solution still following the concept of the present invention. Such solutions, which do not depart from the technical idea of the invention, are also within its scope of protection.
The invention discloses an efficient and synergistic brain-like chip (chip for short). Referring to fig. 4, a schematic diagram of a preferred brain-like chip architecture of the present disclosure is shown. The chip includes a Spiking Neural Network (SNN) processor, or further includes an interface module portion.
In some embodiments, the SNN processor may be implemented by a synchronous circuit, an asynchronous circuit, or a mixture of the two, such as globally asynchronous but locally synchronous, or core-level synchronous but neuron-level asynchronous. Synchronous and asynchronous circuits may be designed in the same chip. For a small-scale network core, the clock contributes little to power consumption, so an ultra-low-power design using a synchronous circuit is feasible.
In a certain embodiment, circuits such as neurons and synapses in the SNN processor may be implemented by digital circuits, by digital-analog hybrid circuits, or based on novel nano devices such as phase-change or resistive materials (e.g., memristors). However, memristors, digital-analog hybrid circuits and the like suffer from fundamental problems such as device mismatch, which affects network performance; this problem can be solved by prior art 1 (Chinese invention patent 202110550756.3).
The SNN processor can comprise a plurality of cores, and the cores may be of equal scale; preferably, the cores may be of unequal scale.
Optionally, the interface module includes a conversion module for converting the non-pulse event signal into a pulse event (pulse for short), which may include a digital-to-pulse conversion module or/and an analog-to-pulse conversion module.
Optionally, the interface module of the chip includes various preprocessing modules. Preprocessing here may include noise reduction on pulse events and various preprocessing of Dynamic Vision Sensor (DVS) output pulses, such as image segmentation.
Preferably, the interface module of the chip includes both the conversion module and the preprocessing module.
In some type of embodiment, only the SNN processor is included in the chip, and the interface module is implemented as a stand-alone device.
Reference is made to fig. 5, which is a detailed schematic diagram of a preferred chip architecture of the present disclosure. Preferably, the brain chip can comprise an interface module; optionally, the interface module comprises one or more of a preprocessing module, a digital-to-pulse conversion module, and an analog-to-pulse conversion module.
The digital-to-pulse conversion module is used to convert digital signals generated by various sensors into pulse sequences. For example, the digital signal may be an image signal generated by a conventional frame-based camera, and the pulse sequence is obtained by converting pixel values into spikes. As another example, the packaged data output by a digital gyroscope is converted (directly or after transformation of the read values) into a pulse sequence.
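One common scheme for such pixel-to-spike conversion (assumed here for illustration; the patent does not fix a specific encoding) is rate coding: a brighter pixel emits proportionally more spikes per time window. A minimal deterministic sketch:

```python
# Sketch of rate coding (hypothetical encoding, not the patented circuit):
# a pixel value in [0, 255] becomes a spike train whose spike count over
# the window is proportional to the intensity. Deterministic phase
# accumulation is used here instead of Poisson sampling.
def rate_encode(pixel, n_steps, max_value=255):
    """Return a list of 0/1 spikes of length n_steps."""
    rate = pixel / max_value          # expected spikes per step, in [0, 1]
    spikes, acc = [], 0.0
    for _ in range(n_steps):
        acc += rate
        if acc >= 1.0:                # emit a spike each time the
            spikes.append(1)          # accumulator crosses 1
            acc -= 1.0
        else:
            spikes.append(0)
    return spikes

train = rate_encode(pixel=128, n_steps=10)
print(sum(train))  # mid-grey spikes on about half the steps: 5
```

A dark pixel (value 0) produces no spikes at all, which is what makes spike-based transmission sparse and power-efficient for mostly dark or static scenes.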
The analog-to-pulse conversion module is used to convert analog signals generated by various sensors into pulse sequences. For example, a sound signal is band-pass filtered and full-wave rectified, and then converted into a pulse sequence. The present invention places no limitation on the specific sensor type, nor on whether the specific data values are further transformed.
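The filter-rectify-spike chain just described can be illustrated with a toy single-channel version (all coefficients hypothetical; a real front end would use a filter bank, one band per channel):

```python
import math

# Toy single-channel audio-to-spike chain: a crude band-pass stage
# (difference of two one-pole low-pass filters), full-wave rectification,
# then an integrate-and-fire stage that emits a spike each time the
# rectified energy accumulates past a threshold. All constants are
# illustrative, not taken from the patent.
def analog_to_spikes(samples, a_fast=0.5, a_slow=0.05, threshold=2.0):
    lp_fast = lp_slow = acc = 0.0
    spike_times = []
    for t, x in enumerate(samples):
        lp_fast += a_fast * (x - lp_fast)
        lp_slow += a_slow * (x - lp_slow)
        band = lp_fast - lp_slow       # crude band-pass output
        acc += abs(band)               # full-wave rectify and integrate
        if acc >= threshold:
            spike_times.append(t)      # fire and reset
            acc = 0.0
    return spike_times

# a burst of a tone surrounded by silence spikes only once sound arrives
tone = [0.0] * 20 + [math.sin(0.8 * t) for t in range(40)] + [0.0] * 20
times = analog_to_spikes(tone)
print(times[0] >= 20)  # True: no spikes during the leading silence
```

The spike rate thus tracks the in-band signal energy, which is the quantity a downstream SNN core actually needs.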
The preprocessing module, for example, receives signals output by various sensors, converts the signals into pulse event sequences (pulse sequences for short), and performs various preprocessing operations on the pulse sequences, such as denoising the pulse event sequences according to prior art 2 (chinese patent 202111476156.3), splitting and normalizing the pulse event sequences according to prior art 3 (chinese patent 202210051924.9), and so on.
The preprocessed pulse sequence can come from the digital-to-pulse or analog-to-pulse conversion module; it can also come directly from a vision sensor such as a DVS, which can directly generate a pulse sequence consisting of pulse events. Preferably, the chip includes a DVS preprocessing module configured for noise filtering, image segmentation, and similar functions.
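One widely used form of DVS noise filtering, simplified here as an illustration (the patent itself refers to prior art 2 for denoising), keeps an event only if a neighboring pixel produced an event recently; isolated events are treated as background-activity noise:

```python
# Sketch of a spatiotemporal correlation filter for DVS events
# (illustrative only). An event (t, x, y) survives only if some
# 8-neighbour pixel fired within the last `window` time units;
# lone events are dropped as background noise.
def denoise(events, width, height, window):
    last = [[None] * width for _ in range(height)]  # last event time per pixel
    kept = []
    for t, x, y in events:                          # events sorted by time
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    ts = last[ny][nx]
                    if ts is not None and t - ts <= window:
                        kept.append((t, x, y))
                        break
            else:
                continue
            break                                   # event already kept
        last[y][x] = t
    return kept

events = [(0, 5, 5), (1, 6, 5), (2, 6, 6), (50, 0, 0)]  # last one is isolated
print(denoise(events, width=10, height=10, window=5))
# -> [(1, 6, 5), (2, 6, 6)]: the correlated cluster survives, noise is dropped
```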
The aforementioned sensors may include one or more of a visual sensor, an auditory sensor (such as a microphone), a tactile sensor, an olfactory sensor, and an inertial sensor.
For the vision sensor, in addition to DVS, a conventional frame-based image sensor may be used, but a DVS sensor is preferred. In some class of embodiments, the chip supports binocular sensors: for example, both eyes DVS sensors, one eye a DVS sensor and the other a frame-based image sensor, or both eyes frame-based image sensors. Furthermore, one of the eyes may be integrated on the chip while the other is connected to the SNN processor through a communication cable and the digital-to-pulse conversion module; or both may be electrically coupled to the chip via communication cables.
Preferably, the interface module and the SNN processor are integrated in the same die, and together form a chip.
In some embodiments, the interface module or a portion thereof is separate from the SNN processor and is located in a different die. The invention is not limited to the specific embodiments of the type of interface module and its combination, the production and manufacture of the interface module and the SNN processor.
An SNN processor includes a number of cores (e.g., a positive integer N). The cores comprise two or more of a spiking convolutional neural network (SCNN) core, a spiking recurrent neural network (SRNN) core, and a generic SNN core.
Preferably, the sensor and interface module and the N cores of the SNN processor are integrated in the same chip. The sensors herein may be only some or all of the sensors supported by the chip.
Alternatively, in some embodiments, the SCNN core may be implemented as one of the solutions disclosed in prior art 4 (WO 2020/207982a 1).
For example, the chip includes a plurality of SCNN-type cores, and different cores can be invoked to form a spiking neural network; preferably, the invocation is freely configured by the user. After several different sets of cores are invoked, several different spiking neural networks can be established. These networks may be configured to handle visual information and tasks such as gesture recognition, and can also be configured to handle tasks such as voiceprint recognition. It is generally recognized that SCNNs are better at handling visual information, but this is not a limitation.
Since multiple networks can be deployed simultaneously and the invocations between networks are independent, the SCNN cores can process multiple tasks in parallel. For example, after image-segmentation preprocessing, the pulse sequences corresponding to the segmented images of different regions are fed into different spiking neural networks (each invoking different SCNN cores), and inference on each region's image is completed independently to obtain the corresponding inference results.
Preferably, different SCNN cores have different hardware scales, and they may be configured with different fan-in or/and fan-out capabilities. This unequal design matches the hardware to the design of the spiking neural network and accelerates network execution.
Optionally, in some class of embodiments, the SRNN core is implemented as a core that exclusively runs a spiking recurrent neural network. While not a limitation, it is generally believed that SRNNs are better at handling voice and other temporal information.
For example, as a specific case of the SRNN, reservoir computing is a recurrent neural network approach with low training cost and low hardware overhead. A large-scale random sparse network is used as the information processing medium, and only the connection weights of part of the neurons, such as those from the reservoir neurons to the readout layer, are updated in training. Reservoir computing can be divided into at least Liquid State Machines (LSMs) and Echo State Networks (ESNs). In other words, the SRNN core may be implemented as a core dedicated to reservoir computing.
Referring to fig. 6, shown is a schematic diagram of an implementation of a reservoir computing core in one type of embodiment. An input signal, such as a voice signal, is processed by the digital-to-pulse or analog-to-pulse conversion module to generate spikes. The spikes are sent into the reservoir, and through the inference of the neurons in the reservoir a decision result is obtained by the readout layer. The neuron connections within the reservoir may be randomly initialized at generation time and, once initialized, are not altered. Preferably, an optimal reservoir associated with the problem can be selected and generated according to the specific task content.
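The readout-only training described above can be sketched with a tiny echo state network (pure Python, hypothetical dimensions and constants; a hardware reservoir would hold spiking neurons rather than the real-valued ones used here):

```python
import math, random

# Tiny echo state network sketch: the reservoir weights are random,
# sparse, and frozen after initialization; only the readout weights
# are trained (here with the delta rule). All sizes are illustrative.
random.seed(0)
N = 20
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W_res = [[random.uniform(-0.3, 0.3) if random.random() < 0.2 else 0.0
          for _ in range(N)] for _ in range(N)]   # sparse, fixed forever

def run_reservoir(inputs):
    """Drive the reservoir and collect its state at every step."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W_res[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(list(x))
    return states

# toy task: reproduce the input delayed by one step (needs short memory)
inputs = [math.sin(0.3 * t) for t in range(200)]
targets = [0.0] + inputs[:-1]
states = run_reservoir(inputs)

W_out = [0.0] * N
lr = 0.05
for _ in range(30):                       # train the readout layer only
    for s, y in zip(states, targets):
        err = y - sum(w * si for w, si in zip(W_out, s))
        W_out = [w + lr * err * si for w, si in zip(W_out, s)]

mse = sum((sum(w * si for w, si in zip(W_out, s)) - y) ** 2
          for s, y in zip(states, targets)) / len(targets)
print(mse < 0.1)  # True: the readout learns the delayed echo
```

Because `W_in` and `W_res` never change, a hardware reservoir core only needs writable storage for the readout weights, which is exactly the low-training-cost, low-overhead property claimed above.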
For example, as a special case, the neural network in the reservoir may also adopt the WaveSense network topology disclosed in prior art 5 (Chinese patent 202110879189.6).
Preferably, the prior art 6 (chinese invention patent 202110950640.9) can be used to eliminate the weight data copy, which is beneficial to further reduce the power consumption of the core circuit.
For example, a Long Short-Term Memory (LSTM) network is a type of recurrent neural network suitable for processing and predicting events with long intervals and delays in a time series. In one class of embodiments, the SRNN core is a circuit core that exclusively runs an LSTM network.
Optionally, a circuit structure dedicated to expressing leakage may be included in the core to accommodate neuron models with leakage characteristics.
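The leakage mentioned above is the defining feature of the leaky integrate-and-fire (LIF) neuron model; a discrete-time sketch with hypothetical constants:

```python
# Discrete-time leaky integrate-and-fire neuron (illustrative constants):
# the membrane potential decays toward zero each step (the "leak"),
# integrates the input current, and fires + resets on crossing threshold.
def lif_run(input_current, leak=0.9, threshold=1.0):
    v, out = 0.0, []
    for i in input_current:
        v = leak * v + i          # leak, then integrate
        if v >= threshold:
            out.append(1)
            v = 0.0               # hard reset after firing
        else:
            out.append(0)
    return out

# constant sub-threshold drive: the neuron fires periodically, not every step
print(lif_run([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The leak factor is what a dedicated leakage circuit would implement in silicon; without it (`leak=1.0`) the neuron is a plain integrate-and-fire model.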
Preferably, a certain degree of redundancy is designed into the circuitry in the dedicated core to allow for some scalability of the operating private network.
Similar to the aforementioned SCNN cores, different SRNN cores are invoked by different spiking neural networks, and multiple spiking neural networks independently process their corresponding input pulse sequences in parallel.
Optionally, in certain class of embodiments, the generic SNN core is implemented as a cross-bar structure based computational core.
Optionally, the resources of the neurons or/and synapses of the different core configurations are different. In this way, cores with resources of the appropriate size may be invoked according to different task requirements.
The aforementioned various types of cores may be invoked by the same spiking neural network. For example, a certain spiking neural network invokes a first number of SCNN cores, a second number of SRNN cores, and a third number of generic SNN cores, wherein at least two of the first, second, and third numbers are non-zero. The advantage of this networking approach is that a more flexible and versatile spiking neural network can be constructed and implemented very efficiently (high energy-efficiency ratio, low silicon cost) by invoking the various types of dedicated cores.
The general SNN core is not as efficient in computation as the dedicated core and has high resource redundancy but high flexibility. Therefore, the chip configured with the universal SNN core can call the universal SNN core to obtain hardware support when the impulse neural network needs high flexibility. In other words, the chip architecture makes a breakthrough in both dedicated efficiency and flexibility.
Preferably, the hardware resource scale of the generic SNN core is relatively low within the whole chip, smaller than that of the SCNN cores and SRNN cores, so it is commonly called the "little core".
Preferably, the chip is designed with a testability module for testing the functional integrity of the entire circuit. For a commercial chip, testability is one of the most important considerations.
Alternatively, the SCNN core, the SRNN core, or the generic SNN core may each be implemented as a synchronous circuit or an asynchronous circuit. Within one SNN processor, the cores can independently use different circuit types; asynchronous-circuit cores and synchronous-circuit cores are allowed to coexist, for example with the SCNN core implemented as an asynchronous circuit and the SRNN core as a synchronous circuit core. Compared with traditional synchronous circuits, an asynchronous circuit has no global clock and communicates through asynchronous handshake protocols; idle modules consume no dynamic power, there is no longest-path restriction, handshaking allows direct links, and scalability is high, so it fits the bionic brain-like computing paradigm well.
At the same time, another network may be deployed in the chip, which may be configured to call some of the remaining cores as needed. In some class of embodiments, at least two different spiking neural networks are deployed in the chip, and these networks are used to handle different tasks. For example, one or more input signals of vision, hearing, smell and touch are processed respectively.
Referring to fig. 7, a schematic diagram of two different networks deployed on a chip in a certain configuration is shown. The left side of the figure shows two SCNN cores configured into one spiking neural network, and the right side shows an SRNN core and a generic SNN core configured into another spiking neural network; the routing communication paths between the neurons in the networks are also shown. Different users can configure multiple cores of different combinations, types and scales according to their requirements.
Because the networks can process the features of their corresponding input pulse sequences in parallel, the chip makes it convenient to construct networks that process multiple tasks (multiple types of tasks) simultaneously; and this multi-task processing capability makes multi-modal information decision-making possible, i.e., the chip can make a cooperative decision based on multiple types or paths of input information instead of a one-sided decision based on a single, unreliable input. This gives the brain-like chip disclosed in the present invention a multi-modal, rich information processing capability similar to that of a biological brain.
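The cooperative decision described above can be sketched as a simple fusion rule over per-network class scores. The rule and numbers below are hypothetical (the patent does not prescribe a fusion formula): weight each network by its confidence and commit only when the fused confidence is high enough.

```python
# Sketch of multi-modal cooperative decision (illustrative fusion rule):
# each on-chip network emits per-class scores; the chip fuses them by
# confidence-weighted averaging and only commits to a decision when the
# fused confidence clears a threshold, otherwise it abstains.
def cooperative_decision(per_network_scores, min_confidence=0.6):
    n_classes = len(per_network_scores[0])
    fused = [0.0] * n_classes
    total = 0.0
    for scores in per_network_scores:
        conf = max(scores)                    # weight a network by confidence
        total += conf
        for c, s in enumerate(scores):
            fused[c] += conf * s
    fused = [f / total for f in fused]
    best = max(range(n_classes), key=lambda c: fused[c])
    return best if fused[best] >= min_confidence else None   # None = abstain

vision = [0.8, 0.1, 0.1]      # vision network: strongly class 0
audio = [0.6, 0.3, 0.1]       # audio network: also class 0
print(cooperative_decision([vision, audio]))   # -> 0 (modalities agree)

vision = [0.4, 0.35, 0.25]    # both networks unsure and conflicting
audio = [0.3, 0.45, 0.25]
print(cooperative_decision([vision, audio]))   # -> None (abstain)
```

Abstaining on disagreement is what reduces the false triggers mentioned in the beneficial effects above.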
Furthermore, since these networks can be completely re-customized on demand, the networks on the chip are reconfigurable, which broadens the range of uses of the chip.
Alternatively, the routing in the chip may employ a pure mesh or a pure tree routing scheme. Preferably, the routing in the chip adopts a hybrid scheme of mesh routing and tree routing (still referring to fig. 5): the top layer uses mesh routing and the bottom layer uses tree routing. This routing scheme minimizes storage requirements while maximizing programmable flexibility; see prior art 7 (Chinese patent 201680024420.0) for details.
Further, for 3D integrated circuit (single-chip) or 3D packaged chip embodiments (such as dies stacked and interconnected by through-silicon vias, TSV), it is preferable to adopt the aforementioned tree-and-mesh hybrid routing architecture within a single-layer integrated circuit (such as one die), and a (three-dimensional) mesh routing architecture between the layers. In other words, the core routers, the chip routers, and the mesh routers constitute a tree structure, and the mesh routers are arranged in a two-dimensional mesh that forms one layer of a three-dimensional mesh.
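The hybrid routing idea can be sketched as a toy cost model: within a layer, events climb and descend a router tree; between stacked layers, mesh routers hop vertically. The addressing scheme and hop costs below are illustrative assumptions, not the patented routing tables.

```python
# Hedged sketch of the tree-within-layer / mesh-between-layers routing.
# Leaves of a binary router tree are numbered like heap nodes, so
# integer division by 2 walks one level up toward the root.

def tree_hops(src, dst):
    """Hops between two leaves of a binary router tree, given leaf ids."""
    hops = 0
    while src != dst:            # climb until the two paths merge
        if src > dst:
            src //= 2
        else:
            dst //= 2
        hops += 1
    return hops

def route_cost(src, dst):
    """src/dst = (layer, leaf_id): mesh hops across layers + tree hops."""
    (layer_s, node_s), (layer_d, node_d) = src, dst
    mesh = abs(layer_s - layer_d)  # vertical mesh hops between dies
    tree = tree_hops(node_s, node_d)
    return mesh + tree

print(route_cost((0, 4), (0, 5)))  # sibling leaves on the same die
print(route_cost((0, 4), (2, 4)))  # same leaf position, two dies apart
```

The point of the hybrid is visible in the model: local traffic never pays mesh-hop cost, while the mesh supplies the scalable long-range paths between dies.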
Alternatively, the chip may be configured with on-chip or on-line learning capability. In this field, on-line learning or on-chip learning generally refers to the ability of a brain-like chip to update parameters such as synapse weights while the spiking neural network is running.
Optionally, the chip is provided with a configuration parameter updating module configured to download configuration parameters such as synapse weights from a network (the Internet, the Internet of Things, and the like) and write them into the SNN processor. This module allows a chip deployed at the edge to have its configuration parameters modified later and to execute different preset functions. For example, a chip originally deployed to identify whether a person has fallen down can, after a networked upgrade, be configured to identify whether the person is speaking, or its network model can be updated to improve performance.
In a certain class of embodiments, on-chip learning is performed for the different types of cores by an SCNN learning engine, an SRNN learning engine, and a generic SNN learning engine. The on-chip learning may use a "three-factor" learning engine, an STDP (spike-timing-dependent plasticity) learning engine, an SDSP (spike-driven synaptic plasticity) learning engine, a stochastic SDSP synaptic learning rule, the Hebbian rule, the BCM rule, or the like. The present invention is not limited to a learning engine of any particular rule.
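As one concrete example of the rules listed above, the pair-based STDP update can be sketched in a few lines. This is a textbook formulation under assumed parameter values; the patent does not fix any particular rule or constants.

```python
# Minimal pair-based STDP sketch: a pre-before-post spike pair
# potentiates the synapse, post-before-pre depresses it, with an
# exponentially decaying magnitude. Parameters are illustrative.

import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair; times in milliseconds."""
    dt = t_post - t_pre
    if dt > 0:                   # pre fired first -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:                 # post fired first -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5                          # initial synapse weight
for t_pre, t_post in [(10, 15), (40, 32)]:   # one causal, one acausal pair
    w += stdp_dw(t_pre, t_post)
print(round(w, 4))
```

An on-chip learning engine evaluates updates of this shape locally at each synapse as spikes arrive, which is why the rule family maps well onto event-driven hardware.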
Optionally, multiple such brain-like chips can be combined into a brain-like chip group on a PCB board to construct an ultra-large-scale spiking neural network. Building ultra-large-scale spiking neural networks for brain simulation is one of the directions of brain science research.
Based on reports on existing brain-like chips, constructing a network at the scale of the mouse nervous system (about 100 million neurons) typically requires hundreds of square centimeters of chip area, with power consumption ranging from several watts (predominantly asynchronous circuit designs) to hundreds of kilowatts (predominantly synchronous circuit designs). For example, a brain-like computer built with the Darwin 2 brain chip developed at Zhejiang University has 120 million neurons; the total power consumption of the computer system is 350-500 watts, and excluding other auxiliary circuits, running a 100-million-neuron-scale network consumes about 68 watts. By contrast, the mouse brain has a volume of only about 0.56 cm³ (for the 100-million-neuron case) and consumes only about 20 milliwatts, an energy efficiency ratio thousands of times better than Darwin 2. Even a system built with Intel's latest Loihi-2 chip, manufactured in the Intel 4 process, still requires 31 cm² of silicon area and 10 W of power, an energy efficiency ratio still hundreds of times worse than that of the mouse brain.
However, dedicated cores can greatly improve the chip's energy efficiency ratio. For example, a brain-like chip using DYNAP-CNN™ (V1, 22 nm technology) needs only 12 cm² of silicon area and 100 milliwatts of power to construct a 100-million-neuron-scale network, differing from the energy efficiency ratio of a real mouse brain by only a few times. Further chip parameters for networks at the mouse-brain scale can be found in fig. 8.
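The power figures quoted above can be checked with a back-of-envelope comparison. All numbers are taken directly from the text; the computation itself is only illustrative.

```python
# Energy-efficiency comparison at equal network scale (~1e8 neurons),
# using the power figures quoted in the text above.

mouse_brain_w = 0.020   # ~20 mW for the mouse brain
darwin2_w     = 68.0    # Darwin 2, excluding auxiliary circuits
loihi2_w      = 10.0    # Loihi-2-based system, per the text
dynap_cnn_w   = 0.100   # DYNAP-CNN (V1, 22 nm), per the text

for name, watts in [("Darwin 2", darwin2_w),
                    ("Loihi 2", loihi2_w),
                    ("DYNAP-CNN", dynap_cnn_w)]:
    ratio = watts / mouse_brain_w
    print(f"{name}: {ratio:.0f}x the mouse-brain power at equal scale")
```

The ratios reproduce the text's qualitative claims: thousands of times for Darwin 2, hundreds of times for Loihi 2, and only a few times for the dedicated-core design.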
Therefore, a brain-like chip constructed with this chip architecture has an extremely high energy efficiency ratio. Different spiking neural networks call, alone or in combination, the chip cores best suited to their specific tasks, giving the chip a richer multi-modal information processing capability while ensuring that information is processed with an extremely high energy efficiency ratio, and additionally providing cooperative information processing and decision-making capability. The algorithm-hardware co-design gives the chip extremely high execution efficiency, and the free combination of multiple heterogeneous cores offers flexibility beyond that of a single dedicated core.
In addition, an electronic device is also disclosed. The electronic device is provided with the brain-like chip described in any one of the above items; the brain-like chip processes environmental signals, and the electronic device responds according to the output result of the brain-like chip. For example, based on both the user's speech and body-posture information, the device cooperatively decides whether to turn on the air conditioner. Because the probability that the chip's networks misjudge simultaneously is extremely low, the probability of a false trigger is also extremely low, which is very important for the performance of the chip.
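The false-trigger argument above follows from elementary probability: if both modalities must agree before the device acts, and their errors are independent, the joint false-positive rate is the product of the individual rates. The rates below are hypothetical numbers for illustration only.

```python
# Why multi-modal co-decision lowers false triggers: under independence,
# the chance that both networks mis-fire simultaneously is the product
# of their individual false-positive rates (hypothetical values).

p_speech_fp  = 0.02   # assumed false-positive rate of the speech network
p_gesture_fp = 0.03   # assumed false-positive rate of the posture network

def joint_false_positive(p1, p2):
    """Both modalities must agree before acting, assuming independence."""
    return p1 * p2

p_joint = joint_false_positive(p_speech_fp, p_gesture_fp)
print(f"{p_joint:.4f}")   # 0.0006, far below either single-modality rate
```

With these assumed rates, requiring agreement cuts the false-trigger probability from a few percent to 0.06%, which is the quantitative content of the "extremely low probability of mistouch" claim.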
While the present invention has been described with reference to particular features and embodiments thereof, various modifications, combinations, and substitutions may be made thereto without departing from the invention. The scope of the present application is not limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification; the methods and modules may also be implemented in association with, in dependence on, in compatibility with, or before/after one or more other products or methods.
Therefore, the specification and drawings should be regarded simply as a description of some embodiments of the technical solutions defined by the appended claims. The appended claims should be interpreted under the principle of broadest reasonable interpretation and are intended to cover, to the greatest extent possible, all modifications, variations, combinations, or equivalents within the scope of the disclosure, while avoiding unreasonable interpretations.
To achieve better technical results or to meet the needs of certain applications, a person skilled in the art may make further improvements on the technical solution based on the present invention. However, as long as the technical solution embodies the technical features defined in the claims, even if part of the improvement or design is inventive or/and advanced, that technical solution still falls within the protection scope of the present invention.
Several technical features mentioned in the appended claims may be replaced by alternative technical features, or the order of certain technical processes or the organization of materials may be rearranged. If those skilled in the art merely adopt such alternative means, or change the order of technical processes or the organization of materials, and then use substantially the same means to solve substantially the same technical problem and achieve substantially the same technical effect, then even if the means or/and order are explicitly defined in the claims, such modifications, changes, and substitutions shall fall within the protection scope of the claims under the doctrine of equivalents.
The method steps or modules described in connection with the embodiments disclosed herein may be embodied in hardware, software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the steps and components of the embodiments have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention as claimed.

Claims (21)

1. A brain-like chip, comprising:
the brain-like chip comprises N cores, wherein N is a positive integer;
the N cores comprise at least two types of spiking neural network cores;
the brain-like chip further comprises a routing module;
the routing module is configured to establish communication connections between the N cores.
2. The brain-like chip of claim 1, wherein:
the at least two types of spiking neural network cores include a spiking convolutional neural network core.
3. The brain-like chip of claim 1, wherein:
the at least two types of spiking neural network cores are two or more of a spiking convolutional neural network core, a spiking recurrent neural network core, and a generic spiking neural network core.
4. The brain-like chip of claim 3, wherein:
the resource scale of the generic spiking neural network core is smaller than that of the spiking convolutional neural network core or the spiking recurrent neural network core.
5. The brain-like chip of claim 3, wherein:
the spiking convolutional neural network core is a dedicated hardware circuit specifically for executing convolutional neural networks;
the spiking recurrent neural network core is a dedicated hardware circuit specifically for executing recurrent neural networks;
the generic spiking neural network core is a hardware circuit based on an interdigital structure.
6. The brain-like chip of claim 1, wherein:
the brain-like chip further comprises an interface module, the interface module comprising: a value-to-pulse conversion module or an analog-to-pulse conversion module.
7. The brain-like chip of claim 6, wherein:
a sensor is integrated with the interface module and the N cores in the same chip.
8. The brain-like chip of claim 1, wherein:
the brain-like chip also includes a pre-processing module that is used to perform one or more of noise filtering, image segmentation, and split normalization functions.
9. The brain-like chip of claim 1, wherein:
the brain-like chip is used to process one or more of visual, auditory, tactile, olfactory, inertial sensor signals.
10. The brain-like chip of claim 1, wherein:
the core in the brain-like chip is implemented as a digital circuit, a digital-analog hybrid circuit, or a circuit based on phase-change or resistance-change materials.
11. The brain-like chip of claim 10, wherein:
the circuit based on the resistance change material is a circuit based on a memristor.
12. The brain-like chip of claim 1, wherein:
the brain-like chip is implemented as a synchronous circuit, an asynchronous circuit, or a hybrid of synchronous and asynchronous circuits.
13. The brain-like chip of claim 1, wherein:
the routing module is a mesh route, a tree route or a mesh and tree mixed route.
14. The brain-like chip according to any one of claims 1 to 13, wherein:
the brain-like chip comprises a learning engine; the learning engine comprises one or more of a learning engine special for a pulse convolution neural network core, a learning engine special for a pulse circulation neural network core and a learning engine special for a universal pulse neural network core.
15. The brain-like chip according to any one of claims 1 to 13, wherein:
the same type of core has different neuronal or/and synaptic resources, or
Cores of the same type have different fan-in or/and fan-out capabilities.
16. The brain-like chip according to any one of claims 1 to 13, wherein:
at least one spiking neural network deployed in the brain-like chip calls two or more types of spiking neural network cores.
17. The brain-like chip according to any one of claims 1 to 13, wherein:
at least two spiking neural networks are deployed in the brain-like chip;
a decision is made cooperatively according to the output results of the at least two spiking neural networks.
18. The brain-like chip according to any one of claims 1 to 13, wherein:
the brain-like chip comprises a configuration parameter updating module, and the configuration parameter updating module is used for downloading configuration parameters such as synapse weight from a network.
19. The brain-like chip according to any one of claims 1 to 13, wherein:
the brain-like chip is a 3D packaging chip or a 3D integrated circuit.
20. The brain-like chip according to any one of claims 1 to 13, wherein:
the core router and the chip router of the brain-like chip and the grid router form a tree structure, the grid router is arranged in a two-dimensional grid, and the two-dimensional grid belongs to one layer of a three-dimensional grid.
21. An electronic device, characterized in that:
the electronic device is provided with a brain-like chip according to any one of claims 1-20, and the brain-like chip is used for processing environmental signals, and the electronic device is used for responding according to the output result of the brain-like chip.
CN202210277287.7A 2022-03-21 2022-03-21 Brain-like chip and electronic equipment Active CN114372568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210277287.7A CN114372568B (en) 2022-03-21 2022-03-21 Brain-like chip and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210277287.7A CN114372568B (en) 2022-03-21 2022-03-21 Brain-like chip and electronic equipment

Publications (2)

Publication Number Publication Date
CN114372568A CN114372568A (en) 2022-04-19
CN114372568B true CN114372568B (en) 2022-07-15

Family

ID=81146386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210277287.7A Active CN114372568B (en) 2022-03-21 2022-03-21 Brain-like chip and electronic equipment

Country Status (1)

Country Link
CN (1) CN114372568B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114816076A (en) * 2022-06-24 2022-07-29 清华大学 Brain-computer interface computing processing and feedback system and method
CN114861892B (en) * 2022-07-06 2022-10-21 深圳时识科技有限公司 Chip on-loop agent training method and device, chip and electronic device
WO2024135095A1 (en) * 2022-12-23 2024-06-27 ソニーセミコンダクタソリューションズ株式会社 Photodetection device and control method for photodetection device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173237A1 (en) * 2019-02-25 2020-09-03 北京灵汐科技有限公司 Brain-like computing chip and computing device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11544539B2 (en) * 2016-09-29 2023-01-03 Tsinghua University Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system
US20180174028A1 (en) * 2016-12-20 2018-06-21 Intel Corporation Sparse coding using neuromorphic computing
EP3953866A1 (en) * 2019-04-09 2022-02-16 Chengdu Synsense Technology Co., Ltd. Event-driven spiking convolutional neural network
CN113627603B (en) * 2021-10-12 2021-12-24 成都时识科技有限公司 Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173237A1 (en) * 2019-02-25 2020-09-03 北京灵汐科技有限公司 Brain-like computing chip and computing device

Also Published As

Publication number Publication date
CN114372568A (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN114372568B (en) Brain-like chip and electronic equipment
CN105719000B (en) A kind of neuron hardware unit and the method with this unit simulation impulsive neural networks
CN104809498B (en) A kind of class brain coprocessor based on Neuromorphic circuit
Huang et al. Universal approximation using incremental constructive feedforward networks with random hidden nodes
CN104809501B (en) A kind of computer system based on class brain coprocessor
WO2010106587A1 (en) Neural network system
Mitchell et al. Caspian: A neuromorphic development platform
CN104685516A (en) Apparatus and methods for spiking neuron network learning
CN109522945A (en) One kind of groups emotion identification method, device, smart machine and storage medium
CN110163016A (en) Hybrid system and mixing calculation method
WO2022012668A1 (en) Training set processing method and apparatus
Chen et al. NN-noxim: High-level cycle-accurate NoC-based neural networks simulator
Loni et al. ADONN: adaptive design of optimized deep neural networks for embedded systems
Meier Special report: Can we copy the brain?-The brain as computer
Davies et al. Population-based routing in the SpiNNaker neuromorphic architecture
James Towards strong AI with analog neural chips
CN112835844B (en) Communication sparsification method for impulse neural network calculation load
Guang-Bin Huang et al. Extreme learning machine: theory and applications
Cai et al. Neuromorphic brain-inspired computing with hybrid neural networks
CN107273970B (en) Reconfigurable platform of convolutional neural network supporting online learning and construction method thereof
Restrepo et al. A networked FPGA-based hardware implementation of a neural network application
Rice et al. Scaling analysis of a neocortex inspired cognitive model on the Cray XD1
Bartolozzi et al. Neuromorphic systems
Zhu et al. Simulation of associative neural networks
Balaji Hardware-Software Co-design for Neuromorphic Computing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant