
CN113627603A - Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment - Google Patents

Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment

Info

Publication number
CN113627603A
Authority
CN
China
Prior art keywords
convolution
neuron
asynchronous
chip
neuron circuit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111185514.5A
Other languages
Chinese (zh)
Other versions
CN113627603B (en)
Inventor
乔宁
邢雁南
西克·萨迪克·尤艾尔阿明
白鑫
周凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shizhi Technology Co ltd
Chengdu Shizhi Technology Co ltd
Original Assignee
Shanghai Shizhi Technology Co ltd
Chengdu Shizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shizhi Technology Co ltd and Chengdu Shizhi Technology Co ltd
Priority to CN202111185514.5A
Publication of CN113627603A
Application granted
Publication of CN113627603B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for implementing asynchronous convolution in a chip, a brain-like chip, and an electronic device. To solve the problem of how to implement asynchronous convolution in a chip, the method comprises the following steps: when a first neuron circuit is detected to satisfy an activation condition, a pulse or pulse-event issuing operation of the first neuron circuit is executed; a first weight value corresponding to the first neuron circuit and located in a first convolution kernel is determined, a first target neuron circuit corresponding to the first neuron circuit and the first weight value and located in a first target feature map is determined, and the pulse or pulse event issued by the first neuron circuit is weighted by the first weight value and projected to the first target neuron circuit. This scheme converts the complex asynchronous convolution operation into projection operations between neuron circuits, so that a chip using the asynchronous convolution method can perform inference with high efficiency, low latency, and low power consumption.

Description

Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment
Technical Field
The present application relates to a method, a chip and an electronic device for implementing asynchronous convolution in a chip, and more particularly to a method, a chip and an electronic device for implementing asynchronous convolution in a brain-like chip on which a spiking neural network is deployed.
Background
Neural networks are an important cornerstone of artificial intelligence. A neuron in a conventional Artificial Neural Network (ANN) simply performs a weighted summation over a number of inputs and passes the result through an activation function; such a neuron has no memory. In the third-generation Spiking Neural Network (SNN), by contrast, a neuron does have memory: its membrane voltage retains the stimulation effect of earlier input signals (in practice, pulse sequences), the membrane voltage may decay over time, and the neuron may be designed with a refractory period. Chips built on this principle are often called brain-like or neuromorphic chips, and the inference operations performed on them are called brain-like or neuromorphic computing.
As can be seen from the foregoing, compared with the purely computational role of neurons in a conventional artificial neural network, the neurons of the spiking neural network deployed in a brain-like chip are more biologically realistic: a neuron emits a pulse (a spike) only when its membrane voltage exceeds a threshold. Unlike conventional artificial intelligence, where every state update requires all neurons to perform costly matrix and nonlinear operations even when the input is zero, neurons in a spiking neural network send pulses to stimulate the next layer only when they are activated. A more biomimetic spiking neural network therefore has the advantage of extremely low power consumption.
Referring to fig. 1, a neuron circuit in a spiking neural network may have multiple inputs, each of which delivers a pulse train to the neuron circuit. As an example, three input pulse trains (a first through a third) are shown. The neuron updates its membrane voltage according to the input pulse trains; if the membrane voltage reaches a predetermined threshold, the neuron is excited and fires a pulse, after which the membrane voltage returns to a resting potential. Unlike conventional networks, which store and pass numerical values, a spiking neural network generally conveys information through the timing and frequency of the pulses fired by neurons, or through the combined pulses of multiple neurons.
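To make this behaviour concrete, the following minimal sketch (in Python, purely illustrative) models such a neuron under a simple leaky integrate-and-fire assumption. The class name, the parameters threshold, resting and leak, and the exact update rule are assumptions for illustration, not the neuron circuit claimed by the patent.

```python
# Minimal sketch (an assumption, not the patented circuit) of the neuron
# behaviour described above: a leaky integrate-and-fire neuron whose membrane
# voltage accumulates weighted input pulses, decays toward a resting potential,
# and fires a pulse when a threshold is reached, after which it resets.

class SpikingNeuron:
    def __init__(self, threshold=1.0, resting=0.0, leak=0.0):
        self.threshold = threshold  # membrane voltage at which the neuron fires
        self.resting = resting      # potential the neuron returns to after a spike
        self.leak = leak            # fraction of the voltage that decays per step
        self.v = resting            # current membrane voltage (the neuron state)

    def receive(self, weight):
        """Integrate one incoming pulse, weighted by its synaptic weight."""
        self.v += weight

    def step(self):
        """Apply the leak, then fire if the activation condition is met."""
        self.v -= self.leak * (self.v - self.resting)
        if self.v >= self.threshold:
            self.v = self.resting   # return to the resting potential
            return True             # a pulse (spike) is emitted
        return False
```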
A spiking neural network with learning and inference capability can be obtained by organizing and managing a large number of neurons in the brain-like chip so that they operate in an orderly way. For example, after the chip detects that certain events have occurred (such as a vehicle collision, someone crossing a fence, or a keyword being spoken), it infers the occurrence of the event and notifies a downstream system to respond (for example by taking a snapshot, recording video, or waking the system). All of this can be completed on the device, without network transmission or real-time detection in the cloud on a high-performance server, which preserves privacy, real-time performance, and energy efficiency.
Convolutional neural networks are a very important class of neural networks. Convolution operations in a traditional artificial neural network can be carried out conveniently on a standard computing platform based on the von Neumann architecture. Hardware for brain-like computation, however, can follow two different implementation paths: synchronous circuits and asynchronous circuits. Synchronous circuits generally contain a clock signal and their operation is clock-driven, whereas asynchronous circuits contain no clock and the corresponding circuit is activated to do work only when an event occurs. Asynchronous circuits therefore generally have a power-consumption advantage, but their design is considerably more challenging than mature synchronous-circuit design.
In a brain-like chip, the data of a feature map cannot simply be held in memory and read into a processor for computation when needed, as in a conventional artificial neural network. Instead, each neuron circuit fires pulses independently, and if the chip had to track whether and when every neuron associated with a convolution operation fires, the design would become extremely complex and could not meet real-time signal-processing requirements. How to organize and manage a large number of neuron circuits, and how they process information, so as to realize asynchronous convolution is therefore an important and fundamental technical problem in the field of brain-like computation and brain-like chips. To date, however, the inventors have seen no disclosure of a complete, detailed implementation of asynchronous convolution.
Disclosure of Invention
In order to realize asynchronous convolution in a chip with high efficiency and low latency, the invention is realized as follows:
a method of implementing an asynchronous convolution in a chip including a number of neuron circuits configured to perform an asynchronous convolution, a first neuron circuit 20 belonging to one of the number of neuron circuits configured to perform an asynchronous convolution; an asynchronous convolution operation for the first neuron circuit 20, comprising the steps of: step S501: executing a pulse or pulse event issuance operation of the first neuron circuit 20 upon detecting that the first neuron circuit 20 satisfies an activation condition; step S503: determining a first weight value 2010 corresponding to the first neuron circuit 20 and located in a first convolution kernel 200, and determining a first target neuron circuit 2011 corresponding to the first neuron circuit 20 and the first weight value 2010 and located in a first target feature map 205, weighting by the first weight value 2010, and projecting the pulse or pulse event issued by the first neuron circuit 20 to the first target neuron circuit 2011.
In some embodiments, the asynchronous convolution method further includes step S505: based on the updated neuron circuit state of the first target neuron circuit 2011, determining whether the first target neuron circuit 2011 satisfies an activation condition.
In some embodiments, the asynchronous convolution method further includes step S507: if the first target neuron circuit 2011 satisfies the activation condition, executing a pulse or pulse-event issuing operation for the first target neuron circuit 2011 and executing an asynchronous convolution operation for the first target neuron circuit 2011.
In some embodiments, at least one convolution kernel is associated with the asynchronous convolution operation for the first neuron circuit 20; the pulse or pulse event issued by the first neuron circuit 20 is weighted by each weight value of the at least one convolution kernel and projected to the target neuron circuit corresponding to that weight value and the first neuron circuit 20.
In some embodiments, after step S501 the method further includes performing a pooling operation; the pooling operation determines whether to discard the pulse or pulse event issued by the first neuron circuit 20.
In some embodiments, weighting the pulse or pulse event issued by the first neuron circuit 20 by the first weight value 2010 and projecting it to the first target neuron circuit 2011 specifically means: the first weight value 2010 is read and then added to or subtracted from the current neuron circuit state of the first target neuron 2011.
In some embodiments, when any one of the neuron circuits configured to perform asynchronous convolution is activated and fires a pulse or pulse event, the pulse or pulse event fired by that neuron is weighted by each weight value in a predetermined set of convolution kernels and projected to the corresponding target neuron in the target neuron set that corresponds to that neuron and that weight value.
In some embodiments, the activation condition is specifically that the membrane voltage of the neuron circuit reaches a threshold value.
In some embodiments, the neuron circuit state is the membrane voltage of the neuron circuit.
A brain-like chip that includes a number of neuron circuits configured to perform asynchronous convolution and that is configured to perform the method of implementing asynchronous convolution in a chip described above.
An electronic device comprising a first interface module, a second interface module, a processing module, and a response module, the electronic device further comprising a brain-like chip as described above; the brain-like chip is coupled with the processing module through the first interface module, and the processing module is coupled with the response module through the second interface module; the brain-like chip identifies the input environmental signal and transmits an identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module.
The invention has the following beneficial effects:
1) the first time an asynchronous convolution implementation is fully disclosed;
2) the method converts the asynchronous convolution operation into simple neuron projection operations that involve no complex, energy-hungry computation, and therefore has the advantage of low power consumption; in other words, by organizing and managing the internal resources of the chip, the inference function is executed with low power consumption, high efficiency and low latency;
3) the invention simplifies the chip design logic, improves the asynchronous circuit design efficiency, is easy to realize engineering and reduces the chip design complexity.
The technical solutions, features, and means disclosed above may not be exactly the same as those described in the following detailed description. The technical features and means disclosed in this section and those disclosed in the detailed description can be reasonably combined with one another to disclose further technical solutions, which beneficially supplement the detailed description. Likewise, some details in the drawings may not be described explicitly in the specification; if a person skilled in the art can deduce their technical meaning from the description of other related text or drawings, from common technical knowledge in the art, or from other prior art (such as conference or journal articles), those technical solutions, features, and means also belong to the content disclosed in the present invention and, as stated above, may be combined to obtain corresponding new technical solutions. Technical solutions combined from technical features disclosed anywhere in the invention are used to support the generalization of the technical solutions, the amendment of the patent document, and the disclosure of the technical solutions.
Drawings
FIG. 1 is a schematic diagram of the function of neurons in a spiking neural network;
FIG. 2 is a diagram illustrating a convolution method implemented in a conventional artificial neural network;
FIG. 3 is a schematic diagram of an implementation of asynchronous convolution in a chip;
FIG. 4 is a flow chart of an implementation of asynchronous convolution according to a preferred embodiment of the present invention;
FIG. 5 is a simplified schematic of the asynchronous convolution of the present invention.
Detailed Description
Wherever the term "pulse" appears in the present invention, it refers to a spike in the neuromorphic field; "pulse" and "spike" are used interchangeably. A training algorithm can be written as a computer program, stored in a storage medium, and read by a processor (for example a GPU device with a high-performance graphics processor, an FPGA, an ASIC, and so on); trained on training data (various data sets), it yields neural network configuration parameters for deployment onto a neuromorphic device (such as a brain-like chip). A neuromorphic device configured with these parameters acquires inference capability: it performs inference on the input signals of sensors (such as a dynamic vision sensor DVS that perceives changes in brightness, or a dedicated sound-signal acquisition device) and outputs the inference result (for example over a wire or a wireless communication module) to other external processing modules (such as a microcontroller) to achieve a coordinated response. Other technical details not disclosed below are generally conventional in the art or common general knowledge, and for reasons of space the present invention does not describe them in detail.
In the present invention, "/" at any position indicates a logical "or" unless it denotes division. The ordinal numbers "first", "second", "third", etc. are used merely for description; they imply no absolute temporal or spatial order, nor must a term carrying such a number be construed as corresponding to the same term carrying a different number elsewhere. The modules, neurons, and synapses described herein may be implemented in software, hardware (e.g., circuitry), or a combination of the two; the specific embodiments are not limited in this respect. The mere presence or absence of a step or element at any position in the present invention does not imply that it is the only possible choice, and a person skilled in the art can derive other solutions from the solution disclosed herein by other technical means without departing from the scope of the present invention.
Neurons in a spiking neural network are a simulation of biological neurons; compared with a traditional neural network, the spiking neural network and its neuron operating mechanism are a more faithful simulation. A chip based on a spiking neural network has lower power consumption, benefiting from the sparseness of neuron activity. Inspired by biological neurons, concepts related to them, such as synapses, membrane voltages, post-synaptic currents, and post-synaptic potentials, are referred to with the same terminology when describing neuron-related concepts in a spiking neural network, following the expressions customary in the art. Unless specifically indicated otherwise, references in this disclosure to such biologically named concepts refer to the corresponding concepts in the spiking neural network rather than to actual biological cells.
Interpretation of terms:
source characteristic graph: in the ANN, the essence is a data matrix in the convolutional neural network, which is stored in a storage space and is an object on which the convolution operation depends. In SNN, it may be considered a set of neuron circuits or/and a pulse or pulse event fired by the set of neuron circuits.
Target feature map: in the ANN, the essence is also one data matrix in the convolutional neural network, which is stored in another storage space and is the object generated by the convolution operation. In SNN, it can be viewed as a collection of neurons that receive the impulses or impulse events fired in the source signature. The source feature map and the target feature map are defined according to the structural sequence of the source feature map and the target feature map in the neural network.
And (3) convolution kernel: the system is essentially a weight matrix, and a plurality of weight data are stored. The method is used for carrying out weighted summation on the data in the source feature map and then writing the result into a corresponding storage space in the target feature map. In SNN, the result of the weighted summation is projected to a specific neuron circuit in the target feature map.
Receptive field: is a region in the source signature, usually a basic unit participating in a certain convolution operation, in which all data is weighted with the convolution kernel. A source signature may include a plurality of different receptive fields.
Asynchronous convolution: is a new convolution operation based on impulse event triggering, which is different from the situation (stored in a certain storage space) that all data in the source feature map in the ANN is determined.
Brain-like chip: a large number of neuron circuits, synapse circuits and the like are designed to imitate the working principle of biological brains, and a specific impulse neural network can be formed by organizing and managing the neuron circuits and the synapse circuits. After receiving the input signal, the impulse neural network can complete the reasoning of the input signal.
Referring to fig. 2, which is a schematic diagram of a convolution method implemented in a conventional artificial neural network. Within the convolutional neural network, the ANN source feature map 1001 is an m × n matrix, where m and n are positive integers; as an example, m = 8 and n = 8. On a general-purpose computing platform it corresponds to m × n storage units, each storing one element value. As an example, the convolution kernel 100 is a 3 × 3 weight matrix, corresponding to a 3 × 3 storage space. There may be more than one convolution kernel; convolution kernel 100 is used only as an example. If zero-padding techniques known in the art are not used and the stride is set to 1, the ANN target feature map 1002 is a 6 × 6 matrix.
As an example, the first convolution operation starts from the first receptive field 101 located in the ANN source feature map 1001; the first receptive field 101 covers data of the same size as the convolution kernel 100, namely 3 × 3. Because the sizes (dimensions) are the same, the elements of the two correspond one to one; the corresponding elements are multiplied, the products are accumulated, and the result is stored in the first storage unit 1012 of the ANN target feature map 1002. The receptive field is then shifted by the predetermined stride (1 in this example) to complete the next convolution operation, whose result is stored in the corresponding storage unit of the ANN target feature map 1002. For the second receptive field 102 in the ANN source feature map 1001, the corresponding data in the corresponding storage space are multiplied by the corresponding weights of the convolution kernel 100, and the result is stored in the second storage unit 1021. When the stride is greater than 1, a gap appears between the first receptive field 101 and the second receptive field 102, so that successive receptive fields no longer form a continuous region.
After a number of such convolution operations, the entire ANN target feature map 1002 is obtained. Taken together, these convolution operations may form one part, and a very common part, of the operation of the whole artificial neural network.
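For reference, the conventional operation of Fig. 2 can be summarised by the short sketch below, written under the stated assumptions (8 × 8 source map, 3 × 3 kernel, stride 1, no padding); the function and variable names are illustrative only. Every output element touches the whole receptive field in memory, which is the dense, storage-bound pattern that the event-driven scheme described next avoids.

```python
# Plain-Python sketch of the conventional convolution of Fig. 2 (illustrative
# names; assumptions: 8x8 source map, 3x3 kernel, stride 1, no padding, hence
# a 6x6 target map). Each output element is computed by reading the whole
# receptive field, multiplying element-wise and summing.

def conv2d(source, kernel, stride=1):
    m, n = len(source), len(source[0])
    k = len(kernel)
    out_h = (m - k) // stride + 1
    out_w = (n - k) // stride + 1
    target = [[0.0] * out_w for _ in range(out_h)]
    for oy in range(out_h):
        for ox in range(out_w):
            acc = 0.0
            for i in range(k):                     # walk one receptive field
                for j in range(k):
                    acc += source[oy * stride + i][ox * stride + j] * kernel[i][j]
            target[oy][ox] = acc                   # one storage unit of the target map
    return target

source = [[(x + y) % 2 for x in range(8)] for y in range(8)]   # 8x8 example data
kernel = [[1, 0, -1] for _ in range(3)]                        # 3x3 example kernel
result = conv2d(source, kernel)
assert len(result) == 6 and len(result[0]) == 6                # 6x6, as in Fig. 2
```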
Referring to fig. 3, which is a schematic diagram of implementing asynchronous convolution in a chip. The chip (not shown) is specifically a brain-like chip on which a spiking neural network is deployed, in which a number of neuron circuits (also referred to in the present invention simply as neurons) are organized to implement the spiking neural network. The spiking neural network may contain convolutions that propagate layer by layer; that is, the target feature map of the current asynchronous convolution may also serve as the source feature map of the next asynchronous convolution operation. Furthermore, the asynchronous convolution operation may be followed by an "event-based max pooling" (or other pooling) operation.
The invention focuses on the implementation of the asynchronous convolution method itself; whether a specific network contains other operations is not taken as a limitation.
Referring also to fig. 4, a method of implementing asynchronous convolution in a chip is described. The chip includes a number of neuron circuits configured to perform asynchronous convolution, and a first neuron circuit (20) is one of them. The asynchronous convolution operation for the first neuron circuit (20) comprises the following steps:
Step S501: upon detecting that the first neuron circuit 20 satisfies an activation condition, a pulse or pulse-event issuing operation of the first neuron circuit 20 is performed.
The SNN source feature map 10 (hereinafter, source feature map 10) may be viewed as a set of specific neuron circuits and/or the pulses or pulse events fired by them. For example, the neuron circuit set includes a first neuron circuit 20, a second neuron circuit 21, and a third neuron circuit 22 (hereinafter, the first, second, and third neurons), together with other neurons; organized together, they are regarded as the neurons associated with the source feature map 10. As an example, the first neuron 20, the second neuron 21, and the third neuron 22 in the source feature map 10 fire pulses because they are detected to satisfy an activation condition, for example when, after their neuron (circuit) states are updated, their membrane voltages are detected to exceed a threshold. The specific firing operation may be the sending of a pulse or of a pulse event.
Each neuron in the source feature map 10 is grouped with certain other neurons into subsets of neuron circuits, and these subsets may be regarded as the neurons associated with the corresponding receptive fields. For example, the first neuron 20, combined with different groups of other neurons, belongs to a third receptive field 201 and to a fourth receptive field 209, respectively.
In an ANN, when a convolution operation is performed, all the data involved in a given receptive field can be read directly. This cannot be done in an SNN, because one cannot wait for all neurons in the receptive field to fire pulses; for example, during the aforementioned time period only the first neuron 20 in the third receptive field 201 fires a pulse.
Although the concept of the receptive field can be kept analogous to the ANN, it need not be considered when implementing asynchronous convolution: when the invention processes the asynchronous convolution triggered by a pulse from a given neuron, it does not matter whether any other neuron in any receptive field has also fired. The invention nevertheless keeps this terminology for the sake of clearly explaining the correspondence.
Of course, after step S501 an event-based pooling operation may also be included, whose ultimate purpose is to determine whether to discard the pulse or pulse event issued by the first neuron circuit 20. If the pulse event is discarded, the subsequent asynchronous convolution steps are not needed and the overall power consumption of the chip is reduced. Whether pooling is applied, and the particular manner of pooling, is not limited by the present invention; one plausible rule is sketched below.
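As one possible illustration only (the patent leaves the pooling method open), the sketch below forwards a pulse only if the firing neuron is currently the most active one in its 2 × 2 pooling window; the counters, the window size and the decision rule are all assumptions.

```python
# One plausible event-based pooling rule, offered purely as an illustration
# (the patent does not limit the pooling method): keep a per-neuron spike
# count and forward a pulse only if it comes from the currently most active
# neuron of its 2x2 pooling window; otherwise discard it, so that no further
# asynchronous convolution is triggered by this pulse.

from collections import defaultdict

spike_counts = defaultdict(int)          # (x, y) -> pulses seen from that neuron

def pool_keep(x, y, window=2):
    """Return True if the pulse fired by neuron (x, y) should be forwarded."""
    spike_counts[(x, y)] += 1
    wx, wy = x // window, y // window    # which pooling window the neuron lies in
    peers = [(wx * window + i, wy * window + j)
             for i in range(window) for j in range(window)]
    return spike_counts[(x, y)] >= max(spike_counts[p] for p in peers)
```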
Step S503: determining a first weight value 2010 corresponding to the first neuron circuit 20 and located in a first convolution kernel 200, and determining a first target neuron circuit 2011 corresponding to the first neuron circuit 20 and the first weight value 2010 and located in a first target feature map 205, weighting by the first weight value 2010, and projecting the pulse or pulse event issued by the first neuron circuit 20 to the first target neuron circuit 2011.
Preferably, the step of projecting the pulse issued by the first neuron circuit 20 to the first target neuron circuit 2011 (hereinafter, the target neuron), weighted by the first weight value 2010, may specifically be: the first weight value 2010 is read and then added to or subtracted from the current neuron circuit state of the first target neuron 2011. The current neuron state of the first target neuron 2011 may be its current membrane voltage. The advantage of this preferred embodiment is that the weighted projection is realized with a single basic addition or subtraction, without complex operations such as multiplication, which reduces the chip's power consumption.
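In code, this preferred projection amounts to the sketch below, which reuses the SpikingNeuron sketch given earlier; the function name project and the reuse of receive()/step() are illustrative assumptions, not the circuit itself.

```python
# Sketch of this weighted projection (illustrative, reusing the SpikingNeuron
# sketch from the background section): projecting a pulse costs one addition
# of the stored weight onto the target neuron's membrane voltage (a spike
# carries no magnitude of its own, so no multiplication is needed), followed
# by the activation check of step S505.

def project(target_neuron, weight):
    """Project one weighted pulse onto a target neuron; return True if it fires."""
    target_neuron.receive(weight)   # single addition (subtraction if the weight is negative)
    return target_neuron.step()     # has the membrane voltage reached the threshold?
```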
Because a convolution operation essentially projects the information under a given receptive field, after weighting, to a certain output, each receptive field can be associated with a certain output once the convolution kernel is determined. For different convolution kernels, the same receptive field is associated with different outputs. For example, under the weighting of the corresponding third weight value 3010 in the second convolution kernel 300, the same pulse fired by the first neuron 20, again within the third receptive field 201, is projected to the third target neuron 3011 rather than to the first target neuron 2011. The manner in which all the weight values corresponding to the first neuron 20, and the corresponding target neurons, are determined is not limited by this application.
For the determined first convolution kernel 200, there may be several weight values that require a weighted projection, for example the second weight value 2090 and the second target neuron 2091 located in the first target feature map 205 that corresponds to the second weight value 2090 and the first neuron 20. A weighted projection step similar or identical to the one described above is performed, i.e., another weighted projection is completed. Once the weighted projection operations based on all the weight values preset in the first convolution kernel 200 have been performed, all the projection steps of the first neuron 20 for the first convolution kernel 200 are complete.
In addition, the asynchronous convolution may involve a second convolution kernel 300, as well as further convolution kernels. For the second convolution kernel 300, the pulse of the first neuron 20 is projected to a third target neuron 3011 under the weighting of the third weight value 3010, and to a fourth target neuron 3091 under the weighting of the fourth weight value 3090. Once the weighted projection operations based on all the weight values preset in the second convolution kernel 300 have been performed, all the projection steps of the first neuron 20 for the second convolution kernel 300 are complete. Once the projection steps for all convolution kernels have been performed, all the asynchronous convolution steps for this pulse issued by the first neuron 20 are complete.
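Putting these projection steps together, the sketch below illustrates the complete fan-out for one pulse, reusing project() from the previous sketch. The index mapping it uses (stride 1, no padding: the weight at kernel position (i, j) drives the target at (sx - i, sy - j) in that kernel's own target feature map) is only one plausible assumption; as noted above, the application does not limit how the target neurons are determined.

```python
# Sketch of the full fan-out triggered by one pulse, reusing project() from
# the previous sketch. The index mapping is an assumption made for illustration
# only (stride 1, no padding): the weight at kernel position (i, j) sends the
# pulse to target position (sx - i, sy - j) in that kernel's target feature map.

def asynchronous_convolution(sx, sy, kernels, target_maps):
    """kernels: list of KxK weight matrices; target_maps: matching grids of SpikingNeuron."""
    fired = []                                        # targets activated by this single pulse
    for kernel, tmap in zip(kernels, target_maps):    # one target feature map per kernel
        k = len(kernel)
        for i in range(k):
            for j in range(k):
                tx, ty = sx - i, sy - j               # candidate target coordinates
                if 0 <= ty < len(tmap) and 0 <= tx < len(tmap[0]):
                    if project(tmap[ty][tx], kernel[i][j]):
                        fired.append((tx, ty))
    return fired
```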
Preferably, the asynchronous convolution method further includes step S505: based on the updated neuron circuit state of the first target neuron circuit 2011, it is determined whether the first target neuron circuit 2011 satisfies an activation condition.
Based on these weighted projections and on the current neuron (circuit) states, such as the membrane voltages, of the first target neuron 2011, the second target neuron 2091, the third target neuron 3011, and the fourth target neuron 3091, it can be determined whether each of these neurons satisfies the activation condition. Whenever one of them does, for example its membrane voltage reaches the threshold, it likewise fires a pulse.
The updated neuron state of the first target neuron 2011 is the state obtained by adding the weighted projection value to, or subtracting it from, the current neuron state that the first target neuron 2011 had before the pulse issued by the first neuron circuit 20 was projected onto it. Here the weighted projection value may be equal to the first weight value 2010.
Preferably, the asynchronous convolution method further includes step S507: if the first target neuron circuit 2011 satisfies the activation condition, a pulse or pulse event issuing operation for the first target neuron circuit 2011 is performed, and an asynchronous convolution operation for the first target neuron circuit 2011 is performed.
This asynchronous convolution operation is the same as, or similar to, the asynchronous convolution of the first neuron 20; like a chain reaction, the asynchronous convolution operation is passed on layer by layer.
In a certain preferred embodiment, after the corresponding pulse-issuing step has been performed for the first target neuron 2011, a pooling operation is performed to determine whether to discard the pulse or pulse event issued by the first target neuron 2011. If it is discarded, the asynchronous convolution operation need not be performed; otherwise, the corresponding asynchronous convolution operation is performed for the first target neuron 2011.
Although the second neuron 21 and the third neuron 22 also issue pulses or pulse events within the time period described for the source feature map 10, in the present invention the asynchronous convolution operations of each specific neuron are completed independently: once a neuron issues a pulse, the asynchronous convolution operation for that neuron is executed without considering whether other neurons have also fired, and without considering which other neurons take part in the convolution. This simplifies the asynchronous convolution logic in the brain-like chip, simplifies the chip design logic, and improves the efficiency of asynchronous circuit design.
Reference is made to fig. 5, a simplified schematic diagram of the asynchronous convolution. As the process above shows, the application macroscopically reduces the asynchronous convolution to this: a neuron that fires a pulse (e.g., the first neuron 20) projects that pulse, weighted by each weight value in a predetermined set of convolution kernels (essentially synaptic connections carrying the weight values), to the corresponding target neurons in the associated target neuron set. Through the technical scheme disclosed in this application, the complex asynchronous convolution operation is converted into a simple neuron projection operation, that is, a one-to-many projection. This greatly simplifies the design logic of the brain-like chip and improves chip design efficiency. Each target neuron can then complete the next layer of asynchronous convolution (if required) based on the same logic.
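The event-driven loop below ties the earlier sketches together and illustrates this chain reaction; the single kernel and single target feature map per layer, the event tuple format, and the omission of pooling are simplifying assumptions made only for illustration.

```python
# End-to-end sketch of the "chain reaction" (illustrative assumptions: one
# kernel and one target feature map per layer, no pooling, event tuples of the
# form (x, y, layer)). Every pulse is handled as an independent event; the
# one-to-many projection of asynchronous_convolution() may make target neurons
# fire, and those new pulses are queued and processed for the next layer in
# exactly the same way.

from collections import deque

def run(initial_events, layers):
    """initial_events: iterable of (x, y, 0) pulses from the first source feature map.
    layers: list of dicts such as {"kernel": [[...]], "target_map": [[SpikingNeuron, ...]]}."""
    queue = deque(initial_events)
    while queue:
        x, y, layer = queue.popleft()
        if layer >= len(layers):                       # no further convolution layer to drive
            continue
        cfg = layers[layer]
        for tx, ty in asynchronous_convolution(x, y, [cfg["kernel"]], [cfg["target_map"]]):
            queue.append((tx, ty, layer + 1))          # a fired target becomes a new source event
```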
In addition, the invention also discloses a chip, in particular a brain-like chip, which comprises a number of neuron circuits configured to execute asynchronous convolution and which is configured to execute steps S501 to S507 and all the other steps described above.
The invention also discloses an electronic device, which comprises a first interface module, a second interface module, a processing module and a response module, and also comprises the brain-like chip; the brain-like chip is coupled with the processing module through the first interface module, and the processing module is coupled with the response module through the second interface module; the brain-like chip identifies input environmental signals (such as signals of sound, light, heartbeat and the like), transmits an identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module. The response module can execute functions of snapshot, video recording, system awakening and the like.
While the invention has been described with reference to specific features and embodiments thereof, various modifications and combinations may be made without departing from the invention. Accordingly, the specification and figures are to be regarded simply as illustrations of some embodiments of the invention defined by the appended claims, and are intended to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the invention. Thus, although the present invention and its advantages have been described in detail, various changes, substitutions and alterations can be made herein without departing from the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
To achieve better technical results or to meet the needs of certain applications, a person skilled in the art may make further improvements on the basis of the present technical solution. However, even if such a partial modification/design is inventive or/and advanced, as long as it uses technical features covered by the claims of the present invention, the technical solution still falls within the protection scope of the present invention according to the "overall coverage principle".
Several technical features mentioned in the appended claims may be replaced by alternative technical features, or the order of certain technical processes or of material organization may be rearranged. If those skilled in the art can easily conceive of such alternatives, or of changing the order of the technical processes and the material organization, and then use substantially the same means to solve substantially the same technical problems and achieve substantially the same technical effects, then even if the means or/and the order are explicitly defined in the claims, such modifications, changes and substitutions shall fall within the protection scope of the claims according to the doctrine of equivalents.
Where a claim recites an explicit numerical limitation, one skilled in the art would understand that other reasonable numerical values around the stated numerical value would also apply to a particular embodiment. Such design solutions, which do not depart from the inventive concept by a departure from the details, also fall within the scope of protection of the claims.
The method steps and elements described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and the steps and elements of the embodiments have been described in functional generality in the foregoing description, for the purpose of clearly illustrating the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention as claimed.
Reference numerals:
Fig. 2 (conventional ANN convolution):
100: convolution kernel
1001: ANN source feature map
1002: ANN target feature map
101: first receptive field
102: second receptive field
1012: first storage unit
1021: second storage unit
Figs. 3 to 5 (asynchronous convolution):
10: source feature map
20: first neuron
21: second neuron
22: third neuron
201: third receptive field
209: fourth receptive field
200: first convolution kernel
2010: first weight value
2090: second weight value
205: first target feature map
2011: first target neuron
2091: second target neuron
300: second convolution kernel
3010: third weight value
3090: fourth weight value
305: second target feature map
3011: third target neuron
3091: fourth target neuron

Claims (10)

1. A method for realizing asynchronous convolution in a chip, which is characterized in that a plurality of neuron circuits configured to execute asynchronous convolution are included in the chip, and a first neuron circuit belongs to one of the neuron circuits configured to execute asynchronous convolution; an asynchronous convolution operation for the first neuron circuit, comprising the steps of:
step S501: executing a pulse or pulse event issuance operation of the first neuron circuit upon detecting that the first neuron circuit satisfies an activation condition;
step S503: determining a first weight value corresponding to the first neuron circuit and located in a first convolution kernel, determining a first target neuron circuit corresponding to the first neuron circuit and the first weight value and located in a first target feature map, and projecting the pulse or pulse event fired by the first neuron circuit to the first target neuron circuit, weighted by the first weight value.
2. The method of claim 1 for implementing asynchronous convolution in a chip, wherein:
the asynchronous convolution method further includes step S505: determining whether the first target neuron circuit satisfies an activation condition based on the updated neuron circuit state of the first target neuron circuit.
3. The method of claim 2 for implementing asynchronous convolution in a chip, wherein:
the asynchronous convolution method further includes step S507: if the first target neuron circuit meets the activation condition, executing a pulse or pulse event issuing operation aiming at the first target neuron circuit, and executing an asynchronous convolution operation aiming at the first target neuron circuit.
4. The method of claim 1 for implementing asynchronous convolution in a chip, wherein: at least one convolution kernel is associated with the asynchronous convolution operation for the first neuron circuit; and the pulse or pulse event issued by the first neuron circuit is weighted by each weight value in the at least one convolution kernel and projected to the target neuron circuit corresponding to that weight value and the first neuron circuit.
5. The method of any of claims 1-4 for implementing asynchronous convolution in a chip, wherein: after step S501, the method further includes: performing pooling operation; the pooling operation will determine whether to discard the pulse or pulse event issued by the first neuron circuit.
6. The method of any of claims 1-4 for implementing asynchronous convolution in a chip, wherein: the pulse or pulse event issued by the first neuron circuit is weighted by the first weight value and projected to the first target neuron circuit, specifically:
the first weight value is read and then added to or subtracted from the current neuron circuit state of the first target neuron.
7. The method of any of claims 1-4 for implementing asynchronous convolution in a chip, wherein: when any one of the neuron circuits configured to perform asynchronous convolution is activated and emits a pulse or pulse event, the pulse or pulse event emitted by that neuron is weighted by each weight value in a preset set of convolution kernels and projected to the corresponding target neuron in the target neuron set that corresponds to that neuron and that weight value.
8. The method of any of claims 1-4 for implementing asynchronous convolution in a chip, wherein: the activation condition is in particular that a membrane voltage of the neuron circuit reaches a threshold value.
9. A brain-like chip, comprising: the brain chip comprises a plurality of neuron circuits configured to perform asynchronous convolution, and is configured to perform the method of implementing asynchronous convolution in a chip according to any one of claims 1 to 8.
10. An electronic device comprising first and second interface modules and a processing module, and a response module, characterized in that: the electronic device further comprises a brain-like chip of claim 9; the brain-like chip is coupled with the processing module through the first interface module, and the processing module is coupled with the response module through the second interface module; the brain-like chip identifies the input environmental signal and transmits an identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module.
CN202111185514.5A 2021-10-12 2021-10-12 Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment Active CN113627603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111185514.5A CN113627603B (en) 2021-10-12 2021-10-12 Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111185514.5A CN113627603B (en) 2021-10-12 2021-10-12 Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment

Publications (2)

Publication Number Publication Date
CN113627603A (en) 2021-11-09
CN113627603B (en) 2021-12-24

Family

ID=78391199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111185514.5A Active CN113627603B (en) 2021-10-12 2021-10-12 Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment

Country Status (1)

Country Link
CN (1) CN113627603B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460457A (en) * 2018-03-30 2018-08-28 苏州纳智天地智能科技有限公司 A kind of more asynchronous training methods of card hybrid parallel of multimachine towards convolutional neural networks
US10776665B2 (en) * 2018-04-26 2020-09-15 Qualcomm Incorporated Systems and methods for object detection
US20190385041A1 (en) * 2018-06-13 2019-12-19 The United States Of America As Represented By The Secretary Of The Navy Asynchronous Artificial Neural Network Architecture
CN110069987A (en) * 2019-03-14 2019-07-30 中国人民武装警察部队海警学院 Based on the single phase ship detecting algorithm and device for improving VGG network
CN110378469A (en) * 2019-07-11 2019-10-25 中国人民解放军国防科技大学 SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof
CN112037269A (en) * 2020-08-24 2020-12-04 大连理工大学 Visual moving target tracking method based on multi-domain collaborative feature expression
CN112633497A (en) * 2020-12-21 2021-04-09 中山大学 Convolutional pulse neural network training method based on reweighted membrane voltage
CN113255905A (en) * 2021-07-16 2021-08-13 成都时识科技有限公司 Signal processing method of neurons in impulse neural network and network training method
CN113313240A (en) * 2021-08-02 2021-08-27 成都时识科技有限公司 Computing device and electronic device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AI科技评论: "Interview | Hongguan Technology: pioneering an asynchronous AI chip for IoT, a player taking a different path returns fully equipped", HTTPS://WWW.SOHU.COM/A/219890396_651893 *
WANG SQ et al.: "SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array", Journal of Computer Science and Technology *
程海波 et al.: "Research on an asynchronous convolution acceleration algorithm based on 4×4 convolution kernels", Software Engineering and Applications *
赵蓬辉 et al.: "A single-stage detector based on asynchronous convolution decomposition and a shunt structure", Journal of Beijing University of Aeronautics and Astronautics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372568A (en) * 2022-03-21 2022-04-19 深圳时识科技有限公司 Brain-like chip and electronic equipment
CN115329943A (en) * 2022-10-12 2022-11-11 深圳时识科技有限公司 Pooling method in event-driven type chip, chip and electronic device

Also Published As

Publication number Publication date
CN113627603B (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US9224090B2 (en) Sensory input processing apparatus in a spiking neural network
US9129221B2 (en) Spiking neural network feedback apparatus and methods
US9098811B2 (en) Spiking neuron network apparatus and methods
US9183493B2 (en) Adaptive plasticity apparatus and methods for spiking neuron network
CN104685516B (en) Apparatus and method for realizing the renewal based on event in spiking neuron network
CN113627603B (en) Method for realizing asynchronous convolution in chip, brain-like chip and electronic equipment
US9111226B2 (en) Modulated plasticity apparatus and methods for spiking neuron network
US9256215B2 (en) Apparatus and methods for generalized state-dependent learning in spiking neuron networks
US20130297539A1 (en) Spiking neural network object recognition apparatus and methods
US20210304005A1 (en) Trace-based neuromorphic architecture for advanced learning
CN104662526B (en) Apparatus and method for efficiently updating spiking neuron network
US4518866A (en) Method of and circuit for simulating neurons
US8515885B2 (en) Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation
US20150074026A1 (en) Apparatus and methods for event-based plasticity in spiking neuron networks
US20140244557A1 (en) Apparatus and methods for rate-modulated plasticity in a spiking neuron network
US20130073491A1 (en) Apparatus and methods for synaptic update in a pulse-coded network
KR20160076520A (en) Causal saliency time inference
US11017288B2 (en) Spike timing dependent plasticity in neuromorphic hardware
KR20160123309A (en) Event-based inference and learning for stochastic spiking bayesian networks
KR20160125967A (en) Method and apparatus for efficient implementation of common neuron models
CN114781633B (en) Processor fusing artificial neural network and impulse neural network
Gutiérrez-Naranjo et al. Hebbian learning from spiking neural P systems view
Mouraud et al. Damned: A distributed and multithreaded neural event-driven simulation framework
EP3798919A1 (en) Hardware architecture for spiking neural networks and method of operating
Gutiérrez Naranjo et al. A first model for Hebbian learning with spiking neural P systems

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method for implementing asynchronous convolution in chips, brain-like chips, and electronic devices

Granted publication date: 20211224

Pledgee: Industrial Bank Co.,Ltd. Shanghai Hongqiao Branch

Pledgor: Shanghai Shizhi Technology Co.,Ltd.|Chengdu Shizhi Technology Co.,Ltd.

Registration number: Y2024310000093