Vo, 2017 - Google Patents
Implementing the on-chip backpropagation learning algorithm on FPGA architecture (Vo, 2017)
- Document ID: 15789706151130429258
- Author: Vo H
- Publication year: 2017
- Publication venue: 2017 International Conference on System Science and Engineering (ICSSE)
Snippet
Scaling CMOS integrated-circuit technology decreases chip price and increases processing performance in complex applications with re-configurability. Thus, VLSI architecture is a promising candidate for implementing neural network models nowadays …
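The snippet refers to the standard backpropagation learning rule that the paper maps onto FPGA hardware. As a rough floating-point software reference (not the paper's fixed-point hardware design, which is not reproduced here), the forward pass, gradient computation, and on-line weight update for an illustrative 2-2-1 sigmoid network can be sketched as:

```python
# Minimal software sketch of textbook backpropagation for a 2-2-1
# sigmoid network. Network shape, weight layout, and learning rate are
# illustrative assumptions, not values taken from the paper.
import math

def sigmoid(z):
    """Logistic activation used for both layers."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(wh, wo, x):
    """Forward pass. Each row of wh is [w_x0, w_x1, bias]; wo is
    [w_h0, w_h1, bias]. Returns hidden activations and the output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
    y = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
    return h, y

def backprop_grads(wh, wo, x, t):
    """Gradients of E = 0.5 * (t - y)**2 with respect to every weight,
    derived with the chain rule exactly as in standard backpropagation."""
    h, y = forward(wh, wo, x)
    d_o = (y - t) * y * (1.0 - y)                  # dE/d(net_out)
    g_wo = [d_o * h[0], d_o * h[1], d_o]           # output-layer gradients
    g_wh = []
    for i in range(2):
        d_h = d_o * wo[i] * h[i] * (1.0 - h[i])    # dE/d(net_hidden_i)
        g_wh.append([d_h * x[0], d_h * x[1], d_h])
    return g_wh, g_wo

def sgd_step(wh, wo, x, t, lr=0.5):
    """One on-line update, w <- w - lr * dE/dw, returning new weights."""
    g_wh, g_wo = backprop_grads(wh, wo, x, t)
    wh = [[w - lr * g for w, g in zip(row, grow)]
          for row, grow in zip(wh, g_wh)]
    wo = [w - lr * g for w, g in zip(wo, g_wo)]
    return wh, wo
```

In an on-chip FPGA realization, each multiply-accumulate in `forward` and `backprop_grads` typically becomes a fixed-point MAC unit, and the sigmoid and its derivative are usually approximated by a lookup table or piecewise-linear circuit.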
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/04—Architectures, e.g. interconnection topology
- G06N3/0472—Architectures, e.g. interconnection topology using probabilistic elements, e.g. p-rams, stochastic processors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/50—Adding; Subtracting
- G06F7/505—Adding; Subtracting in bit-parallel fashion, i.e. having a different digit-handling circuit for each denomination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5009—Computer-aided design using simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/12—Computer systems based on biological models using genetic models
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/02—Knowledge representation
- G06N5/022—Knowledge engineering, knowledge acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/04—Inference methods or devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computer systems based on specific mathematical models
- G06N7/02—Computer systems based on specific mathematical models using fuzzy logic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
Similar Documents
Publication | Title
---|---
US10740671B2 (en) | Convolutional neural networks using resistive processing unit array
Sarwar et al. | Multiplier-less artificial neurons exploiting error resiliency for energy-efficient neural computing
Liu et al. | A survey of FPGA-based hardware implementation of ANNs
Elmasry | VLSI artificial neural networks engineering
Amrutha et al. | Performance analysis of backpropagation algorithm of artificial neural networks in verilog
Onizawa et al. | In-hardware training chip based on CMOS invertible logic for machine learning
Vo | Implementing the on-chip backpropagation learning algorithm on FPGA architecture
Kim et al. | Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array
Jeyanthi et al. | Implementation of single neuron using various activation functions with FPGA
Muñoz et al. | Hardware opposition-based PSO applied to mobile robot controllers
del Campo et al. | A system-on-chip development of a neuro–fuzzy embedded agent for ambient-intelligence environments
Cho et al. | An on-chip learning neuromorphic autoencoder with current-mode transposable memory read and virtual lookup table
Chen et al. | OCEAN: An on-chip incremental-learning enhanced artificial neural network processor with multiple gated-recurrent-unit accelerators
Adeel et al. | Unlocking the potential of two-point cells for energy-efficient and resilient training of deep nets
Mukhopadhyay et al. | Systematic realization of a fully connected deep and convolutional neural network architecture on a field programmable gate array
Kuninti et al. | Backpropagation algorithm and its hardware implementations: A review
Lehnert et al. | Most resource efficient matrix vector multiplication on FPGAs
Jia et al. | An energy-efficient Bayesian neural network implementation using stochastic computing method
Baek et al. | A memristor-CMOS Braun multiplier array for arithmetic pipelining
Nguyen et al. | A low-power, high-accuracy with fully on-chip ternary weight hardware architecture for Deep Spiking Neural Networks
Thabit et al. | Implementation three-step algorithm based on signed digit number system by using neural network
Bang et al. | An energy-efficient SNN processor design based on sparse direct feedback and spike prediction
Ben-Bright et al. | Taxonomy and a theoretical model for feedforward neural networks
Perez-Garcia et al. | Multilayer perceptron network with integrated training algorithm in FPGA
Abrol et al. | Artificial neural network implementation on FPGA chip