Buchholz et al., 2007 - Google Patents
Optimal learning rates for Clifford neurons (Buchholz et al., 2007)
- Document ID
- 11139705751550409446
- Author
- Buchholz S
- Tachibana K
- Hitzer E
- Publication year
- 2007
- Publication venue
- Artificial Neural Networks–ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I 17
Snippet
Neural computation in Clifford algebras, which include familiar complex numbers and quaternions as special cases, has recently become an active research field. As always, neurons are the atoms of computation. The paper provides a general notion for the Hessian …
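The snippet refers to a Hessian-based treatment of learning rates for Clifford-valued neurons. As a rough illustration only, not taken from the cited paper, the sketch below trains a single complex-valued neuron (complex numbers being the simplest special case of a Clifford algebra mentioned in the snippet) and picks the gradient-descent step size from the largest eigenvalue of the quadratic loss Hessian; the data, the weight values, and the 1/lambda_max choice are all illustrative assumptions.

```python
# Illustrative sketch, not code from the cited paper.
# A single complex-valued neuron y = w * x is fitted by gradient descent on an
# MSE loss; the learning rate is derived from the largest eigenvalue of the
# Hessian of this quadratic loss, lambda_max = 2 * mean(|x|^2).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data generated by an unknown complex weight (hypothetical values).
n = 200
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w_true = 0.7 - 1.2j
t = w_true * x

def loss_and_grad(w):
    """MSE over real and imaginary parts, and its gradient w.r.t. (Re w, Im w),
    packaged as a single complex number."""
    e = w * x - t
    loss = np.mean(np.abs(e) ** 2)
    grad = 2.0 * np.mean(e * np.conj(x))
    return loss, grad

lam_max = 2.0 * np.mean(np.abs(x) ** 2)  # largest eigenvalue of the loss Hessian
eta = 1.0 / lam_max                      # Hessian-informed learning rate

w = 0.0 + 0.0j
for _ in range(50):
    loss, grad = loss_and_grad(w)
    w = w - eta * grad

print(f"estimated weight: {w}, final loss: {loss:.3e}")
```

Because the Hessian of this toy problem is isotropic, eta = 1/lambda_max recovers the least-squares weight in a single step; more generally, plain gradient descent on a quadratic loss is stable only for eta < 2/lambda_max, which is the kind of bound a Hessian analysis of learning rates targets.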
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/04—Architectures, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
- G06K9/6247—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5009—Computer-aided design using simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
Similar Documents
Publication | Title
---|---
Ding | Least squares parameter estimation and multi-innovation least squares methods for linear fitting problems from noisy data
Zhao et al. | Quantum-assisted Gaussian process regression
Amari | Natural gradient works efficiently in learning
Deschamps et al. | Structure of uniformly continuous quantum Markov semigroups
Buchholz et al. | Optimal learning rates for Clifford neurons
Dong et al. | Reservoir computing meets recurrent kernels and structured transforms
Lee et al. | Structure learning of mixed graphical models
Yang et al. | Explicit approximations for nonlinear switching diffusion systems in finite and infinite horizons
Shi et al. | Independent component analysis
Popa | Dissipativity of impulsive matrix-valued neural networks with leakage delay and mixed delays
Kong et al. | A unified self-stabilizing neural network algorithm for principal and minor components extraction
Hössjer | Coalescence theory for a general class of structured populations with fast migration
Chang | Random Tensor Inequalities and Tail bounds for Bivariate Random Tensor Means, Part I
Zhang et al. | Augmented quaternion extreme learning machine
Favaro et al. | Consistency of subspace methods for signals with almost-periodic components
Lv et al. | Quaternion extreme learning machine
Fu et al. | Theoretical linear convergence of deep unfolding network for block-sparse signal recovery
Qiu et al. | Derivative-enhanced Deep Operator Network
Li et al. | On fluctuations for random band Toeplitz matrices
Kovac et al. | CP decomposition and low-rank approximation of antisymmetric tensors
Hasegawa et al. | Fluctuations of Marchenko–Pastur limit of random matrices with dependent entries
Murshed et al. | Projection assisted dynamic mode decomposition of large scale data
Park et al. | Adaptive natural gradient method for learning neural networks with large data set in mini-batch mode
Zhao et al. | Generalized extreme learning machine acting on a metric space
Jankovic et al. | A new probabilistic approach to independent component analysis suitable for on-line learning in artificial neural networks