Gradient-adjusted Incremental Target Propagation Provides Effective Credit Assignment in Deep Neural Networks
Abstract: Many of the recent advances in the field of artificial intelligence have been fueled by the
highly successful backpropagation of error (BP) algorithm, which efficiently solves the
credit assignment problem in artificial neural networks. However, it is unlikely that BP is
implemented in its usual form within biological neural networks, because of its reliance on
non-local information in propagating error gradients. Since biological neural networks are
capable of highly efficient learning, and since the responses of BP-trained models can be
related to neural responses, it seems reasonable that a biologically viable approximation of BP
underlies synaptic plasticity in the brain. Gradient-adjusted incremental target propagation
(GAIT-prop, or GP for short) has recently been derived directly from BP and has been
shown to successfully train networks in a more biologically plausible manner. However,
so far, GP has only been shown to work on relatively low-dimensional problems, such as
handwritten-digit recognition. This work addresses some of the scaling issues of GP and
shows that it performs effective multi-layer credit assignment in deeper networks and on the
much more challenging ImageNet dataset.
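For intuition, below is a minimal sketch of the target-propagation idea underlying GP, using a toy two-layer invertible tanh network. The layer sizes, step size gamma, and helper names (forward, inverse) are illustrative assumptions, and the per-layer gradient adjustment that makes GP's updates coincide with BP's is omitted for brevity; this is a schematic illustration, not the authors' exact algorithm.

    # Schematic sketch of incremental target propagation in a toy
    # invertible two-layer tanh network. All names and constants here
    # are illustrative; GP's gradient adjustment is omitted for brevity.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(W, h):
        return np.tanh(W @ h)

    def inverse(W, y):
        # Exact inverse of a square tanh layer (assumes W is invertible).
        return np.linalg.solve(W, np.arctanh(np.clip(y, -0.999, 0.999)))

    W1 = 0.5 * rng.normal(size=(4, 4))   # square layers so inverses exist
    W2 = 0.5 * rng.normal(size=(4, 4))
    x, y_true = rng.normal(size=4), 0.1 * rng.normal(size=4)
    gamma, lr = 0.01, 0.1

    h1 = forward(W1, x)
    h2 = forward(W2, h1)

    # Output target: a small ("incremental") step down the gradient of
    # the squared error 0.5 * ||h2 - y_true||^2.
    t2 = h2 - gamma * (h2 - y_true)
    # Propagate the target backward through the layer inverse.
    t1 = inverse(W2, t2)

    # Purely local delta-rule updates: each layer nudges its output
    # toward its own target using only locally available quantities.
    for W, h_in, h_out, t in ((W2, h1, h2, t2), (W1, x, h1, t1)):
        delta = (h_out - t) * (1.0 - h_out**2)   # tanh derivative
        W -= lr * np.outer(delta, h_in)

Because each update depends only on a layer's own input, output, and target, the weight changes are local; GP's contribution is the gradient-based adjustment of these targets so that the resulting local updates match those of BP.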
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: De-anonymized; date added; style changed to tmlr[accepted]; appendix with code link added
Code: https://github.com/artcogsys/GAIT_prop_scaling
Assigned Action Editor: ~Robert_Legenstein1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 503