Research Paper

Biologically-inspired neuronal adaptation improves learning in neural networks

Article: 2163131 | Received 05 Sep 2022, Accepted 22 Dec 2022, Published online: 17 Jan 2023

References

  • Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature. 2015;518(7540):529–533.
  • Silver D, Huang A, Maddison CJ, et al. Mastering the game of Go with deep neural networks and tree search. Nature. 2016;529(7587):484–489.
  • Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. La Jolla, CA: California Univ San Diego La Jolla Inst for Cognitive Science; 1985.
  • Crick F. The recent excitement about neural networks. Nature. 1989;337(6203):129–132.
  • Lillicrap TP, Santoro A, Marris L, et al. Backpropagation and the brain. Nat Rev Neurosci. 2020;21(6):335–346.
  • Bengio Y. How auto-encoders could provide credit assignment in deep networks via target propagation. arXiv preprint arXiv:1407.7906; 2014.
  • Hinton GE, McClelland J. Learning representations by recirculation. In: Proceedings of the 1987 International Conference on Neural Information Processing Systems; 1987. p. 358–366.
  • LeCun Y. Modèles connexionnistes de l’apprentissage (connectionist learning models) [PhD thesis]; 1987.
  • Lillicrap TP, Cownden D, Tweed DB, et al. Random synaptic feedback weights support error backpropagation for deep learning. Nat Commun. 2016;7(1):1–10.
  • Movellan JR. Contrastive Hebbian learning in the continuous Hopfield model. In: Connectionist models: proceedings of the 1990 summer school. San Francisco, CA: Morgan Kaufmann; 1991. p. 10–17.
  • O’Reilly RC. Biologically plausible error-driven learning using local activation differences: the generalized recirculation algorithm. Neural Comput. 1996;8(5):895–938.
  • Scellier B, Bengio Y. Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front Comput Neurosci. 2017;11:24.
  • Almeida LB. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In: Caudill M, Butler C, editors. Proceedings of the IEEE First International Conference on Neural Networks; San Diego, CA; 1987. p. 609–618.
  • Baldi P, Pineda F. Contrastive learning and neural oscillations. Neural Comput. 1991;3(4):526–545.
  • Pineda FJ. Generalization of back-propagation to recurrent neural networks. Phys Rev Lett. 1987;59(19):2229.
  • Ernoult M, Grollier J, Querlioz D, et al. Updates of equilibrium prop match gradients of backprop through time in an RNN with static input. Adv Neural Inf Process Syst. 2019;32.
  • Laborieux A, Ernoult M, Scellier B, et al. Scaling equilibrium propagation to deep convnets by drastically reducing its gradient estimator bias. Front Neurosci. 2021;15:129.
  • Scellier B, Bengio Y. Equivalence of equilibrium propagation and recurrent backpropagation. Neural Comput. 2019;31(2):312–329.
  • Luczak A, McNaughton BL, Kubo Y. Neurons learn by predicting future activity. Nat Mach Intell. 2022;4(1):1–11.
  • Benda J. Neural adaptation. Curr Biol. 2021;31(3):R110–R116.
  • Whitmire CJ, Stanley GB. Rapid sensory adaptation redux: a circuit perspective. Neuron. 2016;92(2):298–315.
  • Hertäg L, Durstewitz D, Brunel N. Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise. Front Comput Neurosci. 2014;8:116.
  • Jolivet R, Rauch A, Lüscher HR, et al. Integrate-and-fire models with adaptation are good enough. Adv Neural Inf Process Syst. 2005;18:595–602.
  • Reutimann J, Yakovlev V, Fusi S, et al. Climbing neuronal activity as an event-based cortical representation of time. J Neurosci. 2004;24(13):3295–3303.
  • Stemmler M, Koch C. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nat Neurosci. 1999;2(6):521–527.
  • Fontaine B, Peña JL, Brette R. Spike-threshold adaptation predicted by membrane potential dynamics in vivo. PLoS Comput Biol. 2014;10(4):e1003560.
  • Carandini M, Ferster D. Membrane potential and firing rate in cat primary visual cortex. J Neurosci. 2000;20(1):470–484.
  • Granit R, Kernell D, Shortess G. Quantitative aspects of repetitive firing of mammalian motoneurones, caused by injected currents. J Physiol. 1963;168(4):911–931.
  • Treves A. Learning to predict through adaptation. Neuroinformatics. 2004;2(3):361–365.
  • Treves A. Computational constraints that may have favoured the lamination of sensory cortex. J Comput Neurosci. 2003;14(3):271–282.
  • Treves A. Computational constraints between retrieving the past and predicting the future, and the CA3‐CA1 differentiation. Hippocampus. 2004;14(5):539–556.
  • Treves A. Frontal latching networks: a possible neural basis for infinite recursion. Cogn Neuropsychol. 2005;22(3–4):276–291.
  • Luczak A, Kubo Y. Predictive neuronal adaptation as a basis for consciousness. Front Syst Neurosci. 2021;15. DOI:10.3389/fnsys.2021.767461
  • LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–2324.
  • Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images; 2009.
  • Li F-F. CS231n: convolutional neural networks for visual recognition; 2021 [cited 2021 October 1]. Available from: https://cs231n.github.io/neural-networks-3/#annealing-the-learning-rate
  • Sun J, Niu Z, Innanen KA, et al. A deep learning perspective of the forward and inverse problems in exploration geophysics. In: CSEG GeoConvention; 2019.
  • Li Y, Wei C, Ma T. Towards explaining the regularization effect of initial large learning rate in training neural networks. Adv Neural Inf Process Syst. 2019;32.
  • Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res. 2011;12(7):2121–2159.
  • Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929–1958.
  • Xie X, Seung HS. Equivalence of backpropagation and contrastive Hebbian learning in a layered network. Neural Comput. 2003;15(2):441–454.
  • Luczak A, Hackett TA, Kajikawa Y, et al. Multivariate receptive field mapping in marmoset auditory cortex. J Neurosci Methods. 2004;136(1):77–85.
  • Luczak A, Narayanan NS. Spectral representation—analyzing single-unit activity in extracellularly recorded neuronal data without spike sorting. J Neurosci Methods. 2005;144(1):53–61.
  • Ponjavic-Conte KD, et al. Neural correlates of auditory distraction revealed in theta-band EEG. Neuroreport. 2012;23(4):240–245.
  • Ryait H, Bermudez-Contreras E, Harvey M, et al. Data-driven analyses of motor impairments in animal models of neurological disorders. PLoS Biol. 2019;17(11):e3000516.
  • Schjetnan AGP, Luczak A. Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. J Vis Exp. 2011;56:e3282.
  • Chalmers E, Luczak A. Reinforcement learning with brain-inspired modulation can improve adaptation to environmental changes. arXiv preprint arXiv:2205.09729; 2022.
  • Kubo Y, Chalmers E, Luczak A. Combining backpropagation with equilibrium propagation to improve an actor-critic reinforcement learning framework. Front Comput Neurosci. 2022;16:980613.