Novel computational paradigms for memristive architectures
Disciplines
Computer Sciences (100%)
Keywords
- Self-Organization, Neural Networks, STDP, Neuromorphic Engineering, Memristor, Artificial Cognition
Overall goal of the project: This project aspires to develop neuromorphic hardware with memristor-based synaptic elements, capable of learning and adapting to stimuli by leveraging the latest developments of five leading European institutions in neuroscience, nanotechnology, modeling and circuit design. The non-linear dynamics and the plasticity of the recently realized memristor [1,2] have been shown to support biologically inspired synaptic plasticity rules such as Spike-Timing-Dependent Plasticity (STDP), making this extremely compact device an excellent candidate for realizing large-scale self-adaptive circuits; a step towards "autonomous cognitive systems". The intrinsic properties of real neurons and synapses, as well as their organization into neural circuits, will be exploited to optimize CMOS-based neurons, memristive grids, and the integration of the two into real-time, biophysically realistic neuromorphic systems. Finally, novel computing concepts for these architectures will be developed and tested.

Specific contributions from TU Graz in this context: It is commonly acknowledged that engineers would do extremely well to learn from nature, since biological systems are efficient, robust, adaptable, real-time, effective, scalable and reliable. In the 1980s, Carver Mead was one of the first to follow this insight, exploiting analog circuitry in topologies that mimic neurobiological architectures [3], an approach coined the neuromorphic doctrine; similar approaches for developing bio-inspired and biomimetic systems have appeared more recently. Still, the cognitive abilities of biological neural systems have no counterpart in artificial computing systems so far, partly due to the lack of well-founded theories of computation and self-organization in nervous systems. In recent years, novel concepts based on probabilistic computation, approximate inference, and sampling in neural networks have attracted the interest of researchers in neuroscience, cognitive science, and computer science [4,5]. For example, it was shown that STDP can be utilized by spiking neural networks for self-organization such that a network can infer hidden causes of its sensory input [6].

Based on these and similar recent results, we will develop and investigate, through theoretical analysis and extensive computer simulation, novel paradigms for probabilistic neural computation and self-organization through STDP. The memristive synapse opens the possibility of large-scale biologically inspired neural network implementations with minimal size requirements for those elements in the circuit that are most numerous and therefore most space-intensive: plastic synaptic connections. First attempts will therefore be made to adapt these paradigms to this new generation of neuromorphic hardware. The specification of the CMOS/memristive circuits will serve as the basis for these investigations. We will further identify possible plasticity mechanisms that would increase the learning capabilities of the system and discuss within the consortium possible implementation strategies for such mechanisms in a hybrid CMOS/memristive design. Finally, we will investigate possible applications of such self-adapting circuits and test their functionality in computer simulations.

[1] J.J. Yang, M.D. Pickett, X. Li, D.A.A. Ohlberg, D.R. Stewart and R.S. Williams, "Memristive switching mechanism for metal/oxide/metal nanodevices," Nature Nanotechnology, vol. 3, 2008.
[2] L.O. Chua, "Memristor - The missing circuit element," IEEE Transactions on Circuit Theory, vol. CT-18, no. 5, 1971.
[3] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[4] K.P. Koerding and D.M. Wolpert, "Bayesian integration in sensorimotor learning," Nature, vol. 427, pp. 244-247, 2004.
[5] N. Chater, J. Tenenbaum, and A. Yuille, "Probabilistic models of cognition: Conceptual foundations," Trends in Cognitive Sciences, special issue: Probabilistic models of cognition, vol. 10, no. 7, pp. 287-291, July 2006.
[6] B. Nessler, M. Pfeiffer, and W. Maass, "STDP enables spiking neurons to detect hidden causes of their inputs," in Proc. of NIPS 2009: Advances in Neural Information Processing Systems, vol. 22, pp. 1357-1365. MIT Press, 2010.
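As a concrete reference point for the pair-based STDP rule mentioned in the abstract above, the following Python sketch implements the standard exponential STDP window (pre-before-post pairs potentiate, post-before-pre pairs depress). It is a minimal illustration only; the amplitudes and time constants are placeholder values, not parameters of the project's circuits or memristive devices.

```python
import numpy as np

# Pair-based exponential STDP window: the sign of the weight change depends on
# the order of pre- and postsynaptic spikes, its magnitude decays with |delta_t|.
A_PLUS = 0.01     # potentiation amplitude (illustrative value)
A_MINUS = 0.012   # depression amplitude (illustrative value)
TAU_PLUS = 20.0   # potentiation time constant in ms (illustrative value)
TAU_MINUS = 20.0  # depression time constant in ms (illustrative value)

def stdp_update(delta_t: float) -> float:
    """Weight change for a single spike pair, delta_t = t_post - t_pre in ms."""
    if delta_t > 0:       # pre before post -> potentiation
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    elif delta_t < 0:     # post before pre -> depression
        return -A_MINUS * np.exp(delta_t / TAU_MINUS)
    return 0.0

if __name__ == "__main__":
    # Print the STDP window for a few spike-time differences.
    for dt in (-40.0, -10.0, -1.0, 1.0, 10.0, 40.0):
        print(f"delta_t = {dt:6.1f} ms -> delta_w = {stdp_update(dt):+.5f}")
```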
The PNEUMA project developed novel brain-inspired architectures for computation and learning that utilize nano-scale circuit elements, so-called memristors. Standard implementations of computing devices based on complementary metal-oxide-semiconductor (CMOS) technology are rapidly approaching fundamental limitations, since the number of transistors that can be placed on a given unit area (the integration density) cannot exceed fundamental physical limits. There is therefore rising interest in alternative computing architectures based, at least partly, on non-CMOS substrates.

In PNEUMA, we focused on neural architectures that are inspired by the architecture of the brain. CMOS circuits that mimic the biophysical properties of neurons were developed. These were coupled to 2D arrays of memristors that implemented the most salient features of synaptic connections in the brain. The key advantage of this approach is the possibility to use memristive crossbar arrays with integration densities orders of magnitude higher than those of standard CMOS synaptic circuits. One of the most important features of biological neuronal networks is their ability to adapt their operation based on the properties of incoming stimuli (learning through synaptic plasticity). We showed that such adaptation is also possible in the system developed in PNEUMA, since the artificial memristive synapses exhibit plastic properties similar to their biological counterparts.

The Institute for Theoretical Computer Science (TU Graz), leader of the Austrian sub-project, developed a number of novel probabilistic paradigms for computation and learning that are closely related to information processing in biological neuronal networks. Theoretical work provided new concepts for how the major problem of memristive devices, that is, their stochastic behaviour and unreliability, can be mitigated. In fact, we showed that stochastic synaptic plasticity can be utilized to improve the learning capabilities of neural circuits. In collaboration with the other project partners, one such architecture was implemented, with the synaptic connections realized by memristive devices produced within the project. We were able to demonstrate that the system adapts its functionality in a self-organized manner. We thereby provided a proof of concept for memristive brain-inspired computation in a simple setup, paving the way for much larger future systems with brain-like learning capabilities.
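Two technical ideas in the summary above lend themselves to a short illustration: (i) a memristive crossbar stores a synaptic weight matrix as device conductances, so a layer's read-out reduces to a vector-matrix product (Ohm's law plus Kirchhoff current summation), and (ii) the stochastic switching of individual devices can be exploited rather than suppressed, because probabilistic updates still realize the intended weight change in expectation. The Python sketch below shows both points with binary (two-state) memristors; the conductance levels, switching probabilities and learning signal are illustrative assumptions, not parameters of the devices fabricated in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conductance levels of a binary (two-state) memristive device.
G_OFF, G_ON = 1e-6, 1e-4   # siemens, placeholder values

N_IN, N_OUT = 8, 4         # crossbar dimensions: input rows x output columns

# Device states: 0 = low-conductance (OFF), 1 = high-conductance (ON).
state = rng.integers(0, 2, size=(N_IN, N_OUT))

def crossbar_read(v_in: np.ndarray) -> np.ndarray:
    """Analog read-out: each output column collects the summed current
    I_j = sum_i v_i * G_ij."""
    conductance = np.where(state == 1, G_ON, G_OFF)
    return v_in @ conductance

def stochastic_update(target_dw: np.ndarray) -> None:
    """Probabilistic plasticity with binary devices.

    A binary device cannot take graded values, so a desired analog change
    target_dw (entries in [-1, 1]) is realized only in expectation: a
    positive target switches OFF devices ON with probability |target_dw|,
    a negative target switches ON devices OFF with the same probability.
    """
    global state
    p_switch = np.clip(np.abs(target_dw), 0.0, 1.0)
    flip = rng.random(state.shape) < p_switch
    state = np.where(flip & (target_dw > 0), 1, state)
    state = np.where(flip & (target_dw < 0), 0, state)

if __name__ == "__main__":
    v = rng.random(N_IN)                       # input voltage vector
    print("column currents before learning:", crossbar_read(v))
    # Hypothetical learning signal, e.g. derived from an STDP-like rule.
    dw = rng.uniform(-0.5, 0.5, size=(N_IN, N_OUT))
    stochastic_update(dw)
    print("column currents after learning: ", crossbar_read(v))
```

Combining several such stochastic binary devices into one synaptic connection, so that their average conductance approximates a graded weight, is one way to make this scheme practical; this is the idea explored in the compound memristive synapse model listed in the research output below.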
- Technische Universität Graz - 100%
- Robert Plana, Centre National de la Recherche Scientifique - France
- Giacomo Indiveri, University of Zurich - Switzerland
- Chris Toumazou, Imperial College London - United Kingdom
Research Output
- 1218 Citations
- 20 Publications
- 2014: Bill J, "A compound memristive synapse model for statistical learning through STDP in spiking neural networks", Frontiers in Neuroscience, 412 (Journal Article). DOI: 10.3389/fnins.2014.00412
- 2016: Serb A, "Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses", Nature Communications, 12611 (Journal Article). DOI: 10.1038/ncomms12611
- 2015: Kappel D, "Network Plasticity as Bayesian Inference" (Preprint). DOI: 10.48550/arxiv.1504.05143
- 2016: Petrovici M, "Stochastic inference with spiking neurons in the high-conductance state", Physical Review E, 042312 (Journal Article). DOI: 10.1103/physreve.94.042312
- 2016: Petrovici M, "Stochastic inference with spiking neurons in the high-conductance state" (Preprint). DOI: 10.48550/arxiv.1610.07161
- 2016: Petrovici M, "The high-conductance state enables neural sampling in networks of LIF neurons" (Preprint). DOI: 10.48550/arxiv.1601.00909
- 2015: Petrovici M, "The high-conductance state enables neural sampling in networks of LIF neurons", BMC Neuroscience (Journal Article). DOI: 10.1186/1471-2202-16-s1-o2
- 2015: Legenstein R, "Nanoscale connections for brain-like circuits", Nature, 37-38 (Journal Article). DOI: 10.1038/521037a
- 2015: Kappel D, "Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring" (Journal Article)
- 2014: Probst D, "Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons" (Preprint). DOI: 10.48550/arxiv.1410.5212
- 2014: Legenstein R, "Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment", PLoS Computational Biology (Journal Article). DOI: 10.1371/journal.pcbi.1003859
- 2013: Indiveri G, "Integration of nanoscale memristor synapses in neuromorphic computing architectures" (Journal Article)
- 2013: Indiveri G, "Integration of nanoscale memristor synapses in neuromorphic computing architectures", Nanotechnology, 384010 (Journal Article). DOI: 10.1088/0957-4484/24/38/384010
- 2015: Buesing L, "Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition" (Other). DOI: 10.7916/d8862g4x
- 2015: Jordan J, "Deterministic neural networks as sources of uncorrelated noise for probabilistic computations", BMC Neuroscience (Journal Article). DOI: 10.1186/1471-2202-16-s1-p62
- 2015: Kappel D, "Network Plasticity as Bayesian Inference", PLOS Computational Biology (Journal Article). DOI: 10.1371/journal.pcbi.1004485
- 2015: Bill J, "Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition", PLOS ONE (Journal Article). DOI: 10.1371/journal.pone.0134356
- 2015: Probst D, "Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons", Frontiers in Computational Neuroscience, 13 (Journal Article). DOI: 10.3389/fncom.2015.00013
- 2012: Habenschuss S, "Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints" (Journal Article)
- 2013: Petrovici M, "Stochastic inference with deterministic spiking neurons" (Preprint). DOI: 10.48550/arxiv.1311.3211