Abstract

Neuromorphic systems provide brain-inspired methods of computing. In a neuromorphic architecture, inputs are processed by a network of neurons that receive operands through synaptic interconnections tuned in the process of learning. Neurons act simultaneously as asynchronous computational and memory units, which leads to a high degree of parallelism. Furthermore, owing to developments in novel materials, memristive devices have been proposed for area- and energy-efficient mixed digital-analog implementations of neurons and synapses. In this dissertation, we propose neuromorphic architectures based on phase-change memristors combined with biologically inspired synaptic learning rules, and we experimentally demonstrate their pattern- and feature-learning capabilities.

Firstly, by exploiting the physical properties of phase-change devices, we propose neuromorphic building blocks comprising phase-change-based neurons and synapses operating according to an unsupervised local learning rule. At the same time, we introduce multiple enhancements for pattern learning: an integration threshold for the phase-change soma to ensure noise-robust operation; a selective synaptic depression mechanism to limit the negative impact of the asymmetric conductance response of phase-change synapses during learning; a Winner-Take-All (WTA) mechanism with level-tuned neurons that consumes less power than the classic lateral-inhibition WTA; and a learning WTA that enhances the quality of pattern visualization. Experimental results demonstrate the capabilities of the proposed architectures. In particular, a neuron with phase-change synapses was shown to learn and relearn patterns of correlated activity. Furthermore, an all-phase-change neuron with a record number of one million synapses successfully detected and visualized weakly correlated patterns. Lastly, a network of all-phase-change neurons operating with level-tuned neurons accurately learned multiple patterns.

Secondly, to scale up the proposed architectures, we identify the need to improve the knowledge representation so that features, rather than patterns, are learned. We determine the key role of feedback links in controlling the learning process, and we combine intraneuronal with interneuronal feedback: intraneuronal feedback determines what each neuron learns, whereas interneuronal feedback determines how information is distributed between the neurons. We propose two feature-learning architectures: an architecture with interneuronal feedback to the learning rule, and an architecture inspired by the biological observation of synaptic competition for learning-related proteins. Furthermore, we introduce a model of synaptic competition that guides the learning and detects novelty in the input, which is then used to dynamically adjust the size of the network. In a series of benchmarks on different feature types, synaptic competition outperformed other common methods while simultaneously adjusting the network to the optimal size. Finally, it was the only method that succeeded on a challenging dataset that violates the common machine learning assumption of independent and identically distributed (i.i.d.) input presentation.

To conclude, we have proposed phase-change-based neuromorphic architectures and realized them in a large-scale prototype platform. The experimental results demonstrate pattern- and feature-learning capabilities and constitute an important step towards designing unsupervised online-learning neuromorphic systems.
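
The gist of the unsupervised local learning rule and of the asymmetric conductance response can be conveyed with a small software analogue. The Python sketch below is a minimal, hypothetical illustration, not the dissertation's hardware implementation: a leaky integrate-and-fire soma with an integration threshold, synapses that potentiate in small gradual steps (crystallization-like SET) and depress abruptly to the minimum conductance (melt-quench-like RESET), and a crude stand-in for selective depression. All names and parameter values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SYNAPSES = 1024          # synapses feeding one soma
G_MIN, G_MAX = 0.0, 1.0    # normalized conductance bounds
DG_SET = 0.02              # small gradual potentiation step (SET)
THETA = 40.0               # integration (firing) threshold of the soma
LEAK = 0.9                 # membrane leak factor per time step

g = rng.uniform(G_MIN, 0.2, N_SYNAPSES)   # initial conductances
v = 0.0                                   # membrane potential


def step(spikes: np.ndarray) -> bool:
    """Integrate one binary input frame; return True if the soma fires."""
    global v
    v = LEAK * v + g @ spikes
    if v < THETA:              # integration threshold for noise robustness
        return False
    v = 0.0                    # reset the soma after firing
    # Local, unsupervised update at the firing event: inputs that were
    # active are potentiated gradually; a subset of the inactive inputs
    # is depressed abruptly to G_MIN, mimicking the asymmetric (gradual
    # SET vs. abrupt RESET) phase-change conductance response.
    active = spikes > 0
    g[active] = np.minimum(g[active] + DG_SET, G_MAX)
    inactive = np.flatnonzero(~active)
    n_depress = len(inactive) // 10        # crude stand-in for selectivity
    if n_depress > 0:
        g[rng.choice(inactive, size=n_depress, replace=False)] = G_MIN
    return True
```

In such a sketch, depressing only a subset of synapses per firing event keeps the abrupt RESET from erasing previously learned structure all at once, which is the kind of effect a selective depression mechanism targets.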
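
The power argument for level-tuned neurons can also be made concrete with a toy model. In a lateral-inhibition WTA, every firing candidate broadcasts inhibitory spikes to its peers; with level tuning, winner selection needs no inhibitory traffic, because each neuron is assumed here to respond only within its own band of integrated activity. The band edges below are hypothetical, chosen only to show the selection logic.

```python
import numpy as np

# Hypothetical level bands: neuron k responds iff the integrated
# activity falls inside [BANDS[k], BANDS[k + 1]).
BANDS = np.array([10.0, 20.0, 30.0, 40.0, np.inf])


def level_tuned_winner(activity: float) -> int | None:
    """Select at most one winner without exchanging inhibitory spikes."""
    k = int(np.searchsorted(BANDS, activity, side="right")) - 1
    return k if 0 <= k < len(BANDS) - 1 else None
```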
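
Finally, the way synaptic competition can double as a novelty detector that grows the network is easy to convey in simplified form. The sketch below is a strongly reduced, hypothetical analogue of the mechanism described above: each allocated neuron holds a normalized weight vector, the best-matching neuron learns, and an input that no neuron matches well enough triggers the allocation of a new neuron. The similarity measure, threshold, and learning rate are all assumptions.

```python
import numpy as np

NOVELTY_THRESHOLD = 0.7    # minimum cosine similarity to count as known
LEARNING_RATE = 0.1

weights: list[np.ndarray] = []   # one weight vector per allocated neuron


def present(x: np.ndarray) -> int:
    """Present one input; return the index of the responding neuron."""
    x = x / (np.linalg.norm(x) + 1e-12)
    if weights:
        sims = np.array([w @ x for w in weights])
        best = int(np.argmax(sims))
        if sims[best] >= NOVELTY_THRESHOLD:
            # Known feature: only the winner learns, moving its weights
            # toward the input (competition for the learning resource).
            w = weights[best] + LEARNING_RATE * (x - weights[best])
            weights[best] = w / np.linalg.norm(w)
            return best
    # Novel input: dynamically grow the network by one neuron.
    weights.append(x.copy())
    return len(weights) - 1
```

In this toy form, growth happens only on novelty, so the network size tracks the number of distinct features in the stream; this also hints at why such a scheme need not rely on i.i.d. input presentation.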
