Title: Kernel Memory Networks: A Unifying Framework for Memory Modeling
Authors: Iatropoulos, Georgios; Brea, Johanni; Gerstner, Wulfram
Date: 2022-12-12
Year: 2022
DOI: 10.48550/arxiv.2208.09416
Handle: https://infoscience.epfl.ch/handle/20.500.14299/193152
Abstract: We consider the problem of training a neural network to store a set of patterns with maximal noise robustness. A solution, in terms of optimal weights and state update rules, is derived by training each individual neuron to perform either kernel classification or interpolation with a minimum weight norm. By applying this method to feed-forward and recurrent networks, we derive optimal models, termed kernel memory networks, which include as special cases many of the hetero- and auto-associative memory models proposed in recent years, such as modern Hopfield networks and Kanerva's sparse distributed memory. We modify Kanerva's model and demonstrate a simple way to design a kernel memory network that can store an exponential number of continuous-valued patterns with a finite basin of attraction. The framework of kernel memory networks offers a simple and intuitive way to understand the storage capacity of previous memory models, and allows for new biological interpretations in terms of dendritic non-linearities and synaptic cross-talk.
Note: 24 pages, 5 figures. Camera-ready version for NeurIPS 2022.
Type: text::journal::journal article::research article
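The abstract describes training each neuron as a minimum-norm kernel classifier or interpolator over the stored patterns. The sketch below is not taken from the paper; it only illustrates the general idea of a kernel-based auto-associative memory. The RBF kernel, the ridge term, and the KernelAutoMemory class are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

class KernelAutoMemory:
    """Hypothetical auto-associative memory: each neuron's readout is a
    kernel interpolant fitted to the stored patterns (minimum-norm solution
    in the kernel's feature space, via the representer theorem)."""

    def __init__(self, patterns, gamma=0.5, ridge=1e-8):
        self.P = np.asarray(patterns, dtype=float)   # stored patterns, shape (M, N)
        self.gamma = gamma
        K = rbf_kernel(self.P, self.P, gamma)
        # Coefficients alpha = (K + ridge*I)^{-1} P; the small ridge term
        # is added only for numerical stability of the solve.
        self.alpha = np.linalg.solve(K + ridge * np.eye(len(self.P)), self.P)

    def step(self, x):
        """One state update: evaluate every neuron's kernel interpolant."""
        return rbf_kernel(np.atleast_2d(x), self.P, self.gamma) @ self.alpha

    def recall(self, x, n_steps=20):
        """Iterate the update; with well-separated patterns the state tends
        to converge to the stored pattern nearest the initial cue."""
        x = np.asarray(x, dtype=float)
        for _ in range(n_steps):
            x = self.step(x).ravel()
        return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1.0, 1.0], size=(10, 50))       # 10 random binary patterns
    mem = KernelAutoMemory(patterns, gamma=0.05)
    noisy = patterns[3] * rng.choice([1, 1, 1, 1, -1], 50)   # ~20% of bits flipped
    recovered = mem.recall(noisy)
    print("cosine overlap with stored pattern:",
          float(recovered @ patterns[3])
          / (np.linalg.norm(recovered) * np.linalg.norm(patterns[3])))
```

With a sufficiently sharp kernel each stored pattern is close to a fixed point of this iteration, which is one concrete way to picture the attractor-memory behaviour the abstract attributes to kernel memory networks.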