Abstract

All-optical multilayer perceptrons differ in several ways from the ideal neural network model. Examples are the use of non-ideal activation functions that are truncated, asymmetric, and have a non-standard gain; the restriction of the network parameters to non-negative values; and the use of limited accuracy for the weights. In this paper an adaptation of the backpropagation learning rule is presented that compensates for these three non-idealities. The good performance of this learning rule is illustrated by a series of experiments. This algorithm enables the implementation of all-optical multilayer perceptrons in which learning occurs under the control of a computer.
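
The abstract does not spell out the adapted learning rule, but the three constraints it names can be made concrete with a rough sketch. The Python fragment below is purely illustrative and is not the paper's algorithm: the truncated, asymmetric activation, the quantization scheme, and the `train_step` helper are all assumptions chosen to show where the three non-idealities (non-ideal activation, non-negative weights, limited weight accuracy) enter a backpropagation step.

```python
import numpy as np

# Hypothetical truncated, asymmetric activation with adjustable gain; the
# paper's exact activation function is not specified here, so this is an
# illustrative stand-in.
def activation(x, gain=2.0, low=0.1, high=0.9):
    y = 1.0 / (1.0 + np.exp(-gain * x))   # non-standard gain
    return np.clip(y, low, high)          # truncation to an asymmetric range

def quantize(w, levels=64, w_max=1.0):
    """Round weights to a limited number of discrete non-negative levels
    (models both the non-negativity and limited-accuracy constraints)."""
    step = w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, 0.0, w_max)

def train_step(w1, w2, x, target, lr=0.1):
    # Forward pass through a one-hidden-layer perceptron.
    h = activation(w1 @ x)
    y = activation(w2 @ h)

    # Plain backpropagation gradients; the paper's adapted rule would
    # additionally compensate for the truncated, asymmetric activation.
    # The sigmoid-style derivative y * (1 - y) is an assumption.
    err = y - target
    dy = err * y * (1.0 - y)
    dh = (w2.T @ dy) * h * (1.0 - h)

    # Update, then re-impose the optical constraints on the stored weights.
    w2 = quantize(w2 - lr * np.outer(dy, h))
    w1 = quantize(w1 - lr * np.outer(dh, x))
    return w1, w2
```

In such a scheme the high-precision gradient computation would run on the controlling computer, while only the quantized, non-negative weights are ever loaded into the optical hardware, which matches the abstract's description of learning under computer control.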
