Title: Overcoming Inaccuracies in Optical Multilayer Perceptrons
Authors: Moerland, Perry; Fiesler, Emile; Saxena, Indu
Published: 1996
Deposited: 2006-03-10
DOI: 10.1109/ISNFS.1996.603821
URI: https://infoscience.epfl.ch/handle/20.500.14299/227597
Type: conference paper

Abstract: All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions which are truncated, asymmetric, and have a non-standard gain; the restriction of the network parameters to non-negative values; and the use of limited accuracy for the weights. In this paper, an adaptation of the backpropagation learning rule is presented that compensates for these three non-idealities. The good performance of this learning rule is illustrated by a series of experiments. This algorithm enables the implementation of all-optical multilayer perceptrons where learning occurs under control of a computer.

Keywords: optical multilayer perceptron; neuron; non-negative neural networks; liquid crystal light valve (LCLV); learning; weight discretization; activation function
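To give a feel for two of the non-idealities the abstract names (non-negative network parameters and limited weight accuracy), below is a minimal illustrative sketch, not the paper's actual learning rule: a tiny two-layer perceptron trained with standard backpropagation, where after each update the weights are projected onto a hardware-feasible set by clipping to non-negative values and rounding to a small number of discrete levels. All names (`quantize`, the network sizes, the learning rate) are assumptions made for this sketch.

```python
import numpy as np

def quantize(w, levels=16, w_max=1.0):
    # Round weights to a limited number of non-negative levels,
    # mimicking limited weight accuracy in an optical implementation.
    step = w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, 0.0, w_max)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Tiny 2-2-1 network; weights start on the discrete non-negative grid.
W1 = quantize(rng.uniform(0.0, 1.0, (2, 2)))
W2 = quantize(rng.uniform(0.0, 1.0, (1, 2)))

x = np.array([0.5, 0.8])
target = np.array([0.8])
lr = 0.1

for _ in range(100):
    # Forward pass.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # Standard backpropagation deltas for a squared-error loss.
    delta_out = (y - target) * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # Gradient step, then project onto the feasible weight set:
    # non-negative values on the available discrete levels.
    W2 = quantize(W2 - lr * np.outer(delta_out, h))
    W1 = quantize(W1 - lr * np.outer(delta_hid, x))
```

After training, every weight remains non-negative and on the 16-level grid by construction; the paper itself goes further, also compensating for truncated, asymmetric activation functions with non-standard gain.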