Overcoming Inaccuracies in Optical Multilayer Perceptrons

All-optical multilayer perceptrons differ in several ways from the ideal neural network model: they use non-ideal activation functions that are truncated, asymmetric, and have a non-standard gain; their network parameters are restricted to non-negative values; and their weights are stored with limited accuracy. In this paper an adaptation of the backpropagation learning rule is presented that compensates for these three non-idealities. The good performance of this learning rule is illustrated by a series of experiments. The algorithm enables the implementation of all-optical multilayer perceptrons in which learning occurs under the control of a computer.
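
As a rough illustration of the three non-idealities listed above (and not of the paper's actual learning rule), the sketch below trains a single layer with an assumed truncated, asymmetric activation of non-standard gain, clips the weights to non-negative values after each update, and quantizes them to a limited number of levels. The specific activation form, gain, weight range, and resolution are assumptions chosen only for illustration.

```python
# Illustrative sketch only: gradient-descent training under three optical
# non-idealities of the kind described in the abstract:
#   1. a truncated, asymmetric activation with non-standard gain (assumed form),
#   2. non-negative weights (enforced by clipping after each update),
#   3. limited weight accuracy (simulated by quantizing to discrete levels).
import numpy as np

GAIN = 0.5      # assumed non-standard gain
LEVELS = 64     # assumed weight resolution (6-bit)
W_MAX = 1.0     # weights assumed restricted to [0, W_MAX]

def activation(x):
    # Assumed truncated, asymmetric sigmoid: shifted, then clipped to [0, 0.7].
    return np.clip(1.0 / (1.0 + np.exp(-GAIN * (x - 1.0))) - 0.2, 0.0, 0.7)

def activation_deriv(x):
    # Numerical derivative of the non-ideal activation (an analytic form
    # could be substituted if the exact device response is known).
    eps = 1e-4
    return (activation(x + eps) - activation(x - eps)) / (2 * eps)

def quantize(w):
    # Simulate limited weight accuracy by rounding to LEVELS discrete values.
    return np.round(w / W_MAX * (LEVELS - 1)) / (LEVELS - 1) * W_MAX

def train_step(w, x, target, lr=0.1):
    # One delta-rule update for a single layer: x has shape (n_in,),
    # w has shape (n_in, n_out), target has shape (n_out,).
    net = x @ w
    y = activation(net)
    delta = (target - y) * activation_deriv(net)
    w = w + lr * np.outer(x, delta)
    w = np.clip(w, 0.0, W_MAX)      # keep weights non-negative
    return quantize(w)              # store weights at limited accuracy
```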


Published in:
Proceedings of the First International Symposium on Neuro-Fuzzy Systems (AT'96)
Presented at:
Proceedings of the First International Symposium on Neuro-Fuzzy Systems (AT'96), Lausanne, Switzerland
Year:
1996
Publisher:
AATI



