Discrete All-Positive Multilayer Perceptrons for Optical Implementation

All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions that are truncated, asymmetric, and have a non-standard gain; the restriction of the network parameters to non-negative values; and the limited accuracy of the weights. In this paper, a backpropagation-based learning rule is presented that compensates for these non-idealities and enables the implementation of all-optical multilayer perceptrons in which learning takes place under the control of a computer. The good performance of this learning rule, even with a small number of weight levels, is illustrated by a series of experiments that include these non-idealities.
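As a rough illustration of the constraints the learning rule has to cope with, the Python/NumPy sketch below applies standard backpropagation and then projects the weights onto a non-negative, discrete set of levels after each update. This is a hypothetical example, not the learning rule presented in the paper; the number of levels, the weight range, and the truncated activation function are all assumptions.

    import numpy as np

    N_LEVELS = 16    # assumed number of discrete weight levels
    W_MAX = 1.0      # assumed upper bound on the (all-positive) weights

    def quantize(w):
        # Clip to [0, W_MAX] and snap to N_LEVELS equidistant levels.
        w = np.clip(w, 0.0, W_MAX)
        step = W_MAX / (N_LEVELS - 1)
        return np.round(w / step) * step

    def activation(x):
        # Truncated saturating nonlinearity (illustrative stand-in for a
        # non-ideal optical activation function).
        return np.clip(x, 0.0, 1.0)

    def act_deriv(x):
        return ((x > 0.0) & (x < 1.0)).astype(float)

    def train_step(x, target, W1, W2, lr=0.05):
        # Forward pass through a one-hidden-layer perceptron.
        a1 = W1 @ x
        h = activation(a1)
        a2 = W2 @ h
        y = activation(a2)

        # Standard backpropagation of the squared error.
        delta2 = (y - target) * act_deriv(a2)
        delta1 = (W2.T @ delta2) * act_deriv(a1)

        # Gradient step followed by projection onto the feasible
        # (non-negative, discrete) weight set.
        W2 = quantize(W2 - lr * np.outer(delta2, h))
        W1 = quantize(W1 - lr * np.outer(delta1, x))
        return W1, W2

In an actual optically implemented network, the forward pass would presumably be carried out by the optical hardware, while the controlling computer computes the updates and writes the quantized weights back to the optical elements.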


Year: 1997
Publisher: IDIAP
Note: Accepted for publication in Optical Engineering



