Optimization of high order perceptrons
Neural networks are widely applied in research and industry. However, their broader application is hampered by various technical details, among them the setting of several training parameters and the choice of the network topology. The subject of this dissertation is therefore the elimination or automatic determination of learning parameters that are usually user-specified, together with a discussion of suitable application domains for neural networks. Among the training parameters, special attention is given to the learning rate, the gain of the sigmoidal function, and the initial weight range. A theorem is proven which permits the elimination of one of these parameters. Furthermore, it is shown that for high order perceptrons, very small random initial weights are usually optimal in terms of both training time and generalization. Another important problem in the application of neural networks is finding a network topology that suits a given data set. This favors high order perceptrons over several other neural network architectures, as they do not require layers of hidden neurons. However, the order and the connectivity of the network still have to be determined, which is possible by two approaches: the first removes connections from an initially large network during training, while the second gradually increases the network size. Both types of approach are studied, and corresponding algorithms are developed and applied to high order perceptrons. The advantages and disadvantages of both approaches are discussed and their performance is compared experimentally. An outlook on future research on the interpretation and analysis of high order perceptrons and on their feasibility is then given. Finally, high order perceptrons and the developed algorithms are applied to a number of real-world applications, and, in order to demonstrate their efficiency, the performances obtained are compared to those of other approaches.
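The abstract does not state the theorem itself. One standard form of such an equivalence, given here as an assumption consistent with published results on the interchangeability of gain and learning rate in gradient-trained networks, is the following. Since the sigmoid with gain $\gamma$ satisfies

\[
  f_\gamma(a) \;=\; \frac{1}{1 + e^{-\gamma a}} \;=\; f_1(\gamma a),
\]

a perceptron with gain $\gamma$, initial weights $w$, and learning rate $\eta$ produces the same outputs and follows the same gradient-descent trajectory as one with gain $1$, initial weights $\gamma w$, and learning rate $\gamma^2 \eta$:

\[
  (\gamma,\; w,\; \eta) \;\equiv\; (1,\; \gamma w,\; \gamma^2 \eta).
\]

Under an equivalence of this form, the gain can be fixed to 1 and absorbed into the two remaining parameters, which is one way such a parameter could be eliminated.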
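To make the architecture concrete, the following is a minimal sketch of a second-order perceptron: inputs are expanded into monomials up to a given order, so a single trainable layer suffices and no hidden neurons are needed. The function names (expand, train, prune), the delta-rule loss, and the magnitude-based pruning threshold are illustrative assumptions, not the dissertation's exact algorithms.

import numpy as np
from itertools import combinations_with_replacement

def expand(x, order=2):
    """Map an input vector to all monomials up to the given order,
    plus a bias term; this expansion replaces hidden layers."""
    terms = [1.0]  # bias
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            terms.append(np.prod(x[list(idx)]))
    return np.array(terms)

def train(X, y, order=2, lr=0.5, epochs=5000, seed=0):
    Z = np.array([expand(x, order) for x in X])
    rng = np.random.default_rng(seed)
    # Very small random initial weights, in line with the thesis finding
    # that such initializations are usually optimal.
    w = rng.uniform(-1e-3, 1e-3, Z.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Z @ w))          # sigmoid output
        w += lr * Z.T @ ((y - p) * p * (1 - p))   # delta rule (MSE gradient)
    return w

def prune(w, threshold=1e-2):
    """One simple pruning criterion: zero out small-magnitude connections.
    The complementary, growing approach would instead start at low order
    and gradually add terms to the expansion."""
    return np.where(np.abs(w) < threshold, 0.0, w)

# Usage: learn XOR, which a first-order perceptron cannot represent
# but which is linearly separable in the second-order expansion.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w = prune(train(X, y))
print(np.round(1 / (1 + np.exp(-np.array([expand(x) for x in X]) @ w))))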
Dissertation number 1633