000032110 001__ 32110
000032110 005__ 20190316233309.0
000032110 0247_ $$2doi$$a10.5075/epfl-thesis-1633
000032110 02471 $$2nebis$$a1866425
000032110 037__ $$aTHESIS
000032110 041__ $$aeng
000032110 088__ $$a1633
000032110 245__ $$aOptimization of high order perceptrons
000032110 269__ $$a1997
000032110 260__ $$aLausanne$$bEPFL$$c1997
000032110 300__ $$a138
000032110 336__ $$aTheses
000032110 520__ $$aNeural networks are widely applied in research and industry. However, their broader application is hampered by various technical details. Among these details are several training parameters and the choice of the topology of the network. The subject of this dissertation is therefore the elimination and determination of usually user-specified learning parameters. Furthermore, suitable application domains for neural networks are discussed. Among all training parameters, special attention is given to the learning rate, the gain of the sigmoidal function, and the initial weight range. A theorem is proven which permits the elimination of one of these parameters. Furthermore, it is shown that for high order perceptrons, very small random initial weights are usually optimal in terms of training time and generalization. Another important problem in the application of neural networks is to find a network topology that suits a given data set. This favors high order perceptrons over several other neural network architectures, as they do not require layers of hidden neurons. However, the order and the connectivity of a network have to be determined, which can be done in two ways. The first is to remove connections from an initially large network while training it. The other approach is to gradually increase the network size. Both types of approaches are studied, and corresponding algorithms are developed and applied to high order perceptrons. The advantages and disadvantages of both approaches are discussed and their performance is experimentally compared. Then, an outlook on future research on the interpretation and analysis of high order perceptrons and their feasibility is given. Finally, high order perceptrons and the developed algorithms are applied to a number of real-world applications, and, in order to show their efficiency, the obtained performances are compared to those of other approaches.
000032110 6531_ $$aneuron
000032110 6531_ $$alearning
000032110 700__ $$aThimm, Georg
000032110 720_2 $$aKunt, Murat$$edir.
000032110 8564_ $$s6010039$$uhttps://infoscience.epfl.ch/record/32110/files/EPFL_TH1633.pdf$$yTexte intégral / Full text$$zTexte intégral / Full text
000032110 909C0 $$0252623$$pLTS1
000032110 909C0 $$0252189$$pLIDIAP$$xU10381
000032110 909CO $$ooai:infoscience.tind.io:32110$$pthesis$$pSTI$$pDOI$$qDOI2$$qGLOBAL_SET
000032110 918__ $$bSSC$$cIEL
000032110 919__ $$aLTS1
000032110 919__ $$aLIDIAP
000032110 920__ $$b1997
000032110 970__ $$a1633/THESES
000032110 973__ $$aEPFL$$sPUBLISHED
000032110 980__ $$aTHESIS