000082400 001__ 82400
000082400 005__ 20180317093226.0
000082400 037__ $$aBOOK_CHAP
000082400 245__ $$aNeural Network Adaptations to Hardware Implementations
000082400 269__ $$a1997
000082400 260__ $$aNew York$$bInstitute of Physics Publishing and Oxford University Press$$c1997
000082400 336__ $$aBook Chapters
000082400 500__ $$aIDIAP-RR 97-17
000082400 520__ $$aIn order to take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential. However, most standard neural network models are not well suited to hardware implementation, and adaptations are needed. This section gives an overview of the various issues encountered when mapping an ideal neural network model onto a compact and reliable neural network hardware implementation, such as quantization, the handling of nonuniformities and nonideal responses, and restraining computational complexity. Furthermore, a broad range of hardware-friendly learning rules is presented, which allow for simpler and more reliable hardware implementations. The relevance of these neural network adaptations to hardware is illustrated by their application in existing hardware implementations.
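As an illustration of the weight quantization mentioned in the abstract, the sketch below (not taken from the chapter; function name, bit width, and clipping range are illustrative assumptions) shows uniform fixed-point quantization of a weight matrix to a given number of bits, the kind of discretization a compact hardware implementation imposes.

import numpy as np

def quantize_weights(weights, n_bits=8, w_max=1.0):
    # Illustrative uniform fixed-point quantization to n_bits.
    # Assumptions (not from the chapter): symmetric range [-w_max, w_max]
    # and round-to-nearest; actual hardware mappings may differ.
    levels = 2 ** (n_bits - 1) - 1           # number of positive quantization levels
    step = w_max / levels                    # quantization step size
    clipped = np.clip(weights, -w_max, w_max)
    return np.round(clipped / step) * step   # snap each weight to the nearest level

# Example: quantize a small random weight matrix to 4 bits
w = np.random.uniform(-1, 1, size=(3, 3))
w_q = quantize_weights(w, n_bits=4)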
000082400 6531_ $$aneuron
000082400 6531_ $$alearning
000082400 700__ $$aMoerland, Perry
000082400 700__ $$aFiesler, Emile
000082400 720_1 $$aFiesler, Emile$$eed.
000082400 720_1 $$aBeale, R.$$eed.
000082400 773__ $$qE1.2:1-13$$tHandbook of Neural Computation
000082400 8564_ $$uhttp://publications.idiap.ch/downloads/reports/1997/rr97-17.pdf$$zURL
000082400 8564_ $$s261940$$uhttps://infoscience.epfl.ch/record/82400/files/rr97-17.pdf$$zn/a
000082400 909CO $$ooai:infoscience.tind.io:82400$$pchapter$$pSTI
000082400 909C0 $$0252189$$pLIDIAP$$xU10381
000082400 937__ $$aEPFL-CHAPTER-82400
000082400 970__ $$aMoerland-97.1/LIDIAP
000082400 973__ $$aEPFL$$sPUBLISHED
000082400 980__ $$aCHAPTER