000126392 001__ 126392
000126392 005__ 20190316234331.0
000126392 037__ $$aCONF
000126392 245__ $$aA Generative Model for True Orthorectification
000126392 269__ $$a2008
000126392 260__ $$c2008
000126392 336__ $$aConference Papers
000126392 520__ $$aOrthographic images constitute an efficient and economical way to represent aerial images. This kind of representation makes it possible to measure two-dimensional objects and to relate them to Geographic Information Systems. This paper deals with the computation of a true orthographic image from a set of overlapping perspective images. These images, together with the internal and external calibration, are the only input to our approach. These few requirements constitute a significant advantage over systems in which a digital surface model (DSM), e.g. one provided by LIDAR data, is necessary. We use a Bayesian approach and define a generative model of the input images. In this model, the input images are regarded as noisy measurements of an underlying true and hence unknown orthoimage. These measurements are obtained through an image formation process (generative model) that involves, apart from the true orthoimage, several additional parameters. Our goal is to invert the image formation process by estimating those parameters that make our input images most likely. We present results on aerial images of a complex urban environment.
000126392 6531_ $$atrue orthoimage
000126392 6531_ $$amulti-view stereo
000126392 6531_ $$aDSM
000126392 6531_ $$agenerative models
000126392 700__ $$0244088$$g182325$$aStrecha, Christoph
000126392 700__ $$aVan Gool, Luc
000126392 700__ $$0240252$$g112366$$aFua, Pascal
000126392 7112_ $$d2008$$cBeijing$$aISPRS Congress
000126392 773__ $$jXXXVII$$tISPRS$$kPart B3a$$q303-308
000126392 8564_ $$uhttp://www.isprs.org/congresses/beijing2008/proceedings/tc3a.html$$zURL
000126392 8564_ $$uhttps://infoscience.epfl.ch/record/126392/files/isprs2008.pdf$$zn/a$$s1316853
000126392 909C0 $$xU10659$$0252087$$pCVLAB
000126392 909CO $$qGLOBAL_SET$$pconf$$ooai:infoscience.tind.io:126392$$pIC
000126392 917Z8 $$x112366
000126392 937__ $$aCVLAB-CONF-2008-011
000126392 973__ $$rREVIEWED$$sPUBLISHED$$aEPFL
000126392 980__ $$aCONF