000126373 001__ 126373
000126373 005__ 20190316234330.0
000126373 037__ $$aCONF
000126373 245__ $$aFeature Harvesting for Tracking-by-Detection
000126373 269__ $$a2006
000126373 260__ $$aBerlin / Heidelberg$$bSpringer$$c2006
000126373 336__ $$aConference Papers
000126373 490__ $$aLecture Notes in Computer Science$$v3953
000126373 520__ $$aWe propose a fast approach to 3D object detection and pose estimation that owes its robustness to a training phase during which the target object slowly moves with respect to the camera. No additional information is provided to the system, save a very rough initialization in the first frame of the training sequence. It can be used to detect the target object in each video frame independently. Our approach relies on a Randomized Tree-based approach to wide baseline feature matching. Unlike previous classification-based approaches to 3-D pose estimation, we do not require an a priori 3-D model. Instead, our algorithm learns both geometry and appearance. In the process, it collects, or harvests, a list of features that can be reliably recognized even when large motions and aspect changes cause complex variations of feature appearances. This is made possible by the great flexibility of Randomized Trees, which lets us add and remove feature points to our list as needed with a minimum amount of extra computation.
000126373 6531_ $$aComputer Vision
000126373 6531_ $$aObject Detection
000126373 6531_ $$aTracking by Detection
000126373 6531_ $$aPose Estimation
000126373 700__ $$aOzuysal, Mustafa
000126373 700__ $$0240235$$aLepetit, Vincent$$g149007
000126373 700__ $$0240254$$aFleuret, Francois$$g146262
000126373 700__ $$0240252$$aFua, Pascal$$g112366
000126373 7112_ $$aEuropean Conference on Computer Vision$$cGraz$$dMay 7-13, 2006
000126373 773__ $$q592-605$$tComputer Vision – ECCV 2006
000126373 8564_ $$uhttp://eccv2006.tugraz.at/$$zURL
000126373 8564_ $$s2243012$$uhttps://infoscience.epfl.ch/record/126373/files/OzuysalLFF06.pdf$$zn/a
000126373 909C0 $$0252087$$pCVLAB$$xU10659
000126373 909CO $$ooai:infoscience.tind.io:126373$$pconf$$pIC$$qGLOBAL_SET
000126373 937__ $$aCVLAB-CONF-2008-002
000126373 973__ $$aEPFL$$rREVIEWED$$sPUBLISHED
000126373 980__ $$aCONF