000202143 001__ 202143
000202143 005__ 20190416055543.0
000202143 037__ $$aREP_WORK
000202143 245__ $$aSemi-Supervised Facial Animation Retargeting
000202143 269__ $$a2014
000202143 260__ $$c2014
000202143 300__ $$a6
000202143 336__ $$aReports
000202143 520__ $$aThis paper presents a system for facial animation retargeting that allows learning a high-quality mapping between motion capture data and arbitrary target characters. We address one of the main challenges of existing example-based retargeting methods, the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We show that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e. facial expressions in the source or target space without corresponding poses. In contrast to labeled samples that require time-consuming and error-prone manual character posing, unlabeled samples are easily obtained as frames of motion capture recordings or existing animations of the target character. Our system exploits this information by learning a shared latent space between motion capture and character parameters in a semi-supervised manner. We show that this approach is resilient to noisy input and missing data and significantly improves retargeting accuracy. To demonstrate its applicability, we integrate our algorithm in a performance-driven facial animation system.
000202143 6531_ $$afacial animation retargeting
000202143 700__ $$0244485$$g179749$$aBouaziz, Sofien
000202143 700__ $$0244286$$g196500$$aPauly, Mark
000202143 8564_ $$uhttps://infoscience.epfl.ch/record/202143/files/SSGPLVM_EPFL14.pdf$$zn/a$$s5574050$$yn/a
000202143 909C0 $$xU12168$$0252282$$pLGG
000202143 909CO $$ooai:infoscience.tind.io:202143$$qGLOBAL_SET$$pIC$$preport
000202143 917Z8 $$x179749
000202143 937__ $$aEPFL-REPORT-202143
000202143 973__ $$aEPFL
000202143 980__ $$aREPORT