000192626 001__ 192626
000192626 005__ 20180913062234.0
000192626 037__ $$aCONF
000192626 245__ $$aInferring Mood in Ubiquitous Conversational Video
000192626 269__ $$a2013
000192626 260__ $$bACM Press$$c2013
000192626 336__ $$aConference Papers
000192626 520__ $$aConversational social video is becoming a worldwide trend. Video communication allows more natural interaction when sharing personal news, ideas, and opinions, as it transmits both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since mood is displayed in parallel through voice, face, and body. This paper presents an automatic approach to infer 11 natural mood categories in conversational social video using single and multimodal nonverbal cues extracted from video blogs (vlogs) on YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single-channel features, not all available channels are always needed to accurately discriminate mood in videos.
000192626 6531_ $$aMood
000192626 6531_ $$aNonverbal behavior
000192626 6531_ $$aSentiment analysis
000192626 6531_ $$aVerbal content
000192626 700__ $$aSanchez-Cortes, Dairazalia
000192626 700__ $$aBiel, Joan-Isaac
000192626 700__ $$aKumano, Shiro
000192626 700__ $$aYamato, Junji
000192626 700__ $$aOtsuka, Kazuhiro
000192626 700__ $$0241066$$aGatica-Perez, Daniel$$g171600
000192626 7112_ $$a12th International Conference on Mobile and Ubiquitous Multimedia$$cLuleå, Sweden
000192626 8564_ $$s263231$$uhttps://infoscience.epfl.ch/record/192626/files/Sanchez-Cortes_MUM_2013.pdf$$yn/a$$zn/a
000192626 909C0 $$0252189$$pLIDIAP$$xU10381
000192626 909CO $$ooai:infoscience.tind.io:192626$$pconf$$pSTI
000192626 937__ $$aEPFL-CONF-192626
000192626 970__ $$aSanchez-Cortes_MUM_2013/LIDIAP
000192626 980__ $$aCONF