Infoscience — EPFL, École polytechnique fédérale de Lausanne
Conference paper

Inferring Mood in Ubiquitous Conversational Video

Sanchez-Cortes, Dairazalia; Biel, Joan-Isaac; Kumano, Shiro; et al.
2013
MUM '13: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia
12th International Conference on Mobile and Ubiquitous Multimedia

Conversational social video is becoming a worldwide trend. Video communication allows more natural interaction when sharing personal news, ideas, and opinions, as it transmits both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since mood is displayed in parallel through voice, face, and body. This paper presents an automatic approach to infer 11 natural mood categories in conversational social video, using single-channel and multimodal nonverbal cues extracted from video blogs (vlogs) on YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single-channel features, not all available channels are always needed to accurately discriminate mood in videos.
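The single-channel vs. multimodal comparison in the abstract can be illustrated with a toy sketch. The abstract does not specify the paper's classifier or features, so everything below is an assumption for illustration: synthetic "audio" and "visual" feature vectors, two of the eleven mood categories, a simple nearest-centroid classifier, and early fusion by feature concatenation.

```python
import random

random.seed(0)

# Hypothetical setup: two of the 11 crowdsourced mood categories.
MOODS = ["happy", "bored"]

def synth_sample(mood):
    # Assumption: each modality yields a small feature vector whose mean
    # shifts with mood (a stand-in for real nonverbal cue statistics,
    # e.g. pitch statistics for audio, smile ratio for visual).
    shift = 1.0 if mood == "happy" else -1.0
    audio = [random.gauss(shift, 1.0) for _ in range(4)]
    visual = [random.gauss(shift, 1.5) for _ in range(4)]
    return audio, visual

def centroid(vectors):
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_and_eval(features_of):
    # Nearest-centroid classifier: one centroid per mood label.
    train = {m: [features_of(synth_sample(m)) for _ in range(50)] for m in MOODS}
    cents = {m: centroid(vs) for m, vs in train.items()}
    test = [(m, features_of(synth_sample(m))) for m in MOODS for _ in range(50)]
    correct = sum(1 for m, v in test
                  if min(MOODS, key=lambda c: dist2(v, cents[c])) == m)
    return correct / len(test)

acc_audio = train_and_eval(lambda s: s[0])         # single channel: audio only
acc_fused = train_and_eval(lambda s: s[0] + s[1])  # early fusion: concatenation
print(f"audio-only accuracy: {acc_audio:.2f}")
print(f"fused accuracy:      {acc_fused:.2f}")
```

On this synthetic data the fused features typically match or beat the single channel, mirroring the abstract's finding; the abstract's further point is that in the real task the gain from adding channels is not always needed for accurate discrimination.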

Files
  • Name: Sanchez-Cortes_MUM_2013.pdf
  • Access type: openaccess
  • Size: 257.06 KB
  • Format: Adobe PDF
  • Checksum (MD5): 42bede7e42895393391ee29204a46acd

  • Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.