000192624 001__ 192624
000192624 005__ 20180913062234.0
000192624 0247_ $$2doi$$a10.1007/s12193-012-0101-0
000192624 037__ $$aARTICLE
000192624 245__ $$aEmergent leaders through looking and speaking: from audio-visual data to multimodal recognition
000192624 269__ $$a2013
000192624 260__ $$c2013
000192624 336__ $$aJournal Articles
000192624 520__ $$aIn this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features, and discuss our experience in designing and collecting a data corpus for this purpose. The ELEA Audio-Visual Synchronized corpus (ELEA AVS) was collected using a lightweight portable setup and contains recordings of small group meetings. The participants in each group performed the winter survival task and filled in questionnaires related to personality and several social concepts such as leadership and dominance. In addition, the corpus includes annotations of participants’ performance in the survival task, as well as annotations of social concepts provided by external observers. Based on this corpus, we demonstrate the feasibility of predicting the emergent leader in small groups using automatically extracted audio and visual features based on speaking turns and visual attention, focusing specifically on multimodal features that combine the looking-at-participants-while-speaking and looking-at-while-not-speaking measures. Our findings indicate that emergent leadership is related, but not equivalent, to dominance, and that while multimodal features achieve a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel give better performance.
000192624 6531_ $$aEmergent leadership
000192624 6531_ $$aMultimodal cues
000192624 6531_ $$aNonverbal behavior
000192624 6531_ $$aSmall group interactions
000192624 700__ $$aSanchez-Cortes, Dairazalia
000192624 700__ $$aAran, Oya
000192624 700__ $$0243365$$aJayagopi, Dinesh Babu$$g176559
000192624 700__ $$aSchmid Mast, Marianne
000192624 700__ $$0241066$$aGatica-Perez, Daniel$$g171600
000192624 773__ $$j7$$k1-2$$q39–53$$tJournal on Multimodal User Interfaces
000192624 8564_ $$s662225$$uhttps://infoscience.epfl.ch/record/192624/files/Sanchez-Cortes_JMUI_2012.pdf$$yn/a$$zn/a
000192624 909C0 $$0252189$$pLIDIAP$$xU10381
000192624 909CO $$ooai:infoscience.tind.io:192624$$pSTI$$particle
000192624 917Z8 $$x148230
000192624 937__ $$aEPFL-ARTICLE-192624
000192624 970__ $$aSanchez-Cortes_JMUI_2012/LIDIAP
000192624 973__ $$aEPFL$$rREVIEWED$$sPUBLISHED
000192624 980__ $$aARTICLE