000192624 001__ 192624
000192624 005__ 20190316235803.0
000192624 0247_ $$2doi$$a10.1007/s12193-012-0101-0
000192624 037__ $$aARTICLE
000192624 245__ $$aEmergent leaders through looking and speaking: from audio-visual data to multimodal recognition
000192624 269__ $$a2013
000192624 260__ $$c2013
000192624 336__ $$aJournal Articles
000192624 520__ $$aIn this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features, and discuss our experience in designing and collecting a data corpus for this purpose. The ELEA Audio-Visual Synchronized corpus (ELEA AVS) was collected using a lightweight portable setup and contains recordings of small group meetings. The participants in each group performed the winter survival task and filled in questionnaires related to personality and to several social concepts such as leadership and dominance. In addition, the corpus includes annotations of participants’ performance in the survival task, as well as annotations of social concepts from external viewers. Based on this corpus, we demonstrate the feasibility of predicting the emergent leader in small groups using automatically extracted audio and visual features, based on speaking turns and visual attention, and we focus specifically on multimodal features that make use of the “looking at participants while speaking” and “looking at while not speaking” measures. Our findings indicate that emergent leadership is related, but not equivalent, to dominance, and that while multimodal features achieve a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel alone give better performance.
000192624 6531_ $$aEmergent leadership
000192624 6531_ $$aMultimodal cues
000192624 6531_ $$aNonverbal behavior
000192624 6531_ $$aSmall group interactions
000192624 700__ $$aSanchez-Cortes, Dairazalia
000192624 700__ $$aAran, Oya
000192624 700__ $$0243365$$g176559$$aJayagopi, Dinesh Babu
000192624 700__ $$aSchmid Mast, Marianne
000192624 700__ $$aGatica-Perez, Daniel$$g171600$$0241066
000192624 773__ $$j7$$tJournal on Multimodal User Interfaces$$k1-2$$q39–53
000192624 8564_ $$uhttps://infoscience.epfl.ch/record/192624/files/Sanchez-Cortes_JMUI_2012.pdf$$zn/a$$s662225$$yn/a
000192624 909C0 $$xU10381$$0252189$$pLIDIAP
000192624 909CO $$qGLOBAL_SET$$pSTI$$ooai:infoscience.tind.io:192624$$particle
000192624 917Z8 $$x148230
000192624 937__ $$aEPFL-ARTICLE-192624
000192624 970__ $$aSanchez-Cortes_JMUI_2012/LIDIAP
000192624 973__ $$rREVIEWED$$sPUBLISHED$$aEPFL
000192624 980__ $$aARTICLE