000266587 001__ 266587
000266587 005__ 20190812204803.0
000266587 020__ $$a978-1-4503-5692-3
000266587 0247_ $$a000457913100006$$2isi
000266587 0247_ $$a10.1145/3242969.3243027$$2doi
000266587 037__ $$aCONF
000266587 245__ $$aPredicting Group Performance in Task-Based Interaction
000266587 260__ $$c2018$$aNew York$$bAssociation for Computing Machinery
000266587 269__ $$a2018-01-01
000266587 336__ $$aConference Papers
000266587 520__ $$aWe address the problem of automatically predicting group performance on a task, using multimodal features derived from the group conversation. These include acoustic features extracted from the speech signal and linguistic features derived from the conversation transcripts. Because much work on social signal processing has focused on nonverbal features such as voice prosody and gestures, we explicitly investigate whether features of linguistic content are useful for predicting group performance. We find that the best-performing models use both linguistic and acoustic features, and that linguistic features alone can also yield good performance on this task. Because only a relatively small amount of task data is available, we present experimental approaches using domain adaptation and a simple data augmentation method, both of which yield dramatic improvements in predictive performance compared with a target-only model.
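The abstract above outlines a general recipe: fuse acoustic and linguistic features into one multimodal representation, then counter data scarcity with augmentation. The Python sketch below illustrates that recipe under stated assumptions; it is not the authors' implementation. The feature dimensions, the random-forest regressor, and the Gaussian-jitter augmentation scheme are illustrative choices, and the data is synthetic placeholder data.

# Minimal sketch (assumptions, not the paper's method): predict a group
# performance score from fused acoustic + linguistic features, using a
# simple label-preserving augmentation to enlarge a small training set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_groups = 60                                  # small task dataset, per the abstract
acoustic = rng.normal(size=(n_groups, 10))     # e.g., prosodic statistics (placeholder)
linguistic = rng.normal(size=(n_groups, 15))   # e.g., transcript-derived features (placeholder)
X = np.hstack([acoustic, linguistic])          # multimodal feature fusion
# Synthetic target loosely tied to the features, for illustration only.
y = X[:, :5].mean(axis=1) + 0.1 * rng.normal(size=n_groups)

def augment(X, y, copies=3, noise=0.05):
    """Label-preserving augmentation: add jittered copies of each example."""
    Xs = [X] + [X + rng.normal(scale=noise, size=X.shape) for _ in range(copies)]
    ys = [y] * (copies + 1)
    return np.vstack(Xs), np.concatenate(ys)

# Split first, then augment only the training portion, so the held-out
# evaluation still reflects performance on unmodified examples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
X_tr_aug, y_tr_aug = augment(X_tr, y_tr)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr_aug, y_tr_aug)
print("held-out R^2:", model.score(X_te, y_te))

The design point the sketch makes is that augmentation belongs on the training split only; augmenting before splitting would leak near-duplicate examples into the test set and inflate the score.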
000266587 650__ $$aComputer Science, Cybernetics
000266587 650__ $$aComputer Science, Theory & Methods
000266587 650__ $$aEngineering, Electrical & Electronic
000266587 650__ $$aComputer Science
000266587 650__ $$aEngineering
000266587 6531_ $$agroup interaction
000266587 6531_ $$atask performance
000266587 6531_ $$amultimodal interaction
000266587 6531_ $$ameetings
000266587 6531_ $$asocial signal processing
000266587 6531_ $$adata augmentation
000266587 6531_ $$adomain adaptation
000266587 6531_ $$asemi-supervised learning
000266587 6531_ $$acorpus
000266587 700__ $$aMurray, Gabriel
000266587 700__ $$aOertel, Catharine$$0251590$$g292051
000266587 7112_ $$a20th ACM International Conference on Multimodal Interaction (ICMI)$$dOct 16-20, 2018$$cBoulder, CO
000266587 773__ $$q14-20$$tICMI'18: Proceedings of the 20th ACM International Conference on Multimodal Interaction
000266587 8560_ $$fpierre.dillenbourg@epfl.ch
000266587 909C0 $$mpierre.dillenbourg@epfl.ch$$zGrolimund, Raphael$$0252475$$yApproved$$pCHILI$$xU12753
000266587 909CO $$pconf$$pIC$$ooai:infoscience.epfl.ch:266587
000266587 961__ $$abeatrice.marselli@epfl.ch
000266587 973__ $$aEPFL$$sPUBLISHED$$rREVIEWED
000266587 980__ $$aCONF
000266587 980__ $$aWoS
000266587 981__ $$aoverwrite