Publication:

Non-verbal Communication between Humans and Robots: Imitation, Mutual Understanding and Inferring Object Properties

cris.lastimport.scopus

2024-08-07T10:00:43Z

cris.legacyId

302642

cris.virtual.author-scopus

7006389948

cris.virtual.department

LASA

cris.virtual.parent-organization

EDOC

cris.virtual.parent-organization

ETU

cris.virtual.parent-organization

EPFL

cris.virtual.parent-organization

EPFL

cris.virtual.parent-organization

IEM

cris.virtual.parent-organization

STI

cris.virtual.parent-organization

EPFL

cris.virtual.parent-organization

STI

cris.virtual.parent-organization

EPFL

cris.virtual.sciperId

115671

cris.virtual.sciperId

191460

cris.virtual.sciperId

268216

cris.virtual.unitId

10660

cris.virtual.unitManager

Billard, Aude

cris.virtualsource.author-scopus

ab997393-e8b4-4aa1-bb77-76d1bf09d3e8

cris.virtualsource.author-scopus

47d5fdc8-fb3f-426c-a84b-8ff939026820

cris.virtualsource.author-scopus

a5788d24-0f7a-4886-bdc5-74ceb29ace37

cris.virtualsource.department

ab997393-e8b4-4aa1-bb77-76d1bf09d3e8

cris.virtualsource.department

47d5fdc8-fb3f-426c-a84b-8ff939026820

cris.virtualsource.department

a5788d24-0f7a-4886-bdc5-74ceb29ace37

cris.virtualsource.orcid

ab997393-e8b4-4aa1-bb77-76d1bf09d3e8

cris.virtualsource.orcid

47d5fdc8-fb3f-426c-a84b-8ff939026820

cris.virtualsource.orcid

a5788d24-0f7a-4886-bdc5-74ceb29ace37

cris.virtualsource.parent-organization

cd81d08e-ebe1-437f-86a8-6ff284e9c4d1

cris.virtualsource.parent-organization

cd81d08e-ebe1-437f-86a8-6ff284e9c4d1

cris.virtualsource.parent-organization

cd81d08e-ebe1-437f-86a8-6ff284e9c4d1

cris.virtualsource.parent-organization

cd81d08e-ebe1-437f-86a8-6ff284e9c4d1

cris.virtualsource.parent-organization

e241245b-0e63-4d9e-806e-b766e62006ef

cris.virtualsource.parent-organization

e241245b-0e63-4d9e-806e-b766e62006ef

cris.virtualsource.parent-organization

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

cris.virtualsource.parent-organization

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

cris.virtualsource.parent-organization

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

cris.virtualsource.parent-organization

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

cris.virtualsource.parent-organization

ec3b4bf8-b0c7-44c3-b835-a7061e89778b

cris.virtualsource.parent-organization

ec3b4bf8-b0c7-44c3-b835-a7061e89778b

cris.virtualsource.parent-organization

ec3b4bf8-b0c7-44c3-b835-a7061e89778b

cris.virtualsource.rid

ab997393-e8b4-4aa1-bb77-76d1bf09d3e8

cris.virtualsource.rid

47d5fdc8-fb3f-426c-a84b-8ff939026820

cris.virtualsource.rid

a5788d24-0f7a-4886-bdc5-74ceb29ace37

cris.virtualsource.sciperId

ab997393-e8b4-4aa1-bb77-76d1bf09d3e8

cris.virtualsource.sciperId

47d5fdc8-fb3f-426c-a84b-8ff939026820

cris.virtualsource.sciperId

a5788d24-0f7a-4886-bdc5-74ceb29ace37

cris.virtualsource.unitId

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

cris.virtualsource.unitManager

f4dcb7c5-b61f-4eb2-b7ab-5be67c473a7a

datacite.rights

openaccess

dc.contributor.advisor

Billard, Aude

dc.contributor.advisor

Santos-Victor, José

dc.contributor.author

Ferreira Duarte, Nuno Ricardo

dc.date.accepted

2023

dc.date.accessioned

2023-05-30T16:03:09

dc.date.available

2023-05-30T16:03:09

dc.date.created

2023-05-30

dc.date.issued

2023

dc.date.modified

2025-02-19T14:34:18.946158Z

dc.description.abstract

Humans express their actions and intentions through verbal and/or non-verbal communication. In verbal communication, humans use language to express, in structured linguistic terms, the action they wish to perform. Non-verbal communication refers to the expressiveness of human body movements while interacting with other humans, manipulating objects, or simply navigating the world. In a sense, all actions require moving our musculoskeletal system, which in turn contributes to expressing the intention behind that action. Moreover, since all humans share a common motor repertoire, i.e. the same degrees of freedom and joint limits, and setting aside cultural or societal influences, all humans express action intentions using a common non-verbal language. From walking along a corridor, to pointing at a painting on a wall, to handing a cup to someone, communication takes the form of non-verbal "cues" that express action intentions.

The objective of this thesis is hence threefold: (i) improve robot imitation of human actions by incorporating human-inspired non-verbal cues into robots; (ii) explore how humans communicate their goals and intentions non-verbally, and how robots can use the same non-verbal cues to communicate their own goals and intentions to humans; and (iii) extract latent properties of objects that are revealed by human non-verbal cues during manipulation and incorporate them into the robot's non-verbal cue system so that it can express those properties.

One contribution is the creation of multiple publicly available datasets of synchronized video, gaze, and body-motion data. We conducted several human-human interaction experiments with three objectives in mind: (i) study the motion behaviors from both perspectives in human-human interactions; (ii) understand how participants manage to predict each other's observed actions; and (iii) use the collected data to model human eye-gaze and arm behavior.

The second contribution is an extension of the legibility concept to include eye-gaze cues. This extension showed that humans can correctly predict the robot's action as early, and from the same cues, as if a human were performing it.

The third contribution is the development of a human-to-human synchronized non-verbal communication model, the Gaze Dialogue, which captures the interpersonal exchange of motor and gaze cues that occurs during action execution and observation, and its application to a human-to-robot experiment. During the interaction, the robot can: (i) adequately infer the human action from gaze cues; (ii) adjust its gaze fixations according to the human's eye-gaze behavior; and (iii) signal non-verbal cues that correlate with its own action intentions.
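As a purely illustrative aid to the paragraph above, the sketch below shows one simple way a robot could map a partner's gaze-fixation sequence to a likely action and select a matching gaze response. The action labels, fixation targets, likelihoods, and the naive-Bayes-style scoring are invented for illustration; they are not taken from the thesis's Gaze Dialogue model.

# Illustrative sketch only (not the thesis's actual Gaze Dialogue model):
# infer which action a human partner is performing from the sequence of
# observed gaze-fixation targets, then pick a matching robot gaze response.
from collections import Counter

# Hypothetical association between gaze targets and actions, e.g. as could be
# estimated from a synchronized gaze/motion dataset.
GAZE_LIKELIHOOD = {
    "placing":  {"object": 0.6, "table": 0.3, "partner_face": 0.1},
    "handover": {"object": 0.4, "partner_face": 0.4, "partner_hand": 0.2},
}

# Hypothetical gaze behavior the robot signals back for each inferred action.
ROBOT_GAZE_RESPONSE = {
    "placing": "object",         # follow the manipulated object
    "handover": "partner_face",  # signal readiness to receive
}

def infer_action(fixations: list[str]) -> str:
    """Return the action whose gaze-target likelihoods best explain the fixations."""
    counts = Counter(fixations)
    scores = {}
    for action, likelihood in GAZE_LIKELIHOOD.items():
        # Naive-Bayes-style score: product of per-fixation likelihoods,
        # with a small floor for unseen targets.
        score = 1.0
        for target, n in counts.items():
            score *= likelihood.get(target, 1e-3) ** n
        scores[action] = score
    return max(scores, key=scores.get)

if __name__ == "__main__":
    observed = ["object", "object", "partner_face", "partner_hand"]
    action = infer_action(observed)
    print("inferred human action:", action)
    print("robot gaze response:", ROBOT_GAZE_RESPONSE[action])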

The fourth and final contribution is to demonstrate that non-verbal cue information extracted from humans can be used by robots to recognize the type of action (individual or action-in-interaction), the type of intention (to polish or to hand over), and the type of manipulation (careful or careless).

Overall, the communication tools developed in this thesis contribute to enhancing the human-robot interaction experience by incorporating the non-verbal communication "protocols" that humans use when interacting with each other.

dc.description.notes

Co-supervision with Instituto Superior Técnico (IST), Universidade de Lisboa, doctoral programme in Computer Science and Engineering (Doutoramento em Engenharia Informática e de Computadores)

dc.description.sponsorship

LASA

dc.identifier.doi

10.5075/epfl-thesis-11681

dc.identifier.uri

https://infoscience.epfl.ch/handle/20.500.14299/197862

dc.language.iso

en

dc.publisher

EPFL

dc.publisher.place

Lausanne

dc.relation

https://infoscience.epfl.ch/record/302642/files/EPFL_TH11681.pdf

dc.size

221

dc.subject

Non-verbal Cues

dc.subject

Human-Human Interaction

dc.subject

Eyes and Body Tracking

dc.subject

Mutual Understanding

dc.subject

Human-Robot Interaction

dc.title

Non-verbal Communication between Humans and Robots: Imitation, Mutual Understanding and Inferring Object Properties

dc.type

thesis::doctoral thesis

dspace.entity.type

Publication

dspace.file.type

n/a

dspace.legacy.oai-identifier

oai:infoscience.epfl.ch:302642

epfl.legacy.itemtype

Theses

epfl.legacy.submissionform

THESIS

epfl.oai.currentset

fulltext

epfl.oai.currentset

DOI

epfl.oai.currentset

STI

epfl.oai.currentset

thesis

epfl.oai.currentset

thesis-bn

epfl.oai.currentset

OpenAIREv4

epfl.publication.version

http://purl.org/coar/version/c_970fb48d4fbd8a85

epfl.thesis.doctoralSchool

EDRS

epfl.thesis.faculty

STI

epfl.thesis.institute

IEM

epfl.thesis.jury

Prof. Colin Neil Jones (president); Prof. Aude Billard, Prof. José Santos-Victor (thesis directors); Prof. Alexandre José Malheiro Bernardino, Dr Serena Ivaldi, Dr Alessandra Sciutti (rapporteurs)

epfl.thesis.number

11681

epfl.thesis.originalUnit

LASA

epfl.thesis.publicDefenseYear

2023-04-06

epfl.writtenAt

EPFL

oaire.licenseCondition

copyright

Files

Original bundle

Name:
EPFL_TH11681.pdf
Size:
45.8 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.71 KB
Description:
Item-specific license agreed to upon submission
