
EPFL-Smart-Kitchen-30 Annotations and Poses

Bonnetto, Andy • Qi, Haozhe • Leong, Franklin • et al.
May 30, 2025
Zenodo

Understanding behavior requires datasets that capture humans while carrying out complex tasks. The kitchen is an excellent environment for assessing human motor and cognitive function, as many complex actions, from chopping to cleaning, are naturally exhibited in kitchens. Here, we introduce the EPFL-Smart-Kitchen-30 dataset, collected with a noninvasive motion capture platform inside a kitchen environment. Nine static RGB-D cameras, inertial measurement units (IMUs), and one head-mounted HoloLens 2 headset were used to capture 3D hand, body, and eye movements. The EPFL-Smart-Kitchen-30 dataset is a multi-view action dataset with synchronized exocentric, egocentric, depth, IMU, eye-gaze, and body and hand kinematics data, spanning 29.7 hours of 16 subjects cooking four different recipes. Action sequences were densely annotated, with 33.78 action segments per minute. Leveraging this multi-modal dataset, we propose four benchmarks to advance behavior understanding and modeling through:

  1. a vision-language benchmark,
  2. a semantic text-to-motion generation benchmark,
  3. a multi-modal action recognition benchmark,
  4. a pose-based action segmentation benchmark.

> ⚠️ Videos and other collected data can be found at https://zenodo.org/records/15535461

General information

  • Authors: Andy Bonnetto, Haozhe Qi, Franklin Leong, Matea Tashkovska, Mahdi Rad, Solaiman Shokur, Friedhelm Hummel, Silvestro Micera, Marc Pollefeys, Alexander Mathis
  • Affiliations: 1) École polytechnique fédérale de Lausanne (EPFL), 2) Eidgenössische Technische Hochschule Zürich (ETHZ), 3) Microsoft
  • Date of collection: 05.2023 - 01.2024 (MM.YYYY - MM.YYYY)
  • Geolocation data: Campus Biotech, Genève, Switzerland
  • Associated publication URL: https://arxiv.org/abs/2506.01608
  • Funding: Our work was funded by the EPFL and Microsoft Swiss Joint Research Center and by a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for the cameras and to the Neuro-X Institute for providing funds to annotate the data.

Dataset availability

  • License: This dataset is released under the non-commercial CC BY-NC 4.0 license.
  • Citation: Please consider citing the associated publication when using our data.
  • Repository URL: https://github.com/amathislab/EPFL-Smart-Kitchen
  • Repository DOI: 10.5281/zenodo.15551913
  • Dataset version: v1

Data and files overview

  • Data preparation: unzip Public_release_pose.zip
  • Repository structure:
Public_release_pose
├── README.md
├── train
|   ├── YH2002 (participant)
|   |   ├── 2023_12_04_10_15_23 (session)
|   |   |   ├── annotations
|   |   |   |   ├── action_annotations.xlsx
|   |   |   |   └── activity_annotations.json
|   |   |   ├── pose_3d
|   |   |   |   ├── pose3d_mano.csv
|   |   |   |   └── pose3d_smpl.csv
|   |   └── ...
|   └── ...   
└── test
    └── ...
  • train and test: Contain the train and test splits for the action recognition, action segmentation, and full-body motion generation tasks. These folders are organized by participant and session (see the enumeration sketch after this list); each session contains two modalities:
    • annotations: the action and activity annotation data.
    • pose_3d: 3D pose estimates for the hands (MANO) and the body (SMPL).
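
Below is a minimal sketch for enumerating participants and sessions in the unzipped release; the root path is an assumption about your local setup, and the layout follows the tree above.

from pathlib import Path

# Root of the unzipped archive (assumed local path; adjust as needed).
root = Path("Public_release_pose")

for split in ("train", "test"):
    # Participants are folders named "YH" plus an identifier (see Naming conventions).
    for participant in sorted((root / split).glob("YH*")):
        # Sessions are folders named by recording date and time.
        for session in sorted(p for p in participant.iterdir() if p.is_dir()):
            has_annotations = (session / "annotations").is_dir()
            has_pose = (session / "pose_3d").is_dir()
            print(split, participant.name, session.name, has_annotations, has_pose)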

> We refer the reader to the associated publication for details about data processing and task descriptions.

Naming conventions

  • Exocentric camera names are: output0, Aoutput0, Aoutput1, Aoutput2, Aoutput3, Boutput0, Boutput1, Boutput2, Boutput3.
  • Participants are identified by "YH" followed by a random identifier; sessions are named by the date and time of recording, parsed in the sketch below.
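
Session names can thus be turned into timestamps; a small sketch, assuming the YYYY_MM_DD_HH_MM_SS format seen in the directory tree above:

from datetime import datetime

# Format inferred from the example session "2023_12_04_10_15_23" (assumption).
session_name = "2023_12_04_10_15_23"
recorded_at = datetime.strptime(session_name, "%Y_%m_%d_%H_%M_%S")
print(recorded_at)  # 2023-12-04 10:15:23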

File characteristics

  • action_annotations.xlsx: Table with the following fields (see the loading sketch after this list):
    • Start: start time of an action, in seconds
    • End: end time of an action, in seconds
    • Verbs: annotated verb for the segment
    • Nouns: annotated noun for the segment
    • Confusion: annotator confusion for this segment (0-1)
  • activity_annotations.json: JSON file with the following fields:
    • datetime: time of the annotation
    • video_file: annotated session (corresponds to all cameras)
    • annotations:
      • start: start time of an action, in seconds
      • end: end time of an action, in seconds
      • Activities: annotated activity
  • pose3d_mano and pose3d_smpl: 3D pose estimates for the hands and body, with the following fields:
    • kp3ds: 3D keypoints (42 for the hands (left/right) and 17 for the body)
    • left_poses/right_poses: pose parameters of the fitted mesh model
    • left_RH/right_RH: rotation matrices for the fitted mesh model
    • left_TH/right_TH: translation matrices for the fitted mesh model
    • left_shapes/right_shapes: shape parameters for the fitted mesh model
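
A minimal loading sketch for one session, assuming pandas (with openpyxl for the .xlsx file) is installed and that the pose CSVs parse as plain tables; the session path is the example from the tree above, and the field names follow the listings in this section.

import json
import pandas as pd

# Example session from the directory tree above (assumed local unzipped copy).
session = "Public_release_pose/train/YH2002/2023_12_04_10_15_23"

# Action segments: Start/End in seconds, plus Verbs, Nouns, and Confusion.
actions = pd.read_excel(f"{session}/annotations/action_annotations.xlsx")
print(actions[["Start", "End", "Verbs", "Nouns", "Confusion"]].head())

# Activity segments: datetime, video_file, and a list of start/end/Activities entries.
with open(f"{session}/annotations/activity_annotations.json") as f:
    activities = json.load(f)
print(activities["annotations"][:2])

# 3D pose fits: MANO (hands) and SMPL (body), with the kp3ds and per-side
# pose/rotation/translation/shape fields listed above.
hand_pose = pd.read_csv(f"{session}/pose_3d/pose3d_mano.csv")
body_pose = pd.read_csv(f"{session}/pose_3d/pose3d_smpl.csv")
print(hand_pose.columns.tolist())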

Methodological information

Benchmark evaluation code: will be available soon.

Acknowledgements

Our work was funded by the EPFL and Microsoft Swiss Joint Research Center and by a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for the cameras and to the Neuro-X Institute for providing funds to annotate the data.

Change log (DD.MM.YYYY)

[03.06.2025]: First data release!

Record details

  • Type: dataset
  • DOI: 10.5281/zenodo.15551913
  • ACOUA ID: 2fa92d26-f477-415a-b884-667388ab8778

Author(s)

  • Bonnetto, Andy (EPFL)
  • Qi, Haozhe (EPFL)
  • Leong, Franklin (EPFL)
  • Tashkovska, Matea (EPFL)
  • Hamidi Rad, Mahdi (Microsoft (Switzerland))
  • Shokur, Solaiman (EPFL)
  • Hummel, Friedhelm Christoph (EPFL)
  • Micera, Silvestro (EPFL)
  • Pollefeys, Marc (Microsoft; ETH Zurich)
  • Mathis, Alexander (EPFL)

  • Date Issued: 2025-05-30
  • Version: 1
  • Publisher: Zenodo
  • License: CC BY

Subjects

pose estimation • kitchen • cooking • action segmentation • action recognition • motion generation • full-body • eye-gaze • 3D pose • absolute position • actions • activities • hierarchical behavior • behavior • motor control

EPFL units: UPAMATHIS • EDIC • TNE
Funding

  • Funder: Swiss National Science Foundation
  • Funding(s): Joint behavior and neural data modeling for naturalistic behavior
  • Grant No: 10000950

Related work

  • Continues: EPFL-Smart-Kitchen-30 Collected data (https://infoscience.epfl.ch/handle/20.500.14299/251269)
  • IsSupplementTo: EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models (https://infoscience.epfl.ch/handle/20.500.14299/251268)
  • IsVersionOf: https://doi.org/10.5281/zenodo.15551912
Available on Infoscience: June 4, 2025
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/251051