 
dataset

Acquiring musculoskeletal skills with curriculum-based reinforcement learning - model weights

Chiappa, Alberto • Tano, Pablo • Patel, Nisheet • Ingster, Abigaïl • Pouget, Alexandre • Mathis, Alexander
2024
Zenodo

Here we provide the weights of the neural network policies used for the analysis presented in our article.

The archives whose names start with a number (01-32) correspond to the 32 curriculum steps used to train the Baoding Balls policy, which ranked first in the MyoChallenge 2022. The code used for training, which can also be used to test the policies, is available at https://github.com/amathislab/myochallenge.

The archives hand_pose, hand_reach, pen, and reorient correspond to the other policies used in the article. They were introduced in the paper "Latent exploration for reinforcement learning" (Chiappa et al., NeurIPS 2023) and can be loaded and tested with the code at https://github.com/amathislab/lattice.
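
For illustration only, the sketch below shows how one of these policy archives might be loaded and queried, assuming it contains a stable-baselines3 checkpoint of the kind used in the linked repositories. The file path, and whether the policy is recurrent, are assumptions; please refer to the repositories above for the authors' own loading and evaluation code.

    # Sketch: load a policy checkpoint with stable-baselines3 / sb3-contrib.
    # The path below is hypothetical; adjust it to the extracted archive layout,
    # and use stable_baselines3.PPO instead if the checkpoint is not recurrent.
    from sb3_contrib import RecurrentPPO

    model = RecurrentPPO.load("path/to/extracted_archive/model.zip")

    # Placeholder observation, only to show the call; in practice the observation
    # comes from the matching MyoSuite environment (see the linked repositories).
    obs = model.observation_space.sample()
    action, lstm_states = model.predict(obs, deterministic=True)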

The archive datasets contains three subfolders: rollouts, umap, and csi.

  • rollouts: datasets of transitions collected from the interaction between a policy and the environment.
  • umap: pre-computed UMAP projections of specific subsets of the datasets in rollouts (a short sketch follows this list).
  • csi: performance of the policies described in our paper when applying Control Subspace Inactivation (CSI).
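
As an illustrative sketch (not the authors' exact pipeline), projections like those in umap can in principle be computed with umap-learn on a subset of the rollout data. The file name and array key below are hypothetical placeholders.

    # Sketch: project rollout data to 2D with umap-learn, similar in spirit to the
    # pre-computed files in the "umap" subfolder. File name and array key are
    # assumptions for illustration only.
    import numpy as np
    import umap

    rollout = np.load("datasets/rollouts/example_rollout.npz")   # hypothetical file
    features = rollout["observations"]                           # hypothetical key; shape (n_steps, n_features)

    embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
    print(embedding.shape)                                       # -> (n_steps, 2)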

These datasets are required to run the notebooks that reproduce the paper's figures and main results; the corresponding code is available at https://github.com/amathislab/MyoChallengeAnalysis.

If you find these weights useful, please cite:

@article{chiappa2024acquiring,
  title={Acquiring musculoskeletal skills with curriculum-based reinforcement learning},
  author={Chiappa, Alberto Silvio and Tano, Pablo and Patel, Nisheet and Ingster, Abigail and Pouget, Alexandre and Mathis, Alexander},
  journal={bioRxiv},
  pages={2024--01},
  year={2024},
  publisher={Cold Spring Harbor Laboratory}
}

@article{chiappa2024latent,
  title={Latent exploration for reinforcement learning},
  author={Chiappa, Alberto Silvio and Marin Vargas, Alessandro and Huang, Ann and Mathis, Alexander},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}

Type
dataset
DOI
10.5281/zenodo.13753695
ACOUA ID

c4a008ab-c07f-4549-bdbf-0d2d762052bb

Author(s)
Chiappa, Alberto  

EPFL

Tano, Pablo

University of Geneva

Patel, Nisheet

University of Geneva

Ingster, Abigaïl Rebecca Lise  

EPFL

Pouget, Alexandre

University of Geneva

Mathis, Alexander  

EPFL

Date Issued

2024

Version

2.0

Publisher

Zenodo

License

CC BY

EPFL units
UPAMATHIS  
Funder
Swiss National Science Foundation

Funding
A theory-driven approach to understanding the neural circuits of proprioception

Grant number
212516

Relation / Related work / URL or DOI

IsSupplementTo

Acquiring musculoskeletal skills with curriculum-based reinforcement learning

https://infoscience.epfl.ch/handle/20.500.14299/241561

Continues

Latent Exploration for Reinforcement Learning

https://arxiv.org/abs/2305.20065

Continues

MyoChallenge 2022: Learning contact-rich manipulation using a musculoskeletal hand

https://proceedings.mlr.press/v220/caggiano23a.html
Available on Infoscience
October 10, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/241559