conference paper

ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning

Zhou, Zhehua • Song, Jiayang • Yao, Kunpeng • Shu, Zhan • Ma, Lei
2024
Proceedings - IEEE International Conference on Robotics and Automation
IEEE International Conference on Robotics and Automation

Motivated by the substantial achievements of Large Language Models (LLMs) in the field of natural language processing, recent research has begun to investigate the application of LLMs to complex, long-horizon sequential task planning problems in robotics. LLMs are attractive because they can serve as generalizable, task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates in three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator converts the natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined by a validator in the iterative self-refinement step. We examine the performance of ISR-LLM across three distinct planning domains. Our experimental results show that ISR-LLM achieves markedly higher success rates in sequential task planning than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
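
The abstract describes a three-step pipeline (preprocessing, planning, iterative self-refinement). Below is a minimal Python sketch of that loop, assuming hypothetical helper functions (llm_translate_to_pddl, llm_generate_plan, validate_plan) that stand in for the LLM translator, LLM planner, and validator; none of these names or signatures come from the paper.

from dataclasses import dataclass

@dataclass
class ValidationResult:
    valid: bool
    feedback: str  # e.g. an unsatisfied precondition or an unreached goal

def isr_llm_plan(instruction,
                 llm_translate_to_pddl,  # hypothetical: natural language -> (domain, problem) in PDDL
                 llm_generate_plan,      # hypothetical: PDDL (+ optional feedback) -> action sequence
                 validate_plan,          # hypothetical: checks a plan against the PDDL formulation
                 max_refinements=5):
    # Step 1 - preprocessing: translate the instruction into a PDDL formulation.
    domain, problem = llm_translate_to_pddl(instruction)

    # Step 2 - planning: ask the LLM planner for an initial candidate plan.
    plan = llm_generate_plan(domain, problem, feedback=None)

    # Step 3 - iterative self-refinement: validate, feed errors back, replan.
    for _ in range(max_refinements):
        result = validate_plan(domain, problem, plan)
        if result.valid:
            return plan
        plan = llm_generate_plan(domain, problem, feedback=result.feedback)

    return plan  # best effort once the refinement budget is exhausted

Feeding the validator's feedback back into the planner is what makes the refinement iterative rather than a single replanning pass.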

Type
conference paper
DOI
10.1109/ICRA57147.2024.10610065
Scopus ID
2-s2.0-85202436191

Author(s)
Zhou, Zhehua (University of Alberta)
Song, Jiayang (University of Alberta)
Yao, Kunpeng (École Polytechnique Fédérale de Lausanne)
Shu, Zhan (University of Alberta)
Ma, Lei (The University of Tokyo)

Date Issued
2024
Publisher
Institute of Electrical and Electronics Engineers Inc.
Published in
Proceedings - IEEE International Conference on Robotics and Automation
ISBN of the book
9798350384574
Start page
2081
End page
2088
Editorial or Peer reviewed
REVIEWED
Written at
EPFL

EPFL units
LASA  
Event name
IEEE International Conference on Robotics and Automation
Event place
Yokohama, Japan
Event date
2024-05-13 - 2024-05-17

Funder
University of Alberta
JSPS
JST-Mirai
Available on Infoscience
January 26, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/244921