
[Academic Paper] Prompt, Plan, Perform: LLM-based Humanoid Control via Quantized Imitation Learning (IROS 2024)

Posted on 2025-4-2 14:08:48

In recent years, reinforcement learning and imitation learning have shown great potential for controlling humanoid robot motion. However, these methods typically build simulation environments and rewards for specific tasks, requiring a separate policy per task and limiting their ability to tackle complex and unknown tasks. To overcome these issues, we present a novel approach that combines adversarial imitation learning with large language models (LLMs).

This innovative method enables the agent to learn reusable skills with a single policy and solve zero-shot tasks under the guidance of LLMs. In particular, we utilize the LLM as a strategic planner for applying previously learned skills to novel tasks through the comprehension of task-specific prompts.
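The planner idea above can be sketched in miniature: the LLM receives a prompt listing the robot's learned skills plus the task description, and replies with an ordered skill sequence. The skill names, prompt wording, and parsing below are illustrative assumptions, not the paper's actual interface; the LLM call itself is replaced by a stand-in string.

```python
# Hypothetical sketch of the LLM-as-planner step: a task-specific prompt
# is mapped to an ordered sequence of previously learned skills.
# SKILL_LIBRARY and the prompt/parse logic are illustrative assumptions.

SKILL_LIBRARY = ["walk_forward", "turn_left", "turn_right", "wave", "squat"]

def build_prompt(task: str) -> str:
    """Compose a task-specific prompt listing the reusable skills."""
    skills = ", ".join(SKILL_LIBRARY)
    return (f"Available skills: {skills}.\n"
            f"Task: {task}\n"
            "Reply with a comma-separated skill sequence.")

def parse_plan(reply: str) -> list[str]:
    """Parse the LLM reply, keeping only skills the policy has learned."""
    steps = [s.strip() for s in reply.split(",")]
    return [s for s in steps if s in SKILL_LIBRARY]

# Stand-in for an actual LLM response to build_prompt("greet, then leave"):
reply = "walk_forward, turn_left, wave"
plan = parse_plan(reply)  # ordered skills handed to the single policy
```

Each skill name in `plan` would then be embedded and fed to the shared control policy, which is what lets one network execute the whole sequence.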

This empowers the robot to perform the specified actions in sequence. To improve our model, we incorporate codebook-based vector quantization, allowing the agent to generate suitable actions in response to unseen textual commands from LLMs. Furthermore, we design general reward functions that account for the distinct motion features of humanoid robots, ensuring the agent imitates the motion data while maintaining goal orientation, without additional guidance mechanisms or policies.
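The codebook-based quantization mentioned above can be illustrated with a minimal VQ-VAE-style nearest-neighbor lookup: a continuous command embedding is snapped to its closest codebook entry, so unseen textual commands land on one of a discrete set of learned skill codes. The codebook size, embedding dimension, and random values are placeholder assumptions for demonstration only.

```python
import numpy as np

# Minimal sketch of codebook-based vector quantization (VQ-VAE style).
# num_codes and dim are illustrative; a real system would learn the
# codebook jointly with the policy.

rng = np.random.default_rng(0)

num_codes, dim = 16, 8
codebook = rng.normal(size=(num_codes, dim))  # learned skill codes

def quantize(z: np.ndarray):
    """Return (index, code) of the codebook entry nearest to embedding z."""
    dists = np.sum((codebook - z) ** 2, axis=1)  # squared L2 distances
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

z = rng.normal(size=dim)  # stand-in for an encoded text-command embedding
idx, code = quantize(z)   # discrete skill index + its embedding
```

Snapping to the nearest code is what lets a single policy generalize: any new command embedding is routed to the most similar skill it already knows.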

To the best of our knowledge, this is the first framework that controls humanoid robots using a single learning policy network and LLM as a planner. Extensive experiments demonstrate that our method exhibits efficient and adaptive ability in complicated motion tasks.


arXiv: https://arxiv.org/abs/2309.11359


Copyright © 2021-2025 Open X-Humanoid. All Rights Reserved.
