[Academic Paper] Modality-Composable Diffusion Policy via Inference-Time Distribution-level Composition (Generative Models for Robot Learning Workshop @ ICLR 2025)

Posted on 2025-4-2 14:06:55
Diffusion Policy (DP) has attracted significant attention as an effective method for policy representation due to its capacity to model complex, multimodal action distributions. However, current DPs are often based on a single visual modality (e.g., RGB or point cloud), limiting their accuracy and generalization potential.

Although training a generalized DP capable of handling heterogeneous multimodal data would enhance performance, it entails substantial computational and data-related costs.

To address these challenges, we propose a novel policy composition method: by leveraging multiple pre-trained DPs based on individual visual modalities, we can combine their distributional scores to form a more expressive Modality-Composable Diffusion Policy (MCDP), without the need for additional training.
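
The composition described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): two pretrained diffusion policies, one per visual modality, each predict the noise for the current noisy action, their predictions are mixed at the distribution level with a weight `w`, and a standard DDPM reverse step is applied. The placeholder predictors, the weight, and the schedule constants are all assumptions for illustration.

```python
import numpy as np

def compose_eps(eps_rgb, eps_pc, w=0.5):
    """Weighted combination of per-modality noise (score) predictions."""
    return w * eps_rgb + (1.0 - w) * eps_pc

def ddpm_reverse_step(x_t, eps, alpha_t, alpha_bar_t, sigma_t, noise):
    """One standard DDPM denoising step driven by the composed prediction."""
    mean = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_t)
    return mean + sigma_t * noise

# Stand-in "policies": in practice these would be pretrained DP networks
# conditioned on RGB images and point clouds respectively (placeholders here).
rng = np.random.default_rng(0)
policy_rgb = lambda x_t, t: 0.9 * x_t
policy_pc = lambda x_t, t: 1.1 * x_t

x = rng.standard_normal((8, 2))  # noisy action sequence: 8 steps, 2-DoF
# Toy 2-step schedule (alpha_t, alpha_bar_t, sigma_t) -- values assumed.
for t, (a, ab, s) in enumerate([(0.99, 0.9, 0.1), (0.995, 0.95, 0.05)]):
    eps = compose_eps(policy_rgb(x, t), policy_pc(x, t), w=0.5)
    x = ddpm_reverse_step(x, eps, a, ab, s, rng.standard_normal(x.shape))

print(x.shape)  # denoised action trajectory, shape (8, 2)
```

Because the mixing happens purely at inference time on the noise predictions, neither policy needs to be retrained or fine-tuned; the weight can be tuned per task.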

Through extensive empirical experiments on the RoboTwin dataset, we demonstrate the potential of MCDP to improve both adaptability and performance. This exploration aims to provide valuable insights into the flexible composition of existing DPs, facilitating the development of generalizable cross-modality, cross-domain, and even cross-embodiment policies. Our code is open-sourced at https://github.com/AndyCao1125/MCDP.





Copyright © 2021-2025 Open X-Humanoid. All Rights Reserved.

For infringement reports, complaints, and suggestions, please e-mail: opensource@x-humanoid.com

Powered by Discuz! X5.0 | 京ICP备2024078606号-2 | 京公网安备11011202101078号
