
[Academic Paper] EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for Visual Spatial Tasks

Posted on 2025-4-2 14:01:40

While multimodal large language models (MLLMs) have made groundbreaking progress in embodied intelligence, they still face significant challenges in spatial reasoning for complex long-horizon tasks. To address this gap, we propose EmbodiedVSR (Embodied Visual Spatial Reasoning), a novel framework that integrates dynamic scene graph-guided Chain-of-Thought (CoT) reasoning to enhance spatial understanding for embodied agents. By explicitly constructing structured knowledge representations through dynamic scene graphs, our method enables zero-shot spatial reasoning without task-specific fine-tuning.
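The paper's code has not been released yet, so as a rough sketch of the core idea only (all class and field names below are our own assumptions, not the authors' API): a dynamic scene graph can be kept up to date as the agent observes the scene, then serialized into structured context for step-by-step CoT prompting.

# Minimal sketch, assuming a simple object/relation graph; not the authors' code.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """An object observed in the scene (position frame is assumed)."""
    name: str
    position: tuple  # (x, y, z)

@dataclass
class SceneGraph:
    """Objects plus pairwise spatial relations, updated as the scene changes."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (subject, relation, object)

    def update(self, name, position):
        # Re-observing an object overwrites its previous state ("dynamic" part).
        self.nodes[name] = SceneNode(name, position)

    def add_relation(self, subj, rel, obj):
        self.edges.append((subj, rel, obj))

    def to_prompt(self, question):
        # Serialize the graph into explicit facts that ground the CoT steps.
        facts = "\n".join(f"- {s} {r} {o}" for s, r, o in self.edges)
        return ("Known spatial facts:\n" + facts
                + f"\n\nQuestion: {question}\nLet's reason step by step.")

g = SceneGraph()
g.update("mug", (0.2, 0.1, 0.9))
g.update("table", (0.0, 0.0, 0.7))
g.add_relation("mug", "on top of", "table")
print(g.to_prompt("Is the mug reachable from the left side of the table?"))

The point of the explicit graph is that each reasoning step can cite a stored relation instead of re-deriving it from pixels, which is what lets the abstract claim zero-shot use without task-specific fine-tuning.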

This approach not only disentangles intricate spatial relationships but also aligns reasoning steps with actionable environmental dynamics. To rigorously evaluate performance, we introduce the eSpatial-Benchmark, a comprehensive dataset comprising real-world embodied scenarios with fine-grained spatial annotations and adaptive task difficulty levels.
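The abstract does not spell out the annotation schema, so purely as an illustrative guess, a fine-grained spatial annotation with an adaptive difficulty field might look something like this:

# Illustrative guess only; the real eSpatial-Benchmark schema is unpublished.
example_annotation = {
    "scene_id": "kitchen_0042",   # hypothetical identifier
    "question": "Which object is directly left of the red mug?",
    "objects": [
        {"name": "red mug", "bbox": [112, 80, 160, 128]},
        {"name": "kettle",  "bbox": [40, 76, 100, 140]},
    ],
    "relations": [["kettle", "left of", "red mug"]],
    "answer": "kettle",
    "difficulty": 2,              # e.g., number of reasoning hops required
}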

Experiments demonstrate that our framework significantly outperforms existing MLLM-based methods in accuracy and reasoning coherence, particularly in long-horizon tasks requiring iterative environment interaction.

The results reveal the untapped potential of MLLMs for embodied intelligence when equipped with structured, explainable reasoning mechanisms, paving the way for more reliable deployment in real-world spatial applications. The code and datasets will be released soon.

arXiv: https://arxiv.org/pdf/2503.11089

