Abstract
Vision-Language Models (VLMs) are known to struggle with spatial reasoning and visual alignment.
To help overcome these limitations, we introduce iVISPAR, an interactive multi-modal benchmark designed to evaluate the spatial reasoning capabilities of VLMs acting as agents.
iVISPAR is based on a variant of the sliding tile puzzle, a classic problem that demands logical planning, spatial awareness, and multi-step reasoning.
The benchmark supports visual 2D, 3D, and text-based input modalities, enabling comprehensive assessments of VLMs’ planning and reasoning skills.
We evaluate a broad suite of state-of-the-art open-source and closed-source VLMs, comparing their performance while also providing optimal path solutions and a human baseline to assess the task’s complexity and feasibility for humans.
Results indicate that while some VLMs perform well on simple spatial tasks, they encounter difficulties with more complex configurations and problem properties.
Notably, while VLMs generally perform better with 2D visual input than with 3D or text-based representations, they consistently fall short of human performance, illustrating the persistent challenge of visual alignment.
These results highlight critical gaps in current VLM capabilities, underscoring their limitations in achieving human-level cognition.
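For intuition about the underlying task, one step of a sliding tile puzzle can be sketched in a few lines. The Python below is a minimal illustrative sketch under assumed names (`Board`, `slide`, `SIZE`) and an assumed 3x3 configuration; it is not the iVISPAR implementation, which additionally presents the same state in 2D, 3D, and text-based modalities.

```python
# Minimal sliding tile puzzle sketch (illustrative only, not the iVISPAR code).
# A board is a tuple in row-major order; 0 marks the empty cell. A move names
# the direction the empty cell travels, i.e. a neighboring tile slides into it.
from typing import Tuple

SIZE = 3  # hypothetical 3x3 board; actual benchmark configurations may differ
Board = Tuple[int, ...]
OFFSET = {"up": -SIZE, "down": SIZE, "left": -1, "right": 1}

def slide(board: Board, move: str) -> Board:
    """Apply one move; return the board unchanged if the move is illegal."""
    empty = board.index(0)
    row, col = divmod(empty, SIZE)
    blocked = (
        (move == "up" and row == 0)
        or (move == "down" and row == SIZE - 1)
        or (move == "left" and col == 0)
        or (move == "right" and col == SIZE - 1)
    )
    if blocked:  # the empty cell would leave the grid
        return board
    target = empty + OFFSET[move]
    tiles = list(board)
    tiles[empty], tiles[target] = tiles[target], tiles[empty]
    return tuple(tiles)

if __name__ == "__main__":
    start: Board = (1, 2, 3, 4, 0, 5, 6, 7, 8)
    print(slide(start, "right"))  # -> (1, 2, 3, 4, 5, 0, 6, 7, 8)
```

Solving such puzzles optimally requires multi-step planning over this state space, which is why the benchmark can pair episodes with optimal path solutions for comparison.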
arXiv information
Authors | Julius Mayer, Mohamad Ballout, Serwan Jassim, Farbod Nosrat Nezami, Elia Bruni |
Published | 2025-02-05 14:29:01+00:00 |
arXiv site | arxiv_id(pdf) |
Source, service used
arxiv.jp, Google