Text-Based Reasoning About Vector Graphics

๐ŸŒ Homepage โ€ข ๐Ÿ“ƒ Paper โ€ข ๐Ÿค— Data (PVD-160k) โ€ข ๐Ÿค— Model (PVD-160k-Mistral-7b) โ€ข ๐Ÿ’ป Code

We observe that current large multimodal models (LMMs) still struggle with seemingly straightforward reasoning tasks that require precise perception of low-level visual details, such as identifying spatial relations or solving simple mazes. In particular, this failure mode persists in question-answering tasks about vector graphics: images composed purely of 2D objects and shapes.

(Teaser figure)

To address this challenge, we propose the Visually Descriptive Language Model (VDLM), a visual reasoning framework that operates on intermediate text-based visual descriptions: SVG representations and a learned Primal Visual Description (PVD), which can be directly integrated into existing LLMs and LMMs. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our paper for more details.

(Overview figure)
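To illustrate the core idea, here is a minimal, hypothetical sketch (not the VDLM implementation): once an image is expressed as SVG text, exact coordinates become available to symbolic reasoning, so a question like "is the red circle left of the blue one?" reduces to comparing numbers. The `spatial_relation` helper and the toy scene below are illustrative assumptions, not part of the released code.

```python
import xml.etree.ElementTree as ET

# A toy vector-graphics scene as SVG text. Text-based descriptions like
# this expose exact coordinates that a language model (or plain code)
# can reason over, unlike raw pixels.
SVG_SCENE = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <circle id="a" cx="40" cy="50" r="10" fill="red"/>
  <circle id="b" cx="150" cy="50" r="10" fill="blue"/>
</svg>"""

def spatial_relation(svg_text: str, left_id: str, right_id: str) -> bool:
    """Return True if the circle `left_id` lies to the left of `right_id`."""
    root = ET.fromstring(svg_text)
    ns = "{http://www.w3.org/2000/svg}"
    # Map each circle's id to its x-coordinate center.
    centers = {c.get("id"): float(c.get("cx")) for c in root.iter(f"{ns}circle")}
    return centers[left_id] < centers[right_id]

print(spatial_relation(SVG_SCENE, "a", "b"))  # True: red circle is left of blue
```

The same principle underlies VDLM's pipeline: a raster image is first lifted into SVG, then abstracted into the Primal Visual Description, and the resulting text is handed to an LLM for reasoning.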

Model size: 7B params · Tensor type: BF16 (Safetensors)
