Publications

M3DocDep: Multi-modal, Multi-page, Multi-document Dependency Chunking with Large Vision-Language Models


Feb 2026 — CVPR 2026 (Main)


Authors

Joongmin Shin*, Jeongbae Park, Jaehyung Seo, Heuiseok Lim

Abstract

This work uses large vision-language models to infer cross-page and cross-document dependency structures in complex unstructured inputs. The resulting structure-aware multimodal chunks improve evidence retrieval quality and downstream QA performance for retrieval-augmented pipelines.

Key Contribution

LVLM-based dependency chunking that reconstructs cross-page structure for long-document retrieval and QA.
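To make the idea of dependency chunking concrete, here is a purely illustrative sketch, not the paper's algorithm: once dependency edges between page or document segments have been predicted (e.g. by an LVLM), connected segments can be merged into structure-aware retrieval chunks with a simple union-find pass. All names and the edge representation below are hypothetical.

```python
# Illustrative only: group segments that share predicted dependency
# edges (cross-page or cross-document) into retrieval chunks.
def make_chunks(num_segments, dependency_edges):
    parent = list(range(num_segments))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Union the endpoints of every predicted dependency edge.
    for a, b in dependency_edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Collect segments by root: each connected component is one chunk.
    chunks = {}
    for seg in range(num_segments):
        chunks.setdefault(find(seg), []).append(seg)
    return list(chunks.values())

# Example: 5 segments; edges link 0-1 (cross-page) and 2-3 (cross-document).
print(make_chunks(5, [(0, 1), (2, 3)]))  # → [[0, 1], [2, 3], [4]]
```

Each resulting chunk keeps dependent content together, so a retriever can return whole dependency groups rather than isolated pages.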

Architecture

M3DocDep architecture

View original PDF