PROGRESSLM: Towards Progress Reasoning in Vision-Language Models
Abstract
Vision-Language Models struggle to estimate task progress from partial observations; ProgressLM-45K and ProgressLM-3B are introduced to improve this progress reasoning capability.
Estimating task progress requires reasoning over long-horizon dynamics rather than recognizing static visual content. While modern Vision-Language Models (VLMs) excel at describing what is visible, it remains unclear whether they can infer how far a task has progressed from partial observations. To this end, we introduce Progress-Bench, a benchmark for systematically evaluating progress reasoning in VLMs. Beyond benchmarking, we further explore a human-inspired two-stage progress reasoning paradigm through both training-free prompting and a training-based approach built on our curated dataset, ProgressLM-45K. Experiments on 14 VLMs show that most models are not yet ready for task progress estimation, exhibiting sensitivity to demonstration modality and viewpoint changes, as well as poor handling of unanswerable cases. While training-free prompting that enforces structured progress reasoning yields limited and model-dependent gains, the training-based ProgressLM-3B achieves consistent improvements even at a small model scale, despite being trained on a task set fully disjoint from the evaluation tasks. Further analyses reveal characteristic error patterns and clarify when and why progress reasoning succeeds or fails.
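The abstract does not spell out the two-stage paradigm, so the sketch below is only a rough illustration of how a training-free structured progress prompt could be composed. The function `query_vlm`, the stage wording, and the output parsing are all assumptions, not details taken from the paper; any VLM backend can be plugged in behind the callable.

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Optional

# Placeholder for an arbitrary VLM call: takes a text prompt plus image paths
# and returns the model's text response. Hypothetical interface, not from the paper.
VLMCall = Callable[[str, List[str]], str]


@dataclass
class ProgressEstimate:
    reasoning: str                      # stage-2 free-form answer
    progress_percent: Optional[float]   # None when the case is judged unanswerable


def estimate_progress(query_vlm: VLMCall,
                      demo_frames: List[str],
                      current_frame: str) -> ProgressEstimate:
    """Two-stage, training-free progress prompt (illustrative wording, assumed)."""
    # Stage 1: summarize the task and its milestones from the demonstration
    # before looking at the partial observation.
    stage1 = query_vlm(
        "These frames show a full demonstration of a task. "
        "Describe the task goal and list its main milestones in order.",
        demo_frames,
    )
    # Stage 2: ask for a progress estimate grounded in the stage-1 milestones,
    # explicitly allowing an 'unanswerable' verdict.
    stage2 = query_vlm(
        "Task summary and milestones:\n" + stage1 + "\n\n"
        "Given this new partial observation, which milestones are already complete? "
        "Answer with a progress percentage from 0 to 100, or 'unanswerable' if the "
        "observation does not allow an estimate.",
        [current_frame],
    )
    return ProgressEstimate(reasoning=stage2, progress_percent=_parse_percent(stage2))


def _parse_percent(text: str) -> Optional[float]:
    """Pull the first number out of the answer; None if the model declined."""
    if "unanswerable" in text.lower():
        return None
    match = re.search(r"(\d{1,3}(?:\.\d+)?)", text)
    return float(match.group(1)) if match else None
```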