Steve-Evolving: Open-World Embodied Self-Evolution via Fine-Grained Diagnosis and Dual-Track Knowledge Distillation
Abstract
A self-evolving framework for open-world embodied agents that couples execution diagnosis with knowledge distillation to improve long-horizon task performance through structured experience organization and closed-loop learning.
Open-world embodied agents must solve long-horizon tasks where the main bottleneck is not single-step planning quality but how interaction experience is organized and evolved. To this end, we present Steve-Evolving, a non-parametric self-evolving framework that tightly couples fine-grained execution diagnosis with dual-track knowledge distillation in a closed loop. The method proceeds in three phases: Experience Anchoring, Experience Distillation, and Knowledge-Driven Closed-Loop Control. In Experience Anchoring, each subgoal attempt is solidified into a structured experience tuple with a fixed schema (pre-state, action, diagnosis result, and post-state) and organized in a three-tier experience space with multi-dimensional indices (e.g., condition signatures, spatial hashing, and semantic tags) plus rolling summarization for efficient and auditable recall. To ensure sufficient information density for attribution, the execution layer provides compositional diagnosis signals beyond binary outcomes, including state-difference summaries, enumerated failure causes, continuous indicators, and stagnation/loop detection. In Experience Distillation, successful trajectories are generalized into reusable skills with explicit preconditions and verification criteria, while failures are distilled into executable guardrails that capture root causes and forbid risky operations at both subgoal and task granularities. In Knowledge-Driven Closed-Loop Control, retrieved skills and guardrails are injected into an LLM planner, and diagnosis-triggered local replanning updates the active constraints online, forming a continual evolution process without any model parameter updates. Experiments on the long-horizon Minecraft MCU suite demonstrate consistent improvements over static-retrieval baselines.
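The experience-anchoring scheme above can be sketched in code. This is a minimal illustration only: the class and field names (`Experience`, `ExperienceSpace`, the index keys) are assumptions for exposition, not the paper's actual implementation, and the three-tier structure is reduced here to three independent indices that are intersected at recall time.

```python
from dataclasses import dataclass


@dataclass
class Experience:
    """Fixed-schema tuple solidified from one subgoal attempt (see abstract)."""
    pre_state: dict   # world/agent state before the attempt
    action: str       # the attempted action or subgoal
    diagnosis: dict   # compositional signal: cause, indicators, state diff
    post_state: dict  # world/agent state after the attempt


class ExperienceSpace:
    """Stores experiences under several independent indices for auditable recall.

    Hypothetical sketch of multi-dimensional indexing: condition signatures,
    spatial hash cells, and semantic tags, as named in the abstract.
    """

    def __init__(self):
        self.by_condition = {}  # condition signature -> list of experiences
        self.by_cell = {}       # spatial hash cell -> list of experiences
        self.by_tag = {}        # semantic tag -> list of experiences

    def add(self, exp, condition_sig, cell, tags):
        self.by_condition.setdefault(condition_sig, []).append(exp)
        self.by_cell.setdefault(cell, []).append(exp)
        for tag in tags:
            self.by_tag.setdefault(tag, []).append(exp)

    def recall(self, condition_sig=None, cell=None, tag=None):
        """Intersect whichever indices were queried; empty query recalls nothing."""
        pools = []
        if condition_sig is not None:
            pools.append(self.by_condition.get(condition_sig, []))
        if cell is not None:
            pools.append(self.by_cell.get(cell, []))
        if tag is not None:
            pools.append(self.by_tag.get(tag, []))
        if not pools:
            return []
        result = pools[0]
        for pool in pools[1:]:
            result = [e for e in result if e in pool]
        return result
```

Querying several indices at once narrows recall to experiences matching all of them, which is one plausible way to keep retrieval both efficient and traceable.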
Community
This paper presents Steve-Evolving, a non-parametric self-evolving framework for open-world embodied agents. It tightly couples fine-grained execution diagnosis with dual-track knowledge distillation: successful trajectories are distilled into reusable skills with explicit preconditions and verification criteria, while failures are distilled into executable guardrails that capture root causes and block risky operations. The distilled knowledge is then injected back into the LLM planner to support diagnosis-triggered replanning and closed-loop improvement without model parameter updates. Experiments on the Minecraft MCU long-horizon benchmark show consistent gains over static-retrieval baselines, with especially clear benefits on harder task groups as experience accumulates.
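The dual-track distillation described above can be illustrated with a short sketch. All names here (`Skill`, `Guardrail`, `build_planner_context`) are hypothetical stand-ins for exposition; the paper's actual data structures and prompt format are not specified in this summary.

```python
from dataclasses import dataclass


@dataclass
class Skill:
    """Distilled from a successful trajectory (assumed schema)."""
    name: str
    preconditions: list  # predicates that must hold before the skill applies
    steps: list          # reusable action sequence
    verification: str    # criterion confirming the skill succeeded


@dataclass
class Guardrail:
    """Distilled from a failure: captures a root cause and blocks a risky op."""
    root_cause: str
    forbidden: str       # operation the planner must not emit
    scope: str           # "subgoal" or "task" granularity


def build_planner_context(task, skills, guardrails):
    """Render retrieved knowledge as a text prefix for the LLM planner.

    A minimal sketch of knowledge injection: skills are offered as reusable
    options, guardrails as hard constraints the planner must respect.
    """
    lines = [f"Task: {task}", "Known skills:"]
    for s in skills:
        lines.append(
            f"- {s.name}: requires {s.preconditions}; verify via {s.verification}"
        )
    lines.append("Constraints (do NOT violate):")
    for g in guardrails:
        lines.append(f"- Never {g.forbidden} ({g.scope}-level; cause: {g.root_cause})")
    return "\n".join(lines)
```

On a diagnosis-triggered replan, the active guardrail set would be updated and the context rebuilt, which is one way the closed loop can evolve without touching model parameters.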
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- CoWork-X: Experience-Optimized Co-Evolution for Multi-Agent Collaboration System (2026)
- Experience-Driven Multi-Agent Systems Are Training-free Context-aware Earth Observers (2026)
- V-CAGE: Context-Aware Generation and Verification for Scalable Long-Horizon Embodied Tasks (2026)
- ProcMEM: Learning Reusable Procedural Memory from Experience via Non-Parametric PPO for LLM Agents (2026)
- EvoCUA: Evolving Computer Use Agents via Learning from Scalable Synthetic Experience (2026)
- SPIRAL: A Closed-Loop Framework for Self-Improving Action World Models via Reflective Planning Agents (2026)
- XSkill: Continual Learning from Experience and Skills in Multimodal Agents (2026)