---
license: cc
language:
- en
base_model:
- burakkizil/LAMP-Qwen-2.5-VL
pipeline_tag: text-to-video
tags:
- camera
- cinematography
---
<h1 align="center">LAMP: Language-Assisted Motion Planning</h1>
<p align="center">
    <strong>M. Burak Kizil</strong>
    <strong>Enes Sanli</strong>
    <strong>Niloy J. Mitra</strong>
    <strong>Erkut Erdem</strong>
    <strong>Aykut Erdem</strong>
    <strong>Duygu Ceylan</strong>
    <br>
    <br>
        <a href="https://arxiv.org/abs/2512.03619">arXiv</a>&nbsp;&nbsp;&nbsp;
        <a href="https://cyberiada.github.io/LAMP/">Webpage</a>&nbsp;&nbsp;&nbsp;
        <a href="https://github.com/mbkizil/LAMP/">GitHub</a>
    <br>
</p>


## Introduction
<strong>LAMP</strong> defines a motion domain-specific language (DSL) inspired by cinematography conventions. By harnessing the program-synthesis capabilities of LLMs, LAMP generates structured motion programs from natural language, which are then deterministically mapped to 3D trajectories.

<img src='./assets/teaser.jpg'>
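
To make the program-to-trajectory mapping concrete, here is a purely hypothetical sketch: the primitive name `dolly_in` and its parameters are invented for illustration (the actual DSL grammar is defined in the paper), but it shows how a structured motion primitive can deterministically expand into per-frame 3D camera positions.

```python
# Hypothetical sketch only: "dolly_in" and its parameters are invented for
# illustration; the real LAMP DSL grammar is defined in the paper.
import numpy as np

def dolly_in(start_z: float, end_z: float, n_frames: int) -> np.ndarray:
    """Deterministically expand a 'dolly in' primitive into camera positions."""
    zs = np.linspace(start_z, end_z, n_frames)
    xy = np.zeros((n_frames, 2))
    return np.concatenate([xy, zs[:, None]], axis=1)  # (n_frames, 3) xyz

# A one-primitive "motion program" mapped to a 3D trajectory.
trajectory = dolly_in(start_z=5.0, end_z=2.0, n_frames=81)
```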


## 🎉 News 
- [ ] Client inference is coming soon.
- [x] Dec 7, 2025: The Gradio demo is ready to use.
- [x] Dec 7, 2025: We introduce [LAMP](https://cyberiada.github.io/LAMP/).



## ⚙️ Installation
The codebase was tested with Python 3.11.13, CUDA 12.8, and PyTorch >= 2.8.0.

### Setup for Model Inference
You can set up the environment for LAMP model inference by running:
```bash
git clone https://github.com/mbkizil/LAMP.git && cd LAMP
pip install torch==2.8.0 torchvision==0.23.0 --index-url https://download.pytorch.org/whl/cu128  # If PyTorch is not installed.
pip install -r requirements.txt
pip install wan@git+https://github.com/Wan-Video/Wan2.1  
```
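
Optionally, you can verify that the CUDA build of PyTorch is active before continuing; the expected version strings below assume the cu128 wheel installed by the command above.

```python
# Optional sanity check after installation (assumes the cu128 wheel above).
import torch

print(torch.__version__)          # expected: 2.8.0+cu128
print(torch.version.cuda)         # expected: 12.8
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine
```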

## Download Models

Download the [VACE](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B) and the finetuned [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) model weights using [download.sh](download.sh):


```bash
chmod +x download.sh
./download.sh
```
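
If you prefer not to use the shell script, a minimal Python alternative with `huggingface_hub` is sketched below. The planner repo id is taken from this model card's metadata, and the local directories (which mirror the `--model-path` used in the demo command) are assumptions in case `download.sh` lays things out differently.

```python
# Minimal alternative to download.sh using huggingface_hub.
# The local_dir paths are assumptions; adjust them if download.sh differs.
from huggingface_hub import snapshot_download

snapshot_download("Wan-AI/Wan2.1-VACE-1.3B", local_dir="./Wan2.1-VACE-1.3B")
snapshot_download(
    "burakkizil/LAMP-Qwen-2.5-VL",
    local_dir="./qwen_checkpoints/LAMP-Qwen-2.5-VL",
)
```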

## 🚀 Usage
In LAMP, the user acts as a director, providing natural language descriptions of both object and camera behavior. The system translates these prompts into precise 3D Motion Programs and conditions the video generation process on them to produce cinematic shots.

### Interactive Demo (Gradio)
To explore the full pipeline, from text-to-motion planning to final video synthesis, we provide an interactive Gradio interface. This single entry point loads both the Motion Planner (Qwen2.5-VL) and the Video Generator (VACE) seamlessly.
```bash
python -m src.serve.app --model-path ./qwen_checkpoints/LAMP-Qwen-2.5-VL
```
This script will:

- Load the LLM Motion Planner (Qwen2.5-VL-based) into memory.
- Initialize the embedded VACE pipeline for trajectory-conditioned generation.
- Launch a local web server (by default at http://127.0.0.1:8890).
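
Until the client inference scripts land (see News), a minimal sketch of querying the finetuned planner directly with `transformers` might look like the following; the chat format, prompt, and generation settings are assumptions rather than the official LAMP client API.

```python
# Sketch under assumptions: load the finetuned planner and ask for a motion
# program from a text prompt. Not the official LAMP client API.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

path = "./qwen_checkpoints/LAMP-Qwen-2.5-VL"
processor = AutoProcessor.from_pretrained(path)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(path, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "text", "text": "Slow dolly-in on the subject as it walks left."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```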




> 💡 **Notes from VACE**:
> (1) Refer to [vace/vace_wan_inference.py](./src/vace_lib/vace/vace_wan_inference.py) for the inference args.
> (2) For English-language prompts, Wan2.1 needs prompt extension to unlock the full model performance. Follow the [instructions of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` when running inference.



## Acknowledgement


We are grateful to the awesome projects that served as the foundation for LAMP, including [VACE](https://github.com/ali-vilab/VACE) for the powerful all-in-one video generation backbone and [Qwen](https://github.com/QwenLM/Qwen3-VL) for the robust language reasoning capabilities. We also extend our thanks to [Qwen-VL-Series-Finetune](https://github.com/2U1/Qwen-VL-Series-Finetune), which provided an efficient framework for training our motion planner.

Additionally, we acknowledge the pioneering works in camera control and trajectory generation, specifically [GenDoP](https://github.com/3DTopia/GenDoP) and [Exceptional Trajectories](https://github.com/robincourant/DIRECTOR). Their contributions to motion datasets and evaluation methodologies have been a great source of inspiration for this project and established essential baselines for controllable video generation.

## BibTeX

```bibtex
@misc{kizil2025lamplanguageassistedmotionplanning,
    title={LAMP: Language-Assisted Motion Planning for Controllable Video Generation}, 
    author={Muhammed Burak Kizil and Enes Sanli and Niloy J. Mitra and Erkut Erdem and Aykut Erdem and Duygu Ceylan},
    year={2025},
    eprint={2512.03619},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2512.03619}, 
}
```