Update README.md
---
datasets:
- internlm/CapRL-2M
---

# CapRL

<a href="https://arxiv.org/abs/2509.22647">Paper</a> | <a href="https://github.com/InternLM/CapRL">GitHub</a> | <a href="https://huggingface.co/collections/long-xing1/caprl-68d64ac32ded31596c36e189">CapRL Collection</a> | <a href="https://huggingface.co/papers/2509.22647">Daily Paper</a>

### CapRL Series Models & Datasets

| Series | Models & Resources |
| :--- | :--- |
| **CapRL 2.0 Series** | [CapRL-Qwen3VL-2B](https://huggingface.co/internlm/CapRL-Qwen3VL-2B) \| [CapRL-Qwen3VL-4B](https://huggingface.co/internlm/CapRL-Qwen3VL-4B) |
| **CapRL 1.0 Series** | [CapRL-Qwen2.5VL-3B](https://huggingface.co/internlm/CapRL-3B) \| [CapRL-InternVL3.5-8B](https://huggingface.co/yuhangzang/CapRL-InternVL3.5-8B) \| [CapRL-2M Dataset](https://huggingface.co/datasets/internlm/CapRL-2M) \| [CapRL-3B-GGUF](https://huggingface.co/mradermacher/CapRL-3B-GGUF) \| [CapRL-3B-i1-GGUF](https://huggingface.co/mradermacher/CapRL-3B-i1-GGUF) |

### CapRL-Qwen3VL-2B

We are excited to release the **CapRL 2.0 series**: **CapRL-Qwen3VL-2B** and **CapRL-Qwen3VL-4B**. These models have fewer parameters yet deliver even stronger captioning performance.
Notably, **CapRL-Qwen3VL-2B outperforms both CapRL-Qwen2.5VL-3B and Qwen2.5VL-72B on captioning tasks**.
This leap in efficiency comes from our upgraded training recipe, which includes a more rigorous QA data filter and a significantly more diverse image dataset. We welcome everyone to try them out!