Update README.md
README.md CHANGED

@@ -1,22 +1,24 @@
 ---
-
+base_model:
+- tiiuae/Falcon-H1-7B-Base
 language:
 - en
-
-
+library_name: transformers
+pipeline_tag: text-generation
 license: other
 license_name: falcon-llm-license
 license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
-
-
+tags:
+- falcon-h1r
 ---
 
 <img src="https://huggingface.co/datasets/tiiuae/reasoning-images/resolve/main/falcon-h1r-logo.png" alt="drawing" width="800"/>
 
 # Falcon-H1R-7B
 
-This repository presents **Falcon-H1R-7B**, a reasoning-specialized model
-
+This repository presents **Falcon-H1R-7B**, a reasoning-specialized model introduced in the paper [Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling](https://huggingface.co/papers/2601.02346).
+
+Built on top of [Falcon-H1-7B-Base](https://huggingface.co/tiiuae/Falcon-H1-7B-Base), it was trained via cold-start supervised fine-tuning with long reasoning traces and further enhanced by scaling RL with GRPO. The model demonstrates outstanding performance across various benchmark evaluations, including mathematics, programming, instruction following, and general logic.
 
 ## Model Description
 
@@ -428,7 +430,7 @@ TTS represents test time scaling results on few of the benchmarks that we evaluated
 # Useful links
 
 - View [our release blogpost](https://falcon-lm.github.io/blog/falcon-h1r-7b).
-- View [our technical report](https://
+- View [our technical report](https://huggingface.co/papers/2601.02346).
 - Feel free to join [our discord server](https://discord.gg/Cbek57PrZE) if you have any questions or to interact with our researchers and developers.
 
 # Citation
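The updated front matter declares `library_name: transformers` and `pipeline_tag: text-generation`. As a minimal sketch of how such a checkpoint is typically loaded and queried, assuming the Hub repo id is `tiiuae/Falcon-H1R-7B` (inferred from the README title, not stated in this diff) and that the tokenizer ships a chat template:

```python
# Minimal text-generation sketch for the model this README describes.
# Assumptions (not confirmed by the diff): repo id "tiiuae/Falcon-H1R-7B",
# a tokenizer-provided chat template, and enough GPU memory for bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1R-7B"  # hypothetical id inferred from the README title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # long reasoning traces; bf16 keeps memory manageable
    device_map="auto",
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A generous max_new_tokens leaves room for the chain-of-thought before the answer.
outputs = model.generate(input_ids, max_new_tokens=1024, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The repo id and sampling parameters above are illustrative only; the usage section of the full model card, which lies outside this hunk, is authoritative.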