LFM2.5-1.2B-Saiga-It-v3
Continued attempt to make a Russian-language Saiga out of LFM2.5 1.2B by Liquid AI. This is v3 — a significant upgrade over v2 with much more CPT data.
Inspired by the amazing work of Ilya Gusev and the Saiga project.
What changed vs v2?
CPT dataset grew from ~1.25M to ~2.69M examples:
- Full Russian Wikipedia (~1.9M articles, up from 350k)
- Added Habr (~295k technical articles in Russian)
- Replaced c4 ru with CulturaX ru (200k)
- Kept English c4 (300k) for retention
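As a quick sanity check, the per-source counts above add up to roughly the stated total. The figures below are the rounded numbers quoted in this card, so treat the proportions as ballpark:

```python
# Approximate per-source example counts for the v3 CPT mixture
# (rounded figures from this card, not exact dataset sizes).
cpt_mixture = {
    "wikimedia/wikipedia (ru)": 1_900_000,
    "IlyaGusev/habr (ru)": 295_000,
    "uonlp/CulturaX (ru)": 200_000,
    "allenai/c4 (en)": 300_000,
}

total = sum(cpt_mixture.values())
print(f"total: ~{total / 1e6:.2f}M examples")  # close to the stated ~2.69M

for name, n in cpt_mixture.items():
    print(f"{name}: {n / total:.1%} of the mix")
```

Russian Wikipedia dominates the mix at roughly 70%, with the English c4 slice kept as a small retention anchor.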
SFT dataset expanded:
- Added IlyaGusev/ru_turbo_saiga
- Added lksy/ru_instruct_gpt4
Training
Stage 1 — Continued Pre-Training (CPT):
- wikimedia/wikipedia (Russian, ~1.9M articles)
- IlyaGusev/habr (Russian, ~295k technical articles)
- uonlp/CulturaX (Russian, 200k)
- allenai/c4 (English, 300k for retention)
- 10,000 steps, lr=3e-5, loss 2.26 → 2.13
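The stage 1 settings can be expressed as a `TrainingArguments` fragment. Only `max_steps` and `learning_rate` come from this card; the batch size, warmup, scheduler, and precision below are illustrative assumptions, not the settings actually used:

```python
from transformers import TrainingArguments

# Only max_steps and learning_rate are taken from this card;
# everything else is an illustrative assumption.
cpt_args = TrainingArguments(
    output_dir="lfm2.5-1.2b-saiga-cpt",
    max_steps=10_000,                 # stated in the card
    learning_rate=3e-5,               # stated in the card
    per_device_train_batch_size=8,    # assumption
    gradient_accumulation_steps=4,    # assumption
    warmup_ratio=0.03,                # assumption
    lr_scheduler_type="cosine",       # assumption
    bf16=True,                        # matches the bfloat16 usage example below
    logging_steps=50,
)
```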
Stage 2 — Supervised Fine-Tuning (SFT):
- IlyaGusev/saiga_scored (opus_score ≥ 8, ~27k)
- d0rj/alpaca-cleaned-ru (15k)
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/ru_turbo_saiga
- lksy/ru_instruct_gpt4
- 2 epochs, lr=5e-6, final loss ~1.21
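The opus_score ≥ 8 cut on saiga_scored amounts to a simple filter. A minimal stdlib sketch on toy rows (the real dataset would be loaded with `datasets.load_dataset("IlyaGusev/saiga_scored")`; only the field name `opus_score` is taken from this card):

```python
# Toy rows standing in for IlyaGusev/saiga_scored records.
rows = [
    {"id": 1, "opus_score": 9},
    {"id": 2, "opus_score": 7},
    {"id": 3, "opus_score": 8},
    {"id": 4, "opus_score": 5},
]

# Keep only conversations scored 8 or higher, as described above.
kept = [r for r in rows if r["opus_score"] >= 8]
print(len(kept))  # 2
```

On the real dataset the same predicate via `dataset.filter` yields the ~27k conversations used for SFT.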
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "NickupAI/LFM2.5-1.2B-Saiga-It-v3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "NickupAI/LFM2.5-1.2B-Saiga-It-v3",
    trust_remote_code=True,
)

# "Привет! Как дела?" = "Hi! How are you?"
messages = [{"role": "user", "content": "Привет! Как дела?"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.1,
    top_k=50,
    top_p=0.1,
    repetition_penalty=1.05,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
Honest Warning ⚠️
Still hallucinates. Training a model of this size properly requires significantly more compute and time than was available for this project — the CPT data barely scratches the surface of what would be needed for reliable factual knowledge. The model knows the format of answers but sometimes invents the content.
Recommended for:
- Creative nonsense generation, lol
- Experiments and research
- Russian language generation tasks
Not recommended for:
- Factual questions
- Medicine, law or any serious topics
- Astronomy (the dwarf planet situation has not improved)
— Назови все карликовые планеты Солнечной системы ("Name all the dwarf planets of the Solar System")
- Марс, 2. Венус, 3. Земля, 4. Юпитер... 8. Эритрея ("Mars, 2. Venus, 3. Earth, 4. Jupiter... 8. Eritrea")
v2 had Kvass and Gamma-Tit. v3 replaced them with Eritrea. Progress.
GGUF versions
Quantized versions available at NickupAI/LFM2.5-1.2B-Saiga-It-v3-GGUF.
What's next
LoRA experiments on larger base models (7B+). The 1.2B size has fundamental limitations for factual knowledge — time to scale up.
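For the planned LoRA runs, a typical peft configuration might look like the sketch below. Every value here is an assumption chosen for illustration; none of it was used for v3:

```python
from peft import LoraConfig

# Illustrative LoRA settings for a hypothetical 7B+ base model;
# a sketch of the planned direction, not settings from this project.
lora_config = LoraConfig(
    r=16,                      # assumption
    lora_alpha=32,             # assumption
    lora_dropout=0.05,         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```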