gemma-3-270m-14L-distilled

Model Description

This model is a surgically optimized and distilled version of google/gemma-3-270m, created as the final challenge of Chapter 6 in the book "Rearchitecting LLMs".

  • Book: Rearchitecting LLMs
  • Framework: OptiPFair
  • Technique: Depth Pruning + Knowledge Distillation (Labels-Only with Skew KL Divergence)
  • Chapter: Chapter 6 - Knowledge Recovery



Performance & Retention Metrics

The goal of this optimization was to maximize parameter efficiency while maintaining the highest possible retention of the Teacher's capabilities.

Retention Summary (vs Teacher Baseline)

| Metric | Value | Description |
|---|---|---|
| PPL Retention | 117.21% | Linguistic quality preserved (Teacher PPL / Student PPL × 100) |
| Capabilities Retention | 88.47% | Reasoning power retained across benchmarks (Avg Student / Avg Teacher × 100) |
| Overall Retention | 92.57% | Combined health score (average of PPL and capabilities retention) |
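The two retention figures can be reproduced directly from the numbers reported on this card; a minimal sketch (small deviations from the card's values come from rounding in the reported inputs):

```python
# Reproduce the retention metrics from the card's reported numbers.
teacher_ppl, student_ppl = 12.91, 11.01
teacher_avg, student_avg = 52.7, 46.6  # benchmark averages (%)

# PPL retention: the student's PPL is lower than the teacher's,
# so retention comes out above 100%.
ppl_retention = teacher_ppl / student_ppl * 100

# Capabilities retention: fraction of the teacher's average benchmark score.
cap_retention = student_avg / teacher_avg * 100

print(f"PPL retention:          {ppl_retention:.2f}%")  # ~117.26 (card: 117.21)
print(f"Capabilities retention: {cap_retention:.2f}%")  # ~88.42 (card: 88.47)
```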

Capability Benchmarks (LM Evaluation Harness)

Recovery = how much of the pruning-induced degradation was recovered through distillation: (Student − Pruned) / (Teacher − Pruned) × 100.

| Benchmark | Teacher | Pruned (No KD) | Student (After KD) | Recovery |
|---|---|---|---|---|
| ARC-Easy | 57.0% | 42.2% | 48.9% | 45.0% |
| Winogrande | 54.1% | 51.6% | 52.9% | 50.0% |
| HellaSwag | 41.4% | 34.1% | 35.9% | 25.4% |
| LAMBADA (OpenAI) | 42.7% | 17.8% | 32.2% | 57.7% |
| PIQA | 68.3% | 60.8% | 63.3% | 33.6% |
| Average | 52.7% | 41.3% | 46.6% | 46.7% |
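Each Recovery figure follows from the three scores in its row; a quick check (inputs are the rounded table values, so results can differ slightly from the card's numbers, which were computed from unrounded scores):

```python
# Recovery = share of the pruning-induced drop that distillation won back.
def recovery(teacher, pruned, student):
    return (student - pruned) / (teacher - pruned) * 100

rows = {
    "ARC-Easy":         (57.0, 42.2, 48.9),
    "Winogrande":       (54.1, 51.6, 52.9),
    "HellaSwag":        (41.4, 34.1, 35.9),
    "LAMBADA (OpenAI)": (42.7, 17.8, 32.2),
    "PIQA":             (68.3, 60.8, 63.3),
}
for name, scores in rows.items():
    print(f"{name}: {recovery(*scores):.1f}% recovered")
```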

Linguistic Quality

  • Final Perplexity (PPL): 11.01
  • Teacher Baseline PPL: 12.91
  • Pruned (No KD) PPL: 120.66
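As a reminder of what these numbers mean: perplexity is the exponential of the mean per-token negative log-likelihood on the evaluation split, so a PPL of 11.01 corresponds to a mean loss of ln(11.01) ≈ 2.40 nats per token.

```python
import math

# PPL = exp(mean negative log-likelihood per token).
def perplexity(mean_nll: float) -> float:
    return math.exp(mean_nll)

print(perplexity(math.log(11.01)))  # -> 11.01 (up to float precision)
```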

Architecture Details

  • Teacher Model: google/gemma-3-270m (18 transformer blocks, 268,098,176 parameters)
  • Student Model: Pruned to 14 transformer blocks (245,803,648 parameters)
  • Layers Removed: 4 layers (indices: [9, 8, 14, 16])
  • Parameter Reduction: 8.32%
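The layer choice above is data-driven. As an illustration only (not the OptiPFair implementation), selecting the k least-important blocks from hypothetical per-layer importance scores might look like this:

```python
# Hypothetical per-layer importance scores (e.g. 1 - cosine similarity between
# a block's input and output hidden states: a low score means the block acts
# almost as an identity and is a candidate for removal).
def least_important_layers(scores, k):
    """Return the indices of the k layers with the lowest importance."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    return ranked[:k]

# Toy scores for an 18-block model; indices 8, 9, 14, 16 score lowest here
# by construction, mirroring the layers removed from this checkpoint.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.6, 0.7, 0.5,
          0.1, 0.1, 0.5, 0.6, 0.5, 0.6, 0.2, 0.5, 0.2, 0.9]
print(sorted(least_important_layers(scores, 4)))  # -> [8, 9, 14, 16]
```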

Training Procedure

Dataset

  • Source: Cosmopedia-v2
  • Samples: 40,000 (balanced across 4 subsets: stories, wikihow, openstax, web_samples)
  • Train/Val Split: 80% / 20%
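The balanced-sampling and split logic described above can be sketched with the standard library alone (the subset names come from this card; the actual loading would go through the `datasets` library):

```python
import random

def balanced_split(pools, total=40_000, train_frac=0.8, seed=42):
    """Draw total/len(pools) examples per subset, shuffle, then split 80/20."""
    per_subset = total // len(pools)
    rng = random.Random(seed)
    samples = [ex for pool in pools.values() for ex in rng.sample(pool, per_subset)]
    rng.shuffle(samples)
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

# Toy pools standing in for the four Cosmopedia-v2 subsets.
pools = {name: [f"{name}-{i}" for i in range(20_000)]
         for name in ("stories", "wikihow", "openstax", "web_samples")}
train, val = balanced_split(pools)
print(len(train), len(val))  # -> 32000 8000
```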

Hyperparameters

  • Epochs: 5
  • Batch Size: 16 (effective: 64 with gradient accumulation)
  • Learning Rate: 4e-05
  • Loss Function: α·CrossEntropy + β·Skew-KLD
    • Task Loss Weight (α): 0.5
    • Logits Loss Weight (β): 0.5
    • Skew Interpolation Factor: 0.0
    • Temperature: 2.0
  • Optimizer: AdamW
  • Gradient Clipping: 1.0
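The combined loss mixes hard-label cross-entropy with a temperature-scaled skew KL term. A pure-Python sketch over a toy vocabulary, using one common skew formulation, KL(teacher ∥ λ·teacher + (1−λ)·student), which reduces to the standard forward KL when λ = 0 (matching the skew factor of 0.0 above); the exact formulation and the T² scaling are assumptions, not necessarily OptiPFair's implementation:

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, target, alpha=0.5, beta=0.5,
                 T=2.0, skew=0.0):
    """alpha * cross-entropy(hard labels) + beta * T^2 * skew-KL(teacher, student)."""
    # Hard-label cross-entropy at temperature 1.
    ce = -math.log(softmax(student_logits)[target])
    # Temperature-softened distributions for the distillation term.
    p = softmax(teacher_logits, T)   # teacher
    q = softmax(student_logits, T)   # student
    # Skew KL: compare the teacher with a teacher/student mixture; skew=0
    # recovers the plain forward KL(p || q).
    mix = [skew * pi + (1 - skew) * qi for pi, qi in zip(p, q)]
    kld = sum(pi * math.log(pi / mi) for pi, mi in zip(p, mix))
    return alpha * ce + beta * (T ** 2) * kld

loss = distill_loss([2.0, 0.5, -1.0], [1.8, 0.7, -0.9], target=0)
print(round(loss, 4))
```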

Hardware & Training Time

  • GPU: NVIDIA A100-SXM4-80GB
  • Training Time: 4109.8s (68.50 minutes)
  • Avg Time per Epoch: 822.0s

How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_id = "oopere/gemma-3-270m-14L-distilled"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate text
prompt = "Paris is the capital of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
    num_beams=3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Limitations & Intended Use

Intended Use

This is an educational model created as part of the Hands-on Lab in Chapter 6 of "Rearchitecting LLMs". It demonstrates:

  • Surgical depth pruning using data-driven layer importance analysis
  • Knowledge recovery through labels-only distillation with Skew KL Divergence
  • The complete optimization pipeline: Prune → Distill → Evaluate

Not intended for production use. This model serves as a learning artifact and baseline for readers to improve upon.

Limitations

  • Training Data: General-purpose Cosmopedia corpus (not domain-specialized)
  • Knowledge Coverage: Reduced compared to full-scale models due to structural pruning
  • Capabilities: Best suited for simple completion tasks; complex reasoning may be degraded
  • Language: English only

Citation

If you use this model or the techniques described in your research or projects, please cite:

Book

```bibtex
@book{martra2026rearchitecting,
  author    = {Pere Martra},
  title     = {Rearchitecting LLMs: Structural techniques for efficient models},
  publisher = {Manning Publications},
  year      = {2026},
  url       = {https://hubs.la/Q040tvtp0}
}
```

Framework

```bibtex
@software{optipfair2024,
  author = {Pere Martra},
  title  = {OptiPFair: Structural Pruning and Bias Analysis for LLMs},
  year   = {2024},
  url    = {https://github.com/peremartra/optipfair}
}
```

Acknowledgments

This model was created following the methodologies taught in "Rearchitecting LLMs" (Manning Publications, 2026). Special thanks to the Manning editorial team and the open-source community behind Hugging Face Transformers and PyTorch.

Challenge for readers: Can you improve the retention metrics beyond 92.6%? Try adjusting:

  • Layer selection strategy (use cosine similarity analysis)
  • Distillation dataset (domain-specific data)
  • Loss function weights (α, β, temperature)
  • Training epochs and learning rate

Share your results in the book's discussion forum!
