# dystrio/Llama-3.2-3B-Instruct-sculpt-default

7% smaller, quality improved (0.9678x baseline PPL), drop-in replacement. No custom kernels. No runtime changes.
Dystrio Sculpt structurally compresses transformer models, producing dense models that load with standard transformers — no custom code, no new ops, no deployment friction.
This is the `default` tier of Llama 3.2 3B Instruct.
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dystrio/Llama-3.2-3B-Instruct-sculpt-default"

# Loads like any stock Llama checkpoint: standard config, standard safetensors.
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="bfloat16", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The future of AI inference is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
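For chat-style use, this instruct checkpoint expects Llama 3.2 chat formatting. A minimal sketch using the standard `apply_chat_template` API and reusing the `model` and `tokenizer` above (the prompt content is illustrative):

```python
# Format a conversation with the model's built-in chat template; this inserts
# the Llama 3.2 special tokens and the assistant header for us.
messages = [{"role": "user", "content": "Explain structural compression in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```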
## Benchmark Results

All tiers were compiled from Llama 3.2 3B Instruct; benchmarks below were run on an A100 80GB in bf16:

| Model | PPL | PPL Ratio | Weights (GB) | Chat Prefill TPS | RAG TTFT p95 (ms) | Decode TPS |
|---|---|---|---|---|---|---|
| Baseline | 17.7333 | 1.0000 | 5.984 | 20742.1 | 75.219 | 74.7 |
| sculpt-default (this model) | 17.1627 | 0.9678 | 5.554 | 21777.1 | 71.177 | 74.7 |
| sculpt-production | 21.5540 | 1.2155 | 5.307 | 22728.2 | 70.160 | 72.7 |
| sculpt-throughput | 26.9519 | 1.5198 | 5.000 | 23116.0 | 69.412 | 72.3 |
| sculpt-experimental | 37.8440 | 2.1341 | 4.446 | 25457.5 | 68.204 | 73.1 |
## Key Metrics (this model)

| Metric | Value |
|---|---|
| Weights memory | 5.554 GB (7.2% smaller) |
| PPL ratio | 0.9678 |
| Chat prefill TPS | 21777.1 (+5.0%) |
| RAG TTFT p95 | 71.177 ms (-5.4%) |
| Decode TPS | 74.7 (flat) |
| Parameters | 2.98B |
## All Sculpt Tiers

| Tier | HuggingFace | Size | PPL Ratio | Use Case |
|---|---|---|---|---|
| default | dystrio/Llama-3.2-3B-Instruct-sculpt-default 👈 this model | 5.554 GB | 0.9678 | Zero-regret: quality preserved, smaller footprint |
| production | dystrio/Llama-3.2-3B-Instruct-sculpt-production | 5.307 GB | 1.2155 | Practical savings with modest quality tradeoff |
| throughput | dystrio/Llama-3.2-3B-Instruct-sculpt-throughput | 5.000 GB | 1.5198 | Maximum usable compression for speed/edge |
| experimental | dystrio/Llama-3.2-3B-Instruct-sculpt-experimental | 4.446 GB | 2.1341 | Boundary exploration, maximum structural compression |
## What is Dystrio Sculpt?
Dystrio Sculpt compiles transformer models into smaller, faster variants. Output models:
- Are dense (not sparse) — standard architecture, fewer parameters
- Load with standard HuggingFace Transformers — no custom code needed
- Require no custom kernels and no runtime changes
- Work as a one-step compile before deployment
- Stack with quantization (AWQ, GPTQ, GGUF) for compound savings (see the sketch below)
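As an illustration of that last point, a 4-bit AWQ pass can be applied directly on top of this checkpoint. A minimal sketch using AutoAWQ; the quantization settings shown are common defaults, not values validated by the benchmarks above:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "dystrio/Llama-3.2-3B-Instruct-sculpt-default"
# Typical 4-bit AWQ settings; tune these for your own quality/size target.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibrate and quantize, then save a compound-compressed checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Llama-3.2-3B-Instruct-sculpt-default-awq")
tokenizer.save_pretrained("Llama-3.2-3B-Instruct-sculpt-default-awq")
```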
## Compatibility
- ✅ HuggingFace Transformers
- ✅ vLLM
- ✅ TGI (Text Generation Inference)
- ✅ llama.cpp / GGUF conversion
- ✅ AWQ / GPTQ quantization
- ✅ Any framework that loads standard safetensors
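Because the output is a standard dense Llama architecture, serving it elsewhere should need nothing beyond the repo id. For example, a vLLM sketch (not part of the benchmarks above):

```python
from vllm import LLM, SamplingParams

# vLLM reads the standard config/safetensors; no special flags needed.
llm = LLM(model="dystrio/Llama-3.2-3B-Instruct-sculpt-default", dtype="bfloat16")
params = SamplingParams(temperature=0.7, max_tokens=100)

outputs = llm.generate(["The future of AI inference is"], params)
print(outputs[0].outputs[0].text)
```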
## Benchmark Environment
- GPU: NVIDIA A100-SXM4-80GB
- dtype: bf16
- Torch: 2.10.0+cu128
- Transformers: 5.3.0
- Deterministic: True
- Single-GPU, standard HuggingFace Transformers, no custom kernels.
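The `Deterministic: True` flag above plausibly corresponds to a setup like the following; the exact seeding and flags used by the benchmark harness are not published here, so this is an assumption:

```python
import os
import random

import numpy as np
import torch

# Assumed reproducibility setup; the seed and workspace config are illustrative.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some deterministic CUDA kernels
seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.use_deterministic_algorithms(True)
```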
## Metric Definitions
- PPL ratio: WikiText-103 perplexity relative to baseline. <1.0 = quality improved.
- Prefill TPS: Tokens per second during prompt encoding (higher = faster).
- TTFT p95: Time to first token at 95th percentile (lower = faster).
- Decode TPS: Tokens per second during generation (higher = faster).
- Weights (GB): Model parameter memory (deterministic, runtime-independent).
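For example, this model's PPL ratio is 17.1627 / 17.7333 ≈ 0.9678. A minimal fixed-window perplexity sketch over WikiText-103 (an approximation; the card does not specify the context length or striding it used):

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dystrio/Llama-3.2-3B-Instruct-sculpt-default"
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="bfloat16", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize the whole validation split as one stream (a long-sequence warning is expected).
text = "\n\n".join(load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window = 2048  # assumed context length; not stated by the card
nll_sum, token_count = 0.0, 0
with torch.no_grad():
    for start in range(0, ids.size(1) - 1, window):
        chunk = ids[:, start : start + window]
        # HF shifts labels internally, so .loss is the mean NLL over len-1 targets.
        loss = model(chunk, labels=chunk).loss
        nll_sum += loss.item() * (chunk.size(1) - 1)
        token_count += chunk.size(1) - 1

print(f"PPL: {math.exp(nll_sum / token_count):.4f}")
```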
## Citation

```bibtex
@misc{dystrio_sculpt_2026,
  title={Dystrio Sculpt: Structural Compilation for Transformer LLMs},
  author={Dystrio},
  year={2026},
  url={https://huggingface.co/dystrio}
}
```
## Downstream Benchmarks (lm-eval)
Evaluated with lm-eval-harness on A100-80GB, bf16, zero-shot.
| Benchmark | Baseline | This Model | Delta |
|---|---|---|---|
| ARC-Challenge | 0.4360 | 0.3737 | -0.0623 |
| HellaSwag | 0.5329 | 0.4971 | -0.0358 |
| MMLU | 0.6223 | 0.5272 | -0.0951 |
| TruthfulQA MC2 | 0.5138 | 0.4625 | -0.0513 |
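These numbers should be reproducible with the harness's standard zero-shot tasks. A sketch via the lm-eval Python API; the task names and settings are our assumptions about the setup, not a published eval script:

```python
import lm_eval

# Zero-shot run over the four benchmarks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=dystrio/Llama-3.2-3B-Instruct-sculpt-default,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    num_fewshot=0,
    batch_size="auto",
)
print(results["results"])
```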