# dystrio/Qwen2.5-7B-Instruct-sculpt-throughput

**30% smaller, +34% faster prefill, drop-in replacement. No custom kernels. No runtime changes.**
Dystrio Sculpt structurally compresses transformer models, producing dense models that load with standard Hugging Face Transformers: no custom code, no new ops, no deployment friction.
This is the Throughput tier of Qwen 2.5 7B Instruct.
## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads like any standard Qwen2.5 checkpoint: same architecture, fewer parameters.
model = AutoModelForCausalLM.from_pretrained(
    "dystrio/Qwen2.5-7B-Instruct-sculpt-throughput",
    torch_dtype="bfloat16",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("dystrio/Qwen2.5-7B-Instruct-sculpt-throughput")

inputs = tokenizer("The future of AI inference is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
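Since this is an instruct-tuned checkpoint, chat prompts are normally formatted with the tokenizer's chat template. A minimal sketch that reuses `model` and `tokenizer` from above; the prompt itself is just an illustration:

```python
# Format a single-turn chat prompt with the standard chat-template API.
messages = [{"role": "user", "content": "Summarize the benefits of structural compression."}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

chat_outputs = model.generate(chat_inputs, max_new_tokens=100)
# Decode only the newly generated tokens.
print(tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```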
## Benchmark Results

All tiers were compiled from Qwen 2.5 7B Instruct and benchmarked on an A100 80GB in bf16:
| Model | PPL | PPL Ratio | Weights (GB) | Chat Prefill TPS | RAG TTFT p95 (ms) | Decode TPS |
|---|---|---|---|---|---|---|
| Baseline | 12.4633 | 1.0 | 14.185191 | 11510.6 | 117.869 | 71.1 |
| sculpt-default | 12.334 | 0.9896 | 12.964976 | 12352.7 | 110.714 | 72.7 |
| sculpt-production | 21.9239 | 1.7591 | 10.596324 | 14700.3 | 95.291 | 73.5 |
| sculpt-throughput | 23.2366 | 1.8644 | 9.950328 | 15386.6 | 91.914 | 73.3 |
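The exact evaluation harness is not reproduced here; the sketch below shows one common way to measure WikiText-103 perplexity with a fixed-window pass. The 2048-token window and the validation split are assumptions, so results may differ slightly from the table:

```python
# Assumed protocol: non-overlapping 2048-token windows over WikiText-103 validation,
# token-weighted average negative log-likelihood, then exp().
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dystrio/Qwen2.5-7B-Instruct-sculpt-throughput"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

text = "\n\n".join(load_dataset("wikitext", "wikitext-103-raw-v1", split="validation")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window, nlls, n_tokens = 2048, [], 0
for start in range(0, ids.size(1), window):
    chunk = ids[:, start : start + window]
    if chunk.size(1) < 2:
        continue  # nothing to predict in a single-token tail
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over chunk.size(1) - 1 targets
    nlls.append(loss.float() * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

print("perplexity:", torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```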
## Key Metrics (this model)
| Metric | Value |
|---|---|
| Weights memory | 9.950328 GB (30% smaller) |
| PPL ratio | 1.8644 |
| Chat prefill TPS | 15386.6 (+34%) |
| RAG TTFT p95 | 91.914 ms (-22%) |
| Decode TPS | 73.3 (flat) |
| Parameters | 5.34B |
## All Sculpt Tiers
| Tier | HuggingFace | Size | PPL Ratio | Use Case |
|---|---|---|---|---|
| default | dystrio/Qwen2.5-7B-Instruct-sculpt-default | 12.964976 GB | 0.9896 | Zero-regret: quality preserved, smaller footprint |
| production | dystrio/Qwen2.5-7B-Instruct-sculpt-production | 10.596324 GB | 1.7591 | Practical savings with modest quality tradeoff |
| throughput | dystrio/Qwen2.5-7B-Instruct-sculpt-throughput 👈 this model | 9.950328 GB | 1.8644 | Maximum usable compression for speed/edge |
## What is Dystrio Sculpt?
Dystrio Sculpt compiles transformer models into smaller, faster variants. Output models:
- Are dense (not sparse) — standard architecture, fewer parameters
- Load with standard HuggingFace Transformers — no custom code needed
- Require no custom kernels and no runtime changes
- Work as a one-step compile before deployment
- Stack with quantization (AWQ, GPTQ, GGUF) for compound savings
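On the last point: the output is a standard dense checkpoint, so it quantizes like any other model. A minimal AutoAWQ sketch, where the quantization settings and the output directory are illustrative assumptions rather than a published recipe:

```python
# Stack 4-bit AWQ on top of the already-compressed Sculpt checkpoint.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "dystrio/Qwen2.5-7B-Instruct-sculpt-throughput"
quant_path = "Qwen2.5-7B-Instruct-sculpt-throughput-awq"  # hypothetical output dir
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```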
## Compatibility
- ✅ HuggingFace Transformers
- ✅ vLLM (example after this list)
- ✅ TGI (Text Generation Inference)
- ✅ llama.cpp / GGUF conversion
- ✅ AWQ / GPTQ quantization
- ✅ Any framework that loads standard safetensors
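For example, serving with vLLM should need nothing beyond the model id. A minimal sketch; the sampling settings are illustrative:

```python
# Offline inference with vLLM against the compressed checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="dystrio/Qwen2.5-7B-Instruct-sculpt-throughput", dtype="bfloat16")
params = SamplingParams(temperature=0.7, max_tokens=100)

outputs = llm.generate(["The future of AI inference is"], params)
print(outputs[0].outputs[0].text)
```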
## Benchmark Environment
- GPU: NVIDIA A100-SXM4-80GB
- dtype: bf16
- Torch: 2.10.0+cu128
- Transformers: 5.3.0
- Deterministic: True
- Single-GPU, standard HuggingFace Transformers, no custom kernels.
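One possible way to reproduce the deterministic, single-GPU setup locally (an assumption; the actual harness configuration is not published here):

```python
# Seed and request deterministic kernels before running any benchmark passes.
import torch

torch.manual_seed(0)
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.benchmark = False
```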
## Metric Definitions
- PPL ratio: WikiText-103 perplexity relative to baseline. Below 1.0 means quality improved over the baseline; above 1.0 means quality degraded.
- Prefill TPS: Tokens per second during prompt encoding (higher = faster).
- TTFT p95: Time to first token at 95th percentile (lower = faster).
- Decode TPS: Tokens per second during generation (higher = faster).
- Weights (GB): Model parameter memory (deterministic, runtime-independent).
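As a worked example of the first definition, a tier's PPL ratio is simply its perplexity divided by the baseline's, using the values from the benchmark table:

```python
# PPL ratio for this model, computed from the benchmark table above.
baseline_ppl = 12.4633     # Baseline row
throughput_ppl = 23.2366   # sculpt-throughput row
print(round(throughput_ppl / baseline_ppl, 4))  # 1.8644
```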
## Citation

```bibtex
@misc{dystrio_sculpt_2026,
  title  = {Dystrio Sculpt: Structural Compilation for Transformer LLMs},
  author = {Dystrio},
  year   = {2026},
  url    = {https://huggingface.co/dystrio}
}
```