# NousCoder-14B-GGUF

GGUF format model files for NousCoder-14B.

## Available Files

| Filename | Quant Type | Size | Description |
|---|---|---|---|
| NousCoder-14B.q2_k.gguf | Q2_K | 5.36 GB | Smallest size, lowest quality. Best for testing whether the model runs at all. |
| NousCoder-14B.q3_k_l.gguf | Q3_K_L (Large) | 7.36 GB | Better-quality 3-bit quantization. |
| NousCoder-14B.q3_k_m.gguf | Q3_K_M (Medium) | 6.82 GB | Small size with slightly better quality than Q3_K_S. |
| NousCoder-14B.q3_k_s.gguf | Q3_K_S (Small) | 6.20 GB | Very small, low quality. Not recommended for most uses. |
| NousCoder-14B.q4_0.gguf | Q4_0 | 7.93 GB | Basic 4-bit quantization. Good balance of size and quality. |
| NousCoder-14B.q4_1.gguf | Q4_1 | 8.74 GB | 4-bit with higher accuracy than Q4_0. |
| NousCoder-14B.q4_k_m.gguf | Q4_K_M (Medium) | 8.38 GB | K-quant 4-bit, best balance of size and quality. Most popular choice. |
| NousCoder-14B.q4_k_s.gguf | Q4_K_S (Small) | 7.98 GB | K-quant 4-bit, optimized for smaller size. |
| NousCoder-14B.q5_0.gguf | Q5_0 | 9.56 GB | Basic 5-bit quantization. Higher quality than Q4. |
| NousCoder-14B.q5_1.gguf | Q5_1 | 10.37 GB | 5-bit with higher accuracy than Q5_0. |
| NousCoder-14B.q5_k_m.gguf | Q5_K_M (Medium) | 9.79 GB | K-quant 5-bit, excellent quality-to-size ratio. |
| NousCoder-14B.q5_k_s.gguf | Q5_K_S (Small) | 9.56 GB | K-quant 5-bit, good quality at a reasonable size. |
| NousCoder-14B.q6_k.gguf | Q6_K | 11.29 GB | 6-bit quantization. Very high quality, larger size. |
| NousCoder-14B.q8_0.gguf | Q8_0 | 14.62 GB | 8-bit quantization. Near-original quality, largest quantized size. |
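If you are unsure which file to pick, Q4_K_M is the usual default. To fetch a single quant without cloning the whole repository, you can use the Hugging Face CLI. This is a minimal sketch, assuming the repository id `AaryanK/NousCoder-14B-GGUF` and a recent `huggingface_hub` install; swap in the filename of whichever quant you want.

```bash
# Install the Hugging Face CLI (ships with the huggingface_hub package)
pip install -U "huggingface_hub[cli]"

# Download only the Q4_K_M file into the current directory
huggingface-cli download AaryanK/NousCoder-14B-GGUF \
  NousCoder-14B.q4_k_m.gguf --local-dir .
```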

## Usage

```bash
# With llama.cpp
./llama-cli -m NousCoder-14B.q4_k_m.gguf -p "Your prompt" -n 128
```
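For interactive or programmatic use, llama.cpp also ships an OpenAI-compatible HTTP server. The sketch below assumes a recent llama.cpp build that includes the `llama-server` binary; flag names can vary between versions, and `-ngl 99` (full GPU offload) should be lowered or dropped on CPU-only machines.

```bash
# Start an OpenAI-compatible server on port 8080.
# -c sets the context window; -ngl offloads layers to the GPU if one is available.
./llama-server -m NousCoder-14B.q4_k_m.gguf -c 4096 -ngl 99 --port 8080

# Query it with the standard chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a binary search in Python."}]}'
```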

## Original Model Card

### NousCoder-14B

License: Apache 2.0

We introduce NousCoder-14B, a competitive programming model post-trained from Qwen3-14B via reinforcement learning. On LiveCodeBench v6 (08/01/2024 - 05/01/2025), it achieves a Pass@1 accuracy of 67.87%, an improvement of 7.08 percentage points over Qwen3-14B's baseline Pass@1 of 60.79%. We trained on 24k verifiable coding problems using 48 NVIDIA B200 GPUs over the course of four days.
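For context, Pass@1 on code benchmarks like this is conventionally computed with the unbiased pass@k estimator; this formula is standard benchmark practice rather than something stated in the original card. With $n$ samples drawn per problem, of which $c$ pass all tests,

$$
\text{pass@}k = \mathbb{E}_{\text{problems}}\!\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right],
$$

which for $k = 1$ reduces to the average fraction $c/n$ of passing samples. The reported gain is the difference 67.87% - 60.79% = 7.08 percentage points.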


### Acknowledgements

I would like to thank my mentor, Roger Jin, as well as Dakota Mahan, Teknium, and others on the Nous Research team for their invaluable support throughout this project. I would also like to thank Together AI and Agentica for their immensely helpful blog posts on DeepCoder-14B. Finally, thank you to Modal and Lambda for their generous support in the form of compute credits.
