Inference Providers (active filter: Q8)
cibernicola/FLOR-6.3B-xat-Q8_0 • Text Generation • 6B • Updated • 14
cibernicola/FLOR-1.3B-xat-Q8 • Text Generation • 1B • Updated • 4
cibernicola/FLOR-6.3B-xat-Q5_K • Text Generation • 6B • Updated • 11
prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B • Updated • 186 • 2
prithivMLmods/Qwen2.5-Coder-7B-GGUF • Text Generation • 8B • Updated • 140 • 4
prithivMLmods/Qwen2.5-Coder-3B-GGUF • Text Generation • 3B • Updated • 43 • 4
prithivMLmods/Qwen2.5-Coder-1.5B-GGUF • Text Generation • 2B • Updated • 404 • 5
prithivMLmods/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B • Updated • 142 • 3
prithivMLmods/Qwen2.5-Coder-3B-Instruct-GGUF • Text Generation • 3B • Updated • 107 • 5
prithivMLmods/Llama-3.2-3B-GGUF • Text Generation • 3B • Updated • 114 • 2
harisnaeem/Phi-4-mini-instruct-GGUF-Q8 • Text Generation • 4B • Updated • 3
ykarout/llama3-deepseek_Q8 • Text Generation • 8B • Updated • 1
michelkao/Ollama-3.2-GGUF • Text Generation • 3B • Updated • 203
SiddhJagani/gpt-oss-20b-no-think-mlx-Q8 • Text Generation • 21B • Updated • 109 • 1