## How to use from llama.cpp

### Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cstr/pixie-rune-v1-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf cstr/pixie-rune-v1-GGUF:Q8_0
```

### Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cstr/pixie-rune-v1-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf cstr/pixie-rune-v1-GGUF:Q8_0
```

### Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cstr/pixie-rune-v1-GGUF:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf cstr/pixie-rune-v1-GGUF:Q8_0
```

### Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cstr/pixie-rune-v1-GGUF:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cstr/pixie-rune-v1-GGUF:Q8_0
```

### Use Docker

```sh
docker model run hf.co/cstr/pixie-rune-v1-GGUF:Q8_0
```
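Once `llama-server` is running, its OpenAI-compatible HTTP API can be called from any client. A minimal Python sketch, assuming the default host and port (`localhost:8080`) and that the server exposes `/v1/embeddings` for this embedding model (the server may need its embeddings mode enabled at startup):

```python
import json

# Build a request for llama-server's OpenAI-compatible embeddings
# endpoint. Host, port, and model name here are assumptions.
url = "http://localhost:8080/v1/embeddings"
payload = {"model": "pixie-rune-v1", "input": ["Hello world"]}
body = json.dumps(payload).encode("utf-8")

# Sending it requires a running server, so the call is left commented:
# import urllib.request
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     vectors = [d["embedding"] for d in json.load(resp)["data"]]
print(url, body.decode("utf-8"))
```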
# pixie-rune-v1 GGUF

GGUF format of telepix/PIXIE-Rune-v1.0 for use with CrispEmbed and Ollama.

## Files

| File | Quantization | Size |
| --- | --- | --- |
| pixie-rune-v1-q4_k.gguf | Q4_K | 0 MB |
| pixie-rune-v1-q8_0.gguf | Q8_0 | 0 MB |
| pixie-rune-v1.gguf | F32 | 0 MB |

Recommended: Q8_0 for best quality, Q4_K for smallest size. Both quantizations pass the cross-lingual check (cosine similarity of embeddings against the HuggingFace reference model).
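The sizes listed above are placeholders. As a rough sanity check, approximate file sizes can be estimated from typical bits-per-weight figures for these llama.cpp quantization formats (the ~8.5 and ~4.5 bpw values below are approximate community figures, not exact; real files also carry tokenizer and metadata overhead):

```python
# Rough size estimate: parameters x bits-per-weight / 8.
# The bpw values are approximations for these formats, not exact.
params = 560e6  # from the model card: 560M parameters
bits_per_weight = {"F32": 32.0, "Q8_0": 8.5, "Q4_K": 4.5}

estimates = {name: params * bpw / 8 / 1e6 for name, bpw in bits_per_weight.items()}
for name, mb in estimates.items():
    print(f"{name}: ~{mb:.0f} MB")
```

This suggests roughly 2.2 GB for F32, ~600 MB for Q8_0, and ~315 MB for Q4_K.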

## Quick Start

### CrispEmbed

```sh
./crispembed -m pixie-rune-v1 "Hello world"
./crispembed-server -m pixie-rune-v1 --port 8080
```

### Ollama (with CrispStrobe fork)

```sh
echo "FROM pixie-rune-v1-q8_0.gguf" > Modelfile
ollama create pixie-rune-v1 -f Modelfile
curl http://localhost:11434/api/embed -d '{"model":"pixie-rune-v1","input":["Hello world"]}'
```

### Python (CrispEmbed)

```python
from crispembed import CrispEmbed

model = CrispEmbed("pixie-rune-v1-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
```
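Embedding vectors are usually compared with cosine similarity. A self-contained sketch in plain Python, with dummy 3-dim vectors standing in for the model's 1024-dim `encode(...)` output:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Dummy vectors; in practice these would be two rows of `vectors` above.
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
print(cosine(v1, v2))  # ≈ 0.5
```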

## Model Details

| Property | Value |
| --- | --- |
| Architecture | XLM-R |
| Parameters | 560M |
| Embedding dimension | 1024 |
| Layers | 24 |
| Pooling | CLS |
| Tokenizer | SentencePiece |
| Language | multilingual |
| Q8_0 vs HuggingFace | cross-lingual OK |
| Q4_K vs HuggingFace | cross-lingual OK |
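The table lists CLS pooling: the sentence embedding is the final hidden state of the first token, not an average over all tokens. A toy sketch, with 3-dim states standing in for the real 1024-dim ones:

```python
# Per-token final hidden states for one sentence (toy 3-dim vectors).
hidden_states = [
    [0.1, 0.2, 0.3],  # <s> (the CLS-style token in XLM-R)
    [0.4, 0.5, 0.6],  # "Hello"
    [0.7, 0.8, 0.9],  # "world"
]

# CLS pooling keeps only the first token's state as the sentence embedding.
sentence_embedding = hidden_states[0]
print(sentence_embedding)
```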

## Server API

The CrispEmbed server supports four API dialects:

- `POST /embed` (native)
- `POST /v1/embeddings` (OpenAI-compatible)
- `POST /api/embed` (Ollama-compatible)
- `POST /api/embeddings` (Ollama legacy)
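The four dialects accept slightly different request shapes. A sketch of plausible payloads: the native `/embed` shape is an assumption, while the OpenAI and Ollama shapes follow those public APIs (Ollama's legacy endpoint takes a single `prompt` string rather than an `input` list):

```python
import json

texts = ["Hello world"]

# One payload per dialect; the "/embed" shape is an assumption.
payloads = {
    "/embed": {"input": texts},
    "/v1/embeddings": {"model": "pixie-rune-v1", "input": texts},
    "/api/embed": {"model": "pixie-rune-v1", "input": texts},
    "/api/embeddings": {"model": "pixie-rune-v1", "prompt": texts[0]},  # legacy: single string
}

for path, payload in payloads.items():
    print(path, json.dumps(payload))
```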
