Ahoy! Welcome aboard, matey.
This be Skilgrimr-Qwen3-14B: a rowdy, myth-brewin', logic-slingin', treasure-huntin' pirate bard AI!
Built atop the mighty Qwen3-14B model, this fine beast switches between deep logical thinking and drunken freestyle lore faster than you can down a flagon of rum.
Born from a 420-fueled dream of pirate taverns, cursed islands, and chaotic wisdom, Skilgrimr be here to weave legends, riddle ya wit riddles, and maybe, just maybe, guide ya to treasure.
Model card template blessed by the stars: the official Hugging Face template.
⚓ Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/Skilgrimr-Qwen3-14B"

# Load the tokenizer and model; device_map="auto" spreads layers across available GPUs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Build the chat prompt; enable_thinking=True switches on Qwen3's reasoning mode
prompt = "Spin me a yarn about a one-eyed mermaid who steals stars."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)

# Generate, then decode only the newly produced tokens, skipping the prompt
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
output = tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True)
print(output)
```
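In thinking mode the model wraps its reasoning in <think>…</think> tags. Below be a minimal sketch for splitting the reasoning from the final yarn, following the base Qwen3 quickstart; the token id 151668 for </think> is an assumption carried over from the base Qwen3 vocabulary:

```python
# Split the <think> reasoning from the final answer (Qwen3-style parsing)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
try:
    # 151668 is assumed to be the </think> token id, as in base Qwen3
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0  # no </think> found (e.g. thinking disabled)
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("Thinking:", thinking_content)
print("Yarn:", content)
```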
🏴 Training Details
What He Was Fed
Synthetic pirate myths, bard songs, ghost ship tales, shanties, and ancient riddles
Based on Qwen3's multilingual and coding superbrain
Lovingly fine-tuned by the Mind Expander pirate council
Training Vibe
BF16 mixed precision brew (an illustrative sketch follows this list)
Heavy thinking mode enabled for smart piratey logic
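For the curious, a purely hypothetical sketch of what a BF16 brew could look like with Hugging Face TrainingArguments; the council's actual recipe, data, and hyperparameters be not published here:

```python
# Hypothetical BF16 fine-tuning config (illustrative only, not the real recipe)
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="skilgrimr-qwen3-14b",
    bf16=True,                      # the BF16 mixed precision brew
    per_device_train_batch_size=1,  # 14B params leave little headroom per GPU
    gradient_accumulation_steps=16, # effective batch of 128 across 8 GPUs
    learning_rate=2e-5,
    num_train_epochs=2,
    logging_steps=10,
)
```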
📊 Evaluation
How He Was Tested
Pirate Lore Challenge, a custom benchmark (Arrr rated 🌟🌟🌟🌟🌟)
Multi-turn chaotic conversations and riddle-solving
Human feedback ("Would drink with him?" survey: 93% said Aye)
🌱 Environmental Impact
Hardware: 8x A100s (ye mighty GPUs)
Training Hours: 40 groggy hours
Cloud: Green-powered cloud node
CO₂e Emissions: ~12.4 kg (offset by planting 1.2 drunken sailor trees)
⚙️ Technical Rum-ology
14.8B parameters
32,768 tokens context
GQA (grouped-query attention) heads
Reasoning Mode toggle: /think vs /no_think (see the toggle sketch after this list)
Supports huge context lengths (with YaRN, up to 131k tokens; see the YaRN sketch after this list)
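A minimal sketch of both reasoning switches, following the base Qwen3 usage notes: the hard switch is the chat-template flag, while the soft switch is a tag appended to a user turn (and only takes effect while enable_thinking=True):

```python
# Hard switch: disable thinking entirely when building the prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # no <think> block will be generated
)

# Soft switch: toggle per turn by appending /think or /no_think to the message
messages = [{"role": "user", "content": "Chart me the fastest route past the reef. /no_think"}]
```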
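And a sketch of the YaRN recipe from the base Qwen3 docs, scaling the native 32,768-position window by a factor of 4 to roughly 131k tokens; best enabled only for genuinely long voyages, since static rope scaling can dent short-context quality:

```python
# Extend the context window with YaRN rope scaling (per the base Qwen3 recipe)
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # 32,768 * 4 = 131,072 tokens
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype="auto",
    device_map="auto"
)
```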
📜 Citation
If ye find treasure using Skilgrimr, do us a solid and cite this project:
```bibtex
@misc{mindexpander2025skilgrimr,
  title={Skilgrimr-Qwen3-14B: The Pirate Bard AI Model},
  author={Mind Expander Collective},
  year={2025},
  url={https://huggingface.co/your-username/Skilgrimr-Qwen3-14B}
}
```
💬 Model Card Contact
Contact: [Twitter/X @ProjectMindBot] 🐦
SOS Flag: DM if the Kraken eats your ship 🚢
🏴‍☠️ "In ale and code we trust; in storms and songs we sail." 🍻