🧠 dnai-humour-0.5B-instruct

A lightweight, fast, and surprisingly witty instruction-tuned language model fine-tuned on curated OpenAssistant conversations. Built to respond clearly, efficiently, and with a touch of humor, without pretending to be a superintelligence.


πŸ” Overview

dnai-humour-0.5B-instruct is a fine-tuned variant of Qwen2.5-0.5B-Instruct, trained using a carefully selected subset of the OpenAssistant v1 dataset.
The focus is instruction following, conversational clarity, low-latency responses, and efficient deployment on modest hardware.

This model is small, fast, and does its job without unnecessary drama.
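
A minimal quickstart sketch, assuming the repo id shown on this page and the standard Qwen2.5 chat template (adjust dtype and device to your hardware):

```python
# Minimal chat example; adjust dtype/device to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DarkNeuron-AI/dnai-humour-0.5B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain overfitting in one short paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```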


🎯 Main Capabilities

  • 🧾 Instruction following
  • 💬 Conversational AI & chatbots
  • 🧠 Reasonable reasoning (for 0.5B, let's stay honest)
  • 😄 Light humor & friendly tone
  • ⚡ Fast inference and low memory usage
  • 🖥️ Suitable for edge devices & low-resource systems

🧠 Model Details

  • Base Model: Qwen2.5-0.5B-Instruct
  • Model Type: Decoder-only Transformer
  • Parameters: ~0.5 billion
  • Fine-Tuning Method: Supervised Fine-Tuning (SFT)
  • Frameworks: PyTorch, Hugging Face Transformers, TRL
  • Precision Support: FP16 / INT8 (quantization-friendly)
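
Since the card lists FP16/INT8 support, here is a hedged 8-bit loading sketch using bitsandbytes (an assumption about tooling; the card does not prescribe a quantization path):

```python
# Illustrative 8-bit load via bitsandbytes; assumes a CUDA GPU with
# the bitsandbytes package installed.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "DarkNeuron-AI/dnai-humour-0.5B-instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```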

📚 Dataset

OpenAssistant v1 (OASST1)

  • Source: OpenAssistant Project
  • Type: Human-written multi-turn conversations
  • Domains:
    • Question answering
    • Reasoning
    • Coding help
    • General knowledge
    • Casual chat

🔢 Data Used for Fine-Tuning

  • Subset Size: ~15,000 conversations (smallest curated split)
  • Selection Goal:
    • High-quality instruction-response pairs
    • Reduced noise
    • Faster convergence
    • More alignment signal per training token

Less data, more discipline.
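
The exact curation recipe is not published; the sketch below shows one plausible approach, assuming the public OpenAssistant/oasst1 dataset on the Hub, with English-only and top-rank filters as illustrative heuristics, not the actual pipeline:

```python
# Illustrative OASST1 curation: keep English prompt/reply pairs where the
# assistant reply was ranked best (rank == 0) by human annotators.
# The filters are assumptions, not the published selection criteria.
from datasets import load_dataset

rows = load_dataset("OpenAssistant/oasst1", split="train")
by_id = {r["message_id"]: r for r in rows}

pairs = []
for r in rows:
    parent = by_id.get(r["parent_id"]) if r["parent_id"] else None
    if (
        r["role"] == "assistant"
        and r["lang"] == "en"
        and r["rank"] == 0
        and parent is not None
        and parent["role"] == "prompter"
    ):
        pairs.append({"prompt": parent["text"], "response": r["text"]})

print(f"{len(pairs)} instruction-response pairs")
```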


⚡ Performance & Efficiency

  • 🚀 Fast inference thanks to the small parameter count
  • 🧠 Low VRAM usage (runs comfortably on consumer GPUs)
  • 📦 Easy to deploy on:
    • Google Colab
    • Lightning AI
    • Local machines
    • Edge setups

This model won't melt your GPU or your patience.
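
A quick way to sanity-check the memory claim yourself, using transformers' built-in footprint helper (numbers vary with dtype and backend):

```python
# Rough memory sanity check; expect on the order of 1 GB in FP16
# for ~0.5B parameters (weights only, excluding KV cache).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "DarkNeuron-AI/dnai-humour-0.5B-instruct", torch_dtype=torch.float16
)
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```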


😄 Personality & Humor

  • Polite, friendly, and occasionally funny
  • Avoids being robotic when possible
  • Does not fake confidence or pretend to know everything
  • Knows when to explain and when to shut up

Basically: helpful, not annoying.


🚫 Limitations

  • Not designed for:
    • Medical or legal advice
    • High-stakes reasoning
    • Large-context document analysis
  • Still a 0.5B model; expectations should match reality

Small brain, well-trained.


πŸ› οΈ Intended Use Cases

  • Educational chatbots
  • Personal AI assistants
  • Instruction-based tools
  • Lightweight LLM experiments
  • Fine-tuning & research demos

📜 License & Ethics

  • Base model and dataset licenses apply
  • Trained on publicly available, human-generated data
  • No intentionally harmful or restricted content included

Use responsibly. Don't blame the model for human mistakes.


🧪 Training Note

This model was fine-tuned using a minimal but high-quality dataset to balance performance and efficiency.
The goal was more alignment signal per training token, not brute-force scaling.

Quality > Quantity.
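
For anyone reproducing a similar setup, a minimal TRL sketch follows; the tiny inline dataset and hyperparameters are placeholders, not the actual training configuration:

```python
# Illustrative TRL SFT setup; the inline dataset and hyperparameters are
# placeholders, not the published training configuration.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Tiny stand-in in chat format; swap in the curated OASST1 pairs.
train_ds = Dataset.from_list([
    {"messages": [
        {"role": "user", "content": "What is gradient descent?"},
        {"role": "assistant", "content": "An iterative optimizer that nudges parameters downhill along the loss gradient."},
    ]},
])

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=SFTConfig(
        output_dir="dnai-humour-0.5B-instruct",
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
    train_dataset=train_ds,
)
trainer.train()
```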


👤 Author

Fine-tuned by DarkNeuronAI
Built by a student. Powered by curiosity.
Optimized because resources are expensive.


⭐ Final Words

If you need a small, fast, instruction-following model that doesn't pretend to be GPT-4, this one knows its role and plays it well.
