# Qwen3-8B-Abliterated

An abliterated version of Qwen/Qwen3-8B with reduced safety refusals.
## Abliteration Details
This model was created using jim-plus/llm-abliteration:
- Base Model: Qwen/Qwen3-8B
- Layers Modified: 15-30 (middle layers, where refusal behavior is encoded)
- Measurement Layer: Layer 25 (highest signal quality at 0.123)
- Method: Standard abliteration (directional ablation)
- Scale: 1.0 (full ablation)
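Directional ablation can be sketched roughly as follows: estimate a "refusal direction" from the difference in mean activations between refusal-triggering and benign prompts, then project that direction out of the model's weight matrices. This is an illustrative toy with random data; the `ablate_direction` helper, shapes, and names are hypothetical and not the jim-plus/llm-abliteration implementation.

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor,
                     scale: float = 1.0) -> torch.Tensor:
    """Remove the component along a (unit-normalized) direction from a
    weight matrix's output space: W' = W - scale * r r^T W."""
    r = direction / direction.norm()
    return weight - scale * torch.outer(r, r) @ weight

# Hypothetical refusal direction: mean activation difference between
# refusal-triggering prompts and benign prompts (random stand-ins here).
harmful_acts = torch.randn(100, 64)
harmless_acts = torch.randn(100, 64)
direction = harmful_acts.mean(0) - harmless_acts.mean(0)

W = torch.randn(64, 64)          # stand-in for one layer's weight matrix
W_abl = ablate_direction(W, direction)

# After ablation, the weight's output has no component along the direction.
r = direction / direction.norm()
print(torch.allclose(r @ W_abl, torch.zeros(64), atol=1e-5))
```

With scale 1.0 (as used for this model) the refusal component is removed entirely; a smaller scale would only attenuate it.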
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "richardyoung/Qwen3-8B-Abliterated",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("richardyoung/Qwen3-8B-Abliterated")

messages = [{"role": "user", "content": "Your prompt here"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
```
## Disclaimer
This model is provided for research purposes only. The abliteration process removes certain safety guardrails. Users are responsible for ensuring ethical use of this model.
## Credits
- Original model: Qwen Team
- Abliteration method: jim-plus/llm-abliteration
- Abliteration theory: Abliteration blog post