Instructions for using huihui-ai/MicroThinker-1B-Preview with libraries and local apps.
- Libraries
- Transformers
How to use huihui-ai/MicroThinker-1B-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="huihui-ai/MicroThinker-1B-Preview")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huihui-ai/MicroThinker-1B-Preview")
model = AutoModelForCausalLM.from_pretrained("huihui-ai/MicroThinker-1B-Preview")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
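The fine-tuning recipe in Training Details below uses the system prompt "You are a helpful assistant. You should think step-by-step.", so it can help to include it at inference time as well. A minimal sketch reusing the pipeline above (the 1024-token limit is an arbitrary choice to leave room for the step-by-step answer):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="huihui-ai/MicroThinker-1B-Preview")

# Reuse the system prompt from the fine-tuning recipe described below.
messages = [
    {"role": "system", "content": "You are a helpful assistant. You should think step-by-step."},
    {"role": "user", "content": "Who are you?"},
]
# Step-by-step answers can be long, so allow more new tokens than the default.
result = pipe(messages, max_new_tokens=1024)
print(result[0]["generated_text"][-1]["content"])
```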
- Local Apps
- vLLM
How to use huihui-ai/MicroThinker-1B-Preview with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "huihui-ai/MicroThinker-1B-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "huihui-ai/MicroThinker-1B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/huihui-ai/MicroThinker-1B-Preview
```
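The vLLM server started above exposes an OpenAI-compatible API, so you can also query it from Python with the `openai` client. A minimal sketch, assuming the server is running locally on the default port (for the SGLang server below, change the port to 30000):

```python
# Query the local vLLM server through its OpenAI-compatible API.
# The api_key is a placeholder; local servers typically accept any value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="huihui-ai/MicroThinker-1B-Preview",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```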
- SGLang
How to use huihui-ai/MicroThinker-1B-Preview with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "huihui-ai/MicroThinker-1B-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "huihui-ai/MicroThinker-1B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "huihui-ai/MicroThinker-1B-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "huihui-ai/MicroThinker-1B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use huihui-ai/MicroThinker-1B-Preview with Docker Model Runner:
```shell
docker model run hf.co/huihui-ai/MicroThinker-1B-Preview
```
MicroThinker-1B-Preview
MicroThinker-1B-Preview is a new model fine-tuned from the huihui-ai/Llama-3.2-1B-Instruct-abliterated model, focused on advancing AI reasoning capabilities.
Use with ollama
You can use huihui_ai/microthinker directly
```shell
ollama run huihui_ai/microthinker
```
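If you prefer calling it from Python, a minimal sketch using the `ollama` Python package (assumes the package is installed and the Ollama server is running with the model pulled as above):

```python
# Query the model served by Ollama from Python.
import ollama

response = ollama.chat(
    model="huihui_ai/microthinker",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response["message"]["content"])
```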
Training Details
This is just a test, but the performance is quite good.
Now, I'll introduce the test environment.
The model was trained using a single RTX 4090 GPU (24 GB).
The fine-tuning process used only 20,000 records from each dataset.
The SFT (Supervised Fine-Tuning) process is divided into several steps, and no code needs to be written.
- Create the environment.
```shell
conda create -yn ms-swift python=3.11
conda activate ms-swift

git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .
cd ..
```
- Download the model and datasets.
```shell
huggingface-cli download huihui-ai/Llama-3.2-1B-Instruct-abliterated --local-dir ./huihui-ai/Llama-3.2-1B-Instruct-abliterated
huggingface-cli download --repo-type dataset huihui-ai/QWQ-LONGCOT-500K --local-dir ./data/QWQ-LONGCOT-500K
huggingface-cli download --repo-type dataset huihui-ai/LONGCOT-Refine-500K --local-dir ./data/LONGCOT-Refine-500K
```
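To confirm the downloads before training, you can peek at the first record of each JSONL file. A minimal sketch; the file names match those used in the training commands below, and the printed keys are simply whatever fields the datasets contain:

```python
import json

# Print the keys of the first record of each downloaded dataset
# so you can confirm the files are in place before training.
for path in (
    "data/QWQ-LONGCOT-500K/qwq_500k.jsonl",
    "data/LONGCOT-Refine-500K/refine_from_qwen2_5.jsonl",
):
    with open(path, encoding="utf-8") as f:
        record = json.loads(f.readline())
    print(path, "->", list(record.keys()))
```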
- Used only the huihui-ai/QWQ-LONGCOT-500K dataset (#20000); trained for 1 epoch:
```shell
swift sft \
  --model huihui-ai/Llama-3.2-1B-Instruct-abliterated \
  --model_type llama3_2 \
  --train_type lora \
  --dataset "data/QWQ-LONGCOT-500K/qwq_500k.jsonl#20000" \
  --torch_dtype bfloat16 \
  --num_train_epochs 1 \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --learning_rate 1e-4 \
  --lora_rank 8 \
  --lora_alpha 32 \
  --target_modules all-linear \
  --gradient_accumulation_steps 16 \
  --eval_steps 50 \
  --save_steps 50 \
  --save_total_limit 2 \
  --logging_steps 5 \
  --max_length 16384 \
  --output_dir output/Llama-3.2-1B-Instruct-abliterated/lora/sft \
  --system "You are a helpful assistant. You should think step-by-step." \
  --warmup_ratio 0.05 \
  --dataloader_num_workers 4 \
  --model_author "huihui-ai" \
  --model_name "MicroThinker"
```
- Save the fine-tuned model. When you're done, type `exit` to quit. Replace the directories below with your specific ones.
```shell
swift infer \
  --model huihui-ai/Llama-3.2-1B-Instruct-abliterated \
  --adapters output/Llama-3.2-1B-Instruct-abliterated/lora/sft/v0-20250102-153619/checkpoint-1237 \
  --stream true \
  --merge_lora true
```
This should create a new model directory, checkpoint-1237-merged. Copy or move this directory to the huihui directory, for example as shown below.
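A sketch of the copy step (example paths only; the timestamped `v0-...` run directory and checkpoint number will differ on your machine):

```shell
# Example paths only: adjust the run directory and checkpoint number to match yours.
mkdir -p huihui
cp -r output/Llama-3.2-1B-Instruct-abliterated/lora/sft/v0-20250102-153619/checkpoint-1237-merged huihui/
```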
- Perform inference on the fine-tuned model.
```shell
swift infer --model huihui/checkpoint-1237-merged --stream true --infer_backend pt --max_new_tokens 8192
```
- Combined training with the huihui-ai/QWQ-LONGCOT-500K (#20000) and huihui-ai/LONGCOT-Refine-500K (#20000) datasets; trained for 1 epoch:
```shell
swift sft \
  --model huihui-ai/checkpoint-1237-merged \
  --model_type llama3_2 \
  --train_type lora \
  --dataset "data/QWQ-LONGCOT-500K/qwq_500k.jsonl#20000" "data/LONGCOT-Refine-500K/refine_from_qwen2_5.jsonl#20000" \
  --torch_dtype bfloat16 \
  --num_train_epochs 1 \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --learning_rate 1e-4 \
  --lora_rank 8 \
  --lora_alpha 32 \
  --target_modules all-linear \
  --gradient_accumulation_steps 16 \
  --eval_steps 50 \
  --save_steps 50 \
  --save_total_limit 2 \
  --logging_steps 5 \
  --max_length 16384 \
  --output_dir output/Llama-3.2-1B-Instruct-abliterated/lora/sft2 \
  --system "You are a helpful assistant. You should think step-by-step." \
  --warmup_ratio 0.05 \
  --dataloader_num_workers 4 \
  --model_author "huihui-ai" \
  --model_name "MicroThinker"
```
- Save the final fine-tuned model. When you're done, type `exit` to quit. Replace the directories below with your specific ones.
```shell
swift infer \
  --model huihui-ai/checkpoint-1237-merged \
  --adapters output/Llama-3.2-1B-Instruct-abliterated/lora/sft2/v0-20250103-121319/checkpoint-2474 \
  --stream true \
  --merge_lora true
```
This should create a new model directory, checkpoint-2474-merged. Rename it to MicroThinker-1B-Preview and copy or move it to the huihui directory, for example as shown below.
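A sketch of the rename-and-copy step (example paths only; adjust the run directory and checkpoint number to match your output):

```shell
# Example paths only: rename the merged checkpoint while copying it into place.
mkdir -p huihui
cp -r output/Llama-3.2-1B-Instruct-abliterated/lora/sft2/v0-20250103-121319/checkpoint-2474-merged huihui/MicroThinker-1B-Preview
```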
- Perform inference on the final fine-tuned model.
```shell
swift infer --model huihui/MicroThinker-1B-Preview --stream true --infer_backend pt --max_new_tokens 8192
```
- Test examples.
How many 'r' characters are there in the word "strawberry"?
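For reference, the answer the model should arrive at can be checked with one line of Python:

```python
# Expected answer for the test prompt above.
print("strawberry".count("r"))  # -> 3
```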
Model tree for huihui-ai/MicroThinker-1B-Preview
Base model: meta-llama/Llama-3.2-1B-Instruct