# mistral-300m-sft

## Overview

Welcome to my model card!

The features of this model are:

- A LoRA fine-tuned model of ce-lery/mistral-300m-base
- Uses the Mistral 300M architecture

Yukkuri shite ittene! (Take it easy!)
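Since this is a LoRA fine-tune, the trained weights are a low-rank update applied on top of the frozen base weights. The toy sketch below (plain Python, not part of this model's code) only illustrates that idea; in practice the per-layer merge is handled by the peft library.

```python
# Minimal numerical sketch of the LoRA idea (illustration only; the real
# update is applied per-layer by the peft library, not by hand like this).

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_merge(w, b, a, scale=1.0):
    """Return W + scale * (B @ A): frozen weight plus low-rank update."""
    delta = matmul(b, a)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Frozen 2x2 base weight and a rank-1 update (B: 2x1, A: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
A = [[2.0, 0.0]]
print(lora_merge(W, B, A))  # [[2.0, 0.0], [2.0, 1.0]]
```

Because the update B·A has rank far below the full weight matrix, only a small number of adapter parameters needs to be trained and stored.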
## How to use the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def inference(model_path: str, device: str = "cuda", prompt: str = ""):
    if device not in ("cuda", "cpu"):
        device = "cpu"
    if not torch.cuda.is_available():
        device = "cpu"
    print("device:", device)

    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to(device)

    messages = [{"role": "user", "content": prompt}]
    tokenized_chat = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    )
    # token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    # print("token_ids:", token_ids)

    with torch.no_grad():
        generated_tokens = model.generate(
            tokenized_chat.to(device),  # move inputs to the same device as the model
            use_cache=True,
            early_stopping=False,
            max_new_tokens=1024,
            top_p=0.95,
            top_k=50,
            temperature=0.2,
            do_sample=True,
            no_repeat_ngram_size=2,
            num_beams=3,
        )
    generated_text = tokenizer.decode(generated_tokens[0])
    print(generated_text.replace(tokenizer.eos_token, "\n"))

prompt = "こんにちは！"  # "Hello!"
inference("ce-lery/mistral-300m-sft", "cuda", prompt)
#<s>User:こんにちは！
#<s>Assistant:おはようございます。私はオープンアシスタントです。あなたの質問に答え、あなたの質問にお答えします。何かお手伝いできることがあれば、遠慮なく聞いてください。

prompt = "自動車を運転する際に必要なものは？"  # "What do you need to drive a car?"
inference("ce-lery/mistral-300m-sft", "cuda", prompt)
#<s>User:自動車を運転する際に必要なものは？
#<s>Assistant:運転に必要なすべての道具と装備をするためには、いくつかのステップを踏む必要があります。以下はそのステップのステップ・バイ・ステップです。
#
#1. 運転する場所の道路状況を調べる。これは、道路の状況を把握するのに役立ちます。また、交通量や道路の混雑状況など、さまざまな要因を考慮することが重要です。例えば、高速道路での運転は、渋滞や事故のリスクが高まるため、避けるべきでしょう。さらに、車間距離を十分にとって、周囲の状況に注意を払い、危険を回避し、安全を確保するために十分な注意を払うことが重要です。更に、安全な運転をするために、ブレーキとアクセルの踏み間違いや、アクセルとブレーキを間違えるなどのミスを犯さないよう、注意深く運転することが必要です。

prompt = "日本の首都は？"  # "What is the capital of Japan?"
inference("ce-lery/mistral-300m-sft", "cuda", prompt)
#<s>User:日本の首都は？
#<s>Assistant:東京は日本で最も人口の多い都市であり、人口密度の高い都市です。
```
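Judging from the transcripts above, the chat template appears to render each turn as `<s>Role:content`. The sketch below is a hand-rolled approximation inferred only from those samples; `tokenizer.apply_chat_template` remains the source of truth for the actual format.

```python
# Approximation of the prompt format seen in the transcripts above
# ("<s>User:...", "<s>Assistant:..."). This format is an assumption inferred
# from the sample outputs, not taken from the tokenizer config.

BOS = "<s>"

def build_prompt(messages):
    """Render a list of {'role', 'content'} dicts into a single prompt string."""
    parts = []
    for m in messages:
        role = "User" if m["role"] == "user" else "Assistant"
        parts.append(f"{BOS}{role}:{m['content']}")
    # Leave a trailing Assistant header so the model continues from there.
    parts.append(f"{BOS}Assistant:")
    return "\n".join(parts)

print(build_prompt([{"role": "user", "content": "こんにちは！"}]))
# <s>User:こんにちは！
# <s>Assistant:
```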
## Recipe

If you want to reproduce this model, you can refer to this GitHub repository. The manual for the repository is here; please refer to it.

If you find any mistakes, errors, and so on, please create an issue. Pull requests are also very welcome!
## Training procedure

### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
- PEFT 0.17.1
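To confirm that your local environment matches these versions, a small check with the standard library works (the pip distribution name for Pytorch is assumed to be `torch` here):

```python
# Print the installed versions of the packages listed above.
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

for dist in ["transformers", "torch", "datasets", "tokenizers", "peft"]:
    print(dist, installed_version(dist))
```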