
Quantization made by Richard Erkhov.

Github

Discord

Request more models

NorLlama-3B - bnb 4bits

Original model description:

license: cc-by-nc-sa-4.0
language:
- 'no'

Generative Pretrained Transformer with 3 billion parameters for Norwegian. NorLlama-3B is based on the Llama architecture and was pretrained with the Tencent Pre-training Framework.

It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models. NorGLM may be used for non-commercial purposes only.

Datasets

All models in NorGLM are trained on roughly 200 GB of data, nearly 25 billion tokens, covering Norwegian, Danish, Swedish, German and English.

Run the Model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NorGLM/NorLlama-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16
)

text = "Tom ønsket å gå på barene med venner"
# Move the inputs to the device the model was placed on by device_map='auto'
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Citation Information

If you find our work helpful, please cite our paper:

@article{liu2023nlebench+,
  title={NLEBench+ NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
  author={Liu, Peng and Zhang, Lemei and Farup, Terje Nissen and Lauvrak, Even W and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
  journal={arXiv preprint arXiv:2312.01314},
  year={2023}
}