How to use alexgusevski/LLaMA-Mesh-q8-mlx with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("alexgusevski/LLaMA-Mesh-q8-mlx")
model = AutoModelForCausalLM.from_pretrained("alexgusevski/LLaMA-Mesh-q8-mlx")
```
How to use alexgusevski/LLaMA-Mesh-q8-mlx with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir LLaMA-Mesh-q8-mlx alexgusevski/LLaMA-Mesh-q8-mlx
```
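After downloading, the model can be run with the `mlx-lm` package. A minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and you are on Apple silicon; the prompt here is illustrative, not from the model card:

```python
# Sketch: generate text with mlx-lm (requires Apple silicon and the
# downloaded model; the prompt below is a hypothetical example).
from mlx_lm import load, generate

model, tokenizer = load("alexgusevski/LLaMA-Mesh-q8-mlx")

prompt = "Create a 3D model of a cube."
# Apply the chat template if the tokenizer defines one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

`load` also accepts a local path (e.g. the `LLaMA-Mesh-q8-mlx` directory created by the download command above) if you prefer to avoid re-fetching from the Hub.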