```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")
```
Google didn't publish the vit-tiny and vit-small model checkpoints on Hugging Face, so I converted the weights from the timm repository. This model is used in the same way as ViT-base.

Note that the safetensors model requires a torch 2.0 environment.
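Since the model is used the same way as ViT-base, a full inference pass looks like the standard ViT recipe: preprocess an image with the processor, run a forward pass, and map the argmax logit to a label. The sketch below uses a synthetic gray image purely as a placeholder input; in practice you would load a real photo with PIL.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")

# Placeholder input: a solid-gray 224x224 RGB image (swap in a real image).
image = Image.new("RGB", (224, 224), color=(128, 128, 128))

# Resize/normalize the image and run a forward pass without gradients.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class index maps to an ImageNet-1k label.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```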
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="WinKawaks/vit-small-patch16-224")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```