How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="WinKawaks/vit-small-patch16-224")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```
```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")
```
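When loading the model directly, inference follows the standard ViT pattern: preprocess an image with the processor, run a forward pass, and take the argmax over the logits. A minimal sketch (the local image path `parrots.png` is a hypothetical stand-in for your own file):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")

image = Image.open("parrots.png")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes)

# Map the highest-scoring index to its human-readable class name
predicted_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```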
Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the timm repository. This model is used in the same way as ViT-base.

Note that the safetensors weights require a torch 2.0 environment.
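If you want to be explicit about which weight format is loaded, a sketch using the `use_safetensors` flag of `from_pretrained` (a standard Transformers parameter):

```python
from transformers import AutoModelForImageClassification

# Explicitly request the safetensors weights; per the note above,
# this assumes torch 2.0 or later is installed.
model = AutoModelForImageClassification.from_pretrained(
    "WinKawaks/vit-small-patch16-224",
    use_safetensors=True,
)
```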

Downloads last month: 6,621
Model size: 22.1M params (F32, safetensors)

Model tree for WinKawaks/vit-small-patch16-224: 5 adapters, 16 finetunes; used by 3 Spaces.