OpenCLIP

How to use UCSC-VLAA/openvision3-vit-base-patch2-32 with OpenCLIP:
import open_clip

# Load the model, the train/eval image preprocessing transforms, and the tokenizer from the Hugging Face Hub
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-32')
tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-32')
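
Once loaded, the model can be used like any other OpenCLIP model, e.g. for zero-shot classification via encode_image and encode_text. The sketch below assumes a hypothetical local image file ("example.jpg") and an arbitrary set of candidate labels, both chosen purely for illustration:

import torch
from PIL import Image
import open_clip

model, _, preprocess_val = open_clip.create_model_and_transforms('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-32')
tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-32')
model.eval()

# "example.jpg" and the label prompts below are placeholders for illustration
image = preprocess_val(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the embeddings so the dot product below is a cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities softmaxed over the candidate labels
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)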