Instructions for using Letian2003/unified_vit_v38-vision-decoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- OpenCLIP
How to use Letian2003/unified_vit_v38-vision-decoder with OpenCLIP:

```python
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:Letian2003/unified_vit_v38-vision-decoder')
tokenizer = open_clip.get_tokenizer('hf-hub:Letian2003/unified_vit_v38-vision-decoder')
```

- Notebooks
- Google Colab
- Kaggle
No model card
- Downloads last month: 1
- Inference Providers
This model isn't deployed by any Inference Provider.