Tags: Any-to-Any · Transformers · Safetensors · qwen2_5_vl · image-text-to-text · custom_code · text-generation-inference
Instructions for using modelscope/Nexus-Gen with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use modelscope/Nexus-Gen with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: a9947647d518c10e57cb61d1a8de5ef424b6a678c3d8ee1f751f7fff1c84cf1b
- Size of remote file: 23.9 GB
- SHA256: 5185e89d6f4aec5d4197c4b8c8055e6a4a7939fbd74b595e8c671b0d92fb9b00
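After downloading, you can check the file against the SHA256 above. This is a minimal sketch using Python's standard `hashlib`; the file path is an assumption — substitute wherever your client saved the checkpoint.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so even a 23.9 GB file fits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected SHA256 from the listing above.
EXPECTED = "5185e89d6f4aec5d4197c4b8c8055e6a4a7939fbd74b595e8c671b0d92fb9b00"
# sha256_of("model.safetensors") == EXPECTED  # path is hypothetical
```

Streaming in fixed-size chunks matters here: reading a multi-gigabyte file into memory at once would fail on most machines.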
Xet efficiently stores large files inside Git by splitting them into unique content-defined chunks, which accelerates uploads and downloads through deduplication.