Instructions for using diffusers/controlnet-canny-sdxl-1.0 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use diffusers/controlnet-canny-sdxl-1.0 with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# This repo is a ControlNet, not a full pipeline, so load it with
# ControlNetModel and plug it into an SDXL pipeline.
# Switch "cuda" to "mps" for Apple devices.
controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
canny_image = load_image("path/to/canny_edge_map.png")  # conditioning image: a Canny edge map
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, image=canny_image).images[0]
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Do I need an 8xA100 machine to run the model?
#12
by seven-dev - opened
I'm new to this, and I only have a GeForce RTX 2060. There's no way I can run this model, right?
Is an 8xA100 machine needed to train the model, or to run it?
You can run this model as long as you don't run out of VRAM.
This model runs on an NVIDIA T4 in fp16 mode, so no, an 8xA100 is not needed for inference.
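As a rough sanity check, you can estimate how much VRAM the weights alone occupy at different precisions. The parameter counts below are approximate assumptions for illustration, not official figures, and activations during inference add further overhead on top of this.

```python
# Back-of-envelope VRAM estimate for model weights only (excludes activations).
# Parameter counts are rough assumptions, not official figures.
PARAMS = {
    "sdxl_unet": 2.6e9,
    "controlnet": 1.25e9,
    "text_encoders": 0.8e9,
    "vae": 0.08e9,
}

def weights_gib(bytes_per_param: int) -> float:
    """Total weight size in GiB for a given precision (4 = fp32, 2 = fp16)."""
    return sum(PARAMS.values()) * bytes_per_param / 1024**3

print(f"fp32 weights: ~{weights_gib(4):.1f} GiB")
print(f"fp16 weights: ~{weights_gib(2):.1f} GiB")
```

Under these assumptions, fp16 weights come to roughly half the fp32 footprint and fit comfortably in a T4's 16 GiB, which is consistent with the reply above, while a 6 GiB RTX 2060 would need offloading tricks.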
You definitely do not need an A100 to run the example code. Try running the code example and see whether the model runs out of memory (OOMs).
Also see: https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0/discussions/16#64db818d69d21c567bf02bdd
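For low-VRAM cards like the RTX 2060 mentioned above, a minimal sketch of the usual diffusers memory savers: fp16 weights, `enable_model_cpu_offload()`, and `enable_vae_slicing()`. These are standard diffusers APIs, but whether the result fits in 6 GiB on any particular setup is not guaranteed.

```python
def load_low_vram_pipeline():
    """Sketch: load the SDXL ControlNet pipeline with common memory savers.
    Assumes torch and diffusers are installed; weights download on first call.
    """
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    )
    # Keep submodules on the CPU and move each to the GPU only while it runs.
    pipe.enable_model_cpu_offload()
    # Decode latents in slices to reduce the VAE's peak memory use.
    pipe.enable_vae_slicing()
    return pipe
```

With offloading enabled, do not also call `.to("cuda")`; the offload hooks manage device placement themselves.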
williamberman changed discussion status to closed