Working in ComfyUI with SDXL models:

You only need the two safetensors files, model1 and model2 (clip_g and clip_l, by their common SDXL names).

They need my custom loader to work: https://github.com/Apache0ne/ComfyUI-SDXLNVFP4

The workflow is shown in the images below; I am using the TAESD VAE for both runs.


With NVFP4 CLIP


With normal CLIP
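If you want to sanity-check what the two text-encoder safetensors files actually contain before wiring them into the loader, the safetensors header is plain JSON and can be read without any dependencies. A minimal sketch (the file name and tensor name below are made up for illustration; `F8_E4M3` is one of the dtype strings the safetensors format defines):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file (tensor names,
    dtypes, shapes) without loading any tensor data."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # little-endian u64
        return json.loads(f.read(header_len))

# Build a tiny dummy file so the sketch is self-contained.
header = {
    "text_model.embeddings.weight": {  # hypothetical tensor name
        "dtype": "F8_E4M3",
        "shape": [4, 8],
        "data_offsets": [0, 32],  # 4*8 one-byte elements
    }
}
blob = json.dumps(header).encode("utf-8")
with open("dummy_clip.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + bytes(32))

for name, info in read_safetensors_header("dummy_clip.safetensors").items():
    print(name, info["dtype"], info["shape"])
```

Pointing this at clip_g and clip_l shows at a glance whether a file carries quantized weights or plain FP16.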

Hybrid-Sensitivity-Weighted-Quantization (HSWQ)

High-fidelity FP8 quantization for diffusion models (SDXL). HSWQ uses sensitivity and importance analysis instead of a naive uniform cast, and offers two modes: standard-compatible (V1) and high-performance scaled (V2).
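The V1/V2 split can be illustrated with a toy fake-quantizer. This is not the HSWQ implementation, just a sketch of the idea under simplified assumptions (e4m3 rounding is approximated by clamping to ±448 and keeping 3 mantissa bits; subnormals and exponent underflow are ignored): V1 casts weights directly into the FP8 range, V2 rescales each tensor into the representable range first, and a per-tensor sensitivity check keeps whichever mode reconstructs the original weights better.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite e4m3 value

def fake_fp8_e4m3(x):
    """Approximate e4m3 rounding: clamp to +-448, keep 3 mantissa bits.
    (Subnormals and exponent underflow are ignored in this toy model.)"""
    x = np.clip(x, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    m, e = np.frexp(x)  # m in [0.5, 1), so steps of 1/16 = 3 stored mantissa bits
    return np.ldexp(np.round(m * 16) / 16, e)

def quantize_v1(w):
    """Standard-compatible mode: direct cast; values beyond the FP8 range clip."""
    return fake_fp8_e4m3(w)

def quantize_v2(w):
    """Scaled mode: rescale the tensor into the FP8 range, cast, scale back."""
    scale = np.abs(w).max() / FP8_E4M3_MAX
    return fake_fp8_e4m3(w / scale) * scale

def pick_mode(w):
    """Toy sensitivity check: keep whichever mode reconstructs w better."""
    e1 = np.mean((quantize_v1(w) - w) ** 2)
    e2 = np.mean((quantize_v2(w) - w) ** 2)
    return "v1" if e1 <= e2 else "v2"

rng = np.random.default_rng(0)
wide = rng.uniform(-1000, 1000, 4096)  # dynamic range beyond FP8: V2 should win
print(pick_mode(wide))
```

For tensors whose values already fit the e4m3 range, the direct cast is lossless enough that V1 stays attractive for compatibility; the sensitivity analysis is what decides this per layer rather than applying one rule uniformly.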

Technical details: md/HSWQ_ Hybrid Sensitivity Weighted Quantization.md

How to quantize: md/HSWQ_ How to quantize SDXL.md

SDXL Benchmark Test Results: md/SDXL Benchmark Test Results.md

Credit & Special Acknowledgement

https://github.com/ussoewwin/Hybrid-Sensitivity-Weighted-Quantization

https://github.com/tritant/ComfyUI_Kitchen_nvfp4_Converter

https://github.com/NVIDIA/Model-Optimizer

We extend our deepest respect and gratitude to the Nunchaku Team for their groundbreaking work on SVDQ quantization and for sharing their models with the community. This collection relies heavily on their research and original implementation.
