OpenAI Whisper base model converted to ONNX format for onnx-asr.

Install onnx-asr

pip install onnx-asr[cpu,hub]

Load the whisper-base model and recognize a wav file

import onnx_asr
model = onnx_asr.load_model("whisper-base")
print(model.recognize("test.wav")) # Auto-detect lang (slower)
print(model.recognize("test.wav", language="en"))
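Whisper models operate on 16 kHz mono audio. If your recordings use another sample rate or channel layout, a small preprocessing step can convert them first. The sketch below is only an illustration (it assumes soundfile and scipy are installed and uses made-up file names); it is not part of onnx-asr, which only needs a path to a suitable wav file as shown above.

# Illustrative preprocessing sketch: downmix to mono and resample to 16 kHz
# before recognition. File names and the use of soundfile/scipy are assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

import onnx_asr

TARGET_RATE = 16000  # Whisper expects 16 kHz audio

def to_16k_mono(src_path: str, dst_path: str) -> None:
    """Read a wav file, downmix to mono and resample to 16 kHz."""
    audio, rate = sf.read(src_path, dtype="float32")
    if audio.ndim > 1:          # stereo or multi-channel -> average to mono
        audio = audio.mean(axis=1)
    if rate != TARGET_RATE:     # polyphase resampling to the target rate
        audio = resample_poly(audio, TARGET_RATE, rate)
    sf.write(dst_path, audio, TARGET_RATE)

to_16k_mono("recording_44k.wav", "recording_16k.wav")
model = onnx_asr.load_model("whisper-base")
print(model.recognize("recording_16k.wav", language="en"))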

Model export

Read the onnxruntime instructions for converting Whisper to ONNX.

Download the model and export it with Beam Search and Forced Decoder Input Ids:

python3 -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-base --output ./whisper-onnx --use_forced_decoder_ids --optimize_onnx --precision fp32
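To sanity-check the export, you can list the ONNX files written to the output directory and print their input/output signatures with onnxruntime. This is a hedged sketch, not part of the original conversion steps; the exact file names produced by the exporter may vary, so it simply globs the directory.

# Illustrative check: inspect the exported ONNX models in ./whisper-onnx.
import glob

import onnxruntime as ort

for path in sorted(glob.glob("whisper-onnx/*.onnx")):
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path)
    print("  inputs: ", [(i.name, i.shape) for i in session.get_inputs()])
    print("  outputs:", [(o.name, o.shape) for o in session.get_outputs()])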

Save tokenizer config

from transformers import WhisperTokenizer

# Save the tokenizer files next to the exported ONNX models
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-base")
tokenizer.save_pretrained("whisper-onnx")
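As an optional round-trip check (an illustration, not part of the original card), the saved tokenizer can be reloaded from the export directory to confirm that encoding and decoding work as expected.

# Reload the tokenizer from the export directory and round-trip a string.
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("whisper-onnx")
ids = tokenizer("hello world").input_ids
print(ids)
print(tokenizer.decode(ids, skip_special_tokens=True))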