# MMS-TTS Kotokoli + Tem (Eyaa-Tom fine-tuned)

Fine-tuned version of facebook/mms-tts-kdh on the Eyaa-Tom dataset for Kotokoli + Tem (kdh).

Kotokoli and Tem are the same language (ISO 639-3: kdh); training used the merged data from both dataset folders.

## Language Details

| Field | Value |
|---|---|
| Language | Kotokoli + Tem |
| ISO 639-3 (MMS) | kdh |
| Dataset ISO codes | kdh / kot |
| Region | Togo / Ghana |
| Family | Gur (Niger-Congo) |
| Base model | facebook/mms-tts-kdh |

## Usage

```python
from transformers import VitsModel, VitsTokenizer
import torch
import torchaudio

model = VitsModel.from_pretrained("Umbaji/eyaa-tom-mms-tts-kdh")
tokenizer = VitsTokenizer.from_pretrained("Umbaji/eyaa-tom-mms-tts-kdh")

inputs = tokenizer("your text here", return_tensors="pt")
with torch.no_grad():
    # model(...).waveform has shape (batch, time); take the first item
    waveform = model(**inputs).waveform[0]

# torchaudio.save expects a (channels, time) tensor
torchaudio.save("output.wav", waveform.unsqueeze(0), model.config.sampling_rate)
```
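If torchaudio is not installed, the generated waveform can also be written as 16-bit PCM using Python's standard-library `wave` module. A minimal sketch, assuming a mono float tensor in [-1, 1]; the helper name `save_wav` is illustrative, not part of the model's API:

```python
import struct
import wave

import torch


def save_wav(path, waveform, sampling_rate):
    """Write a mono float tensor in [-1, 1] as a 16-bit PCM WAV file."""
    samples = (waveform.clamp(-1.0, 1.0) * 32767).to(torch.int16).tolist()
    with wave.open(path, "wb") as f:
        f.setnchannels(1)                # mono
        f.setsampwidth(2)                # 16-bit samples
        f.setframerate(sampling_rate)
        f.writeframes(struct.pack(f"<{len(samples)}h", *samples))


# e.g. save_wav("output.wav", waveform, model.config.sampling_rate)
```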

## Training Details

- Loss: mel-spectrogram L1 (avoids the VITS adversarial-training restriction)
- Optimizer: AdamW (lr=2e-4, betas=(0.8, 0.99))
- Scheduler: ExponentialLR (gamma=0.999)
- Epochs: 6 | Batch size: 4 (effective 16 with gradient accumulation)
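The hyperparameters above can be wired together in a few lines of PyTorch. An illustrative sketch with a stand-in `nn.Linear` model and random mel frames (the real run fine-tunes the VITS generator against reference mel-spectrograms); micro-batches of 4 are accumulated over 4 steps for an effective batch of 16:

```python
import torch
from torch import nn

# Stand-in for the VITS generator; the real fine-tune optimizes the model
# against mel-spectrogram targets with an L1 loss.
model = nn.Linear(80, 80)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, betas=(0.8, 0.99))
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

mel_l1 = nn.L1Loss()
accum_steps = 4  # micro-batch 4 -> effective batch 16

for step in range(8):  # 8 micro-batches -> 2 optimizer updates
    pred = model(torch.randn(4, 80))           # fake predicted mel frames
    target = torch.randn(4, 80)                # fake reference mel frames
    loss = mel_l1(pred, target) / accum_steps  # scale loss for accumulation
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
        scheduler.step()  # lr decays by gamma once per optimizer update
```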

## Citation

@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Pratap, Vineel and others},
  journal={arXiv preprint arXiv:2305.13516},
  year={2023}
}

Fine-tuned: 2026-02-25 (Eyaa-Tom project)
