# TIME-Module: Mapping – t5gemma-2-270m-270m

## Model Description
Temporal span prediction using Google's T5Gemma architecture (270M parameters), an encoder-decoder model that initializes a T5-style encoder-decoder stack from Gemma weights. Its performance is competitive with the much smaller flan-t5-small.
## Training Details
- Base Model: google/t5gemma-2-270m-270m
- Architecture: T5ForConditionalGeneration
- Dataset: Pieces/temporal-span-prediction-v-0-3-1-quality (41,927 train / ~5K val / ~5K test)
- Training Steps: 50,000
- Learning Rate: 1e-4
- Batch Size: 8 × 4
- Mixed Precision: bf16
- Hardware: NVIDIA RTX 4090 (24 GB)
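The training script itself is not published on this card; below is a minimal sketch of how these hyperparameters map onto Hugging Face's `Seq2SeqTrainer`. The dataset column names (`input`/`target`), the split names, and the reading of "8 × 4" as per-device batch size × gradient accumulation steps are assumptions, not the project's actual setup.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

dataset = load_dataset("Pieces/temporal-span-prediction-v-0-3-1-quality")
tokenizer = AutoTokenizer.from_pretrained("google/t5gemma-2-270m-270m")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5gemma-2-270m-270m")

def preprocess(batch):
    # "input" / "target" column names are assumed; adjust to the dataset schema.
    enc = tokenizer(batch["input"], truncation=True)
    enc["labels"] = tokenizer(text_target=batch["target"], truncation=True)["input_ids"]
    return enc

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="time-mapping-t5gemma-270m",
    max_steps=50_000,               # Training Steps
    learning_rate=1e-4,             # Learning Rate
    per_device_train_batch_size=8,  # "8 x 4" read as batch size 8 ...
    gradient_accumulation_steps=4,  # ... with 4 accumulation steps (assumption)
    bf16=True,                      # Mixed Precision
    eval_strategy="steps",
    eval_steps=1_000,
    save_steps=1_000,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```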
## Results
| Metric | Value |
|---|---|
| exact_match | ~51.8% |
| tmr | 54.35% |
| avg_iou | ~65.7% |
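For reference, `avg_iou` is presumably the mean intersection-over-union between predicted and gold time spans; the card does not define the metric, so the sketch below is an assumption that represents each span as a `(start, end)` datetime pair.

```python
from datetime import datetime

def span_iou(pred: tuple[datetime, datetime], gold: tuple[datetime, datetime]) -> float:
    """IoU of two time spans, treating each as an interval on the timeline."""
    inter_start = max(pred[0], gold[0])
    inter_end = min(pred[1], gold[1])
    intersection = max((inter_end - inter_start).total_seconds(), 0.0)
    union = (
        (pred[1] - pred[0]).total_seconds()
        + (gold[1] - gold[0]).total_seconds()
        - intersection
    )
    return intersection / union if union > 0 else 0.0

# Example: a prediction shifted by one hour against a 2-hour gold span.
pred = (datetime(2026, 1, 16, 15), datetime(2026, 1, 16, 17))
gold = (datetime(2026, 1, 16, 16), datetime(2026, 1, 16, 18))
print(span_iou(pred, gold))  # 1h overlap / 3h union ≈ 0.333
```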
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Pieces/time-mapping-t5gemma-270m-best")
model = AutoModelForSeq2SeqLM.from_pretrained("Pieces/time-mapping-t5gemma-270m-best")

# Prompt format: "Map: <expression> | ref_time: <ISO timestamp> | tz: <zone>"
inputs = tokenizer("Map: tomorrow at 3pm | ref_time: 2026-01-15T10:00:00 | tz: UTC", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
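The decoded string is the temporal span resolved against `ref_time` in the given `tz`; the exact output format (for example, an ISO 8601 timestamp or interval) follows the target schema of the temporal-span-prediction dataset, which this card does not spell out.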
## Part of the TIME-Module Project
This model is part of the TIME (Temporal Intent, Mapping, and Extraction) module, a suite of models for understanding and processing temporal information in natural language.
Related models:
- Pieces/time-classification-flan-t5-small-split-best – Intent classification
- Pieces/time-mapping-flan-t5-small-quality-best – Span prediction (best)
- Pieces/time-mapping-t5gemma-270m-best – Span prediction (T5Gemma; this model)
## Citation

```bibtex
@software{time_module,
  title = {TIME-Module: Temporal Intent, Mapping, and Extraction},
  author = {Pieces},
  year = {2026},
  url = {https://huggingface.co/Pieces}
}
```