# gemma-3-270m-tuned-0107-1154

This model is a fine-tuned version of google/gemma-3-270m on the broadfield-dev/rStar-Coder-PyStructure-refined dataset.
## Training Details
- Task: REASONING
- Epochs: 2
- Learning Rate: 2e-05
- Gradient Accumulation Steps: 4
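The hyperparameters above can be sketched as a small configuration block. The per-device batch size is an assumption (the card does not state it); gradient accumulation multiplies it into the effective batch size the optimizer sees:

```python
# Hyperparameters stated in the card.
epochs = 2
learning_rate = 2e-05
gradient_accumulation_steps = 4

# Assumption: the card does not state the per-device batch size;
# 8 is a hypothetical value for illustration.
per_device_batch_size = 8

# With gradient accumulation, gradients from several forward/backward
# passes are summed before one optimizer step, so the effective batch
# size is the product of the two settings.
effective_batch = per_device_batch_size * gradient_accumulation_steps
print(effective_batch)  # → 32
```

Under these assumptions, each optimizer update reflects 32 examples even though only 8 fit on the device at once.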
## Entity Labels

`['LABEL_0', 'LABEL_1']`
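A minimal sketch of how a two-class head maps to these labels at inference time. The logits below are hypothetical, and the card does not document what LABEL_0 and LABEL_1 denote, so the `id2label` mapping is an assumption:

```python
import math

# Assumption: generic index-to-name mapping; the card does not say
# what the two labels mean.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

def softmax(logits):
    """Convert raw scores to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a two-class classification head.
logits = [0.3, 1.7]
probs = softmax(logits)
prediction = id2label[probs.index(max(probs))]
print(prediction)  # → LABEL_1
```

The same index-to-name lookup is what `transformers` pipelines perform internally when they report a label string instead of a class index.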
## Model tree for broadfield-dev/gemma-3-270m-tuned-0107-1154

Base model: google/gemma-3-270m