# distilbert-sentiment-5class
This model is a fine-tuned version of distilbert-base-uncased trained on processed Amazon review datasets from multiple categories. It is designed for sentiment classification across five classes. It achieves the following results on the evaluation set:
- Loss: 0.9056
- Accuracy: 0.6087
- F1 Macro: 0.6101
- F1 Weighted: 0.6085
- Precision Class 1: 0.6762
- Precision Class 2: 0.4900
- Precision Class 3: 0.5177
- Precision Class 4: 0.6173
- Precision Class 5: 0.7535
- Recall Class 1: 0.6781
- Recall Class 2: 0.5293
- Recall Class 3: 0.5049
- Recall Class 4: 0.5606
- Recall Class 5: 0.7781
- F1 Class 1: 0.6772
- F1 Class 2: 0.5089
- F1 Class 3: 0.5112
- F1 Class 4: 0.5876
- F1 Class 5: 0.7656
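The aggregate scores above follow directly from the per-class numbers. A minimal sketch in plain Python of how macro and weighted F1 combine the per-class F1 values (the per-class scores are taken from the list above; the class support counts are hypothetical, for illustration only):

```python
# Per-class F1 from the evaluation results above (classes 1..5).
f1_per_class = [0.6772, 0.5089, 0.5112, 0.5876, 0.7656]
# Hypothetical class support counts, for illustration only.
support = [2000, 1500, 1500, 2000, 3000]

# Macro F1: unweighted mean over classes (every class counts equally).
macro_f1 = sum(f1_per_class) / len(f1_per_class)

# Weighted F1: mean weighted by class support (frequent classes dominate).
weighted_f1 = sum(f * s for f, s in zip(f1_per_class, support)) / sum(support)

print(f"macro F1:    {macro_f1:.4f}")  # -> 0.6101, matching the reported F1 Macro
print(f"weighted F1: {weighted_f1:.4f}")
```

The macro mean of the per-class scores reproduces the reported F1 Macro of 0.6101; the weighted variant differs only in how much each class's support counts.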
## Model description
This model is a distilled, lightweight transformer optimized for multi-class sentiment analysis, processing text efficiently while maintaining competitive performance. It was fine-tuned on Amazon product reviews spanning categories such as electronics and home goods, and classifies reviews into five sentiment levels, from highly negative to highly positive.
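A sketch of loading the model for inference with the `transformers` pipeline. The repo id matches this card; the star-to-sentiment label names below are an assumption for illustration — check the model's `config.json` (`id2label`) for the actual names.

```python
from typing import Dict, List

MODEL_ID = "CheritiA/distilbert-sentiment-5class"

# Assumed mapping from class index to sentiment level (see lead-in note).
ID2SENTIMENT: Dict[int, str] = {
    0: "highly negative",
    1: "negative",
    2: "neutral",
    3: "positive",
    4: "highly positive",
}

def classify(texts: List[str]) -> List[Dict]:
    """Run 5-class sentiment classification over a batch of review texts."""
    from transformers import pipeline  # imported lazily; downloads the weights
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(texts)

# Example (requires downloading the model weights):
# classify(["Arrived broken, very disappointed.", "Works perfectly, love it!"])
```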
## Intended uses & limitations

Intended uses:

- Automated sentiment classification for Amazon reviews or similar e-commerce datasets.
- Analysis of customer feedback trends to inform product decisions.

Limitations: overall accuracy is about 0.61, and the middle classes (2–4) score noticeably lower F1 (roughly 0.51–0.59) than the extreme classes, so predictions near neutral sentiment are less reliable.
## Training and evaluation data
The model was trained on a processed subset of Amazon review datasets covering multiple product categories. Text data was cleaned, tokenized, and labeled into five sentiment classes. Evaluation was performed on a held-out test set representing the same categories to ensure balanced coverage.
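The card says the reviews were cleaned, tokenized, and labeled, but does not specify the pipeline; the sketch below is a hypothetical cleaning step plus the usual star-rating-to-class-index mapping for Amazon reviews, not the authors' actual preprocessing.

```python
import html
import re

def clean_text(text: str) -> str:
    """Unescape HTML entities, strip residual tags, collapse whitespace."""
    text = html.unescape(text)
    text = re.sub(r"<[^>]+>", " ", text)      # drop residual HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text

def star_to_label(stars: int) -> int:
    """Map a 1-5 star rating to a 0-indexed class label."""
    if not 1 <= stars <= 5:
        raise ValueError(f"expected 1-5 stars, got {stars}")
    return stars - 1

# Example: clean_text("Great&nbsp;<b>phone</b>!") -> "Great phone !"
```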
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
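The schedule implied by these settings can be sketched directly: linear warmup to the base rate over 500 steps, then linear decay to zero. The warmup and base rate come from the list above; the total of roughly 12,500 optimizer steps is an inference from the results table (500 steps ≈ 0.16 epoch ⇒ ~3,125 steps/epoch × 4 epochs), not a value stated in the card.

```python
BASE_LR = 2e-5
WARMUP_STEPS = 500
TOTAL_STEPS = 12_500  # inferred from the results table, see note above

def linear_schedule_lr(step: int) -> float:
    """LR at an optimizer step: linear warmup, then linear decay to 0."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

# Effective train batch size: per-device batch * gradient accumulation steps.
effective_batch = 32 * 2  # = the total_train_batch_size of 64 above
```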
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted | Precision Class 1 | Precision Class 2 | Precision Class 3 | Precision Class 4 | Precision Class 5 | Recall Class 1 | Recall Class 2 | Recall Class 3 | Recall Class 4 | Recall Class 5 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0806 | 0.16 | 500 | 1.0584 | 0.5392 | 0.5333 | 0.5316 | 0.5485 | 0.4359 | 0.4347 | 0.5297 | 0.7385 | 0.7797 | 0.3506 | 0.4185 | 0.4576 | 0.6959 | 0.6440 | 0.3886 | 0.4264 | 0.4910 | 0.7166 |
| 0.9979 | 0.32 | 1000 | 0.9753 | 0.5727 | 0.5741 | 0.5726 | 0.7303 | 0.4461 | 0.4691 | 0.5915 | 0.7095 | 0.5433 | 0.5949 | 0.4908 | 0.4406 | 0.8016 | 0.6231 | 0.5099 | 0.4797 | 0.5050 | 0.7528 |
| 0.9773 | 0.48 | 1500 | 0.9464 | 0.5869 | 0.5832 | 0.5814 | 0.6321 | 0.5040 | 0.4675 | 0.5748 | 0.7381 | 0.7328 | 0.3800 | 0.5110 | 0.5416 | 0.7785 | 0.6787 | 0.4333 | 0.4883 | 0.5577 | 0.7578 |
| 0.9311 | 0.64 | 2000 | 0.9356 | 0.5876 | 0.5923 | 0.5909 | 0.7000 | 0.4546 | 0.5063 | 0.5627 | 0.8035 | 0.6272 | 0.6081 | 0.4236 | 0.6077 | 0.6754 | 0.6616 | 0.5203 | 0.4613 | 0.5843 | 0.7339 |
| 0.9332 | 0.8 | 2500 | 0.9305 | 0.5924 | 0.5832 | 0.5815 | 0.6279 | 0.4773 | 0.5509 | 0.5754 | 0.6969 | 0.7421 | 0.4949 | 0.3492 | 0.5432 | 0.8437 | 0.6802 | 0.4860 | 0.4274 | 0.5589 | 0.7633 |
| 0.9193 | 0.96 | 3000 | 0.9068 | 0.6037 | 0.6024 | 0.6008 | 0.6696 | 0.4849 | 0.5144 | 0.6016 | 0.7372 | 0.6956 | 0.5188 | 0.4514 | 0.5519 | 0.8097 | 0.6823 | 0.5013 | 0.4809 | 0.5757 | 0.7717 |
| 0.8735 | 1.12 | 3500 | 0.9177 | 0.6032 | 0.6029 | 0.6012 | 0.6766 | 0.5004 | 0.4973 | 0.5865 | 0.7445 | 0.6859 | 0.4791 | 0.4817 | 0.5848 | 0.7941 | 0.6812 | 0.4895 | 0.4894 | 0.5857 | 0.7685 |
| 0.8656 | 1.28 | 4000 | 0.9069 | 0.6042 | 0.6011 | 0.5996 | 0.6823 | 0.4792 | 0.5197 | 0.6217 | 0.7149 | 0.6830 | 0.5532 | 0.4448 | 0.4968 | 0.8526 | 0.6826 | 0.5136 | 0.4794 | 0.5523 | 0.7777 |
| 0.8667 | 1.44 | 4500 | 0.9034 | 0.6059 | 0.6051 | 0.6035 | 0.6859 | 0.4958 | 0.5050 | 0.5923 | 0.7384 | 0.6877 | 0.4941 | 0.4807 | 0.5640 | 0.8124 | 0.6868 | 0.4950 | 0.4925 | 0.5778 | 0.7736 |
| 0.879 | 1.6 | 5000 | 0.9001 | 0.6067 | 0.6086 | 0.6071 | 0.6592 | 0.4874 | 0.5043 | 0.6120 | 0.7918 | 0.7239 | 0.5192 | 0.4876 | 0.5703 | 0.7381 | 0.6901 | 0.5028 | 0.4958 | 0.5904 | 0.7640 |
| 0.8709 | 1.76 | 5500 | 0.9005 | 0.6039 | 0.6045 | 0.6029 | 0.7008 | 0.4802 | 0.5250 | 0.5652 | 0.7736 | 0.6591 | 0.5636 | 0.4078 | 0.6366 | 0.7603 | 0.6793 | 0.5186 | 0.4590 | 0.5988 | 0.7669 |
| 0.8565 | 1.92 | 6000 | 0.9076 | 0.6084 | 0.6053 | 0.6036 | 0.6409 | 0.5094 | 0.5011 | 0.6234 | 0.7513 | 0.7529 | 0.4305 | 0.5253 | 0.5368 | 0.8057 | 0.6924 | 0.4666 | 0.5129 | 0.5769 | 0.7776 |
| 0.794 | 2.08 | 6500 | 0.9080 | 0.6091 | 0.6103 | 0.6086 | 0.6960 | 0.4837 | 0.5061 | 0.6135 | 0.7610 | 0.6749 | 0.5568 | 0.4676 | 0.5521 | 0.8022 | 0.6853 | 0.5177 | 0.4861 | 0.5812 | 0.7810 |
| 0.7857 | 2.24 | 7000 | 0.9131 | 0.6058 | 0.6062 | 0.6046 | 0.6521 | 0.4940 | 0.5055 | 0.5987 | 0.7828 | 0.7342 | 0.4886 | 0.4795 | 0.5853 | 0.7482 | 0.6907 | 0.4913 | 0.4922 | 0.5919 | 0.7651 |
| 0.7956 | 2.4 | 7500 | 0.9092 | 0.6074 | 0.6105 | 0.6090 | 0.6897 | 0.4941 | 0.4919 | 0.6113 | 0.7813 | 0.6776 | 0.5317 | 0.5251 | 0.5475 | 0.7613 | 0.6836 | 0.5122 | 0.5080 | 0.5776 | 0.7712 |
| 0.7721 | 2.56 | 8000 | 0.9153 | 0.6057 | 0.6080 | 0.6065 | 0.6641 | 0.5053 | 0.4927 | 0.5936 | 0.7982 | 0.7196 | 0.4777 | 0.5280 | 0.5911 | 0.7177 | 0.6908 | 0.4911 | 0.5098 | 0.5924 | 0.7558 |
| 0.7974 | 2.72 | 8500 | 0.9019 | 0.6095 | 0.6110 | 0.6094 | 0.6790 | 0.4909 | 0.5019 | 0.6212 | 0.7680 | 0.6981 | 0.5208 | 0.5058 | 0.5483 | 0.7819 | 0.6884 | 0.5054 | 0.5038 | 0.5825 | 0.7749 |
| 0.7884 | 2.88 | 9000 | 0.9119 | 0.6074 | 0.6054 | 0.6037 | 0.6467 | 0.5007 | 0.5033 | 0.6101 | 0.7592 | 0.7431 | 0.4593 | 0.4868 | 0.5608 | 0.7958 | 0.6915 | 0.4791 | 0.4949 | 0.5844 | 0.7771 |
| 0.7359 | 3.04 | 9500 | 0.9220 | 0.6066 | 0.6100 | 0.6085 | 0.6829 | 0.4916 | 0.4923 | 0.6094 | 0.7899 | 0.6889 | 0.5139 | 0.5286 | 0.5661 | 0.7414 | 0.6859 | 0.5025 | 0.5098 | 0.5869 | 0.7649 |
| 0.7453 | 3.2 | 10000 | 0.9298 | 0.6092 | 0.6064 | 0.6047 | 0.6534 | 0.4958 | 0.5115 | 0.6233 | 0.7431 | 0.7369 | 0.4930 | 0.4696 | 0.5315 | 0.8238 | 0.6927 | 0.4944 | 0.4897 | 0.5738 | 0.7814 |
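Validation loss bottoms out mid-training rather than at the final step. A small sketch of selecting the best checkpoint programmatically, using a few (step, validation loss) rows copied from the table above:

```python
# (step, validation loss) pairs, a subset of the results table above.
rows = [
    (500, 1.0584),
    (3000, 0.9068),
    (5000, 0.9001),
    (8500, 0.9019),
    (10000, 0.9298),
]

best_step, best_loss = min(rows, key=lambda r: r[1])
print(f"best checkpoint: step {best_step} (val loss {best_loss})")
# -> step 5000, which is also the lowest validation loss in the full table
```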
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
Model tree for CheritiA/distilbert-sentiment-5class:

- Base model: distilbert/distilbert-base-uncased