---
license: cc-by-nc-4.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- loss:CosineSimilarityLoss
base_model: stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_dot
- spearman_dot
- pearson_euclidean
- spearman_euclidean
- pearson_manhattan
- spearman_manhattan
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev dot
type: sts-dev-dot
metrics:
- type: pearson_dot
value: 0.6125529066567547
name: Pearson Dot
- type: spearman_dot
value: 0.607020920491597
name: Spearman Dot
- type: pearson_dot
value: 0.6151741779356057
name: Pearson Dot
- type: spearman_dot
value: 0.6095317749105116
name: Spearman Dot
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev euclidian
type: sts-dev-euclidian
metrics:
- type: pearson_euclidean
value: 0.7076748166600304
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7205880822002616
name: Spearman Euclidean
- type: pearson_euclidean
value: 0.7099832358841494
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7216899339827408
name: Spearman Euclidean
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev manhattan
type: sts-dev-manhattan
metrics:
- type: pearson_manhattan
value: 0.706930206993536
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7197955970878462
name: Spearman Manhattan
- type: pearson_manhattan
value: 0.7092200936299493
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7209197353975371
name: Spearman Manhattan
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev cosine
type: sts-dev-cosine
metrics:
- type: pearson_cosine
value: 0.7244468292859898
name: Pearson Cosine
- type: spearman_cosine
value: 0.7251349738474332
name: Spearman Cosine
- type: pearson_cosine
value: 0.7253818539410067
name: Pearson Cosine
- type: spearman_cosine
value: 0.7209886641359866
name: Spearman Cosine
---
# SentenceTransformer based on stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0](https://huggingface.co/stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0](https://huggingface.co/stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** Portuguese
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
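The modules above can also be inspected programmatically. The snippet below is a minimal sketch; `"sentence_transformers_model_id"` is the same placeholder used in the Usage section below and should be replaced with this model's Hub repository id.
```python
from sentence_transformers import SentenceTransformer

# Placeholder id; replace with this model's Hub repository id
model = SentenceTransformer("sentence_transformers_model_id")

# Module 0 is the BERT-large backbone, module 1 is the mean-pooling layer
print(model[0])
print(model[1])

# Properties listed in the Model Description above
print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 1024
print(model.similarity_fn_name)                  # cosine
```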
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub ("sentence_transformers_model_id" is a placeholder; use this model's Hub repository id)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
"o autor possuía..., ",
"a parte autora é servidor pública...",
"a parte autora é..."
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.8019],
# [1.0000, 1.0000, 0.8019],
# [0.8019, 0.8019, 1.0000]])
```
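Beyond the pairwise similarities above, the same embeddings support semantic search over a corpus. The sketch below is illustrative only: the corpus passages and the query are made-up examples in the style of the sentences above, and the model id is still the placeholder.
```python
from sentence_transformers import SentenceTransformer, util

# Placeholder id; replace with this model's Hub repository id
model = SentenceTransformer("sentence_transformers_model_id")

# Made-up short legal passages, for illustration only
corpus = [
    "a parte autora é servidora pública estadual",
    "o réu não apresentou contestação no prazo legal",
    "o autor requer a concessão de tutela de urgência",
]
query = "a autora é servidora pública"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the top-2 corpus passages most similar to the query (cosine similarity)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))
```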
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev-dot`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------|:----------|
| pearson_dot | 0.6126 |
| **spearman_dot** | **0.607** |
#### Semantic Similarity
* Dataset: `sts-dev-euclidian`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------------|:-----------|
| pearson_euclidean | 0.7077 |
| **spearman_euclidean** | **0.7206** |
#### Semantic Similarity
* Dataset: `sts-dev-manhattan`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------------|:-----------|
| pearson_manhattan | 0.7069 |
| **spearman_manhattan** | **0.7198** |
#### Semantic Similarity
* Dataset: `sts-dev-cosine`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7244 |
| **spearman_cosine** | **0.7251** |
#### Semantic Similarity
* Dataset: `sts-dev-dot`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------|:-----------|
| pearson_dot | 0.6152 |
| **spearman_dot** | **0.6095** |
#### Semantic Similarity
* Dataset: `sts-dev-euclidian`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------------|:-----------|
| pearson_euclidean | 0.71 |
| **spearman_euclidean** | **0.7217** |
#### Semantic Similarity
* Dataset: `sts-dev-manhattan`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-----------------------|:-----------|
| pearson_manhattan | 0.7092 |
| **spearman_manhattan** | **0.7209** |
#### Semantic Similarity
* Dataset: `sts-dev-cosine`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.7254 |
| **spearman_cosine** | **0.721** |
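All of the tables above come from `EmbeddingSimilarityEvaluator` runs on the development split. The following is a minimal sketch of how such an evaluation can be set up; the sentence pairs and gold scores are made-up placeholders, not the actual development data, and the model id is the same placeholder as in the Usage section.
```python
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder id; replace with this model's Hub repository id
model = SentenceTransformer("sentence_transformers_model_id")

# Toy development pairs with gold similarity scores in [0, 1] (illustrative only)
sentences1 = ["a parte autora é servidora pública", "o réu foi citado por edital", "o autor juntou novos documentos"]
sentences2 = ["a autora é servidora pública estadual", "o réu foi citado pessoalmente", "a sentença foi publicada"]
scores = [0.9, 0.6, 0.1]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1,
    sentences2,
    scores,
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev-cosine",
)
results = evaluator(model)
print(results)  # includes pearson_cosine / spearman_cosine, as in the tables above
```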
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `resume_from_checkpoint`: True
#### All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: True
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
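For reference, a run with the non-default hyperparameters above and the `CosineSimilarityLoss` named in the card metadata could be assembled roughly as follows. This is a sketch under stated assumptions: the training pairs are made-up placeholders (the actual training data is not described in this card) and the output directory is arbitrary.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

# Start from the same base model this card reports fine-tuning from
model = SentenceTransformer("stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0")

# Made-up (sentence1, sentence2, score) pairs; the real dataset is not described here
train_dataset = Dataset.from_dict({
    "sentence1": ["a parte autora é servidora pública", "o réu foi citado por edital"],
    "sentence2": ["a autora é servidora pública estadual", "a sentença foi publicada"],
    "score": [0.9, 0.1],
})

loss = CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # arbitrary path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # in practice a held-out dev split; reused here only so eval_strategy="steps" runs
    loss=loss,
)
trainer.train()
```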
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.0.0
- Transformers: 4.53.3
- PyTorch: 2.7.1+cu126
- Accelerate: 1.9.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Authors
Diretoria de Inteligência Artificial, Ciência de Dados e Estatística do Tribunal de Justiça do Estado de Goiás (TJGO), the Artificial Intelligence, Data Science and Statistics Directorate of the Court of Justice of the State of Goiás, Brazil.
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### STJ IRIS
```bibtex
@InProceedings{MeloSemantic,
author="Melo, Rui
and Santos, Pedro A.
and Dias, Jo{\~a}o",
editor="Moniz, Nuno
and Vale, Zita
and Cascalho, Jos{\'e}
and Silva, Catarina
and Sebasti{\~a}o, Raquel",
title="A Semantic Search System for the Supremo Tribunal de Justi{\c{c}}a",
booktitle="Progress in Artificial Intelligence",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="142--154",
abstract="Many information retrieval systems use lexical approaches to retrieve information. Such approaches have multiple limitations, and these constraints are exacerbated when tied to specific domains, such as the legal one. Large language models, such as BERT, deeply understand a language and may overcome the limitations of older methodologies, such as BM25. This work investigated and developed a prototype of a Semantic Search System to assist the Supremo Tribunal de Justi{\c{c}}a (Portuguese Supreme Court of Justice) in its decision-making process. We built a Semantic Search System that uses specially trained BERT models (Legal-BERTimbau variants) and a Hybrid Search System that incorporates both lexical and semantic techniques by combining the capabilities of BM25 and the potential of Legal-BERTimbau. In this context, we obtained a {\$}{\$}335{\backslash}{\%}{\$}{\$}335{\%}increase on the discovery metric when compared to BM25 for the first query result. This work also provides information on the most relevant techniques for training a Large Language Model adapted to Portuguese jurisprudence and introduces a new technique of Metadata Knowledge Distillation.",
isbn="978-3-031-49011-8"
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avalia{\c{c}}{\~a}o de similaridade sem{\^a}ntica e infer{\^e}ncia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
```