CrossEncoder based on BAAI/bge-reranker-v2-m3
This is a Cross Encoder model fine-tuned from BAAI/bge-reranker-v2-m3 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: BAAI/bge-reranker-v2-m3
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Load the fine-tuned cross-encoder from the Hugging Face Hub
model = CrossEncoder("kallilikhitha123/finetuned-bge-reranker-2403")

# Score each (text, text) pair; higher scores indicate a closer match
pairs = [
    ['bhargavi madhavi', 'bhargavi srinidhi'],
    ['navya kiara', 'navya tanvi'],
    ['manish', 'manisha'],
    ['anantha padmanabha', 'ananthpadmanabha'],
    ['nitin singh', 'n singh'],
]
scores = model.predict(pairs)
print(scores.shape)  # (5,)

# Or rank a list of candidate texts against a single query
ranks = model.rank(
    'bhargavi madhavi',
    [
        'bhargavi srinidhi',
        'navya tanvi',
        'manisha',
        'ananthpadmanabha',
        'n singh',
    ],
)
```
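`rank` returns one dictionary per candidate, sorted by descending score, with each candidate's index in the input list under `corpus_id`. A short sketch for printing the ranking (reusing `ranks` from the snippet above; the `candidates` list simply repeats the inputs):

```python
candidates = ['bhargavi srinidhi', 'navya tanvi', 'manisha', 'ananthpadmanabha', 'n singh']
for entry in ranks:
    # Each entry looks like {'corpus_id': 0, 'score': 0.99...}
    print(f"{entry['score']:.4f}\t{candidates[entry['corpus_id']]}")
```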
Evaluation
Metrics
Cross Encoder Classification
| Metric             | Value  |
|:-------------------|:-------|
| accuracy           | 0.9716 |
| accuracy_threshold | 0.9984 |
| f1                 | 0.9762 |
| f1_threshold       | 0.9965 |
| precision          | 0.988  |
| recall             | 0.9647 |
| average_precision  | 0.9915 |
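The `*_threshold` values are the score cut-offs at which the evaluator obtained its best accuracy and F1, so the reranker can also serve as a binary matcher by thresholding its scores. A minimal sketch, assuming the reported `f1_threshold` (0.9965) is a suitable operating point for your data (reusing `model` and `pairs` from the usage snippet):

```python
# Assumed operating point, taken from the f1_threshold metric above;
# tune it on your own validation data if precision/recall needs differ.
F1_THRESHOLD = 0.9965

scores = model.predict(pairs)     # numpy array of raw scores
matches = scores >= F1_THRESHOLD  # boolean decision per pair
for (a, b), is_match in zip(pairs, matches):
    print(f"{a!r} vs {b!r}: {'match' if is_match else 'no match'}")
```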
Training Details
Training Dataset
Unnamed Dataset
Evaluation Dataset
Unnamed Dataset
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- weight_decay: 0.01
- warmup_steps: 73
- remove_unused_columns: False
- load_best_model_at_end: True
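These values map directly onto `CrossEncoderTrainingArguments` from sentence-transformers. The sketch below shows how a comparable fine-tuning run could be set up; it is a hypothetical reconstruction, since the training dataset is unnamed. The column names (`text1`, `text2`, `label`), the in-line example rows, and the choice of `BinaryCrossEntropyLoss` are all assumptions:

```python
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import (
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("BAAI/bge-reranker-v2-m3")

# Illustrative stand-in for the (unnamed) training data: text pairs + binary labels
train_dataset = Dataset.from_dict({
    "text1": ["bhargavi madhavi", "manish"],
    "text2": ["bhargavi srinidhi", "n singh"],
    "label": [1.0, 0.0],
})

args = CrossEncoderTrainingArguments(
    output_dir="finetuned-bge-reranker",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    warmup_steps=73,
    eval_strategy="steps",
    remove_unused_columns=False,
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; use a held-out split in practice
    loss=BinaryCrossEntropyLoss(model),
)
trainer.train()
```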
All Hyperparameters
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: None
- warmup_ratio: None
- warmup_steps: 73
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- enable_jit_checkpoint: False
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- use_cpu: False
- seed: 42
- data_seed: None
- bf16: False
- fp16: False
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: -1
- ddp_backend: None
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- disable_tqdm: False
- remove_unused_columns: False
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_for_metrics: []
- eval_do_concat_batches: True
- auto_find_batch_size: False
- full_determinism: False
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- use_cache: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
| Epoch  | Step | Training Loss | Validation Loss | entity-matching-eval_average_precision |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------------:|
| 0.1463 | 36   | 0.3950        | -               | -      |
| 0.2927 | 72   | 0.3475        | -               | -      |
| 0.2967 | 73   | -             | 0.3461          | 0.9648 |
| 0.1463 | 36   | 0.0637        | -               | -      |
| 0.2927 | 72   | 0.1365        | -               | -      |
| 0.2967 | 73   | -             | 0.4392          | 0.9630 |
| 0.4390 | 108  | 0.4193        | -               | -      |
| 0.5854 | 144  | 0.2634        | -               | -      |
| 0.5935 | 146  | -             | 0.3738          | 0.9602 |
| 0.7317 | 180  | 0.3739        | -               | -      |
| 0.8780 | 216  | 0.1583        | -               | -      |
| 0.8902 | 219  | -             | 0.1850          | 0.9893 |
| 1.0244 | 252  | 0.1806        | -               | -      |
| 1.1707 | 288  | 0.1287        | -               | -      |
| 1.1870 | 292  | -             | 0.4037          | 0.9836 |
| 1.3171 | 324  | 0.1883        | -               | -      |
| 1.4634 | 360  | 0.1562        | -               | -      |
| 1.4837 | 365  | -             | 0.3427          | 0.9868 |
| 1.6098 | 396  | 0.0619        | -               | -      |
| 1.7561 | 432  | 0.0854        | -               | -      |
| 1.7805 | 438  | -             | 0.2923          | 0.9887 |
| 1.9024 | 468  | 0.1240        | -               | -      |
| 2.0488 | 504  | 0.1076        | -               | -      |
| 2.0772 | 511  | -             | 0.2441          | 0.9915 |
| 2.1951 | 540  | 0.1289        | -               | -      |
| 2.3415 | 576  | 0.0311        | -               | -      |
| 2.3740 | 584  | -             | 0.2151          | 0.9923 |
| 2.4878 | 612  | 0.0181        | -               | -      |
| 2.6341 | 648  | 0.0982        | -               | -      |
| 2.6707 | 657  | -             | 0.2268          | 0.9915 |
| 2.7805 | 684  | 0.0010        | -               | -      |
| 2.9268 | 720  | 0.0321        | -               | -      |
| 2.9675 | 730  | -             | 0.2393          | 0.9915 |
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.3
- Transformers: 5.0.0
- PyTorch: 2.10.0+cpu
- Accelerate: 1.12.0
- Datasets: 4.8.3
- Tokenizers: 0.22.2
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```