# gemma3-4b-cord-v2-peft
This is a LoRA adapter fine-tuned on the naver-clova-ix/cord-v2 dataset for document understanding tasks (receipt parsing).
## Model Details
- Base Model: google/gemma-3-4b-it
- Fine-tuning Framework: NeMo AutoModel
- Task: Visual document understanding / OCR
- Training Steps: 999
- Final Training Loss: N/A
## Usage

### With NeMo AutoModel
```python
from nemo_automodel._transformers import NeMoAutoModelForImageTextToText
from nemo_automodel._peft.lora import PeftConfig, apply_lora_to_linear_modules
from transformers import AutoProcessor
from safetensors.torch import load_file
import torch
import json

# Load base model
model = NeMoAutoModelForImageTextToText.from_pretrained(
    "google/gemma-3-4b-it",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load and apply LoRA adapter
adapter_path = "path/to/downloaded/adapter"
with open(f"{adapter_path}/adapter_config.json") as f:
    config = json.load(f)

peft_config = PeftConfig(dim=config["r"], alpha=config["lora_alpha"])
apply_lora_to_linear_modules(model, peft_config)

# Load adapter weights
adapter_weights = load_file(f"{adapter_path}/adapter_model.safetensors")
model.load_state_dict(adapter_weights, strict=False)

# Run inference
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
# ... use model for inference
```
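The inference step elided above follows the standard Gemma 3 chat format: an image plus a text instruction in a messages list, run through the processor's chat template. A minimal sketch of that flow — the prompt wording, image handling, and generation settings here are illustrative, not taken from the training recipe:

```python
# Sketch of CORD-style receipt parsing with Gemma 3; the instruction text and
# generation parameters below are assumptions, not part of the released recipe.

def build_messages(image):
    """Gemma 3 chat format: a user turn with interleaved image and text parts."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": "Extract the receipt contents as JSON."},
            ],
        }
    ]

def run_inference(model, processor, image, max_new_tokens=512):
    """Apply the chat template, generate, and decode only the new tokens."""
    inputs = processor.apply_chat_template(
        build_messages(image),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    generated = output[0][inputs["input_ids"].shape[-1]:]
    return processor.decode(generated, skip_special_tokens=True)
```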
### With HuggingFace PEFT (if compatible)
```python
from peft import PeftModel
from transformers import AutoModelForImageTextToText, AutoProcessor

base_model = AutoModelForImageTextToText.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "plouryNV/gemma3-4b-cord-v2-peft")
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
```
## Training
This adapter was trained using the NeMo AutoModel framework with the following configuration:
- LoRA Rank (r): 8
- LoRA Alpha: 32
- Target Modules: Language model linear layers (vision tower frozen)
- Optimizer: Adam (lr=1e-5)
- Precision: bfloat16
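With r=8 and alpha=32, each adapted linear layer adds a low-rank update scaled by alpha/r = 4 on top of the frozen base weight. A NumPy sketch of that arithmetic — the rank and alpha match the recipe above, but the layer dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 32      # r and alpha from the recipe; dims illustrative

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

x = rng.standard_normal(d_in)
scaling = alpha / r                        # 32 / 8 = 4
y = W @ x + scaling * (B @ (A @ x))        # LoRA-adapted forward pass

# With B zero-initialized, the adapter starts as a no-op:
assert np.allclose(y, W @ x)
```

Only A and B (r * (d_in + d_out) parameters per layer) are trained; W stays frozen, which is why the saved adapter is small relative to the 4B base model.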
## Files

- `adapter_model.safetensors` - LoRA adapter weights
- `adapter_config.json` - HuggingFace PEFT-compatible config
- `automodel_peft_config.json` - NeMo AutoModel PEFT config
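For reference, a HuggingFace-PEFT-style `adapter_config.json` carrying the hyperparameters above would look roughly like this. The field names are standard PEFT keys, but the values shown (especially the target-module list) are illustrative — read the actual file shipped with the adapter rather than reconstructing it:

```json
{
  "peft_type": "LORA",
  "base_model_name_or_path": "google/gemma-3-4b-it",
  "r": 8,
  "lora_alpha": 32,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
}
```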
## License
Apache 2.0