# gemma3-4b-cord-v2-peft

This is a LoRA adapter for google/gemma-3-4b-it, fine-tuned on the naver-clova-ix/cord-v2 dataset for document-understanding tasks (receipt parsing).

## Model Details

- Base model: google/gemma-3-4b-it
- Adapter type: LoRA (PEFT)
- Dataset: naver-clova-ix/cord-v2
- Task: image-text-to-text document understanding (receipt parsing)

## Usage

### With NeMo AutoModel

```python
from nemo_automodel._transformers import NeMoAutoModelForImageTextToText
from nemo_automodel._peft.lora import PeftConfig, apply_lora_to_linear_modules
from transformers import AutoProcessor
from safetensors.torch import load_file
import torch
import json

# Load base model
model = NeMoAutoModelForImageTextToText.from_pretrained(
    "google/gemma-3-4b-it",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load and apply LoRA adapter
adapter_path = "path/to/downloaded/adapter"
with open(f"{adapter_path}/adapter_config.json") as f:
    config = json.load(f)

peft_config = PeftConfig(dim=config["r"], alpha=config["lora_alpha"])
apply_lora_to_linear_modules(model, peft_config)

# Load adapter weights (strict=False: the dict covers only the LoRA parameters)
adapter_weights = load_file(f"{adapter_path}/adapter_model.safetensors")
model.load_state_dict(adapter_weights, strict=False)

# Run inference
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
# ... use model for inference
```
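A note on the `strict=False` flag above: an adapter checkpoint contains only the LoRA parameters, so the base weights are deliberately missing from the state dict. `strict=False` skips them instead of raising, and the returned result reports exactly what was left out, which is worth checking after loading. A minimal toy demonstration (a two-layer stand-in module, not the real model):

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    """Stand-in for a model with a frozen base layer and a LoRA-style extra."""
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(4, 4)
        self.lora_a = nn.Linear(4, 2, bias=False)

toy = Toy()

# A "partial" state dict covering only the adapter-like parameter
partial = {"lora_a.weight": torch.zeros(2, 4)}

result = toy.load_state_dict(partial, strict=False)
print(sorted(result.missing_keys))  # base weights absent from the dict
print(result.unexpected_keys)       # dict keys with no matching parameter
```

If `unexpected_keys` is non-empty after loading a real adapter, the key names in the checkpoint likely do not match the module names produced by `apply_lora_to_linear_modules`, and the adapter weights were silently ignored.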

### With Hugging Face PEFT (if compatible)

```python
from peft import PeftModel
from transformers import AutoModelForImageTextToText, AutoProcessor

base_model = AutoModelForImageTextToText.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "plouryNV/gemma3-4b-cord-v2-peft")
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
```

## Training

This adapter was trained using the NeMo AutoModel framework with the following configuration:

- LoRA rank (r): 8
- LoRA alpha: 32
- Target modules: language model linear layers (vision tower frozen)
- Optimizer: Adam (lr = 1e-5)
- Precision: bfloat16
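For reference, LoRA adds a low-rank update to each frozen weight, scaled by alpha/r; with r = 8 and alpha = 32 above, the effective scale is 4. A minimal sketch of the adapted forward pass in plain PyTorch (layer sizes are illustrative, not taken from the model):

```python
import torch

r, alpha = 8, 32      # values from the training config above
scale = alpha / r     # LoRA scaling factor: 4.0

d_in, d_out = 16, 16                  # illustrative layer size
W = torch.randn(d_out, d_in)          # frozen base weight
A = torch.randn(r, d_in) * 0.01       # LoRA down-projection (trainable)
B = torch.zeros(d_out, r)             # LoRA up-projection (zero-initialised)

x = torch.randn(d_in)
# Adapted forward pass: y = W x + (alpha / r) * B (A x)
y = W @ x + scale * (B @ (A @ x))
```

Because B starts at zero, the adapter is a no-op at initialisation and the fine-tune departs smoothly from the base model; only A and B (a small fraction of the full weight count) are trained.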

## Files

- `adapter_model.safetensors` - LoRA adapter weights
- `adapter_config.json` - Hugging Face PEFT-compatible config
- `automodel_peft_config.json` - NeMo AutoModel PEFT config
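The hyperparameters above can be read back from `adapter_config.json` directly. A quick inspection sketch, using an inline config that mirrors the training section (the real file carries additional PEFT fields):

```python
import json

# Illustrative config matching the training section above;
# a real adapter_config.json contains further PEFT fields.
config_text = '{"peft_type": "LORA", "r": 8, "lora_alpha": 32}'
config = json.loads(config_text)

scale = config["lora_alpha"] / config["r"]
print(config["r"], config["lora_alpha"], scale)  # 8 32 4.0
```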

## License

Apache 2.0
