| Topic | Replies | Views | Last activity |
|---|---|---|---|
| Found the fix for memory not being freed when switching models on Linux (it's not Python or PyTorch) | 2 | 23 | March 29, 2026 |
| Wave Field LLM — O(n log n) attention via wave equation dynamics, within 5% of standard transformer | 4 | 5474 | March 29, 2026 |
| My Spaces remain stuck on "Starting" despite a Pro subscription and GPU hosting | 5 | 104 | March 28, 2026 |
| How do I get started with Hugging Face Transformers as a beginner? | 0 | 24 | March 27, 2026 |
| Numerical instability when finetuning deberta-v3-small | 2 | 27 | March 23, 2026 |
| Could Tagalog's Focus System Inspire a Higher-Level Attention Mechanism in Transformers? | 1 | 13 | March 19, 2026 |
| ImportError for function find_pruneable_heads_and_indices | 1 | 153 | March 16, 2026 |
| Transformers.js: Retrieving the size of models in MB/GB before running | 1 | 12 | March 16, 2026 |
| Purpose of commit_hash in PreTrainedModel.from_pretrained | 1 | 23 | March 16, 2026 |
| How DEoT Makes LLMs Think: A New Framework for Open-Ended Reasoning | 2 | 14 | March 15, 2026 |
| AutoModel with ClinicalBERT gives UNEXPECTED warning | 3 | 33 | March 13, 2026 |
| Are biofoundation models actually used in practice, and how helpful are they? | 0 | 8 | March 10, 2026 |
| Overfitting in BERT IMDB50k | 2 | 1145 | March 6, 2026 |
| LLM Course code errors | 7 | 119 | March 6, 2026 |
| Different output when we run inference with packing and flash attention in bf16 | 1 | 12 | March 6, 2026 |
| Why are gradient_checkpointing and training bound? | 2 | 29 | March 2, 2026 |
| Attentions not returned from transformers ViT model when using output_attentions=True | 5 | 1226 | March 2, 2026 |
| Using hyperparameter-search in Trainer | 102 | 38931 | March 2, 2026 |
| Issue with summarization and translation pipeline | 3 | 54 | March 2, 2026 |
| Is LLaMA rotary embedding implementation correct? | 8 | 9586 | February 26, 2026 |
| Gemma 3 12B: 4-bit Quantization failing/ignored in Transformers v5.1.0 (Gemma3ForConditionalGeneration) | 10 | 173 | February 23, 2026 |
| [Help Needed] Dual-Phase Softmax Steering on Llama-2 Residual Stream Yields Identical POPE Results | 3 | 37 | February 23, 2026 |
| [Research/Discussion] Depth-agnostic stability for residual models (no extra norms, no tuning). Is this useful to you? | 1 | 26 | February 22, 2026 |
| LLaVA Steering: Why does grounding fix hallucinations in captioning but not in Yes/No QA? | 1 | 37 | February 19, 2026 |
| KV Caching problem with gemma 3 | 2 | 75 | February 17, 2026 |
| Num_beam_groups removed in V5? | 1 | 69 | February 14, 2026 |
| [LLaVA-1.5] Implementing Control Barrier Functions (LCBF) via Attention Hooking – Persistent AttributeError: 'LlamaAttention' object has no attribute 'rotary_emb' | 4 | 21 | February 13, 2026 |
| Error while importing "Trainer" | 1 | 147 | February 13, 2026 |
| [LLaVA-1.5] Very low hallucination rate & weak attention correlation in "Attention Gap" experiment – Is my implementation of output_attentions correct? | 4 | 30 | February 12, 2026 |
| Confusion with freezing Whisper's feature encoder | 3 | 44 | February 11, 2026 |