Dataset schema (each record below lists these fields in order):

column                           type     range
id                               string   10 chars
number                           int64    1 - 25.6k
forum                            string   10 chars
title                            string   5 - 214 chars
abstract                         string   26 - 4.31k chars
content_TLDR                     string   1 - 250 chars
content_keywords                 string   6 - 1.02k chars
content_pdf                      string   49 chars
content_primary_area             string   21 classes
content_supplementary_material   string   56 chars
signatures                       string   47 - 51 chars
qfeSpu5FBE
25,459
qfeSpu5FBE
Treating Neural Image Compression via Modular Adversarial Optimization: From Global Distortion to Local Artifacts
The rapid progress in neural image compression (NIC) has led to the deployment of advanced codecs, such as JPEG AI, which significantly outperform conventional approaches. However, despite extensive research on the adversarial robustness of neural networks in various computer vision tasks, the vulnerability of NIC models to adversarial attacks remains underexplored. Moreover, existing adversarial attacks on NIC are ineffective against modern codecs. In this paper, we introduce a novel adversarial attack targeting NIC models. Our approach is built upon two core stages: (1) optimization of global-local distortions, and (2) a selective masking strategy that enhances attack stealthiness. Experimental evaluations demonstrate that the proposed method outperforms prior attacks on both JPEG AI and other NIC models, achieving greater distortion on decoded images and lower perceptibility of adversarial images. We also provide a theoretical analysis and discuss the underlying reasons for the effectiveness of our attack, offering new insights into the security and robustness of learned image compression.
We propose a modular adversarial attack on neural image codecs that degrades compression quality both across the entire image and in local areas to improve attack effectiveness, and filters the noise to remain imperceptible.
['Adversarial Robustness', 'Neural Image Compression', 'Adversarial Attacks']
/pdf/c27e2a1d745c3bc20af3e220ea1164b7e312a2d3.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25459/Authors']
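The record above describes a two-stage attack that optimizes a distortion objective against the codec. Below is a minimal PGD-style sketch of the core idea only (maximizing reconstruction distortion under an L-infinity budget); the global-local objective, the selective mask, and all hyperparameters are our assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def attack_codec(codec, x: torch.Tensor, eps=2/255, alpha=0.5/255, steps=20):
    """Find a small perturbation that maximizes distortion of the decoded image.

    `codec` is assumed to be a differentiable encode-decode pipeline mapping
    images in [0, 1] to reconstructions of the same shape.
    """
    with torch.no_grad():
        target = codec(x)                          # clean reconstruction
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(codec(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()     # gradient ascent on distortion
            delta.clamp_(-eps, eps)                # stay within the L-inf budget
            delta.grad.zero_()
    return (x + delta).clamp(0, 1)
```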
4TFfiG17ec
25,458
4TFfiG17ec
Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression
This paper presents Thanos, a novel weight-pruning algorithm designed to reduce the memory footprint and enhance the computational efficiency of large language models (LLMs) by removing redundant weights while maintaining accuracy. Thanos introduces a block-wise pruning strategy with adaptive masks that dynamically adjust to weight importance, enabling flexible sparsity patterns and structured formats, such as n:m sparsity, optimized for hardware acceleration. Experimental evaluations demonstrate that Thanos achieves state-of-the-art performance in structured pruning and outperforms existing methods in unstructured pruning. By providing an efficient and adaptable approach to model compression, Thanos offers a practical solution for deploying large models in resource-constrained environments. The algorithm is publicly available for further research and application.
We developed a novel pruning method for LLMs that compresses matrices in a block-wise manner.
['LLM Compression', 'Pruning', 'Wanda', 'SparseGPT', 'Deep Learning', 'AI']
/pdf/9fca108dcf3ca3b5fe5f807bf88eeb2de0f5a57b.pdf
foundation or frontier models, including LLMs
/attachment/53f3e08859a3fdf60d993a1573cabd2e9812d653.zip
['ICLR.cc/2026/Conference/Submission25458/Authors']
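The n:m structured-sparsity format mentioned in the Thanos record keeps n nonzero weights in every group of m consecutive weights (e.g., 2:4, which sparse tensor cores accelerate). A minimal magnitude-based sketch of the format itself; Thanos's block-wise adaptive masks are not reproduced here.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude weights in every group of m (n:m sparsity)."""
    w = weights.reshape(-1, m)                      # group consecutive weights
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)    # zero out the small entries
    return (w * mask).reshape(weights.shape)

W = np.random.randn(8, 16)
W_sparse = nm_prune(W)   # every group of 4 now has at most 2 nonzeros
assert (W_sparse.reshape(-1, 4) != 0).sum(axis=1).max() <= 2
```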
f9cYLpakOI
25,457
f9cYLpakOI
Endogenous Communication in Repeated Games with Learning Agents
Communication among learning agents often emerges without explicit supervision. We study endogenous protocol formation in infinitely repeated stage games with a costless pre-play channel. Each agent has a representation map that compresses private signals into messages subject to an information budget. Agents update strategies by no-regret learning with stochastic approximation and choose representation maps by a myopic objective that trades off predictive value and encoding cost. We provide three main results. First, if the stage game admits a folk-theorem set and the information budget exceeds a task-specific threshold, there exists a stable communication equilibrium in which messages are sufficient statistics for continuation payoffs. Second, when the budget is below the threshold, any stable equilibrium must be pooling on a finite partition that we characterize with a minimax information bound. Third, we give polynomial sample-complexity guarantees for convergence to an approximately efficient communicating equilibrium under mild regularity. Our analysis connects cheap talk, representation learning with information constraints, and multi-agent no-regret dynamics. The framework yields testable predictions for when emergent messages are interpretable, when they collapse, and how much data is needed for stable coordination.
We show when cheap-talk communication learned by agents in repeated games is predictive, incentive-compatible, and sample-efficient, giving tight conditions for stable emergent protocols.
['multi-agent learning', 'repeated games', 'cheap talk', 'communication', 'information bottleneck', 'equilibrium', 'representation learning']
/pdf/f066855147b7f8a0cc58eb0f08fc0e64b7bf487c.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission25457/Authors']
QWopGahUEL
25,452
QWopGahUEL
Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols
To evaluate the safety and usefulness of deployment protocols for untrusted AIs, AI Control uses a red-teaming exercise played between a protocol designer and an adversary. This paper introduces AI-Control Games, a formal decision-making model of the red-teaming exercise as a multi-objective, partially observable, stochastic game. We also introduce reductions from AI-Control Games to a special case of zero-sum partially observable stochastic games that allow us to leverage existing algorithms to find Pareto-optimal protocols. We apply our formalism to model, evaluate and synthesise protocols for deploying untrusted language models as programming assistants, focusing on Trusted Monitoring protocols, which use weaker language models and limited human assistance. Finally, we demonstrate the utility of our formalism by showcasing improvements over empirical studies in existing settings, evaluating protocols in new settings, and analysing how modelling assumptions affect the safety and usefulness of protocols.
We introduce a game-theoretic framework for modelling AI Control evaluations, and synthesising protocols.
['Partially Observable Stochastic Games', 'AI Control', 'AI Evaluations', 'Safeguards', 'Game theory']
/pdf/7c6bf24da5832b676ea37c1f217c451e5d09b73a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/2ecbbd78b6374b506c760f201ef667c983b87fa8.zip
['ICLR.cc/2026/Conference/Submission25452/Authors']
q05hC1Pzkr
25,450
q05hC1Pzkr
Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings
Multi Resolution Hash Encoding (MHE), the foundational technique behind Instant Neural Graphics Primitives, provides a powerful parameterization for neural fields. However, its spatial behavior lacks rigorous understanding from a physical systems perspective, leading to reliance on heuristics for hyperparameter selection. This work introduces a novel analytical approach that characterizes MHE by examining its Point Spread Function (PSF), which is analogous to the Green's function of the system. This methodology enables a quantification of the encoding's spatial resolution and fidelity. We derive a closed form approximation for the collision free PSF, uncovering inherent grid induced anisotropy and a logarithmic spatial profile. We establish that the idealized spatial bandwidth, specifically the Full Width at Half Maximum (FWHM), is determined by the average resolution, $N_{\text{avg}}$. This leads to a crucial, counterintuitive finding: the effective resolution of the model is governed by the broadened empirical FWHM (and therefore $N_{\text{avg}}$), rather than the finest resolution $N_{\max}$. Furthermore, we analyze the impact of finite hash capacity, demonstrating how collisions introduce speckle noise and degrade the Signal to Noise Ratio (SNR). Leveraging these theoretical insights, we propose two main advancements. First, we establish and validate a principled methodology for hyperparameter selection guided by our PSF analysis, which demonstrably outperforms standard heuristics by accounting for the analytically derived effective resolution and optimization induced broadening. Second, we introduce Rotated MHE (R-MHE), an architecture that employs rotated and independently hashed grids to mitigate anisotropy and average collision noise. This study establishes a methodology based on physical principles that moves beyond heuristics to characterize and optimize MHE.
We analyze Multi-Resolution Hash Encoding (MHE) using its Point Spread Function (PSF) to reveal that effective resolution is governed by average, not finest, resolution, and introduce Rotated MHE to mitigate inherent anisotropy and collision noise.
['multi-resolution hash encoding', 'implicit neural representations', 'neural fields', 'point spread function', 'spatial kernel analysis', 'anisotropy', 'resolution limit', 'FWHM', 'hash collisions', 'signal-to-noise ratio', 'NeRF']
/pdf/11f4413fe01f01addafd76cd01dfd0c3346c148e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25450/Authors']
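For context on the quantities in the record above: in the Instant-NGP convention, per-level grid resolutions grow geometrically between N_min and N_max. The sketch below computes them and a geometric-mean N_avg; whether the paper averages this way is our assumption.

```python
import numpy as np

# Per-level resolutions N_l = round(N_min * b**l), with growth factor b chosen
# so that level 0 hits N_min and level L-1 hits N_max (Instant-NGP convention).
L, N_min, N_max = 16, 16, 512
b = np.exp((np.log(N_max) - np.log(N_min)) / (L - 1))
levels = np.round(N_min * b ** np.arange(L)).astype(int)
N_avg = np.exp(np.log(levels).mean())   # geometric mean across levels
print(levels, round(N_avg, 1))
```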
0JLUFJMo5p
25,449
0JLUFJMo5p
Dynamic Task-Embedded Reward Machines for Adaptive Code Generation and Manipulation in Reinforcement Learning
We introduce the Dynamic Task-Embedded Reward Machine (DTERM), a new approach to reinforcement learning for code generation and code manipulation tasks. Conventional reward models tend to rely on fixed weightings or manual tuning, which is not flexible enough to cover diverse coding tasks such as translation, completion, and repair. To overcome this, DTERM dynamically modulates reward components with a hypernetwork-driven architecture that balances syntactic correctness, semantic correctness, and computational efficiency in a task-aware configuration. The framework combines three key modules: a transformer-based task embedding generator, a modular reward decomposer, and a hypernetwork that generates context-dependent weights for the sub-rewards.
null
['Reinforcement Learning']
/pdf/fa6de8f172967f9988c29abcc16091879272bcd0.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25449/Authors']
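A minimal sketch of the hypernetwork idea the DTERM record describes: a task embedding is mapped to mixing weights over sub-rewards (syntactic, semantic, efficiency). All dimensions and the softmax normalization are our assumptions.

```python
import torch
import torch.nn as nn

class RewardHyperNet(nn.Module):
    """Map a task embedding to mixing weights over sub-reward components."""

    def __init__(self, embed_dim: int = 64, n_rewards: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, n_rewards)
        )

    def forward(self, task_embedding: torch.Tensor, sub_rewards: torch.Tensor):
        weights = torch.softmax(self.net(task_embedding), dim=-1)
        return (weights * sub_rewards).sum(-1)   # scalar task-adapted reward

hyper = RewardHyperNet()
r = hyper(torch.randn(64), torch.tensor([0.9, 0.7, 0.4]))  # toy sub-rewards
```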
ScpCaOVGw1
25,448
ScpCaOVGw1
EVEREST: A Transformer for Probabilistic Rare-Event Anomaly Detection with Evidential and Tail-Aware Uncertainty
Forecasting rare events in multivariate time-series data is a central challenge in machine learning, complicated by severe class imbalance, long-range dependencies, and distributional uncertainty. We introduce EVEREST, a transformer-based architecture for probabilistic rare-event forecasting that delivers calibrated predictions and tail-aware risk estimation, with auxiliary interpretability through attention-based signal attribution. EVEREST integrates four key components: (i) a learnable attention bottleneck for soft aggregation of temporal dynamics; (ii) an evidential head for estimating aleatoric and epistemic uncertainty via a Normal–Inverse–Gamma distribution; (iii) an extreme-value head that models tail risk using a Generalized Pareto Distribution; and (iv) a lightweight precursor head for early-event detection. These modules are jointly optimised with a composite loss combining focal loss, evidential negative log-likelihood, and a tail-sensitive EVT penalty, and act only at training time; deployment uses a single classification head with no inference overhead. We evaluate EVEREST on a real-world benchmark spanning a decade of space-weather data and demonstrate state-of-the-art performance, including True Skill Statistic (TSS) scores of 0.973, 0.970, and 0.966 at 24, 48, and 72-hour horizons for C-class flares. The model is compact (≈0.81M parameters), efficient to train on commodity hardware, and applicable to other high-stakes domains such as industrial monitoring, weather, and satellite diagnostics. Limitations include reliance on fixed-length inputs and exclusion of image-based modalities, motivating future extensions to streaming and multimodal forecasting.
EVEREST is a transformer architecture for rare-event time-series forecasting that combines evidential and tail-aware uncertainty to deliver calibrated, interpretable, and state-of-the-art predictions across scientific anomaly detection tasks.
['Transformer models', 'Uncertainty quantification', 'Evidential deep learning', 'Extreme value theory', 'Imbalanced classification']
/pdf/95203a99a1ccbf3fd0495c1baadd9fa578a921c5.pdf
learning on time series and dynamical systems
/attachment/ab91622dbcd60fe6eb53bd44423454704b34fc62.zip
['ICLR.cc/2026/Conference/Submission25448/Authors']
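The evidential head in the EVEREST record fits a Normal-Inverse-Gamma (NIG) distribution. Below is a sketch of the standard NIG negative log-likelihood from deep evidential regression (Amini et al., 2020); whether EVEREST uses exactly this parameterization is an assumption.

```python
import math
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    """NIG negative log-likelihood (deep evidential regression, Amini et al. 2020).

    gamma: predicted mean; nu > 0: virtual observation count; alpha > 1 and
    beta > 0: Inverse-Gamma parameters. All inputs are tensors of equal shape.
    """
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
```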
rb7rnOSa2g
25,446
rb7rnOSa2g
Latents-Inv: Robust Semantic Watermark with Key-Assisted Recovery for Diffusion Models
Semantic watermarking provides imperceptible identity traceability for diffusion-generated images, enabling model copyright protection and image source verification. However, existing semantic watermarking methods based on initial latent noise render the protected image vulnerable to adversarial latent-space manipulations, such as black-box forgery via proxy models and watermark-pattern-removal attacks that exploit statistical regularities. In this paper, we propose a robust watermarking framework resilient to diverse adversarial manipulation attacks. Specifically, we design a fully reversible, flow-based codec with dual encoding paths, allowing plug-and-play integration into the diffusion generation process across architectures (UNet and MMDiT). The dual-output network encodes watermark information into both the carrier image and the owner’s secret key, enabling recovery of a removal-attacked watermark via key-assisted reconstruction. To guarantee verification reliability without excessive reliance on the key, while retaining the ability to detect forged watermarked images, we propose a joint-training strategy that leverages negative-sample pairs under both accuracy and fidelity constraints. Furthermore, we introduce an Euler-based enhanced solver for effective inversion in rectified flow models, which improves the accuracy of the recovered watermark information. Experimental results show that our method achieves superior robustness under various attacks while maintaining high visual quality across diverse models.
null
['watermark', 'AI Security', 'diffusion model']
/pdf/953f565cb7f04df0535f50c851a11c19dacee315.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25446/Authors']
CyVUxyDc4U
25,444
CyVUxyDc4U
IDAP++: Advancing Divergence-Based Pruning via Filter-Level and Layer-Level Optimization
This paper presents a novel approach to neural network compression that addresses redundancy at both the filter and architectural levels through a unified framework grounded in information flow analysis. Building on the concept of tensor flow divergence, which quantifies how information is transformed across network layers, we develop a two-stage optimization process. The first stage employs iterative divergence-aware pruning to identify and remove redundant filters while preserving critical information pathways. The second stage extends this principle to higher-level architecture optimization by analyzing layer-wise contributions to information propagation and selectively eliminating entire layers that demonstrate minimal impact on network performance. The proposed method naturally adapts to diverse architectures, including convolutional networks, transformers, and hybrid designs, providing a consistent metric for comparing the structural importance across different layer types. Experimental validation across multiple modern architectures and datasets reveals that this combined approach achieves substantial model compression while maintaining competitive accuracy. The presented approach achieves parameter reduction results that are globally comparable to those of state-of-the-art solutions and outperforms them across a wide range of modern neural network architectures, from convolutional models to transformers. The results demonstrate how flow divergence serves as an effective guiding principle for both filter-level and layer-level optimization, offering practical benefits for deployment in resource-constrained environments.
null
['Neural Network Pruning', 'Information Flow Divergence', 'Model Compression', 'Architecture Optimization']
/pdf/e3eb774465bc449139535e73d2f1868321ba7680.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25444/Authors']
FPBtaGBv81
25,440
FPBtaGBv81
Dynamic Trust Region Adaptation for Human-in-the-Loop Reinforcement Learning in Code Refinement
We propose a dynamic trust region adaptation framework for Human-in-the-Loop Reinforcement Learning (HITL-RL) in code refinement, addressing the challenge of incorporating unskilled human feedback into policy updates. Conventional methods treat all feedback the same way, which can lead to poor convergence because not all feedback is of the same quality. The proposed system introduces a Bayesian Feedback Confidence Estimator, which expresses the reliability of each piece of human feedback as a dynamically updated confidence score, and an Adaptive Trust Region Controller that modulates policy updates based on that score. High-confidence feedback enlarges the trust region to encourage exploration, while low-confidence feedback shrinks it to avoid overfitting to unreliable signals. Furthermore, the framework includes a confidence-weighted reward shaping mechanism and a gated policy network that selectively favors reliable feedback during training. Implemented with transformer architectures, including a Codex-style policy network and a DeBERTa-v3 feedback encoder, the framework adapts to feedback uncertainty in a closed loop.
null
['Code Refinement']
/pdf/5e672f3ea95a9e58fe41dc6e69e40c23a3003aa5.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25440/Authors']
guUUlHPXRw
25,437
guUUlHPXRw
Modelling Optimal Trade-Off Between Continued Pre-Training and Supervised Fine-Tuning for LLM Domain Adaptation
Domain adaptation is critical for tailoring pre-trained Large Language Models (LLMs) to specialised tasks without significant costs of pre-training from scratch. Two common approaches for domain adaptation are Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT), yet the data mix for each is often determined arbitrarily based on data availability or through limited data ablations. In this paper, we present a mathematical framework to model downstream domain performance as a function of the ratio between CPT and SFT under a fixed token budget. Using 7B-parameter pre-trained LLMs, we perform domain adaptation training across three domains - health, chemistry, and coding - within a 30B-token limit. CPT uses domain-relevant subsets of Nvidia's ClimbLab dataset, while SFT employs medqa (health), OpenCodeInstruct (programming), and ChemData700k (chemistry). Resultant models are evaluated on domain-specific QA benchmarks across sixteen CPT:SFT allocations. Results show that optimal performance, regardless of domain, arises from allocations with effective CPT:SFT token ratios between 29.9976B:2.4M and 29.9982B:1.8M corresponding to a CPT fraction of approximately 0.99992 - 0.99994. Our optimal split demonstrated an 11.6% score improvement over the state-of-the-art domain-adapted model Code Llama and a 6.4% increase in performance on MedQA over HippoCrates Meta 7B while approaching the performance of HippoCrates Mistral 7B, at up to 95% token budget reduction. We further validate these findings through ablation with trained models to better understand the impact of individual datasets on resultant model weights. Our work provides a framework for guiding efficient domain adaptation of LLMs through CPT and SFT.
Finding the optimal data allocation between CPT and SFT for domain adaptation
['Machine Learning', 'Continuous Pretraining', 'Supervised Fine Tuning', 'Parameter-Efficient Fine-Tuning (PEFT)', 'Optimization']
/pdf/3ea0a83c73d87a6a98d2b88890ee861937e1cc3c.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25437/Authors']
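The reported optimal allocations can be checked directly against the stated 30B-token budget (pure arithmetic from the record above, no assumptions):

```python
budget = 30e9
for cpt, sft in [(29.9976e9, 2.4e6), (29.9982e9, 1.8e6)]:
    assert cpt + sft == budget                    # each split sums to 30B tokens
    print(f"CPT fraction = {cpt / budget:.5f}")   # -> 0.99992 and 0.99994
```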
gMc5Qa45ia
25,435
gMc5Qa45ia
DynamicRank LoRA: Real-Time Adaptive Fine-Tuning for Code Models via Token-Level Importance and Loss Landscape Awareness
We propose \textbf{DynamicRank LoRA}, a novel fine-tuning mechanism for code models that dynamically adjusts the rank of low-rank adaptation (LoRA) matrices in real time, addressing the limitations of static rank configurations in conventional LoRA. The approach combines two fundamental ingredients: token-level importance scoring, which assesses the structural importance of input tokens, and loss-landscape-aware rank adaptation, which modulates the rank using information about gradient dynamics and curvature. High-importance tokens, such as syntax keywords or variable names, trigger rank increases to capture finer-grained patterns, while flat loss regions trigger rank reductions for faster convergence. The mechanism is tightly coupled with transformer architectures and uses attention weights and gradient norms to reshape the LoRA matrices via truncated SVD during training. We apply DynamicRank LoRA to a GPT-3.5-turbo-style model, replacing dense layers in the feed-forward blocks with adaptive-rank LoRA pairs modulated by a lightweight MLP. This design allows the model to balance adaptation speed and precision across combinations of input complexity (e.g., verbose or terse code) and task requirements (e.g., bug fixing or code generation). Experimental results show that DynamicRank LoRA is more efficient and accurate for fine-tuning than fixed-rank baselines, especially when fast adaptation to heterogeneous code structures is needed. The two-fold rank modulation and the transformer-specific integration distinguish it from prior work, providing a scalable solution for real-time code model customization without compromising latency.
null
['Real-Time Adaptive Fine-Tuning']
/pdf/da473586ea64c99f5a828a62e17a734bfc042785.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25435/Authors']
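A sketch of the truncated-SVD rank modulation the record above alludes to: re-rank a LoRA pair so that delta_W = B @ A is replaced by its best rank-r approximation. The importance signals that would choose the new rank are omitted, and the interface is our assumption.

```python
import torch

def truncate_lora(A: torch.Tensor, B: torch.Tensor, new_rank: int):
    """Re-rank a LoRA pair (delta_W = B @ A) via truncated SVD."""
    U, S, Vh = torch.linalg.svd(B @ A, full_matrices=False)
    r = min(new_rank, S.numel())
    B_new = U[:, :r] * S[:r]          # fold singular values into B
    A_new = Vh[:r, :]
    return A_new, B_new               # best rank-r approximation of B @ A

A, B = torch.randn(8, 128), torch.randn(256, 8)   # rank-8 adapter
A2, B2 = truncate_lora(A, B, new_rank=4)          # shrink to rank 4
```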
cY2aTfhT3L
25,432
cY2aTfhT3L
ReSafe: Enhancing Safety of Text-to-Image Diffusion via Post-Hoc Image Back Translation
Ensuring safe images in Text-to-Image (T2I) diffusion models has emerged as an active area of research. However, existing T2I safe image generation methods may fail to fully erase learned knowledge and remain vulnerable to circumvention like adversarial prompts or concept arithmetic. Given that safe image generation methods can be bypassed, we introduce a post-hoc approach designed to uphold safety even in the presence of such circumvention. We present ReSafe, the first Image-to-Image (I2I) translation framework designed to regenerate safe images from unsafe inputs by removing only harmful features while preserving safe visual information. ReSafe extracts safe multimodal (i.e., vision and language) features by selectively removing unsafe concepts from the input representations. It then optimizes a discrete safe prompt to align with the interpolated multimodal safe features and generates new safe images from this prompt, effectively eliminating unsafe content while preserving semantic and visual information. Since ReSafe is a post-hoc approach, it can be applied to a variety of existing safe image generation methods to enhance their performance. ReSafe reduces attack success rates by 3-4$\times$ compared to T2I methods and by 3-7$\times$ compared to I2I baselines across five adversarial prompt benchmarks.
Image-to-image translation framework designed to remove inappropriate components from a given unsafe image and regenerate a safe image.
['Safe generation', 'Image-to-Image translation', 'Image back translation']
/pdf/4b5185bc7cded893c915145060b91bd2f0732553.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25432/Authors']
EbSkBZQF9g
25,431
EbSkBZQF9g
Mechanistic Interpretability analysis of a single-layer transformer on 0-1 knapsack
Small language models have been shown to exhibit generalisation for toy problems when trained on algorithmically generated datasets. It is poorly understood whether this phenomenon extends to complex problems such as NP-complete ones. In this work, we show the inability of a single-layer transformer to "grok" the 0-1 knapsack problem. We analyze the internals using visualisations and interpretability techniques and show why the model is not able to form a robust internal circuit. This shows how transformer-based models struggle to generalize on NP-complete problems, as well as their inability to solve problems requiring a large amount of computation. This work showcases why LLM-based AI agents should not be deployed in high-impact spaces where a vast amount of planning and computation is required.
mechanistic interpretability of a single-layer transformer on 0-1 knapsack, shows the inability of transformers to solve NP-complete tasks
['Mechanistic Interpretability', 'Machine Learning', 'grokking', 'knapsack problem']
/pdf/b22ab5fff3cc4fc0689e7fae9ee4e09f1f1bd6f2.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25431/Authors']
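For reference, the task in the record above admits an exact O(n * capacity) dynamic program; a ground-truth label generator for such a study might look like the sketch below (its use for dataset construction is our assumption).

```python
def knapsack_01(values, weights, capacity):
    """Classic dynamic program for the 0-1 knapsack problem."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # backwards: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

assert knapsack_01([6, 10, 12], [1, 2, 3], 5) == 22   # take items 2 and 3
```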
APawIJjJlP
25,428
APawIJjJlP
Fed-Energy: Federated Reinforcement Learning for Scalable and Energy-Efficient Large-Scale Code Optimization
We propose \textbf{Fed-Energy}, a federated reinforcement learning (RL) framework for scalable and energy-efficient large-scale code optimization. Modern code optimization faces two conflicting pressures: the computational burden of training models by RL and the difficulty of estimating energy consumption across a wide variety of codebases. The proposed method addresses both by combining lightweight energy models with federated learning, enabling distributed training and adaptive aggregation of local energy predictors. Each code component uses small neural networks, such as LSTMs or CNNs, to estimate a program's energy use from its execution traces and/or structural features, and these estimates are combined via a personalized federated approach that accounts for non-IID data distributions. The RL system optimizes program code transformations under composite rewards that trade off energy, performance, and computational overhead, while compiler pipelines and dynamic profilers provide feedback for refinement. Fed-Energy's decentralized design avoids monolithic simulators, which not only eases the computational workload but also preserves privacy and scalability. Moreover, its spatio-temporal adaptive coordination distinguishes it from static federated averaging, facilitating context-aware optimization for heterogeneous code structures. Experiments show substantial improvements in energy efficiency and training scalability compared with centralized methods, making it a feasible solution for real-world deployment. The novelty of the framework lies in its joint use of federated learning and RL, providing a scalable and accurate alternative to traditional energy-aware code optimization.
null
['Large-Scale Code Optimization']
/pdf/5fa20d3ba87fb7edecacdbbb12614927552139e1.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25428/Authors']
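For contrast with the adaptive aggregation the Fed-Energy record describes, here is the standard federated-averaging baseline it departs from: sample-size-weighted averaging of client parameters.

```python
import numpy as np

def fedavg(client_params: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Plain FedAvg: average client parameters weighted by local sample count."""
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))
```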
i6fc97RY1l
25,426
i6fc97RY1l
Addition Circuit: How LLMs Add in Their Heads using State Vectors
Large Language Models (LLMs) are often treated as black boxes, yet many of their behaviours suggest the presence of internal, algorithm-like structures. We present addition circuit as a concrete, mechanistic example of such a structure: a sparse set of attention heads that perform integer addition. Focusing on two popular open-source models (Llama 3.1 8B and Llama 3.1 70B), we make the following contributions. (i) We extend prior work on two-argument addition to the multi-argument setting, showing that both models employ fixed subsets of attention heads specialized in encoding summands at specific positions in addition prompts. (ii) We introduce state vectors that efficiently capture how models represent summands in their activation spaces. We find that each model learns a common representation of integers that generalizes across prompt formats and across six languages, whether numbers are expressed as Arabic digits or word numerals.
We show that LLMs learn representations of integers in addition tasks that generalize across prompt templates, number formats, and languages, and we reverse-engineer the 2-argument addition circuit for multi-token integers in Llama 3.1 8B.
['Mechanistic Interpretability', 'Large Language Models', 'Addition', 'Arithmetic', 'Algorithmic Reasoning', 'Circuits']
/pdf/bf3cdff17b45198441e6affc204f49282648af1f.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25426/Authors']
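A minimal sketch of extracting the kind of residual-stream "state vector" the record above studies, using Hugging Face transformers. The layer index and read-out position are our assumptions, not the paper's protocol (and the Llama 3.1 weights are gated; any causal LM works the same way).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B",
                                             output_hidden_states=True)
inputs = tok("17 + 25 + 38 =", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
# Residual-stream activation at an arbitrary middle layer, final position.
state_vector = out.hidden_states[16][0, -1]
```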
Gq7cBZC04L
25,424
Gq7cBZC04L
Steering Language Models for Theorem Proving
Recent progress in automated theorem proving leverages Large Language Models (LLMs) for their capacity to comprehend informal mathematical statements and generate corresponding formal proofs. Even though these techniques perform well, very little exploration has been done to understand how language models interpret and utilize these informal mathematical cues to generate formal proofs more effectively. To address this, we explore activation steering, a lightweight, inference-time mechanism that identifies linear directions in a model’s residual activations corresponding to informal “thought” traces, and nudges those activations to improve proof construction entirely without finetuning. Unlike previous approaches, activation engineering offers valuable insights into language models’ internal reasoning dynamics encoded in their activation space. We evaluated these activation vectors on two distinct tasks: formal proof generation from formal theorems and formal proof generation from informal problem descriptions. Our contributions are twofold: (1) we propose an activation-based intervention technique to guide proof synthesis in LLMs; and (2) we improve performance across two different decoding strategies without additional training.
null
['Theorem proving', 'activation steering']
/pdf/2387c2996333c2671934a348f83f77f88b91180f.pdf
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
/attachment/2b3be24122b5e297732737020636f7a8fb930635.zip
['ICLR.cc/2026/Conference/Submission25424/Authors']
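Activation steering, as the record above describes it, adds a direction to residual activations at inference time. A generic sketch via a forward hook on a Llama-style Hugging Face model; the layer choice and coefficient are assumptions, not the paper's settings.

```python
import torch

def add_steering_hook(model, layer_idx: int, direction: torch.Tensor, coeff: float):
    """Add coeff * direction to one decoder layer's output at every position."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + coeff * direction
        return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden

    layer = model.model.layers[layer_idx]     # Llama-style module path
    return layer.register_forward_hook(hook)  # call .remove() on it to undo
```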
Oq3yRhFp0t
25,423
Oq3yRhFp0t
How Well Does GPT-4o Understand Vision? Evaluating Multimodal Foundation Models on Standard Computer Vision Tasks
Multimodal foundation models, such as GPT-4o, have recently made remarkable progress, but it is not clear where exactly these models stand in terms of understanding vision. In this paper, we benchmark the performance of popular multimodal foundation models (GPT-4o, o4-mini, Gemini 1.5 Pro and Gemini 2.0 Flash, Claude 3.5 Sonnet, Qwen2-VL, Llama 3.2) on standard computer vision tasks (semantic segmentation, object detection, image classification, depth and surface normal prediction) using established datasets (e.g., COCO, ImageNet and its variants). The main challenges to performing this are: 1) most models are trained to output text and cannot natively express versatile domains, such as segments or 3D geometry, and 2) many leading models are proprietary and accessible only at an API level, i.e., there is no weight access to adapt them. We address these challenges by translating standard vision tasks into equivalent text-promptable and API-compatible tasks via prompt chaining to create a standardized benchmarking framework. We observe that 1) the models are not close to the state-of-the-art specialist models at any task, and 2) they perform semantic tasks notably better than geometric ones. However, 3) they are respectable generalists; this is remarkable as they are presumably trained on primarily image-text-based tasks. 4) While the prompt-chaining techniques affect performance, better models exhibit less sensitivity to prompt variations. 5) GPT-4o performs the best among non-reasoning models, securing the top position in 4 out of 6 tasks, and 6) reasoning models, e.g. o3, show improvements in geometric tasks.
null
['vision benchmark', 'multimodal foundation models', 'vision language models', 'standard computer vision tasks']
/pdf/e62eedf4fc606a238123b0c26aeb9f413944fcad.pdf
datasets and benchmarks
/attachment/d87ced81699641e0183dde7f95a0332ea626ea78.zip
['ICLR.cc/2026/Conference/Submission25423/Authors']
YkLA6exfqW
25,417
YkLA6exfqW
Are Color Trained Models Robust in Handling Binary Images: A Fingerprint Recognition Study
Fingerprint recognition has long been a cornerstone of biometric authentication, yet robust performance across varying imaging conditions remains a challenge, especially for fingerphotos, which are generally acquired with a camera and are therefore exposed to environmental factors, unlike Livescan images. Given the tremendous security demands of large-scale settings, and of areas where deploying computationally heavy devices may not be feasible, such as refugee camps, developing a scalable solution must be a priority. In this research, we aim to achieve this by understanding the impact of binarization on both images and models. Surprisingly, neither the role of Binarized Neural Networks (BNNs) nor binary fingerprint images (especially photos, as opposed to scans) has been explored in the literature. Hence, in this work, we conduct a comprehensive study of fingerprint recognition using both floating-point Deep Neural Networks (DNNs) and Binarized Neural Networks (BNNs) across multiple image representations, ranging from RGB to grayscale to binary. Our experiments reveal that while DNNs excel with richer representations such as RGB and grayscale, BNNs demonstrate superior compatibility with binary fingerprints, effectively leveraging their reduced complexity to achieve competitive or even better recognition accuracy. This finding highlights the importance of aligning model architectures with input representations: full-precision networks benefit from information-rich domains, whereas binarized models coupled with binary images offer both efficiency and improved accuracy for inherently discrete representations. The results provide new insights into representation-aware fingerprint recognition, guiding the design of accurate and resource-efficient biometric systems.
null
['Fingerprint Recognition', 'Binary Images', 'Color Images', 'Deep Learning']
/pdf/b21410099a629d42563f4e9b90612001ed84bb5b.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25417/Authors']
Kkcaz5XlJB
25,416
Kkcaz5XlJB
AgentChangeBench: A Multi-Dimensional Evaluation Framework for Goal-Shift Robustness in Conversational AI
Goal changes are a defining feature of real-world multi-turn interactions, yet current agent benchmarks primarily evaluate static objectives or one-shot tool use. We introduce $\textbf{AgentChangeBench}$, a benchmark explicitly designed to measure how tool-augmented language model agents adapt to mid-dialogue goal shifts across three enterprise domains. Our framework formalizes evaluation through four complementary metrics: Task Success Rate (TSR) for effectiveness, Tool Use Efficiency (TUE) for reliability, Tool Call Redundancy Rate (TCRR) for wasted effort, and Goal-Shift Recovery Time (GSRT) for adaptation latency. AgentChangeBench comprises 590 task sequences and five user personas, each designed to trigger realistic shift points in ongoing workflows. Using this setup, we evaluate a mix of proprietary and open-source models and uncover sharp contrasts obscured by traditional pass@k scores. Our findings demonstrate that high raw accuracy does not imply robustness under dynamic goals, and that explicit measurement of recovery time and redundancy is essential. AgentChangeBench establishes a reproducible testbed for diagnosing and improving agent resilience in realistic enterprise settings.
We present a benchmark that stress-tests agents on explicit goal-shifts in dual-control, multi-turn dialogs. We also add sequence-annotated scenarios spanning multiple service domains, personas and goal-shift based evaluation metrics.
['benchmark', 'multiturn', 'goal-shift', 'robustness', 'agents', 'evaluation', 'llm']
/pdf/3c915ee0d1b420cbcd944d8353796982627e4fc9.pdf
datasets and benchmarks
/attachment/97a53d76e26d9905382f775adfcb870275422de0.zip
['ICLR.cc/2026/Conference/Submission25416/Authors']
w7jkX7FfZ5
25,415
w7jkX7FfZ5
Formal-Lagrangian Policy Optimization for Safe Reinforcement Learning in Code Generation with Differentiable Verification
We propose Formal-Lagrangian Policy Optimization (FLPO), a framework for safe reinforcement learning (RL) in code generation that couples formal safety verification with policy optimization through a Lagrangian multiplier mechanism. The major bottleneck for RL-based code synthesis is enforcing hard safety constraints, such as memory safety or type correctness, without losing the flexibility of generative models. FLPO addresses this by adding a Lagrangian term to the reward function that dynamically penalises constraint violations, with the penalty weight adapted via dual ascent to drive safety violations down. Moreover, we propose a differentiable formal verification layer that relaxes discrete verification results into a continuous gradient signal, so that the policy network can learn directly from formal feedback.
null
['Code Generation']
/pdf/eca150f63d4f8ff01a5c7f0e6a9f4f1e5d598224.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25415/Authors']
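A minimal sketch of the Lagrangian dual-ascent mechanism the FLPO record names: the multiplier grows while the safety constraint is violated and is projected back to zero otherwise. The reward, cost, threshold, and step size are placeholders; the differentiable verification layer is not modeled.

```python
# lam: Lagrange multiplier; lr_dual: dual-ascent step size;
# cost_limit: allowed expected constraint violation.
lam, lr_dual, cost_limit = 0.0, 0.01, 0.05

def lagrangian_reward(reward: float, constraint_cost: float) -> float:
    """Penalized reward the policy maximizes."""
    return reward - lam * (constraint_cost - cost_limit)

def dual_ascent_step(constraint_cost: float) -> None:
    """Raise lam while the constraint is violated; keep lam >= 0."""
    global lam
    lam = max(0.0, lam + lr_dual * (constraint_cost - cost_limit))
```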
5PBKxl7o49
25,414
5PBKxl7o49
Listens like Mel: Boosting Latent Audio Diffusion with Channel Locality
Latent representations critically shape diffusion-based audio generation. We observe that Mel spectrograms exhibit an approximate power-law spectrum that aligns with diffusion’s coarse-to-fine denoising, whereas waveform variational autoencoder (VAE) latents have nearly equal intensity along the channel axis. We introduce channel-span masking, which in expectation behaves like a rectangular window across channels and thus a low-pass filter in the channel-frequency domain, increasing channel locality. The induced locality steepens latent spectral slopes toward a power-law distribution and leads to up to 4× faster convergence of Diffusion Transformer (DiT) training on audio generation tasks, while maintaining reconstruction fidelity and compression. Experimental results show that the model performs comparably to, or better than, competitive baselines under the same conditions. Our code and checkpoint are available at \url{https://anonymous.4open.science/r/lafa-F2A2}.
Channel span masking imposes mel-like spectral bias on high-compression VAE latents by acting as a low-pass window over channels, restoring power-law structure and delivering up to 4× faster Diffusion Transformer convergence.
['audio generation', 'variational auto-encoder', 'representation learning', 'self-supervised learning']
/pdf/e756b36f1a434cbaf885374a85d473e8e271d7df.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25414/Authors']
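A sketch of channel-span masking as the record describes it: zeroing a random contiguous span of channels, which in expectation acts as a rectangular window over the channel axis. The span-sampling details are our assumptions.

```python
import torch

def channel_span_mask(latents: torch.Tensor, max_span: int) -> torch.Tensor:
    """Zero a random contiguous span of channels in a (batch, C, time) latent.

    Assumes max_span <= number of channels C.
    """
    C = latents.shape[1]
    span = int(torch.randint(1, max_span + 1, (1,)))      # span length
    start = int(torch.randint(0, C - span + 1, (1,)))     # span start
    masked = latents.clone()
    masked[:, start:start + span] = 0.0
    return masked
```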
cQvBP4TZHe
25,413
cQvBP4TZHe
When Forces Disagree: A Data-Free Fast Diagnostic from Internal Consistency in Direct-Force Neural Network Potentials
Direct-force neural network potentials (NNIPs) offer superior speed for atomistic simulations, but their reliability is limited by the lack of a fast and data-free uncertainty estimate to monitor the impact of non-conservativity and prediction errors. While ensembles are data-free but slow, and other single-model methods often require training data, we introduce an approach that combines the advantages of both. Our metric is derived from the internal disagreement between a model's directly predicted force and its energy-gradient-derived force, motivated by our finding that a model's internal self-consistency is more critical for algorithmic stability than its external accuracy. We then identify an asymmetric failure mode inherent to the direct-force architecture that this metric can diagnose, and also show a strong monotonic correlation between the disagreement and the true force error across diverse materials and out-of-distribution structures. We propose the link between internal disagreement and practical reliability is a consequence of inter-head influence via the shared graph neural network embedding. We provide direct evidence for this mechanism by showing that fine-tuning the conservative force pathway on adversarial data that maximizes this internal disagreement measurably improves the stability of simulations driven only by the direct force. The metric serves as a versatile and out-of-the-box tool that is competitive with expensive ensembles, offering both an on-the-fly assessment of model reliability and a principled method for generating targeted data to improve the stability of direct-force models.
We introduce a fast physics-informed uncertainty metric for pre-trained direct-force neural network potentials that leverages the model's internal physical inconsistency to achieve the data-free advantage of ensembles at the single-model speed.
['NNIPs', 'Uncertainty', 'Pre-trained', 'Data-free', 'Physics-informed Uncertainty Estimate', 'Algorithmic Stability', 'Internal Consistency', 'Inter-head Influence', 'Multi-headed Architecture']
/pdf/8be6125d13960cf54a6c92db369718c179664af3.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25413/Authors']
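The diagnostic in the record above compares a model's directly predicted forces with those implied by the negative gradient of its energy head. In generic form (the `(energy, forces)` interface is an assumption):

```python
import torch

def force_disagreement(model, positions: torch.Tensor) -> torch.Tensor:
    """Mean per-atom gap between direct and energy-gradient-derived forces."""
    positions = positions.clone().requires_grad_(True)
    energy, direct_forces = model(positions)               # assumed interface
    grad_forces = -torch.autograd.grad(energy.sum(), positions)[0]
    return (direct_forces - grad_forces).norm(dim=-1).mean()
```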
HDSlPuFoEu
25,412
HDSlPuFoEu
Do Large Language Models Respect Contracts? Evaluating and Enforcing Contract-Adherence in Code Generation
Prevailing code generation benchmarks, such as HumanEval+ and MBPP+, primarily evaluate large language models (LLMs) with $\textit{pass@k}$ on functional correctness using well-formed inputs. However, they ignore a crucial aspect of real-world software: adherence to $\textit{contracts}$$\textemdash$the preconditions and validity constraints that dictate how ill-formed inputs must be rejected. This critical oversight means that existing benchmarks fail to measure, and models consequently fail to generate, truly robust and reliable code snippets. We introduce $\textbf{PACT}$, a program assessment and contract-adherence evaluation framework, to bridge this gap. PACT is the first framework designed to systematically evaluate and enhance contract-adherence in LLM-generated code snippets alongside functional correctness. PACT's contributions are threefold: First, it provides a comprehensive test-suite corpus focused on contract violations, extending HumanEval+ and MBPP+. Second, it enables a systematic analysis of code generation under varied prompting conditions. This analysis demonstrates that augmenting prompts with contract-violating test cases significantly enhances a model's ability to respect contracts compared to using the contract description alone. Finally, it introduces novel metrics to rigorously quantify contract adherence in both test generation and code generation. By revealing critical errors that conventional benchmarks overlook, PACT provides the rigorous and interpretable metrics to evaluate the robustness of LLM-generated code snippets in both functionality and contract-adherence.
A contract-aware benchmark and generation framework that pairs LLMs with an SMT solver to create violation focused tests and quantitatively assess whether generated code satisfies explicit contracts.
['Test-Case Generation', 'Contract-Violating Test Cases', 'Contract-Aware Evaluation', 'SMT solver', 'Code Generation']
/pdf/80647a798a6d02678370c296d3ff1b9c358db3a5.pdf
applications to computer vision, audio, language, and other modalities
/attachment/26fea117473e05f7d05fc76571856d7cb83b793d.zip
['ICLR.cc/2026/Conference/Submission25412/Authors']
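A toy example of the contract-adherence idea PACT evaluates: a precondition (contract) requires non-empty input, and a contract-violating test asserts that ill-formed input is rejected rather than silently mishandled. The function and test are ours, not from the benchmark.

```python
import pytest

def mean(xs: list[float]) -> float:
    if not xs:                       # contract: xs must be non-empty
        raise ValueError("mean() requires a non-empty list")
    return sum(xs) / len(xs)

def test_contract_violation():
    with pytest.raises(ValueError):  # well-behaved code rejects bad input
        mean([])
```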
pcaHnwjnsO
25,409
pcaHnwjnsO
Graph Adversarial Refinement for Robust Code Fixes: Enhancing Policy Networks via Structure-Aware Contrastive Learning
We propose \textbf{Graph Adversarial Refinement (GARM)}, a novel module that enhances the robustness of policy networks in adversarial reinforcement learning for code fixes. Modern code repair systems frequently break down when confronted with adversarially perturbed inputs, which exposes structural weaknesses in their internal representations. To address this, GARM combines graph structure learning and adversarial training to dynamically identify and perturb less-critical edges in code graphs while preserving semantically significant adjacencies. The module consists of three key components: a \textbf{Graph Structure Learning (GSL)} sub-module that quantifies edge importance, an \textbf{Adversarial Perturbation Generator (APG)} that introduces controlled perturbations, and an \textbf{Adversarial Contrastive Learning (ACL)} sub-module that enforces robustness by aligning original and perturbed embeddings. The proposed method uses a graph transformer as its encoder and therefore captures long-range dependencies better than conventional graph neural networks. Moreover, the adversarial perturbations are incrementally refined during training, progressively hardening the policy network without disrupting its capacity to generate accurate fixes. Experiments show that GARM increases resilience to adversarial code edits while maintaining high repair accuracy. The modular design facilitates seamless integration into existing reinforcement learning pipelines, making it practical for deployment in real-world scenarios where code integrity is critical. Our work bridges graph representation learning and adversarial reinforcement learning, providing a principled solution for secure and reliable automated code repair.
null
['Structure-Aware Contrastive Learning']
/pdf/74e1ece49e8eb2553a2458820fc063c358a86c26.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25409/Authors']
cUrshXsWYK
25,406
cUrshXsWYK
MARINA-P: Superior Performance in Nonsmooth Federated Optimization with Adaptive Stepsizes
Non-smooth communication-efficient federated optimization remains largely unexplored theoretically, despite its importance in machine learning applications. We consider a setup focusing on optimizing downlink communication by improving state-of-the-art schemes like EF21-P [Gruntkowska et al., 2023] and MARINA-P [Gruntkowska et al., 2024] in the non-smooth convex setting. Our key contributions include extending the non-smooth convex theory of EF21-P from single-node to distributed settings and generalizing MARINA-P to non-smooth convex optimization. For both algorithms, we prove optimal $\mathcal{O}(1/\sqrt{T})$ convergence rates under standard assumptions and establish matching communication complexity bounds with classical subgradient methods. We provide theoretical guarantees under constant, decreasing, and adaptive (Polyak-type) stepsizes. Our experiments demonstrate MARINA-P’s superior performance with correlated compressors in both smooth non-convex and non-smooth convex settings. This work presents the first theoretical analysis of distributed non-smooth optimization with server-to-worker compression, including a comprehensive analysis for various stepsize schemes.
We extend MARINA-P and EF21-P to non-smooth distributed optimization, introduce adaptive stepsizes, and show MARINA-P with permutation compressors outperforms EF21-P in non-smooth settings
['Federated Learning', 'Communication-efficient non-smooth optimization', 'Adaptive Stepsizes']
/pdf/fe0d57f0d21aa1b22be55d3ee6383abccc106cb7.pdf
optimization
/attachment/51668adc2048872493c6c3f4296b75aae17e00fb.zip
['ICLR.cc/2026/Conference/Submission25406/Authors']
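The Polyak-type adaptive stepsize analyzed in the record above is gamma_t = (f(x_t) - f*) / ||g_t||^2 for a subgradient g_t. A single-node sketch on the toy non-smooth objective f(x) = ||x||_1 (so f* = 0); the distributed, compressed setting of the paper is not reproduced.

```python
import numpy as np

f = lambda x: np.abs(x).sum()        # non-smooth convex objective
subgrad = lambda x: np.sign(x)       # a subgradient of the L1 norm

x, f_star = np.random.randn(50), 0.0
for t in range(2000):
    g = subgrad(x)
    gamma = (f(x) - f_star) / (np.dot(g, g) + 1e-12)   # Polyak stepsize
    x -= gamma * g
print(f(x))   # decreases toward f*
```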
NRX1iNUrZ3
25,404
NRX1iNUrZ3
Graph-Energy Reinforcement Learning: Adaptive Reward Design for API Usage Pattern Mining with OOD Detection
We propose Graph-Energy Reinforcement Learning (GERL), a novel framework for mining API usage patterns with robust out-of-distribution (OOD) detection capabilities. The growing complexity of API ecosystems demands adaptive methods to differentiate between in-distribution and anomalous patterns, yet existing approaches often rely on static thresholds or lack structural awareness. GERL addresses this by integrating energy-based OOD scoring with graph diffusion in a reinforcement learning (RL) framework, making it possible to dynamically design rewards that guide exploration in graph-structured API spaces. The core innovation lies in the Graph-Energy Reward Function, which combines node-level energy scores computed by a graph neural network with multi-hop topological dependencies represented by diffusion. This joint formulation lets the RL agent balance exploitation of known patterns against discovery of novel ones, while the policy network, built on Transformer-XL, processes variable-length API sequences with structural context. In addition, a graph-based Markov decision process creates realistic scenarios of API use, with transitions modeled by a graph variational autoencoder that predicts likely subgraph evolutions. Experiments show that, compared with conventional methods, GERL improves both pattern-mining accuracy and OOD detection robustness, particularly for recursive or multi-hop applications of the API.
null
['OOD Detection']
/pdf/39278bb4311c713b48318136b74f3834049dd323.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25404/Authors']
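For reference, the standard energy-based OOD score is the negative logsumexp of a classifier's logits; the GERL record combines a node-level variant of this with graph diffusion. Whether GERL uses exactly this form is an assumption.

```python
import torch

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    """Energy-based OOD score; higher values indicate more OOD-like inputs."""
    return -torch.logsumexp(logits, dim=-1)
```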
0Ow7PTK0Qj
25,400
0Ow7PTK0Qj
FastEdit: Low-Rank Structured Regularization for Efficient Model Editing
When new knowledge emerges, it is crucial to efficiently update large language models (LLMs) to reflect the latest information. However, state-of-the-art methods widely adopted in the model editing community --- such as MEMIT, PRUNE, and AlphaEdit --- suffer from prohibitively slow editing speeds, often taking 6 to 14 hours to sequentially edit just 2000 facts on models like LLaMA-3-8B, making real-time updates impractical, especially as model scale increases. Moreover, they require extensive pre-computation to sample pre-edit knowledge --- a step that can take over 24 hours --- severely limiting their deployability. In this paper, we present \textbf{FastEdit}, a highly efficient editing framework that enables rapid and scalable model updates. Our key insight is to exploit the low-rank structure inherent in editing updates through a structured regularizer, allowing us to avoid costly inversions via the Sherman-Morrison-Woodbury (SMW) identity. This drastically accelerates the computation of update matrices while preserving edit quality. Crucially, \textbf{FastEdit} requires only a small number of pre-edit samples, reducing both memory and computational overhead. On 2000 sequential edits, \textbf{FastEdit} completes the process in just \textbf{1 hour} -- an order of magnitude faster than prior work -- without sacrificing accuracy. Our method significantly lowers the barrier to practical model editing, enabling timely and scalable knowledge updates in large models.
null
['Large Language Models', 'Model Editing', 'Knowledge Updating']
/pdf/a9b25e95a586621fb175980502f410c29b8a691d.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25400/Authors']
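Verifying the Sherman-Morrison-Woodbury identity the FastEdit record relies on: (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}. Only the r x r inner matrix is inverted (r << d), which is what makes sequential low-rank edits cheap. The sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 256, 4
A = np.diag(rng.uniform(1.0, 2.0, d))          # well-conditioned base matrix
U = 0.1 * rng.standard_normal((d, r))
V = 0.1 * rng.standard_normal((d, r))

Ainv = np.diag(1.0 / np.diag(A))
inner = np.linalg.inv(np.eye(r) + V.T @ Ainv @ U)   # only an r x r inverse
smw = Ainv - Ainv @ U @ inner @ V.T @ Ainv

assert np.allclose(smw, np.linalg.inv(A + U @ V.T))
```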
zwfpyw345l
25,398
zwfpyw345l
Hierarchical Code Embeddings with Multi-Level Attention for Reinforcement Learning State Representation
In this paper, we propose a novel state representation for reinforcement learning (RL) that encodes code semantics hierarchically using multiple attention mechanisms. Traditional approaches often treat code embeddings as flat sequences or rely solely on graph-based representations, which fail to capture the complex interplay between local and global code features. The proposed method incorporates token-level, function-level, and module-level attention over graph-structured dependencies, allowing the RL agent to reason about code at varying granularities while maintaining structural relationships.
null
['Multi-Level Attention']
/pdf/293bbf406ac5f2948e1bb7bb48c7a1596b0596c7.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25398/Authors']
6MgD2sXZmg
25,395
6MgD2sXZmg
Deep Cognition: A Multi-Agent Framework for Collaborative Research with Real-Time Cognitive Oversight
Despite advances in large language models, current systems for deep research are limited by an asynchronous, "input-wait-output" interaction paradigm. This model creates a critical disconnect between human intent and AI execution, leading to error propagation and an inability to dynamically course-correct during complex problem-solving. We propose that a more effective form of human-AI partnership requires a shift from passive command-giving to cognitive oversight, where humans actively guide and intervene in the AI's thinking process. This perspective treats interaction as a core component of intelligence, rather than a peripheral interface. We introduce Deep Cognition, a system designed to enable this paradigm through three technical pillars: transparent and interruptible AI reasoning, fine-grained bidirectional dialogue, and a shared cognitive context. At the core of our system is a layered StateManager architecture and a novel multi-stage budget allocation algorithm. This architecture ingests and normalizes all interaction data (e.g., dialogue trajectories and user artifacts) into a perpetually optimized, high-information-density working memory. By dynamically prioritizing context based on a combination of static heuristics and a time-sensitive scoring function, our system mitigates error cascades and allows the AI to adapt its reasoning pathways based on the user's implicit focus. We conduct a comprehensive user study on challenging deep research tasks to evaluate the efficacy of our system. Results show that our approach significantly enhances the user experience, yielding improvements of up to 29.2% in Fine-Grained Interaction and 27.7% in Ease of Collaboration compared to a competitive baseline. Most notably, our system demonstrates a 31.8% to 50.0% points improvement in overall task performance. These results highlight the critical importance of designing interactive AI systems that facilitate continuous human guidance and transparent reasoning, rather than merely responding to isolated commands.
null
['Interactive AI Systems', 'Human-in-the-Loop', 'Multi Agent Framework']
/pdf/76608e16d84874597e4f482fc64058578bc5eaf7.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25395/Authors']
GVIei1IdmC
25,390
GVIei1IdmC
Large Language Models as Nondeterministic Causal Models
Chatzi et al. (2025) recently developed, for the first time, a method for generating counterfactuals of probabilistic Large Language Models. Such counterfactuals tell us what would - or might - have been the output of an LLM if some factual prompt ${\bf x}$ had been ${\bf x}^*$ instead. The ability to generate such counterfactuals is an important necessary step towards explaining, evaluating, and comparing, the behavior of LLMs. We argue, however, that their method rests on an ambiguous interpretation of LLMs: they do not interpret LLMs literally, for the method involves the assumption that one can change the implementation of an LLM's sampling process without changing the LLM itself, nor do they interpret LLMs as intended, for their method involves explicitly representing a _nondeterministic_ LLM as a _deterministic_ causal model. We here present a much simpler method for generating counterfactuals that is based on an LLM's intended interpretation by representing it as a nondeterministic causal model instead. The advantage of our simpler method is that it is directly applicable to any black-box LLM without modification, as it is agnostic to any implementation details. The advantage of Chatzi et al.'s method, on the other hand, is that it directly implements the generation of a specific type of counterfactuals that is useful for certain purposes, but not for others. We clarify how both methods relate by offering a theoretical foundation for reasoning about counterfactuals in LLMs based on their intended semantics, thereby laying the groundwork for novel application-specific methods for generating counterfactuals.
By representing Large Language Models as Nondeterministic Causal Models we show that the generation of counterfactuals becomes extremely simple.
['Large Language Models', 'counterfactuals', 'causal models']
/pdf/ecb2259a8c51ce0330d579f1faaefef0922d4ed6.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25390/Authors']
2wshkCgNYk
25,387
2wshkCgNYk
Performance vs interpretability trade-off of hand-crafted and language model features: The case of protein superfamily classification
The recent rise of protein language models (PLMs) that leverage data and compute has introduced an interesting conflict in computational biology: a trade-off between the high predictive performance of non-interpretable features and the scientific insight that can be gained from interpretable, hand-crafted ones. In this work, we highlight and study this conflict via the task of classifying protein domains into their CATH superfamilies. We train one-vs-all linear SVM classifiers for 45 CATH superfamilies, each characterised by significant class imbalance. We address the class imbalance by using a class-balanced loss function and the arithmetic mean (AM) of specificity and sensitivity for evaluation. Our analysis compares nine feature vector types, which are either non-interpretable embeddings from PLMs or interpretable hand-crafted features. The latter includes amino acid composition (AAC), di- and tri-peptide composition (DPC, TPC), and novel sequence-order (2OAAC, 3OAAC) and structure-based features (OCPC, CSIC). Our results demonstrate that PLM-based features achieve superior test AM scores of 90-99\% with low variability, outperforming hand-crafted features by 20-30\%. While PLM features yield high classification accuracy, their lack of interpretability obscures the underlying biological determinants. Conversely, the interpretability of hand-crafted features, despite their relatively low performance, can be leveraged to infer sequence and structural characteristics of CATH superfamilies. The proposed hand-crafted CSIC feature strikes a balance between predictive performance and interpretability, because it overfits to a lesser extent. This can be valuable for downstream applications like investigating protein-related diseases and guiding rational protein design.
null
['Feature engineering', 'interpretability', 'proteins', 'CATH superfamily', 'hand-crafted features', 'attention matrix', 'protein language models', 'class imbalance']
/pdf/ef4c28715f84304ec46425923b04c58b4b76a767.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25387/Authors']
FAK3lJSRQQ
25,386
FAK3lJSRQQ
ExLLM: Experience-Enhanced LLM Optimization for Molecular Design and Beyond
Molecular design involves an enormous and irregular search space, where traditional optimizers such as Bayesian optimization, genetic algorithms, and generative models struggle to leverage expert knowledge or handle complex feedback. Recently, LLMs have been used as optimizers, achieving promising results on benchmarks such as PMO. However, existing approaches rely only on prompting or extra training, without mechanisms to handle complex feedback or maintain scalable memory. In particular, the common practice of appending or summarizing experiences at every query leads to redundancy, degraded exploration, and ultimately poor final outcomes under large-scale iterative search. We introduce ExLLM, an LLM-as-optimizer framework with three components: (1) a compact, evolving experience snippet tailored to large discrete spaces that distills non-redundant cues and improves convergence at low cost; (2) a simple yet effective k-offspring scheme that widens exploration per call and reduces orchestration cost; and (3) a lightweight feedback adapter that normalizes objectives for selection while formatting constraints and expert hints for iteration. ExLLM sets new state-of-the-art results on PMO and generalizes strongly—in our setup, it sets records on circle packing and stellarator design, and yields consistent gains across additional domains—requiring only a task-description template and evaluation functions to transfer.
ExLLM is an LLM-as-Optimizer with experience, offspring, and feedback mechanisms, achieving SOTA in molecular design and generalizing to diverse discrete optimization tasks with minimal problem templates.
['Large Language Models', 'Molecular Design', 'Evolutionary Algorithms', 'Discrete Optimization']
/pdf/565d3e43e701210d23422f488938de88d1fae4e2.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25386/Authors']
JJeZWINFmz
25,385
JJeZWINFmz
SAGE Can Quantify Why Two Models Behave Differently
Vision-based activity recognition tasks are sensitive to environmental context and lighting, making generalization across domains difficult. Models trained in controlled settings can report high accuracy, but often fail under domain shift, where it remains unclear whether predictions depend on causal foreground cues, spurious background signals, or shortcut learning tied to context rather than behavior. Saliency methods offer a view of model focus, but have largely been confined to qualitative visualization. We hypothesize that behavioral divergence between models is proportional to divergence in their saliency embeddings. To examine this, we introduce Saliency Attribution for Goal-grounded Evaluation (SAGE), a modular framework that unifies heterogeneous datasets through category mapping and balancing, generates controlled foreground and background variants, computes saliency maps, and encodes them into tokenized representations suitable for embedding and comparison. By disentangling foreground and background saliency, the framework provides a diagnostic signal of how models attend to causal versus spurious regions, complementing accuracy as a measure of generalization. We demonstrate feasibility on vision-based driver distraction detection, an activity recognition task where distraction is inferred from driver activities rather than objects, by creating a unified 10-class variant of the StateFarm and 100-Driver datasets that highlights the challenges of category mapping and background control. While full embedding-based evaluations are ongoing, the framework separates foreground and background saliency, discretizes them into tokens, and encodes them in a manner aligned with tokenized vision architectures such as ViTs and VLMs. This design makes the framework scalable across vision-based classification tasks where foreground-background disentanglement is critical, and presents it as a diagnostic tool for analyzing behavioral divergence and robustness under domain shift.
null
['Explainable AI', 'Vision-based Driver Distraction Detection (vDDD)', 'SAGE', 'Saliency Embeddings', 'Behavioral Divergence', 'Domain Shift', 'Generalization', 'Shortcut Learning', 'Vision--Language Models (VLMs)']
/pdf/ead736e219b2243d5f786eca923eafa27860fd53.pdf
interpretability and explainable AI
/attachment/039311be5f790d4e79b9c2d476321292ba1bf422.zip
['ICLR.cc/2026/Conference/Submission25385/Authors']
BdlIQGetYv
25,382
BdlIQGetYv
Octopus: An Auto-Generated Multidimensional Fine-Grained Benchmark for Evaluating Text-to-SQL Systems
Text-to-SQL converts natural language queries into structured SQL, enabling users to interact with databases without any SQL knowledge. The advent of LLM technologies has significantly accelerated text-to-SQL development. It is important to construct an appropriate benchmark to evaluate the performance of text-to-SQL models. However, existing text-to-SQL benchmarks are mainly produced by human annotation and suffer from limitations of low SQL complexity, a single questioning mode, and low scalability. To address these limitations, we present a new multidimensional text-to-SQL benchmark, called OCTOPUS, which contains comprehensive evaluation metrics and fully auto-generated datasets. OCTOPUS has 9 first-level metrics and 18 second-level metrics from four dimensions to evaluate the performance of text-to-SQL systems, including accuracy, robustness, interactivity, and generalization. To help the benchmark construction, we also propose a series of fully automatic text-to-SQL data generation methods, which reduce human involvement, improve efficiency, and support higher scalability. OCTOPUS consists of 10,885 complex question-SQL pairs and 10,874 multi-turn dialogues over 74 public databases. We evaluate state-of-the-art text-to-SQL models on OCTOPUS and find that their performance is unsatisfactory on all testing metrics, leaving them far from practical application. OCTOPUS can be used to enhance the accuracy and utility of text-to-SQL models.
null
['Text-to-SQL', 'Benchmark', 'Large Language Model']
/pdf/083725aed7cc56b521d1b95d51c69446083981ac.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25382/Authors']
xRxh48OAAM
25,381
xRxh48OAAM
Eliminating the first moment state in Adam optimizer
The Adam optimizer and its variants are widely used in large-scale machine learning, but their memory footprint is high because they maintain two state variables per parameter. In Adam, the exponential moving average (EMA) of gradients (m) serves as a first-moment estimator, but it also carries variance information that can be exploited to estimate the second moment. Furthermore, the gradient buffer can be repurposed to handle both gradient accumulation and a proxy for the first moment, effectively folding m into the gradient buffer itself. These modifications reduce the number of optimizer state variables from two to one, yielding Half-Memory Adam (HMAdam) and its decoupled-weight-decay variant (HMAdamW). Both variants retain the Adam update rule and hyperparameters. Experiments across discriminative and generative tasks, including CNNs, transformers, and diffusion models, show that HMAdamW matches the performance of standard AdamW in convergence speed, final accuracy, and runtime, while substantially lowering memory usage. Moreover, this version of Adam retains Adam's convergence properties. This makes it a practical choice for memory-constrained training scenarios such as large-scale language modeling.
We present a novel variant of Adam optimizer that uses one state variable, instead of two
['Half-memory Adam', 'efficient Adam', 'Memory efficient optimizer']
/pdf/3c066a66ffe593be4edc42cd97e09428bd5f1246.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25381/Authors']
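The HMAdam record above keeps a single state variable per parameter. The sketch below illustrates only that memory layout: one EMA buffer `m` plus a per-tensor scalar second-moment proxy. The scalar proxy is our assumption; the paper derives its second-moment estimate from `m` itself, and its exact rule is not given in the abstract.

```python
import torch

class OneStateAdamSketch(torch.optim.Optimizer):
    """Illustration of the single-state idea (NOT the paper's exact rule).
    Standard Adam stores two tensors (m, v) per parameter; here only m is
    kept, and the per-element second moment is replaced by a per-tensor
    scalar EMA, which costs essentially no memory."""

    def __init__(self, params, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        super().__init__(params, dict(lr=lr, beta1=beta1, beta2=beta2, eps=eps))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            b1, b2 = group["beta1"], group["beta2"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                s = self.state[p]
                if not s:
                    s["m"] = torch.zeros_like(p)  # the only tensor state
                    s["v"], s["t"] = 0.0, 0       # scalars, near-free memory
                s["t"] += 1
                s["m"].mul_(b1).add_(p.grad, alpha=1 - b1)
                s["v"] = b2 * s["v"] + (1 - b2) * p.grad.pow(2).mean().item()
                m_hat = s["m"] / (1 - b1 ** s["t"])
                v_hat = s["v"] / (1 - b2 ** s["t"])
                p.add_(m_hat, alpha=-group["lr"] / (v_hat ** 0.5 + group["eps"]))
```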
29Mote2SrR
25,378
29Mote2SrR
Hierarchical Feedback Interface for Human-in-the-Loop Reinforcement Learning in Debugging
We propose the Hierarchical Feedback Interface (HFI) for human-in-the-loop reinforcement learning in debugging, which structures human feedback into high-level objectives and low-level refinements to address the subjectivity and inefficiency of ad-hoc corrections. HFI employs a two-tiered policy architecture in which a high-level policy abstracts debugging goals into interpretable meta-objectives, and a low-level policy translates these into actionable feedback, grounding human input in goal-aligned reasoning. The framework integrates a hierarchical actor-critic mechanism: the high-level policy generates goal vectors over reduced state representations, while the low-level policy conditions on both code-specific features and these goals to generate context-aware feedback.
null
['Reinforcement Learning in Debugging']
/pdf/8a3e9c4f1d1111df09bb5b27b93e15cd35858148.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25378/Authors']
lvtiRJ2nwU
25,375
lvtiRJ2nwU
Semantic Proximity for Redundancy-Aware Context Compression in Large Language Models
LLMs are increasingly bottlenecked by fixed context windows, motivating principled compression of conversational histories. We study semantic-redundancy–aware compression, in which we pair human–assistant turns, embed them, and summarize those that are most semantically overlapping. We introduce STAE (Semantic-Temporal Aware Eviction), a centroid–temporal hybrid policy that scores each pair by a convex combination of semantic distance to a conversation centroid and recency (weighted by $\beta$), alongside an inverted variant and a cluster-aware compressor that summarizes whole embedding-space clusters. Crucially, redundancy is detected from embeddings using lightweight centroid/cluster arithmetic without extra LLM calls, reducing token usage and inference cost. To evaluate retrieval under compression, we augment LongMemEval with a 20-needle-per-dialogue benchmark, addressing the brittleness of single-needle tests and enabling finer-grained measurement of information retention. On this benchmark, summarizing pairs closest to the centroid outperforms FIFO across compression regimes, while compression of those furthest from the centroid degrades at stricter budgets; moreover, local STAE within temporal or semantic groups closely matches a strong temporal upper bound and consistently surpasses global eviction at the same ECR, with inverted (evict-lowest) preserving more needles. We also show that clustered summarization of semantically or temporally similar message pairs provides a strong chunking strategy for compression. The takeaway is simple and actionable: compress where redundancy is highest, measured explicitly via semantic similarity in embedding space, while freeing tokens with minimal loss.
Compress LLM context by summarizing semantically redundant turns, via embedding similarity (or blended with recency) and extended to cluster-level summaries, avoiding extra LLM calls and outperforming FIFO on an augmented LongMemEval benchmark.
['Large language models', 'context compression', 'semantic proximity']
/pdf/7a633668675afc936bcbb39004813bccca2dfca4.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25375/Authors']
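A compact rendering of the STAE score from the record above: a convex combination (weight `beta`) of semantic distance to the conversation centroid and recency. The min-max normalization and the sort direction shown are our choices; the abstract mentions both a base and an inverted (evict-lowest) variant.

```python
import numpy as np

def stae_scores(pair_emb, beta=0.5):
    """Centroid-temporal eviction scores in the spirit of STAE.
    Each row of `pair_emb` embeds one human-assistant turn pair."""
    centroid = pair_emb.mean(axis=0)
    dist = np.linalg.norm(pair_emb - centroid, axis=1)
    dist = (dist - dist.min()) / (dist.max() - dist.min() + 1e-12)
    n = pair_emb.shape[0]
    recency = np.arange(n) / max(n - 1, 1)   # 0 = oldest, 1 = newest
    return beta * dist + (1 - beta) * recency

# Low score = old and close to the centroid (most redundant); the base and
# inverted variants differ in which end of this ordering gets summarized.
emb = np.random.randn(20, 384)
summarize_order = np.argsort(stae_scores(emb, beta=0.7))
```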
w0rVXs6QJM
25,372
w0rVXs6QJM
EucliFold: Probing 3D Euclidean Prior in VLMs via Cognitively-Stratified Folding Tasks
Humans leverage robust 3D spatial priors to align perception with the physical world, enabling flexible and intelligent behavior. While Vision-Language Models (VLMs) exhibit impressive zero-shot performance, it remains unclear whether they possess genuine spatial reasoning capabilities, as standard evaluations are confounded by dataset bias and spurious correlations. To address this, we introduce **EucliFold**, a synthetic visual question-answering benchmark focused on cube net folding in Euclidean space—a domain that enables precise analysis while requiring genuine spatial understanding. We propose a **cognitively-stratified evaluation framework** that decomposes spatial reasoning into three hierarchical levels: **Perception** (grounding sensory input to spatial representations), **Operation** (manipulating representations according to instructions), and **Imagination** (autonomous spatial problem-solving under geometric constraints). This decomposition isolates genuine spatial reasoning from superficial pattern matching. To mitigate evaluation biases, we employ **Winograd-style accuracy** using minimal-pair contrastive samples. Our evaluation reveals that state-of-the-art VLMs demonstrate reasonable perceptual capabilities but fail significantly at operational and imagination-level spatial reasoning, suggesting reliance on statistical patterns rather than genuine geometric understanding. Ablation studies confirm the effectiveness of our cognitively-stratified decomposition and bias-resistant evaluation methodology. EucliFold provides a rigorous testbed for probing emergent spatial priors in future models and demonstrates how systematic cognitive decomposition can reveal nuanced capability gaps in VLMs.
null
['vision language model', 'synthetic dataset']
/pdf/7b834136df5c9f61b6e5976859831dd3fcf904e9.pdf
datasets and benchmarks
/attachment/353df013a9588e5616023b26dcc22016dddd0a9c.zip
['ICLR.cc/2026/Conference/Submission25372/Authors']
Mq6bGrtktf
25,371
Mq6bGrtktf
Aligning Large Language Model Behavior with Human Citation Preferences
Most services built on powerful large-scale language models (LLMs) add citations to their output to enhance credibility. Recent research has paid increasing attention to the question of what reference documents to link to outputs. However, how LLMs recognize cite-worthiness and how this process should be controlled remains insufficiently explored. In this study, we focus on what kinds of content LLMs currently tend to cite and how well that behavior aligns with human preferences. We construct a dataset to characterize the relationship between human citation preferences and LLM behavior. Web-derived texts are categorized into eight citation-motivation types, and pairwise citation preferences are exhaustively evaluated across all type combinations to capture fine-grained contrasts. Our results show that humans most frequently seek citations for medical text, and stronger models display a similar tendency. We also find that current models are as much as 27% more likely than humans to add citations to text that is explicitly marked as needing citations on sources such as Wikipedia, and this overemphasis reduces alignment accuracy. Conversely, models systematically underselect numeric sentences (by -22.6% relative to humans) and sentences containing personal names (by -20.1%), categories for which humans typically demand citations. Furthermore, experiments with fine-tuning and Direct Preference Optimization (DPO) demonstrate that model behavior can be calibrated to better match human citation preferences. We expect this study to provide a foundation for more fine-grained investigations into LLM citation preferences. Our dataset and code will be released upon publication.
Across 8 content types, LLMs over-cite “Citation needed” (up to +27%) and under-cite numeric (−22.6%) and person-name (−20.1%) sentences vs humans; DPO improves alignment by ~5.76%. Data/code will be released upon publication.
['LLM', 'Citation', 'Credibility']
/pdf/2be5ae7132670186460ac752ed27c2cc35981c18.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25371/Authors']
0wSlFpMsGb
25,369
0wSlFpMsGb
Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training
Large Language Models (LLMs) are pre-trained on large volumes of data from different sources and domains. These data most often contain trillions of tokens with large portions of copyrighted or proprietary content, which hinders the usage of such models under AI legislation. This raises the need for truly open pre-training data that is compliant with data security regulations. In this paper, we introduce Common Corpus, the largest open dataset for LLM pre-training. The data assembled in Common Corpus are either uncopyrighted or under permissive licenses and amount to about two trillion tokens. The dataset contains a wide variety of languages, ranging from high-resource European languages to some low-resource languages rarely represented in pre-training datasets. In addition, it includes a large portion of code data. The diversity of data sources in terms of covered domains and time periods opens up paths for both research and entrepreneurial needs in diverse areas of knowledge. We present the detailed provenance of the data assembly and the details of dataset filtering and curation. We train two small language models on Common Corpus and find that the resulting models perform comparably to other models of their size, indicating that our dataset is suitable for multilingual pretraining. Common Corpus represents a key contribution to the ecosystem for open science research on large language models.
We assemble and release the largest truly open multilingual dataset for LLM pre-training consisting of 2 trillion tokens
['dataset', 'pre-training', 'large language models', 'open data', 'open science', 'multilingual']
/pdf/e141458035fcff8c02d4916469b622af70d94021.pdf
datasets and benchmarks
/attachment/045fb9a31e057a27cf6dafc3e64ccda88fe88900.pdf
['ICLR.cc/2026/Conference/Submission25369/Authors']
uxi7YoZ13b
25,368
uxi7YoZ13b
Adversarial Robust Reward Shaping for Safe Reinforcement Learning in AI-Generated Code
We propose \textbf{Adversarial Robust Reward Shaping (ARRS)}, a novel reinforcement learning framework for generating secure code that explicitly addresses vulnerabilities to adversarial evasion attacks. Conventional reward functions in code generation tasks often fail to account for how vulnerable detection mechanisms are to subtle syntactic perturbations, which leads to brittle security guarantees. The proposed method integrates an \textbf{Adversarial Robustness Module (ARM)} into the reward computation pipeline, which systematically identifies worst-case failure scenarios through gradient-based perturbation analysis and penalizes the policy for generating exploitable code patterns. ARM generates semantics-preserving adversarial examples that maximally degrade the code evaluation system and then, via a robustness penalty added to the reward signal, teaches the RL agent to build solutions that are intrinsically secure.
null
['Adversarial Robust Reward']
/pdf/e3a6fbfe593154484f778dadb4de89cd18289b9c.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25368/Authors']
zOWljZMbCm
25,365
zOWljZMbCm
Unlocking the Potential of Weighting Methods in Federated Learning Through Communication Compression
Modern machine learning problems are frequently formulated in the federated learning domain and incorporate inherently heterogeneous data. Weighting methods operate efficiently in terms of iteration complexity and represent a common direction in this setting. At the same time, they do not directly address the main obstacle in federated and distributed learning -- the communication bottleneck. We tackle this issue by incorporating compression into the weighting scheme. We establish convergence under a convexity assumption, considering both exact and stochastic oracles. Finally, we evaluate the practical performance of the proposed method on real-world problems.
null
['Convex optimization', 'Compression', 'Stochastic optimization']
/pdf/b115f08c7af8144ceefe4b9c36739de2a333012b.pdf
optimization
/attachment/ac1c6cb25f064144f3042112b000a9c70f9b27c3.pdf
['ICLR.cc/2026/Conference/Submission25365/Authors']
ULqzEEkyxk
25,363
ULqzEEkyxk
LLMs Leak Training Data Beyond Verbatim Memorization via Membership Decoding
Extracting training data from large language models (LLMs) exposes serious memorization issues and privacy risks. Existing attacks extract data through generation, followed by membership inference. However, extraction attacks do not guide such generations, and the extraction scope of member data is limited to the greedy decoding scheme. Only verbatim-memorized member data is audited in this process, while the majority of member data remains unexplored, even if it is partially memorized. In this work, we define a new notion of memorization, $k$-amendment-completable, to measure the degree of partial memorization. Greedy decoding can only extract $0$-amendment-completable sequences, which are verbatim memorized. To address the limitation in generation, we propose a membership decoding scheme, which introduces membership information to guide the generation process. We formulate the training data extraction problem as an iterative member token inference problem. The token distribution is calibrated with membership information at each generation step to explore member data. Extensive experiments show that membership decoding can extract novel member data that has not been studied before. The proposed attack demonstrates that the privacy risk in LLMs is underestimated.
null
['Membership Inference Attacks', 'Privacy', 'LLMs', 'Data Extraction Attacks']
/pdf/e14ceb430f82c81d1d021fc97c331ca3d9d12bcb.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25363/Authors']
weWUOuLTdj
25,359
weWUOuLTdj
Generative Model via Quantile Assignment
Deep Generative models (DGMs) play two central roles in modern machine learning: (i) producing new information (e.g., image synthesis, data augmentation, and creative content generation) and (ii) reducing dimensionality (by deriving low-dimensional latent representations). Yet the versatility of DGMs comes with training difficulty. Both information generation and dimension reduction using DGMs require learning the distribution. While deep neural networks (DNNs) are a natural choice for parameterizing generators, there is no universally reliable method for learning compact latent representations. As a compromise, current approaches rely on introducing an additional DNN: (i) variational autoencoders (VAEs), which map data into latent variables through an encoder, and (ii) generative adversarial networks (GANs), which employ a discriminator in an adversarial framework. Learning two DNNs simultaneously, however, introduces conceptual and practical difficulties. Conceptually, there is no guarantee that such an encoder/discriminator exists, especially in the form of a DNN. In practice, training encoders/discriminators on high-dimensional inputs can be more data-hungry and unstable than training a generator on low-dimensional latents (generators usually take low-dimensional latent data as input). Moreover, training multiple DNNs jointly is unstable, particularly in GANs, leading to convergence issues such as mode collapse. Here, we introduce NeuroSQL, a DGM that learns low-dimensional latent representations without an encoder. Specifically, NeuroSQL learns the latent variables implicitly by solving a linear assignment problem, then passes the latent information to a unique generator. To demonstrate NeuroSQL's efficacy, we benchmark its performance against GANs, VAEs, and a budget-matched diffusion baseline on three independent datasets: faces from the Large-Scale CelebFaces Attributes Dataset (CelebA), animal faces from Animal Faces HQ (AFHQ), and brain images from the Open Access Series of Imaging Studies (OASIS). Compared to VAEs, GANs, and diffusion models within our experimental setup, (1) in terms of image quality, NeuroSQL achieves overall lower mean pixel distance between synthetic and true images and stronger perceptual/structural fidelity, under the same computational setting; (2) computationally, NeuroSQL requires the least training time; and (3) practically, NeuroSQL provides an effective solution for generating synthetic data when training data are limited (e.g., neuroimaging data with a higher-dimensional feature space than the sample size). Taken together, by embracing quantile assignment instead of an encoder, NeuroSQL offers a fast, stable, and robust way to generate synthetic data with minimal information loss.
null
['generative models', 'quantile assignment', 'optimal transportation', 'latent representation learning', 'synthetic data generation']
/pdf/03dc9c505e450ba4983984b1c65cf40beda8f828.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25359/Authors']
goBph2pXDS
25,358
goBph2pXDS
Image Hashing via Cross-View Code Alignment in the Age of Foundation Models
Efficient large-scale retrieval requires representations that are both compact and discriminative. Foundation models provide powerful visual and multimodal embeddings, but nearest neighbor search in these high-dimensional spaces is computationally expensive. Hashing offers an efficient alternative by enabling fast Hamming distance search with binary codes, yet existing approaches often rely on complex pipelines, multi-term objectives, designs specialized for a single learning paradigm, and long training times. We introduce CroVCA (Cross-View Code Alignment), a simple and unified principle for learning binary codes that remain consistent across semantically aligned views. A single binary cross-entropy loss enforces alignment, while coding-rate maximization serves as an anti-collapse regularizer to promote balanced and diverse codes. To implement this, we design HashCoder, a lightweight MLP hashing network with a final batch normalization layer to enforce balanced codes. HashCoder can be used as a probing head on frozen embeddings or to adapt encoders efficiently via LoRA fine-tuning. Across benchmarks, CroVCA achieves state-of-the-art results in just 5 training epochs. At 16 bits, it performs particularly well—for instance, unsupervised hashing on COCO completes in under 2 minutes and supervised hashing on ImageNet100 in about 3 minutes—on a single GPU. These results highlight CroVCA's efficiency, adaptability, and broad applicability.
We propose cross-view code alignment, a simple and universal principle for hashing foundation model embeddings using binary cross-entropy and coding-rate maximization, unifying unsupervised and supervised hashing.
['Image Hashing', 'Image Retrieval', 'Cross-View Alignment', 'Coding-Rate Maximization', 'Foundation Models']
/pdf/3c324e37a1742959a96014f8bca45e0b9ecad963.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25358/Authors']
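The CroVCA record above specifies a single binary cross-entropy alignment loss plus coding-rate maximization as an anti-collapse regularizer. Below is a sketch of one way to combine them; the stop-gradient targets, the tanh relaxation, and the values of `lam` and `eps_sq` are our assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def crovca_loss(z1, z2, lam=0.1, eps_sq=0.25):
    """Sketch of a cross-view code-alignment objective. z1, z2 are
    pre-binarization logits of two semantically aligned views, (B, bits)."""
    # alignment: each view's soft codes should predict the other's hard codes
    t1, t2 = (z1 > 0).float(), (z2 > 0).float()
    align = 0.5 * (F.binary_cross_entropy_with_logits(z1, t2.detach())
                   + F.binary_cross_entropy_with_logits(z2, t1.detach()))
    # anti-collapse: maximize the coding rate of the relaxed codes
    b = torch.tanh(torch.cat([z1, z2], dim=0))   # soft codes in (-1, 1)
    n, d = b.shape
    cov = b.T @ b / n
    rate = 0.5 * torch.logdet(torch.eye(d, device=b.device) + (d / eps_sq) * cov)
    return align - lam * rate                    # minimize BCE, maximize rate
```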
gILGafxq8R
25,354
gILGafxq8R
Joint Learning Between Reference Image and Text Prompt for Fashion Image Editing
Fashion image editing is an essential tool for designers to visualize design concepts, aiming to modify the garment in an input fashion image while ensuring that other areas of the image remain unaffected. Existing methods primarily focus on image-based virtual try-on or text-driven fashion image editing, often relying on multiple sources of auxiliary information, including segmentation masks or dense poses. However, they struggle with error accumulation or high computational costs when performing try-on and editing simultaneously. In this work, we introduce a joint learning fashion image editing framework based on text prompts and reference images, named D$^2$-Edit. It aims at flexible, fine-grained editing, including garment migration and attribute adjustments such as sleeve length, texture, color, and material, via textual descriptions. Our proposed D$^2$-Edit consists of four key components: (i) \textbf{image degradation module}, which introduces controlled noise to facilitate the learning of the target garment concept and preserves the contextual relationships between the target concept and other elements; (ii) \textbf{image reconstruction module}, responsible for reconstructing both the fashion image and the reference image; (iii) \textbf{garment concept learning module}, which encourages each text token (e.g., \textit{skirt}) to attend solely to the image regions corresponding to the target concept via a cross-attention loss; and (iv) \textbf{concept editing direction identification module}, designed to enable flexible attribute adjustments like fabric, color, and sleeve length. Extensive comparisons, ablations, and analyses demonstrate the effectiveness of our method across various test cases, highlighting its superiority over existing alternatives.
null
['Fashion Image Editing', 'Diffusion model', 'Text-Guided Image Editing']
/pdf/78bbeaf23b7657f87db354bc52f5851790339303.pdf
applications to computer vision, audio, language, and other modalities
/attachment/cc3c05c8e5d7c1caaa21632eb733c0fb8b37e738.zip
['ICLR.cc/2026/Conference/Submission25354/Authors']
MniooZbsKw
25,353
MniooZbsKw
Spectral Multiple-Instance Learning for Efficient Gigapixel Image Analysis
With ongoing advances in imaging technology, gigapixel images are now widely utilized in both scientific research and industrial applications. However, their extremely large scale presents significant challenges for conventional deep learning workflows. A common approach involves partitioning the image into thousands of smaller patches, processing each patch independently, and aggregating the representations using a Multiple-Instance Learning (MIL) framework. Because the label of a gigapixel image often depends on a small subset of informative regions, identifying these key patches is essential. However, MIL faces a persistent multi-resolution dilemma: low-magnification views offer global contextual information but fail to capture fine-grained details, whereas high-magnification views retain these details at a substantial computational cost. We introduce Multi-Instance Learning with Spectral Methods (SpecMIL), which addresses this challenge by capturing high-frequency features at low magnification and preserving geometric relationships across scales using graph spectral theory. SpecMIL exploits spectral features that remain informative even after down-sampling, guiding selective high-resolution "zoom-in" only where necessary. Experiments on various whole slide image benchmarks (e.g., tumor subtyping, grading, and metastasis detection) demonstrate that spectral approaches offer a highly effective and efficient solution for gigapixel image analysis.
null
['Multiple-Instance Learning', 'Spectral Methods', 'Whole Slide Images']
/pdf/1f0a03814f63fe183c4842fbb1038413d3044570.pdf
learning on graphs and other geometries & topologies
/attachment/742d7b35bcb971323ada832890552da3b07a55fc.zip
['ICLR.cc/2026/Conference/Submission25353/Authors']
K4ngUOra9m
25,348
K4ngUOra9m
Masked Skill Token Training for Hierarchical Off-Dynamics Transfer
Generalizing policies across environments with altered dynamics remains a key challenge in reinforcement learning, particularly in offline settings where direct interaction or fine-tuning is impractical. We introduce Masked Skill Token Training (MSTT), a fully offline hierarchical RL framework that enables policy transfer using observation-only demonstrations. MSTT constructs a discrete skill space via unsupervised trajectory tokenization and trains a skill-conditioned value function using masked Bellman updates, which simulate dynamics shifts by selectively disabling skills. A diffusion-based trajectory generator, paired with feasibility-based filtering, enables the agent to execute valid, temporally extended actions without requiring action labels or access to the target environment. Our results in both discrete and continuous domains demonstrate the potential of mask-guided planning for robust generalization under dynamics shifts. To our knowledge, MSTT is the first work to explore masking as a mechanism for simulating and generalizing across off-dynamics environments. It marks a promising step toward scalable, structure-aware transfer and opens avenues for multi-goal conditioning and extensions to more complex, real-world scenarios.
null
['Transfer Learning', 'Skills', 'Hierarchical RL', 'Embodied AI']
/pdf/e9f5c6214a2e0cfdadef9431dd4cc79a24ed9296.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25348/Authors']
n3u7PK2kyd
25,347
n3u7PK2kyd
From Divergence to Normalized Similarity:A Symmetric and Scalable Topological Toolkit for Representation Analysis
Representation Topology Divergence (RTD) offers a powerful lens for analyzing topological differences in neural network representations. However, its asymmetry and lack of a normalized scale limit its interpretability and direct comparability across different models. Our work addresses these limitations on two fronts. First, we complete the theoretical framework of RTD by introducing Symmetric Representation Topology Divergence (SRTD) and its lightweight variant, SRTD-lite. We prove their mathematical properties, demonstrating that they provide a more efficient, comprehensive, and interpretable divergence measure which matches the top performance of existing RTD-based methods in optimization tasks. Second, to overcome the inherent scaling issues of divergence measures, we propose Normalized Topological Similarity (NTS), a novel, normalized similarity score robust to representation scale and size. NTS captures the hierarchical clustering structure of representations by comparing their topological merge orders. We demonstrate that NTS can reliably identify inter-layer similarities and, when analyzing representations of Large Language Models (LLMs), provides a more discriminative score than Centered Kernel Alignment (CKA), offering a clearer view of inter-model relationships.
We introduce a topological toolkit to advance representation analysis. SRTD unifies RTD's theoretical framework, while our novel, scale-invariant similarity score, NTS, provides a practical tool for robust, normalized comparisons
['Representation Learning', 'Topological Data Analysis (TDA)', 'Representation Similarity', 'Persistent Homology', 'Neural Network Analysis', 'Large Language Models (LLMs)']
/pdf/2a5d4eb5f7c5dd657e26ff1e588a05d6de695a0f.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission25347/Authors']
VjGU55hEwV
25,346
VjGU55hEwV
RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models
Large Language Models (LLMs) can now propose rules in natural language, overcoming the constraints of a predefined predicate space inherent in traditional rule learning. However, existing methods using LLMs often overlook the combined effects of rules, and the potential of coupling LLMs with probabilistic rule learning to ensure robust inference is not fully explored. To address this gap, we introduce **RLIE**, a unified framework that integrates LLMs with probabilistic modeling to learn a set of probabilistic rules. The RLIE framework comprises four stages: (1) **R**ule generation, where an LLM proposes and filters candidate rules; (2) **L**ogistic regression, which learns the probabilistic weights of the rules for global selection and calibration; (3) **I**terative refinement, which continuously optimizes the rule set based on prediction errors; and (4) **E**valuation, which compares the performance of the weighted rule set as a direct classifier against various methods of injecting the rules into an LLM. Generated rules are then evaluated with different inference strategies on multiple real-world datasets. While applying the rules directly with their learned weights yields superior performance, prompting LLMs with the rules, weights, and classification results from the logistic model surprisingly degrades performance. This result aligns with the observation that LLMs excel at semantic generation and interpretation but are less reliable at fine-grained, controlled probabilistic integration. Our work investigates the potential and limitations of using LLMs for inductive reasoning tasks, proposing a unified framework that integrates LLMs with classic probabilistic rule combination methods and paving the way for more reliable neuro-symbolic reasoning systems.
null
['Rule Learning', 'Neuro-Symbolic', 'LLM']
/pdf/6d5bc1ea7d11b77ca666b7f36d65c53cfbae6733.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25346/Authors']
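Stage (2) of the RLIE pipeline above, learning probabilistic weights over LLM-proposed rules, can be pictured with an off-the-shelf logistic regression. This stand-in uses random toy data and is not the authors' code; the rule-firing matrix `R` and the pruning heuristic are our illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# R[i, j] = 1 iff LLM-proposed rule j fires on example i; y[i] = label.
rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(200, 12))      # toy rule-firing matrix
y = rng.integers(0, 2, size=200)            # toy labels

# Global selection and calibration of the rule set via learned weights.
clf = LogisticRegression(max_iter=1000).fit(R, y)
weights = clf.coef_[0]

# Stage (3), iterative refinement, could drop the weakest rules and ask
# the LLM for replacements, e.g.:
to_replace = np.argsort(np.abs(weights))[:3]
```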
sWs0cCuM8I
25,344
sWs0cCuM8I
Spilling the Beans: Teaching LLMs to Self-Report Their Hidden Objectives
As AI systems become more capable of complex agentic tasks, they also become more capable of pursuing undesirable objectives and causing harm. Previous work has attempted to catch these unsafe instances by interrogating LLMs directly about their objectives and behaviors. However, the main weakness of trusting interrogations is that models can lie. We propose self-report fine-tuning (SRFT), a simple supervised fine-tuning technique that makes a model truthfully disclose its hidden objectives when questioned during interrogation. We train models to self-report factual mistakes in question-answering, and show that trained models are more likely to admit hidden objectives, even in the presence of adversarial pressure to conceal them. We evaluate SRFT in out-of-distribution (OOD) stealth tasks, where models are instructed to complete a hidden misaligned objective alongside a user-specified objective without being caught by monitoring. Interrogation of SRFT models can detect hidden objectives with near-ceiling performance (F1 score = 0.98), while the baseline model lies when interrogated under the same conditions (F1 score = 0.03). Interrogation of SRFT models can further elicit the content of the hidden objective, recovering 28-100\% of the details, compared to 0\% recovered in the baseline model and by the prefilled assistant turn attack. This provides a promising technique for incriminating misaligned AI systems.
We propose an SFT method that trains models to admit simple factual errors, which generalizes to admitting hidden objectives in sabotage tasks under adversarial pressure to conceal them, improving techniques for incriminating misaligned AI systems.
['honesty', 'interrogation', 'alignment auditing']
/pdf/4011ef9f2982f3e1483f17b89e4d05b031367f0a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/c6e83f49718b4a3b67c581a1606f86d687b90837.zip
['ICLR.cc/2026/Conference/Submission25344/Authors']
1jXc6SHcUV
25,339
1jXc6SHcUV
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth
As large language models (LLMs) scale up, model compression is crucial for their deployment on resource-constrained devices. While methods like QLoRA reduce resource demands by combining parameter quantization with LoRA fine-tuning, their use of uniform precision can limit performance by failing to account for layer-wise variations in parameter sensitivity. Recent advances have explored dynamic mixed-precision quantization and adaptive LoRA ranks, but these strategies are typically optimized in isolation. The synergistic integration of these two dimensions remains an unresolved core challenge. To address this, we introduce **QR-Adaptor**, a unified, gradient-free framework that jointly optimizes the per-layer quantization bit-width and LoRA rank. Instead of indirectly minimizing quantization error, QR-Adaptor formulates the task as a discrete, multi-objective optimization problem, directly guided by downstream task performance and memory constraints using a small calibration dataset. Our extensive experiments show that QR-Adaptor consistently establishes a new Pareto frontier, outperforming state-of-the-art quantized fine-tuning methods. Notably, our approach can surpass the performance of a 16-bit LoRA fine-tuned model while operating with a memory footprint comparable to 4-bit models.
we propose QR-Adaptor, a unified, gradient-free strategy that uses partial calibration data to jointly search the quantization components and the rank of low-rank spaces for each layer, thereby continuously improving model performance.
['Fine-tuning', 'Mixed Precision', 'LoRA', 'Adaptive rank', 'Multi-objective optimization']
/pdf/445c9940f7fb92522e2b23492e37f788ef6f3d5c.pdf
transfer learning, meta learning, and lifelong learning
/attachment/02c31a640220092b6ad396773b19279e93a1a45c.zip
['ICLR.cc/2026/Conference/Submission25339/Authors']
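The QR-Adaptor abstract above frames quantized fine-tuning as a discrete, gradient-free, performance-guided search over per-layer (bit-width, rank) pairs under a memory budget. The sketch below substitutes plain random search and a toy cost model for the paper's actual multi-objective optimizer; `evaluate` is an assumed callback that runs the quantized+LoRA model on a small calibration set.

```python
import random

BITS, RANKS = [2, 4, 8], [4, 8, 16, 32]

def memory(cfg):
    # Toy cost model (assumption): bits dominate, rank adds LoRA params.
    return sum(b + 0.05 * r for b, r in cfg)

def search(n_layers, budget, evaluate, iters=200, seed=0):
    """Gradient-free joint search over per-layer (bitwidth, rank):
    maximize calibration accuracy subject to a memory budget."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(iters):
        cfg = [(rng.choice(BITS), rng.choice(RANKS)) for _ in range(n_layers)]
        if memory(cfg) > budget:
            continue                      # infeasible under the budget
        acc = evaluate(cfg)               # assumed: runs calibration set
        if acc > best_acc:
            best, best_acc = cfg, acc
    return best, best_acc

# Example with a dummy evaluator that just prefers higher bits:
cfg, acc = search(4, budget=24.0, evaluate=lambda c: sum(b for b, _ in c))
```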
UbWy2QVmke
25,338
UbWy2QVmke
GAA-PtrNet: Graph attention aggregation-based pointer network for one-shot DAG scheduling
Optimizing Directed Acyclic Graph (DAG) workflow makespan via scheduling techniques is a critical issue in high-performance computing. Many studies in recent years have combined Pointer Networks (PtrNet) with reinforcement learning (RL) to schedule DAGs by generating DAG task priorities in a sequence-to-sequence manner. However, these PtrNet-based scheduling methods need to repeatedly compute the decoder's hidden state or context embeddings according to recent local decisions, which leads to limited capability to exploit the DAG's global topological structure, high computation complexity, and an inability to achieve one-shot scheduling. To address these issues, we propose GAA-PtrNet, a novel PtrNet based on graph attention aggregation (GAA) for one-shot DAG workflow scheduling. In GAA-PtrNet, we compute the pairwise graph attention scores among nodes in one shot, then directly aggregate these scores to obtain the probability of selecting candidate nodes. Consequently, the explicit decoder or context-embedding structure of PtrNet is omitted in GAA-PtrNet, and the network needs only a single forward propagation pass to infer a solution for a whole DAG scheduling problem, significantly reducing computation complexity. Additionally, to train GAA-PtrNet, we design a training strategy based on policy-gradient RL with dense reward signals and demonstration learning. To our knowledge, GAA-PtrNet is the first network model to achieve PtrNet-based one-shot DAG scheduling. GAA-PtrNet better handles DAG workflow structures, providing high-quality DAG scheduling solutions. The experimental results show that the proposed method is superior in objective value and runs about 10 times faster than previous PtrNet-based methods, and it also outperforms other learning-based DAG scheduling methods.
null
['DAG Scheduling', 'Graph Attention', 'Pointer Network', 'Reinforcement Learning', 'Combinatorial Optimization']
/pdf/92e9128d1c3a8308f5df44f2882a3fb263fd4eda.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25338/Authors']
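The GAA-PtrNet abstract above says pairwise attention scores are computed once and aggregated directly into node-selection probabilities, removing the sequential decoder. A toy rendering of that aggregation step; learned projections, DAG masking, and the RL training loop are omitted, and the sum-aggregation is our guess.

```python
import torch

def one_shot_priorities(node_feats):
    """Compute pairwise attention among all task nodes once, then aggregate
    into node-selection probabilities in a single forward pass."""
    d = node_feats.shape[-1]
    scores = node_feats @ node_feats.T / d ** 0.5   # pairwise attention logits
    attn = torch.softmax(scores, dim=-1)
    agg = attn.sum(dim=0)                           # total attention received per node
    return torch.softmax(agg, dim=-1)               # selection probabilities

probs = one_shot_priorities(torch.randn(10, 64))    # 10 DAG tasks, one pass
```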
EyswpODUEL
25,336
EyswpODUEL
DIANA with Compression for Distributed Variational Inequalities: Eliminating the Need to Transmit Full Gradients
Variational inequalities (VIs) are attracting increasing interest among machine learning (ML) researchers due to their applicability in numerous areas, such as empirical risk minimization (ERM) problems, adversarial learning, generative adversarial networks (GANs), and robust optimization. The growing volume of training data necessitates the use of advanced architectures beyond single-node computations. Distributed optimization has emerged as the most natural and efficient paradigm, enabling multiple devices to perform training simultaneously. However, this setup introduces a significant challenge: devices must exchange information with each other, which can substantially reduce the speed of learning. A standard approach to mitigating this issue involves the use of heuristics that allow only partial information transmission. State-of-the-art methods with compression for distributed VIs rely on variance reduction techniques, which makes them inapplicable to practical tasks due to full gradient computation and transmission. In this paper, we obviate the need to consider full gradient computations and introduce a novel algorithm for solving distributed variational inequalities. It combines the classical DIANA algorithm with the Extragradient technique. Additionally, we incorporate an error compensation mechanism, enabling our algorithm to handle the class of contractive compression operators, which are more practical for real-world applications. We provide a comprehensive theoretical analysis with near-optimal convergence guarantees and additionally outperform competitors in CNN and GAN training experiments.
null
['Variational inequalities', 'Compression operators', 'Convex optimization', 'Distributed learning']
/pdf/8ccf43e7a3eea9462877afb613362f58937c6d6f.pdf
optimization
/attachment/fe0855cb5917f4e09fc5721b247475db00de6c2f.zip
['ICLR.cc/2026/Conference/Submission25336/Authors']
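For context on the DIANA record above: the textbook DIANA worker update, paired with a top-k contractive compressor, already avoids transmitting full gradients. The sketch below shows that baseline mechanism only; the paper's Extragradient step and error-compensation mechanism are not included.

```python
import torch

def topk_compress(x, k):
    """Top-k sparsifier, a standard contractive compression operator
    (x is assumed to be a flat 1-D tensor)."""
    out = torch.zeros_like(x)
    idx = x.abs().topk(k).indices
    out[idx] = x[idx]
    return out

def diana_worker_step(grad, shift, k, alpha=0.5):
    """One DIANA-style worker step: only the compressed difference between
    the local gradient and a learned shift crosses the network."""
    message = topk_compress(grad - shift, k)   # what is actually transmitted
    new_shift = shift + alpha * message        # local shift update
    return message, new_shift

# Server side reconstructs g_hat_i = shift_i + message_i, then averages.
g, h = torch.randn(1000), torch.zeros(1000)
msg, h = diana_worker_step(g, h, k=50)
```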
HoUIYpitfo
25,331
HoUIYpitfo
Learning on the Job: Test-Time Curricula for Targeted Reinforcement Learning
Humans are good at learning on the job: We learn how to solve the tasks we face as we go along. Can a model do the same? We propose an agent that assembles a task-specific curriculum, called *test-time curriculum* (TTC-RL), and applies reinforcement learning to continue training the model for its target task. The test-time curriculum avoids time-consuming human curation of datasets by automatically selecting the most task-relevant data from a large pool of available training data. Our experiments demonstrate that reinforcement learning on a test-time curriculum consistently improves the model on its target tasks, across a variety of evaluations and models. Notably, on challenging math and coding benchmarks, TTC-RL improves the pass@1 of `Qwen3-8B` by approximately 80% on AIME25 and 135% on Codeforces. Moreover, we find that TTC-RL significantly raises the performance ceiling compared to the initial model, increasing pass@64 on AIME25 from 57% to 79% and on Codeforces from 45% to 72%. Our findings show the potential of test-time curricula in extending the test-time scaling paradigm to continual *training* on thousands of task-relevant experiences during test-time.
We propose a test-time curriculum agent that self-curates a sequence of training tasks to specialize towards a specific target task via reinforcement learning
['large language models', 'test-time training', 'reinforcement learning', 'curriculum learning']
/pdf/483ac4c407181fa2d576d35294adb21c65ea249e.pdf
foundation or frontier models, including LLMs
/attachment/660dac2be8ef1da70d94f4de187662390cd06b1e.zip
['ICLR.cc/2026/Conference/Submission25331/Authors']
3Gre3i1tSD
25,328
3Gre3i1tSD
GRACE-MoE: Grouping and Replication with Locality-Aware Routing for Efficient Distributed MoE Inference
Sparse Mixture of Experts (SMoE) performs conditional computation by selectively activating a subset of experts, thereby enabling scalable parameter growth in large language models (LLMs). However, the expanded parameter scale exceeds the memory capacity of a single device, necessitating distributed deployment for inference. This setup introduces two critical challenges: (1) *Communication Issue*: Transferring features to devices with activated experts leads to significant communication overhead. (2) *Computational Load Issue*: Skewed expert activation overloads certain GPUs, resulting in load imbalance across devices. Among these, communication overhead is identified as the main bottleneck in SMoE inference. Nevertheless, reducing communication between devices may exacerbate load imbalance, leading to device idleness and resource waste. Therefore, we present **GRACE-MoE**, short for **G**rouping and **R**eplic**a**tion with Lo**c**ality-Awar**e** Routing for S**MoE** inference. **GRACE-MoE** is a co-optimization framework that jointly reduces communication overhead and alleviates computational load imbalance. Specifically, the framework comprises two key phases: ① *Grouping & Replication*: This phase groups experts based on their affinity to reduce cross-device communication. Additionally, dynamic replication is applied to address load skewness, improving computational load balance across GPUs. ② *Routing*: This phase employs a locality-aware routing strategy with load prediction. It prioritizes local replicas to minimize communication overhead and balances requests across remote replicas when necessary. Experiments on diverse models and multi-node, multi-GPU environments demonstrate that **GRACE-MoE** efficiently reduces end-to-end inference latency, achieving up to **3.79×** speedup over state-of-the-art systems. Code for **GRACE-MoE** will be released upon acceptance.
We propose a co-optimization framework that reduces communication overhead and balances computational load across devices for efficient distributed SMoE inference.
['Mixture of Experts', 'Large Language Model', 'Efficient Inference']
/pdf/7979f53f11df58ae69e71e386419013b2d8def4c.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25328/Authors']
ICANwnoGgN
25,327
ICANwnoGgN
Model soups need only one ingredient
Fine-tuning large pre-trained models on a target distribution often improves in-distribution (ID) accuracy, but at the cost of out-of-distribution (OOD) robustness as representations specialize to the fine-tuning data. Weight-space ensembling methods, such as Model Soups, mitigate this effect by averaging multiple checkpoints, but they are computationally prohibitive, requiring the training and storage of dozens of fine-tuned models. In this paper, we introduce MonoSoup, a simple and data-free approach that achieves a strong ID–OOD balance using \textit{only a single} checkpoint. Our method applies Singular Value Decomposition (SVD) to each layer’s update, splitting it into high-energy directions that capture task-specific adaptation and low-energy directions that introduce noise but may still encode residual signals useful for robustness. MonoSoup then re-weights these components with adaptive, layer-wise coefficients that account for the spectral and geometric structure of the model. Experiments on CLIP models fine-tuned on ImageNet and evaluated under natural distribution shifts, as well as on Qwen language models tested on mathematical reasoning and multiple-choice benchmarks, show that this plug-and-play approach is a practical and effective alternative to multi-checkpoint methods, retaining much of their benefits without their computational overhead.
null
['Deep learning', 'Generalization', 'Out of Distribution']
/pdf/679ceccdc9e10bb43ae2b18cbed14bb7f6fa3ca5.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25327/Authors']
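The MonoSoup record above re-weights the SVD split of each layer's fine-tuning update. A sketch of that split with fixed coefficients standing in for the paper's adaptive, layer-wise ones (`k`, `alpha_high`, and `alpha_low` are assumptions):

```python
import torch

def monosoup_layer(w_pre, w_ft, k, alpha_high=1.0, alpha_low=0.5):
    """Re-weight one layer's update from a single fine-tuned checkpoint.
    w_pre, w_ft: 2-D weight matrices (pre-trained and fine-tuned)."""
    delta = w_ft - w_pre                       # fine-tuning update
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    high = (U[:, :k] * S[:k]) @ Vh[:k]         # top-k high-energy directions
    low = delta - high                         # residual low-energy part
    return w_pre + alpha_high * high + alpha_low * low

w = monosoup_layer(torch.randn(256, 256), torch.randn(256, 256), k=16)
```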
bydk8kAZRM
25,324
bydk8kAZRM
FedSycle: Mitigating Post-Unlearning Performance Inconsistency in Federated Learning via Latent Feature Decoupling
Federated Learning (FL) safeguards data privacy by enabling collaborative model training without centralizing client data. The emerging 'Right to Be Forgotten' mandates necessitate Federated Unlearning (FU), allowing clients to revoke their data's influence on the global model. However, a critical yet overlooked challenge in FU is the emergence of performance inconsistency across clients following an unlearning event. When a client departs, the global model's accuracy can degrade unevenly for the remaining participants, leading to unfairness and disincentivizing collaboration. To address this, we propose FedSycle, a novel FU framework that leverages pre-trained models to enable fast retraining and enhance performance consistency. FedSycle operates by decoupling client data into distinct latent representations: one capturing semantic content (retained locally for privacy and to boost client-side retraining efficiency) and another capturing domain-specific attributes (e.g., texture, color). Crucially, only the less sensitive domain attributes are aggregated on the server. The server then utilizes these aggregated attributes to synthesize auxiliary data, which guides the global model update, effectively recalibrating its performance across all remaining client domains. We provide theoretical convergence guarantees for FedSycle. Extensive experiments on standard benchmarks (PACS, DomainNet) demonstrate its superiority. FedSycle not only achieves state-of-the-art unlearning effectiveness but also significantly mitigates performance inconsistency, reducing its variance by up to 83.2% compared to leading baselines, while simultaneously improving the average accuracy for non-target clients by over 31%.
We propose a high-performance federated unlearning algorithm, ensuring model performance while reducing domain inconsistency, with theoretical convergence and experimental demonstration.
['post-unlearning performance', 'inconsistency']
/pdf/f10ac4e07be35e2aaa6c297320c849ed4c9b8ccc.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/06fa8395669b9e0d10528ebd79603d4ce81a2dd2.zip
['ICLR.cc/2026/Conference/Submission25324/Authors']
P0xkQNyguy
25,321
P0xkQNyguy
Gaussian Entropy Flow World Model for Streaming 3D Occupancy Prediction
In 3D occupancy prediction, temporal information is crucial. Traditional methods fuse multi-frame features through a pipeline of perception, alignment, and fusion, but they overlook the coherence of static elements and the motion patterns of dynamic elements in 3D scenes. Existing methods reformulate 3D prediction as 4D prediction based on current sensor inputs by modeling the continuous evolution of the scene. However, the discrete refinements of the physical properties of dynamic elements in multiple encoding-decoding processes lead to cumulative errors and poor adaptation to dynamic motion. Inspired by non-equilibrium thermodynamics, we propose an Evolutionary Entropy Flow framework that uses Evolutionary Entropy as a carrier for continuous scene evolution, modeling the motion of dynamic elements as the flow of Evolutionary Entropy. We further introduce the Gaussian Entropy Flow World Model (GaussEFW), which represents Evolutionary Entropy Flow as a single, continuous Gaussian Entropy Flow in latent space, in contrast to the discrete refinements from multiple encoding-decoding processes. By predicting Gaussian Entropy Flow based on current RGB observations, we can accurately predict the motion of dynamic elements and learn continuous scene evolution. Extensive experiments on the nuScenes dataset validate the effectiveness of GaussEFW, demonstrating superior performance in dynamic elements prediction and high overall performance.
null
['Occupancy', 'World Model', 'Autonomous Driving']
/pdf/7a279c16f984091c0f2561af982860d6d59f8823.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25321/Authors']
sbEb0Ld6MK
25,320
sbEb0Ld6MK
Fairness via Independence: A General Regularization Framework for Machine Learning
Fairness in machine learning has emerged as a central concern, as predictive models frequently inherit or even amplify biases present in training data. Such biases often manifest as unintended correlations between model outcomes and sensitive attributes, leading to systematic disparities across demographic groups. Existing approaches to fair learning largely fall into two directions: incorporating fairness constraints tailored to specific definitions, which limits their generalizability, or reducing the statistical dependence between predictions and sensitive attributes, which is more flexible but highly sensitive to the choice of distance measure. The latter strategy in particular raises the challenge of finding a principled and reliable measure of dependence that can perform consistently across tasks. In this work, we present a general and model-agnostic approach to address this challenge. The method is based on encouraging independence between predictions and sensitive features through an optimization framework that leverages the Cauchy–Schwarz (CS) Divergence as a principled measure of dependence. Prior studies suggest that CS Divergence provides a tighter theoretical bound compared to alternative distance measures used in earlier fairness methods, offering a stronger foundation for fairness-oriented optimization. Our framework, therefore, unifies prior efforts under a simple yet effective principle and highlights the value of carefully chosen statistical measures in fair learning. Through extensive empirical evaluation on four tabular datasets and one image dataset, we show that our approach consistently improves multiple fairness metrics while maintaining competitive accuracy.
We introduce a general framework to promote fairness in machine learning by reducing the dependence between model predictions and sensitive attributes.
['Bias Mitigation', 'Statistical Independence', 'Fairness in Machine Learning']
/pdf/ba7e37116d06bac6d33c2a905bd5d5fabe5e25ca.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25320/Authors']
d3dSicnYkN
25,319
d3dSicnYkN
MANGO: MANGROVE GLOBAL OBSERVATIONS – A DATASET AND BENCHMARK
Mangroves buffer coasts and store large amounts of carbon, yet they are vulnerable to storms and require reliable monitoring at global scale. Thresholded spectral indices break across sensors, seasons, and atmospheres, which limits their usefulness beyond local settings. Recent segmentation models are more promising but are difficult to train at scale because single-date imagery and labels are rarely paired and because models seldom exploit location context. First, we collect a globally distributed dataset, MANGO, that pairs one Sentinel-2 acquisition with each region–year label through a principled selection that balances agreement with the label and scene quality, and we provide country-disjoint splits together with co-registered geospatial embeddings. Second, we introduce a simple way to turn a global geospatial embedding into a small set of context channels that augment the optical bands and condition any backbone without architectural changes. Across strong convolutional and transformer baselines, this combination yields consistent gains on held-out countries and visibly cleaner maps, with sharper shorelines, better retention of small stands, and fewer false positives over turbid water, while adding minimal computational overhead. We release the dataset, the selection protocol, and the conditioning module to support reliable and scalable monitoring of coastal ecosystems.
null
['Earth Observation', 'Mangrove']
/pdf/7bbe6157e54b6f8d06c1bd26c5c2c8c15969802a.pdf
datasets and benchmarks
/attachment/a5092be009e0a28405a4c0cb0c035363b720836d.zip
['ICLR.cc/2026/Conference/Submission25319/Authors']
C5Dgtmk7ho
25,318
C5Dgtmk7ho
MI-Grad-CAM: Letting Your Model Reveal What’s Most Informative
With the growing role of machine vision in critical applications such as healthcare, achieving precise and interpretable decision-making is crucial. Class Activation Mapping (CAM) is widely used for visual explanations in computer vision, but improving its interpretability remains an open research area. In this work, we introduce MI-Grad-CAM, a novel post-hoc visual explanation method that provides clearer, causally-driven insights into how CNNs reach their conclusions by prioritizing causality over mere correlation. MI-Grad-CAM generates class-specific visualizations by weighting feature maps based on normalized mutual information between the input image and feature maps, combined with gradient information of the predicted class with respect to these feature maps. This approach strengthens the causal link between explanations and model predictions, supported by counterfactual analysis to verify causality. We also propose the Harmonized Confidence Index (HCI), a new evaluation metric to measure explanation effectiveness. Our method demonstrates robust performance in both qualitative and quantitative evaluations, achieving competitive or superior results compared to state-of-the-art methods, particularly in terms of explanation faithfulness and model reliability.
null
['Mutual Information']
/pdf/6453cf71e518f802ab0ef99a67a1761b3b4d33ba.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25318/Authors']
0WdN7pFCja
25,317
0WdN7pFCja
Adaptive Inference‑Time Scaling for LRMs using Uncertainty‑Aware RL
The widespread adoption of Large Reasoning Models (LRMs), such as Gemini 2.5 Pro Deep Think, OpenAI GPT-5 Pro, and SuperGrok 4 Heavy, is bottlenecked by their computational inefficiency, primarily stemming from the “overthinking phenomenon”—the propensity to generate unnecessarily long Chain-of-Thought (CoT) sequences even for simple queries. This verbose output, while enhancing accuracy, substantially increases inference costs and latency. Current efforts to mitigate this rely on L1 methods like explicit token budget instructions or post-hoc truncation, which either lack precise control or struggle to generalize across varying task complexities. We propose Uncertainty-Guided Self-Braking Tuning (USBT), an L2 adaptive inference framework that addresses the overthinking issue by enabling LRMs to autonomously regulate their reasoning depth based on real-time internal uncertainty. We frame adaptive inference as a sequential decision-making process optimized via Reinforcement Learning (RL), building on core algorithms like Group Relative Policy Optimization (GRPO). Our novel contribution is integrating a confidence metric, such as certainindex based on semantic entropy, into the RL reward function alongside explicit length penalties. This reward function incentivizes the model to produce concise, correct reasoning paths and facilitates an early exit strategy. Techniques like Serial-Group Decaying-Reward Policy Optimization (S-GRPO), which serialize early-exit interventions and decay rewards for later completions, demonstrate that this paradigm achieves substantial token reduction (35.4%–61.1%) while boosting accuracy. Our USBT framework generalizes this approach by actively coupling the decay/penalty coefficients with the measured uncertainty, allowing the model to recognize and inhibit excessive reasoning, cultivating an intrinsic ability to self-regulate without relying on external control. Furthermore, integrating this uncertainty-based self-regulation with inference acceleration strategies, such as branch-parallel decoding, significantly reduces end-to-end latency. Experiments incorporating our self-braking mechanism consistently show dramatic reductions in token consumption (up to 60%) across complex benchmarks while maintaining high performance.
USBT learns RL policies that throttle LRM reasoning depth using uncertainty (semantic entropy) plus length penalties, yielding concise CoT. S‑GRPO adds early‑exit control with parallel search, cutting tokens and latency, maintaining accuracy.
['uncertainty-guided self-braking tuning (USBT)', 'adaptive inference', 'large reasoning models (LRMs)', 'reasoning depth control', 'uncertainty-aware reinforcement learning', 'semantic entropy (confidence)', 'chain-of-thought (CoT)', 'early exit', 'S‑GRPO', 'GRPO', 'reward shaping', 'length penalties', 'branch‑parallel decoding', 'token reduction', 'latency reduction', 'compute efficiency', 'inference-time scaling', 'self‑regulation']
/pdf/1d8bbfaefbf9f74be3ad138ee460fd623eaeb837.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25317/Authors']
b0gKCXLzuB
25,316
b0gKCXLzuB
Semi-Supervised Dataset Condensation with Dual Consistency Trajectory Matching
Dataset condensation synthesizes a small dataset that preserves the performance of training on the original, large-scale data. However, existing methods rely on fully labeled data, which limits their applicability in real-world scenarios where unlabeled data is abundant. To bridge this gap, we introduce a new task called $\textbf{Semi-Supervised Dataset Condensation}$, which condenses both labeled and unlabeled samples into a small yet informative synthetic labeled dataset, thereby enabling efficient supervised learning. We propose $\textbf{Semi-Supervised Dual Consistency Trajectory Matching (SSD)}$, a method that leverages semi-supervised knowledge distillation. The core of SSD is a two-stage trajectory matching framework that effectively incorporates unlabeled data. First, a teacher model is trained on the original data to generate accurate pseudo-labels using semi-supervised learning. Then, a student model is trained on the entire dataset with a novel \textit{dual consistency regularization} loss. This loss enforces both $\textbf{inter-model}$ consistency (between the student and teacher predictions) and $\textbf{intra-model}$ consistency (for the student model under different input perturbations), ensuring robust performance. By aligning the training trajectories of the student model on the complete dataset and the synthetic dataset, SSD optimizes and obtains a high-quality synthetic dataset. Experiments on image classification benchmarks demonstrate that SSD consistently outperforms previous methods, achieving superior performance and efficiency in dataset condensation.
null
['Dataset condensation', 'semi-supervised learning', 'knowledge distillation']
/pdf/e3ca34100fdf921e5d46aa121b8bc6fa66b78276.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25316/Authors']
4hKNGmjXVQ
25,315
4hKNGmjXVQ
Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures
The transformer architecture has demonstrated remarkable capabilities in modern artificial intelligence, among which the capability of implicitly learning an internal model during inference time is widely believed to play a key role in the understanding of pre-trained large language models. However, most recent work has focused on supervised learning topics such as in-context learning, leaving the field of unsupervised learning largely unexplored. This paper investigates the capabilities of transformers in solving Gaussian Mixture Models (GMMs), a fundamental unsupervised learning problem, through the lens of statistical estimation. We propose a transformer-based learning framework called TGMM that simultaneously learns to solve multiple GMM tasks using a shared transformer backbone. The learned models are empirically demonstrated to effectively mitigate the limitations of classical methods such as Expectation-Maximization (EM) or spectral algorithms, while exhibiting reasonable robustness to distribution shifts. Theoretically, we prove that transformers can efficiently approximate both the Expectation-Maximization (EM) algorithm and a core component of spectral methods—namely, cubic tensor power iterations. These results not only improve upon prior work on approximating the EM algorithm, but also provide, to our knowledge, the first theoretical guarantee that transformers can approximate high-order tensor operations. Our study bridges the gap between practical success and theoretical understanding, positioning transformers as versatile tools for unsupervised learning.
null
['In-context learning', 'Gaussian Mixture Models', 'Theory']
/pdf/ac76cb9229e04c860ebb33e6b1c9aae67846983e.pdf
learning theory
/attachment/3069355987f017b7e648da33792cad0777138c80.zip
['ICLR.cc/2026/Conference/Submission25315/Authors']
c4ir92gYjv
25,313
c4ir92gYjv
Data-Efficient Generalization and Faster Initial Learning in Quantum Models for Classifying Cellular Activation States
Quantum computing is in its infancy. While it promises to solve some of the intractable problems of computing, real-world applications are scarce. It is mainly challenged by hardware that is currently limited in both circuit width and depth. Finding a real-world application with an advantage over classically available solutions is even harder on current state-of-the-art machines. However, given the vastly different nature of quantum computers, it is possible the advantage may come from unexpected corners when applied to a wide range of classical problems. Machine learning using quantum algorithms is of particular interest due to its ease of parameterization and possible resource efficiency. In this work, we apply a quantum machine learning (QML) algorithm to real-world data and benchmark some of the well-established scaling laws in a resource-constrained scenario using both ideal and noisy ion-trap quantum computing platforms. The real-world problem we investigate is the accurate identification of cytotoxic CD8+ T cell activation states from high‑dimensional cytometric data. Hand‑engineered features extracted from imaging flow cytometry capture morphological, intensity, texture and shape descriptors that are essential for discriminating between quiescent and stimulated cellular states. Leveraging a dataset of processed blood cell images from three patients, we compare quantum data re‑uploading classifiers (QDRCs) with classical feedforward neural networks (FNNs) for the task of binary classification of cellular activation. The study yields three findings: (1) both quantum and classical models achieve high test accuracy ($\approx99$%) when trained with sufficient data and epochs, and models trained on one patient generalize well to the other two, demonstrating the learnability of the engineered feature space; (2) the generalization error of QDRCs exhibits a predictable power‑law scaling with training size consistent with a $\sqrt{\frac{T}{N}}$ bound for $T$ trainable parameters, whereas FNNs lack a comparable scaling relationship; and (3) QDRCs achieve high accuracy in early epochs under low‑data constraints, aligning with a convex kernel interpretation of the re‑uploading model. We further validate a theoretical bound derived from quantum generalization theory and provide an intuitive proof under a convexity assumption. These results indicate that quantum architectures can be competitive with classical baselines while offering faster early generalization and theory-consistent behavior in data‑limited regimes, although our conclusions are restricted to hand‑crafted features and do not imply clinical readiness or broader generalization.
This paper shows that for classifying cancerous cells from cytometric data, quantum models learn faster and generalize more effectively from limited data than classical neural networks, and their performance predictably scales as theory suggests.
['Quantum Machine Learning', 'Generalization Error', 'Data-Efficient Learning', 'Computational Biology', 'Quantum Neural Networks', 'Deep Learning']
/pdf/06952b01a910fe0985ec5472b51bf3cf12b0578f.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25313/Authors']
MUnHOkaEFC
25,310
MUnHOkaEFC
From Uncertainty to Inconsistency: Open-Set RF Fingerprint Identification
The rejection of unknown devices outside the known categories is crucial for radio frequency fingerprint identification (RFFI). Current open-set recognition (OSR) methods rely on the uncertainty of the model output, where unknown classes exhibit low confidence and vice versa for known classes. However, we demonstrate that uncertainty-based methods face a significant challenge, particularly in RFFI, which is termed ‘‘Overconfidence on Unknown Signal Segments’’ (OUSS), where unknown signal segments are misclassified with high confidence, directly contradicting the expected low-confidence characteristic for unknown classes. Inspired by an interesting observation that predictions for unknown classes across multiple models exhibit high inconsistency, while known classes exhibit the opposite, we propose to leverage decision entropy to quantify the inconsistency. Based on the decision entropy, we propose an inconsistency based open-set RFFI approach (IncOS-RFFI). We conduct extensive experiments on the seven open-source radio frequency fingerprint datasets with seventeen benchmarks and demonstrate the effectiveness of our proposed IncOS-RFFI compared to existing OSR algorithms.
Inspired by an interesting observation that predictions for unknown classes across multiple models exhibit high inconsistency, while predictions for known classes show high consistency, we propose an inconsistency based open-set RFFI approach.
['Open-set recognition', 'radio frequency fingerprint identification', 'deep learning']
/pdf/3bad493dbb8cd98aaea968255ec493f1532939bf.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25310/Authors']
KpvZ1kGOjH
25,307
KpvZ1kGOjH
EvoCF: Multi-agent Collaboration with Memory-guided Evolutionary Counterfactual Planning
Planning collaboration strategies for multi-agent embodied systems remains a core challenge for LLM-based planners, which often fail to capture the physical and coordination constraints of real-world environments. To address this, we present \textbf{EvoCF} (Evolutionary Counterfactual Planning), a memory-guided framework for discovering improved multi-agent collaboration strategies through counterfactual plan generation and evaluation. First, we induce a structured symbolic rule library from failure experiences, encoding reusable constraints of inter-agent dependencies and action feasibility. Then, we propose an evolutionary counterfactual plan generator that systematically explores semantically consistent plan variants through rule-guided mutations. This enables the discovery of robust multi-agent strategies beyond short-sighted LLM plans. Finally, we design an experience-driven evaluator that scores candidate plans along multiple metrics, using retrieval-augmented constraint matching. Across embodied simulation benchmarks, {EvoCF} consistently discovers more robust and executable plans compared to baseline approaches. Our results demonstrate that grounding multi-agent planning in structured memory and symbolic reasoning significantly enhances both reliability and adaptability.
null
['Multi-Agent Collaboration', 'Long-horizon Planning', 'Large Language Models']
/pdf/00a958c142661486301a48b13f1f3d1e831ce30f.pdf
applications to robotics, autonomy, planning
/attachment/7d2c284df23a2d24160b5de5683222e7fd1b7fa7.zip
['ICLR.cc/2026/Conference/Submission25307/Authors']
8QHxu9CGAB
25,306
8QHxu9CGAB
General Risk Measure meets Offline RL: Provably Efficient Risk-Sensitive Offline RL via Optimized Certainty Equivalent
We study the risk-sensitive reinforcement learning (RL), which is crucial in scenarios involving uncertainty and potential adverse outcomes. However, existing works on risk-sensitive RL either only focus on a specific risk measure or overlook the offline RL setting. In this work, we investigate the provably efficient risk-sensitive RL under the offline setting with a general risk measure, the optimized certainty equivalent (OCE), which captures various risk measures studied in prior risk-sensitive RL works, such as value-at-risk, entropic risk, and mean-variance. To the best of our knowledge, we (i) introduce the first offline OCE-RL frameworks and propose corresponding pessimistic value iteration algorithms (OCE-PVI) for both dynamic and static risk measures; (ii) establish suboptimality bounds for the algorithms, which can reduce to known results for risk-sensitive RL as well as risk-neutral RL with appropriate utility functions; (iii) derive the first information-theoretic lower bound of the sample complexity of offline risk-sensitive RL, matching the upper bounds and certifying optimality of our algorithms; and (iv) propose the first provably efficient risk-sensitive RL with linear function approximation for both dynamic and static risk measures, together with rigorous suboptimality bounds, yielding a scalable and model-free approach.
null
['Reinforcement Learning', 'Offline RL', 'Risk-Sensitive', 'Optimized Certainty Equivalent', 'General Risk Measure']
/pdf/6b63426e170cdf90412cec29d4e7971c9c42cf3c.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25306/Authors']
kITJl37ULw
25,305
kITJl37ULw
BridgeRAG: A Framework for Reasoning over Partitioned Knowledge Graphs
Existing Knowledge Graph-based RAG (Retrieval-Augmented Generation) systems face a fundamental dilemma in multi-document scenarios. They either treat each document as an isolated knowledge graph, which preserves contextual purity but prevents cross-document reasoning, or merge them into a single, massive graph, leading to entity saturation and contextual noise pollution. To resolve this core conflict, we introduce the BridgeRAG framework, designed to elegantly achieve both "partitioned isolation" and "cross-partition linking" for multiple documents. BridgeRAG is a collaborative framework that integrates static linking and dynamic reasoning. Experiments on multi-hop question answering benchmarks like HotpotQA show that BridgeRAG significantly outperforms state-of-the-art RAG models, especially on complex questions that require deep cross-partition navigation.
null
['RAG', 'Knowledge Graphs', 'Multi-hop Question Answering', 'Multi-Document Reasoning', 'LLM Agents', 'Planned Navigation']
/pdf/51f5805fffc4fa55eb2d7ef9f890b997e3aefb09.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25305/Authors']
nuHmMRmyFV
25,304
nuHmMRmyFV
Semantic Fragment Similarity Representation Learning for Information Retrieval
We introduce Semantic Fragment Similarity (SFS), a novel similarity metric designed to enhance representation quality by partitioning embeddings into non-overlapping fragments, computing fragment-level similarity, and aggregating these local scores. Conventional similarity metrics compute relevance using the global vector as a single unit. This process flattens and entangles multi-faceted semantic features and dilutes the fine-grained alignment signals crucial for accuracy. By inducing fragments to specialize in distinct semantic roles, SFS drives substantial gains in retrieval performance across a wide range of models, tasks, and architectures when applied in both training and inference. Further, we find that a single embedding fragment trained with SFS, comprising just 12\% of the total dimensions, outperforms the entire global embedding on specific classification tasks. Ultimately, SFS can be directly integrated as a replacement for conventional similarity metrics, without architectural modifications or significant computational overhead, and it opens up new avenues for building more structured and interpretable embedding models.
We propose Semantic Fragment Similarity, a representation learning method that partitions embeddings and applies fragment-level contrastive learning, yielding semantically specialized representations, improving relevance and retrieval performance.
['Information Retrieval', 'Representation Learning', 'Sentence Embeddings', 'Fragment Similarity']
/pdf/2492db6374921a18c4bbbd735cdf833d32591b63.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25304/Authors']
R59Nk7DS3a
25,303
R59Nk7DS3a
FMGTranDD: A Deception Detection Method Based on Spatiotemporal Facial Abnormal Emotional Changes
While multimodal deception detection methods improve detection efficiency, they inevitably introduce higher data collection and processing costs. Deceptive behavior is often accompanied by emotional fluctuations such as tension, anxiety, and guilt, which can lead to contradictory, inconsistent, or suppressed emotional expressions on individuals' faces. This paper regards deceptive behavior detection as an abnormal signal recognition problem, aiming to capture abnormal features from regular behavior patterns. First, faces in videos are converted into a set of learnable facial emotion embedding sequences. Subsequently, a Time-LSTM-GCN module is proposed to model the spatiotemporal relationships between these facial emotion embedding sequences. A combined adversarial loss optimizes the decision boundary for deceptive behaviors. This loss function consists of two main components: first, semi-supervised learning of dominant facial emotions enhances the representational power of the embedding sequence; second, by comparing the similarity between embedding nodes with the same emotion (positive samples) and embedding nodes with different emotions (negative samples), the model is encouraged to capture both local structure within the sequence and global differences between sequences. Experimental results show that our new baseline model outperforms existing deception detection methods based on multimodal or multi-type features. Code is provided in the supplementary material.
null
['Emotion recognition', 'deception detection', 'facial emotion embedding sequence']
/pdf/e56bbe91b3df4a8d91445d55ca0e5b2d7f35bce8.pdf
applications to computer vision, audio, language, and other modalities
/attachment/f09a16675384d01324150c96c24a17a5f479a791.zip
['ICLR.cc/2026/Conference/Submission25303/Authors']
MS9nWFY7LG
25,302
MS9nWFY7LG
Q-RAG: Long Context Multi‑Step Retrieval via Value‑Based Embedder Training
Retrieval-Augmented Generation (RAG) methods enhance LLM performance by efficiently filtering relevant context for LLMs, reducing hallucinations and inference cost. However, most existing RAG methods focus on single-step retrieval, which is often insufficient for answering complex questions that require multi-step search. Recently, multi-step retrieval approaches have emerged, typically involving the fine-tuning of small LLMs to perform multi-step retrieval. However, this type of fine-tuning is highly resource-intensive and does not enable the use of larger LLMs. In this work, we propose Q-RAG, a novel approach that fine-tunes the Embedder model for multi-step retrieval using reinforcement learning (RL). Q-RAG offers a competitive, resource-efficient alternative to existing multi-step retrieval methods for open-domain question answering and achieves state-of-the-art results on the popular long-context benchmarks Babilong and RULER for contexts up to 10M tokens.
null
['Reinforcement Learning', 'RL', 'QA', 'Long-context', 'RAG', 'NLP']
/pdf/7875418351b10da4baeeeea9d900d57da2640f94.pdf
reinforcement learning
/attachment/ae9b9a32c651ef6aa99f966c793d9754a96a5033.zip
['ICLR.cc/2026/Conference/Submission25302/Authors']
1lLWZzikiT
25,300
1lLWZzikiT
Multi-objective Hyperparameter Optimization in the Age of Deep Learning
While Deep Learning (DL) experts often have prior knowledge about which hyperparameter settings yield strong performance, only a few Hyperparameter Optimization (HPO) algorithms can leverage such prior knowledge and none incorporate priors over multiple objectives. As DL practitioners often need to optimize not just one but many objectives, this is a blind spot in the algorithmic landscape of HPO. To address this shortcoming, we introduce PriMO, the first HPO algorithm that can integrate multi-objective user beliefs. We show PriMO achieves state-of-the-art performance across 8 DL benchmarks in the multi-objective _and_ single-objective setting, clearly positioning itself as the new go-to HPO algorithm for DL practitioners.
We propose to use multi-objective expert priors to make hyperparameter optimization for expensive deep learning workloads feasible and show our algorithm PriMO achieves state-of-the-art performance in the multi-objective and single-objective setting.
['Hyperparameter Optimization', 'Multi-objective', 'Deep Learning']
/pdf/6f91078d016ee60ed80b3df8a88af7363fef73c3.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25300/Authors']
y3UkklvoW9
25,299
y3UkklvoW9
THEMIS: Towards Holistic Evaluation of MLLMs for Scientific Paper Fraud Forensics
We present **THEMIS**, a novel multi-task benchmark designed to comprehensively evaluate Multimodal Large Language Models (MLLMs) on visual fraud reasoning within real-world academic scenarios. Compared to existing benchmarks, THEMIS introduces three major advancements. (1) **Real-world Scenarios & Complexity**: Our benchmark comprises over 4K questions spanning 7 scenarios, derived from authentic retracted-paper cases and carefully curated multimodal synthetic data. With 73.73% complex-texture images, THEMIS bridges the critical gap between existing benchmarks and the complexity of real-world academic fraud. (2) **Task Diversity & Granularity**: THEMIS systematically covers five challenging tasks and introduces 16 fine-grained manipulation operations. On average, each sample undergoes multiple stacked manipulation operations, with the diversity and difficulty of these manipulations demanding a high level of visual fraud reasoning from the models. (3) **Multi-dimensional Capability Evaluation**: We establish a mapping from fraud tasks to five core visual fraud reasoning capabilities, thereby enabling an evaluation that reveals the distinct strengths and specific weaknesses of different models across these core capabilities. Experiments on 11 leading MLLMs show that even the best-performing model still falls below the passing threshold, demonstrating that our benchmark presents a stringent test. We expect THEMIS to advance the development of MLLMs for complex, real-world fraud detection tasks. The data and code will be updated on url: https://anonymous.4open.science/r/themis1638.
We present THEMIS, a holistic multi-task benchmark of over 4K questions derived from authentic retracted-paper cases and realistically simulated synthetic data, to systematically evaluate the fine-grained visual fraud reasoning abilities of MLLMs.
['Multimodal Large Language Model', 'Vision Fraud Reasoning', 'Scientific Paper Fraud Detection', 'Benchmark']
/pdf/1a0c9477a5233fbf5e0563f788de5ee5dd9505de.pdf
datasets and benchmarks
/attachment/7011ce3c6c4cd1f886f65284645bb19464ba55e8.zip
['ICLR.cc/2026/Conference/Submission25299/Authors']
XbVMiW0jTM
25,298
XbVMiW0jTM
PROBE: Benchmarking Reasoning Paradigm Overfitting in Large Language Models
The reliability of reasoning benchmarks for Large Language Models (LLMs) is threatened by overfitting, which leads to inflated scores that misrepresent true capability. While existing benchmarks focus on surface-level perturbations, they fail to detect a more profound form of overfitting where models memorize problem-specific reasoning paradigms rather than developing generalizable and dynamic logical skills. To address this, we introduce PROBE (Paradigm-ReOriented Benchmark for overfitting Evaluation), a novel benchmark designed to systematically assess this limitation. PROBE introduces variants that force a shift in the core reasoning paradigm—such as simplification, introducing unsolvability, or changing the fundamental solution approach—alongside conventional transformations. Our evaluation of state-of-the-art LLMs on PROBE reveals significant reasoning paradigm overfitting: while models achieve an average accuracy of 81.57\% on original problems, their performance drops substantially to 63.18\% on PROBE, with a strikingly low score of 35.08\% on the most challenging Unsolvability type. Our work highlights the necessity for benchmarks that probe deeper into reasoning generalization and provides a tool for fostering more robust LLMs.
null
['Large Language Models', 'Benchmark Evaluation']
/pdf/33134f0a6fb57afd2677195eae07b55fad083822.pdf
datasets and benchmarks
/attachment/422d4f487bba035ccd90602dfa547b21a14ee8c5.zip
['ICLR.cc/2026/Conference/Submission25298/Authors']
eQtSuMQNtH
25,296
eQtSuMQNtH
Beyond Turn Limits: Training Deep Search Agents with Dynamic Context Window
While recent advances in reasoning models have demonstrated cognitive behaviors through reinforcement learning, existing approaches struggle to invoke deep reasoning capabilities in multi-turn agents with long-horizon interactions. We propose DeepMiner, a novel framework that elicits such abilities by introducing high-difficulty training tasks and a dynamic context window. DeepMiner presents a reverse construction method to generate complex but verifiable question-answer pairs from authentic web sources, which ensures that the training data are both challenging and reliable while injecting cognitive capabilities into multi-turn reasoning scenarios. We further design an elegant yet effective dynamic context management strategy for both training and inference, utilizing sliding window mechanisms while eliminating the dependency on external summarization models, thereby efficiently empowering the model to handle continuously expanding long-horizon contexts. Through reinforcement learning on Qwen3-32B, we develop DeepMiner-32B, which achieves substantial performance improvements across multiple search agent benchmarks. DeepMiner attains 33.5\% accuracy on BrowseComp-en, surpassing the previous best open-source agent by almost 20 percentage points, and demonstrates consistent improvements on BrowseComp-zh, XBench-DeepSearch, and GAIA. Notably, our dynamic context management enables sustained interactions of nearly 100 turns within a standard 32k context length, effectively addressing the context limitations that constrain existing multi-turn interaction systems.
We present DeepMiner, a novel training framework that breaks the turn constraint in multi-turn search agents through dynamic context management.
['LLM', 'DeepResearch', 'Agent']
/pdf/b779fc86cfce2763d1e10fac7b37ed2e608038ab.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25296/Authors']
4w9HzBBLRk
25,295
4w9HzBBLRk
Towards Multimodal Understanding, Reasoning, and Tool Usage across Vision, Speech, and Audio in Long Videos
Long-form, multimodal video understanding requires models to integrate vision, speech, and ambient audio while reasoning coherently over extended contexts. However, existing benchmarks often emphasize either long temporal contexts or rich multimodal content, but rarely both. Moreover, they are typically restricted to multiple-choice evaluations and a single accuracy metric, offering limited insight into where models succeed or fail. To address these gaps, we introduce **STARBench**, a diagnostic benchmark designed for long-form, multimodal video understanding. STARBench features open-ended, intent-driven questions that reflect how humans naturally engage with video content. It supports single- and multi-turn dialogues, encompassing multimodal reasoning and agentic tool-use tasks across rich video, audio, and speech contexts. Each question includes a reference answer and a rubric with graded criteria, enabling interpretable and traceable evaluation. Importantly, STARBench is generated via a scalable, human-validated pipeline, ensuring reproducibility and coverage. Complementing the benchmark, we propose **STARAgent**, an agentic system for analyzing long videos using pre-processing, search, and refinement tools. Evaluating state-of-the-art closed- and open-source MLLMs on STARBench reveals substantial limitations: the top-performing Gemini-2.5-Flash reaches only 52.95\%, while open-source models remain below 25\%. STARAgent, leveraging structured reasoning over long videos, achieves 44.66\%, highlighting the challenge of complex, real-world video understanding. By combining breadth, interpretability, and reproducibility, STARBench provides a practical foundation for benchmarking and improving MLLMs on long-form, multimodal video tasks. All code, including the agentic pipeline, and datasets will be released publicly.
STARBench is a human-validated benchmark for long-form multimodal video understanding, and STARAgent is an agentic pipeline for multimodal long video understanding, together exposing current state-of-the-art MLLMs’ limits
['multimodal', 'long-form video understanding', 'benchmark', 'agentic pipeline', 'question answering', 'scenario-driven QA']
/pdf/e9af7c3f35795c5d0d036af54a6e7031e5c42642.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25295/Authors']
wNAUAPfceN
25,294
wNAUAPfceN
Guided Star-Shaped Masked Diffusion
The performance of pre-trained masked diffusion models is often constrained by their sampling procedure, which makes decisions irreversible and struggles in low-step generation regimes. We introduce a novel sampling algorithm that works with pre-trained models and, after a lightweight fine-tuning of a single layer, significantly improves sample quality and efficiency. Our method reformulates the generation process using a star-shaped paradigm, which inherently allows for error correction. To make this process effective, we augment it with a learnable re-masking scheduler that intelligently identifies and revises likely errors. This approach yields a substantial quality boost, particularly when using a small number of sampling steps. We extensively ablate key components of our approach and show its usability in different scenarios. In comprehensive experiments on text and code generation, our sampling algorithm outperforms or matches existing methods.
We developed a new sampling algorithm that, with minimal fine-tuning, enables pre-trained diffusion models to self-correct, significantly boosting quality in few-step generation.
['Discrete Diffusion', 'Text Diffusion Models', 'Masked Diffusion', 'Guided Sampling']
/pdf/a3a60fcc0ae92a74480c02d92232c618b137c91d.pdf
generative models
/attachment/963d0a783d7d1a6479b54a88098d22d0cc665dce.zip
['ICLR.cc/2026/Conference/Submission25294/Authors']
riOevy2RwZ
25,292
riOevy2RwZ
Towards Text-Mask Consistency in Medical Image Segmentation
Vision-language models for medical image segmentation often produce masks that conflict with the accompanying text, especially under multi-site/multi-lesion descriptions. We trace this failure to two factors: (i) highly templated and repetitive clinical language causes one-to-one hard contrastive learning to yield numerous false negatives, weakening cross-modal alignment; and (ii) predominantly vision-driven, one-way cross-attention lacks a language-dominant, spatially aware pathway, hindering effective injection of textual semantics into the spatial visual domain. To this end, we propose Consistency-enhanced Two-stage Segmentation (C2Seg). In the pretraining stage, Cluster-aware Contrastive Learning uses a frozen strong baseline to construct an intra-batch text similarity matrix as soft labels, thereby alleviating false negative conflicts and producing more discriminative visual representations. In the fusion stage, we introduce a Bidirectional Complementary Attention Module, where each modality dominates attention along its own path, fostering deep interaction and structural consistency between visual and textual representations. In order to enhance the expressive power of multimodal features, we further adopt KAN-based Attention Gating. Without updating the language encoder, our approach significantly improves text--mask consistency and segmentation accuracy on two public medical imaging datasets. Code is provided in the supplementary material.
null
['Medical image segmentation', 'Vision language models', 'Multimodal learning', 'Kolmogorov–Arnold Networks']
/pdf/e05da9928ef8ca6dc9c0d857b79e738dd17148dc.pdf
other topics in machine learning (i.e., none of the above)
/attachment/086874f5351d8b08c7774ba8b5507c5ac84f2171.zip
['ICLR.cc/2026/Conference/Submission25292/Authors']
MLZLdOwEpA
25,286
MLZLdOwEpA
AI Alignment with Provable Protection of Human Judgements
Reinforcement learning from human preference rankings forms the basis for training language models to be helpful and value-aligned. As these powerful AI systems are trained for increasingly high-stakes tasks, the risk of leaking sensitive human training data increases. However, the problem of protecting human preference data is complicated by the fact that reinforcement learning from human feedback is a multistage pipeline involving learning a reward function from human preferences, and subsequently training a language model policy from the learned rewards. To address these issues, we design algorithms for the task of alignment from preference feedback that provably avoid leaking human preference data in both the Bradley-Terry and Plackett-Luce models. Our algorithms satisfy $\epsilon$-DP while matching the minimax optimal sample complexity for the task of aligning a policy to human preference rankings. These results demonstrate that there is no inherent tradeoff between protecting the privacy of human preferences and efficient alignment with human values.
null
['Alignment', 'RLHF', 'performance guarantees', 'asymptotic match']
/pdf/92a748e9119bf34cfc22f518404542aef9271b9b.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25286/Authors']
U30FO4wae8
25,284
U30FO4wae8
Entropy-driven Fair and Effective Federated Learning
Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy. Nonetheless, the heterogeneity of edge devices often leads to inconsistent performance of the globally trained models, resulting in unfair outcomes among users. Existing federated fairness algorithms strive to enhance fairness but often fall short in maintaining the overall performance of the global model, typically measured by the average accuracy across all clients. To address this issue, we propose a novel algorithm that leverages entropy-based aggregation combined with model and gradient alignments to simultaneously optimize fairness and global model performance. Our method employs a bi-level optimization framework, where we derive an analytic solution to the aggregation probability in the inner loop, making the optimization process computationally efficient. Additionally, we introduce an innovative alignment update and an adaptive strategy in the outer loop to further balance the global model's performance and fairness. Theoretical analysis indicates that our approach guarantees convergence even in non-convex FL settings and demonstrates significant fairness improvements in generalized regression and strongly convex models. Empirically, our approach surpasses state-of-the-art federated fairness algorithms, ensuring consistent performance among clients while improving the overall performance of the global model.
We propose a fair FL algorithm that addresses the underexplored challenge of improving performance fairness while enhancing global accuracy, with theoretical and empirical demonstrations.
['fairness alignment', 'federated learning']
/pdf/6b387893a1e1da8333909721171641b166c97874.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/b2f2b80d686cd5396ea3480f1a828698746e1a5f.zip
['ICLR.cc/2026/Conference/Submission25284/Authors']
PzCrvhSarX
25,283
PzCrvhSarX
HomeSafeBench: A Benchmark for Embodied Vision-Language Models in Free-Exploration Home Safety Inspection
Embodied agents can identify and report safety hazards in home environments. Accurately evaluating their capabilities in home safety inspection tasks is crucial, but existing benchmarks suffer from two key limitations. First, they oversimplify safety inspection tasks by using textual descriptions of the environment instead of direct visual information, which hinders the accurate evaluation of embodied agents based on Vision-Language Models (VLMs). Second, they use a single, static viewpoint for environmental observation, which restricts the agents' free exploration and causes the omission of certain safety hazards, especially those that are occluded from a fixed viewpoint. To alleviate these issues, we propose HomeSafeBench, a benchmark with 12,900 data points covering five common home safety hazards: fire, electric shock, falling objects, trips, and child safety. HomeSafeBench provides dynamic first-person perspective images from simulated home environments, enabling the evaluation of VLM capabilities for home safety inspection. By allowing the embodied agents to freely explore the room, HomeSafeBench provides multiple dynamic perspectives in complex environments for a more thorough inspection. Our comprehensive evaluation of mainstream VLMs on HomeSafeBench reveals that even the best-performing model achieves an F1-score of only 10.23\%, demonstrating significant limitations in current VLMs. The models particularly struggle with identifying safety hazards and selecting effective exploration strategies. We hope HomeSafeBench will provide a valuable reference and support for future research on home safety inspection. Our dataset and code will be publicly available soon.
null
['Home Safety Inspection', 'Embodied Agent', 'Vision Language Model']
/pdf/0ca8c620e04a3eb10cce7b6073dbc6962cc10b99.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25283/Authors']
RuYwbd5xYa
25,282
RuYwbd5xYa
SCRAPL: Scattering Transform with Random Paths for Machine Learning
The Euclidean distance between differentiable wavelet scattering transform coefficients (known as paths) provides informative gradients for perceptual quality assessment of deep inverse problems in computer vision, speech, and audio processing. However, these transforms are computationally expensive when employed as differentiable loss functions for stochastic gradient descent due to their numerous paths, which significantly limits their use in neural network training. To address this problem, we propose ``Scattering transform with Random Paths for machine Learning'' (SCRAPL): a stochastic optimization scheme for efficient evaluation of multivariable scattering transforms. We implement SCRAPL for the joint time–frequency scattering transform (JTFS), which demodulates spectrotemporal patterns at multiple scales and rates, allowing a fine characterization of intermittent auditory textures. We apply SCRAPL to differentiable digital signal processing (DDSP), specifically, unsupervised sound matching of a granular synthesizer and the Roland TR-808 drum machine. We also propose an initialization heuristic based on importance sampling, which adapts SCRAPL to the perceptual content of the dataset, improving neural network convergence and evaluation performance. We make our audio samples available and provide SCRAPL as a Python package.
A stochastic optimization scheme for efficient perceptual quality assessment of deep inverse problems, implemented for differentiable joint time–frequency scattering, with applications to unsupervised sound matching of the Roland TR-808 drum machine.
['scattering transform', 'wavelets', 'stochastic optimization', 'ddsp', 'perceptual quality assessment']
/pdf/455213e4fdff77edc79ffb5719ed3403fdbdc52e.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission25282/Authors']
xalTjNXVHb
25,281
xalTjNXVHb
Where Redundancy Lives: Stage-Aware Block Saliency in Skip-Connected Models
Residual (skip-connected) architectures such as ResNets are widely used, yet the extent and structure of their inference-time redundancy remain unclear. We repurpose post-training block ablation as a diagnostic probe: we ablate residual blocks by replacing them with identity mappings, then measure the resulting accuracy drop on a small training ``probe'' slice, yielding a block-level saliency map that we evaluate out of sample on ImageNet. Across ResNet-50, our stage-aware analyses show that simple magnitude or energy proxies are weak or inconsistent predictors of block saliency, indicating that large activation does not imply importance; redundancy is better explained by low novelty relative to the skip path. We characterize structure using stage-wise distributions, and we assess practical trade-offs by one-shot identity replacement of low-saliency blocks with optional short finetuning, reporting realistic latency-accuracy behavior on CPU and GPU while preserving topology. The methodology is architecture-agnostic and readily extends to other modern skip-connected families (for example, ConvNeXt and ViT). These findings provide a simple, evidence-based way to localize redundancy and to guide architecture-preserving simplifications at inference.
null
['Residual networks', 'Post-training pruning', 'Latency', 'Model compression']
/pdf/246ca2052f6285d4f411b00e7a6015a2fc7082a6.pdf
other topics in machine learning (i.e., none of the above)
/attachment/2407fbb43851ac2da687846cbd7475c2386869ca.pdf
['ICLR.cc/2026/Conference/Submission25281/Authors']
YtBJHVbxf8
25,279
YtBJHVbxf8
HEX: Merging Heavy-Hitters and Expanders for Adaptive KV Cache Optimization in Long-Context Inference
Key–Value (KV) caching accelerates large-language model inference but grows linearly with sequence length, quickly exhausting GPU memory. Existing compression strategies such as quantization, pruning, or sparsification shrink this footprint, but often degrade performance. Most pruning methods discard crucial connections and disrupt information flow, while dynamic heuristics often lack theoretical basis. We propose HEX, a cache compression strategy that is both structurally efficient and adaptive. HEX constructs a sparse backbone using expander graphs with spectral guarantees on connectivity, and augments it with heavy-hitter and recent tokens to capture input-specific context. The selected entries are stored in full precision, while the remaining cache is quantized to retain information at low cost. The expander masks are precomputed and static, thus significantly reducing computational overhead and aiding sparse implementations. Experiments on GSM8k, CoQA, TruthfulQA, and LongBench across models of varying sizes show that HEX consistently outperforms existing methods at higher compression rates without retraining. These results illustrate how principled eviction layouts grounded in graph structure and input dynamics can yield stronger accuracy–efficiency trade-offs for long-context inference even for limited cache budgets.
HEX combines expander-graph sparsity with dynamic token selection and quantization to compress KV caches, achieving strong accuracy–efficiency trade-offs for long-context inference.
['Large Language Models', 'Key-Value Caching', 'Efficient Inference', 'Memory Optimization', 'KV Cache Compression', 'Structural Sparsity', 'Expander Graphs', 'Long Context Inference', 'Heavy-Hitters']
/pdf/277d5640e73139b6e5b2c962c43be882d3b3ba0f.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25279/Authors']
Vogxs8BzJS
25,274
Vogxs8BzJS
CABA: A Collusive Aggregation-Emergent Backdoor Attack in Federated Learning
Federated Learning (FL) has been shown to be vulnerable to backdoor attacks conducted by malicious clients. Although many studies have enhanced the stealthiness and durability of backdoors, the full potential of collusive attacks in FL remains underexplored. Existing collusive attacks typically adopt a strategy where each malicious client trains independently. These attacks inevitably embed backdoor features into the uploaded updates and make them susceptible to detection. To fully exploit the collaborative capabilities of malicious clients, we propose a novel collusive attack, named CABA (Collusive Aggregation-based Backdoor Attack), where the backdoor behavior emerges only during model aggregation. In CABA, multiple malicious clients jointly craft a set of updates that individually exhibit no backdoor characteristics, allowing them to bypass defense mechanisms. However, when aggregated, these updates manifest the backdoor in the global model. Extensive experiments demonstrate that our proposed attack can successfully bypass six state-of-the-art defense mechanisms, demonstrating superior stealth and attack efficacy compared to existing collusive approaches. Our research highlights the critical importance of developing defense mechanisms that can inspect the combined behavior of model updates after aggregation.
null
['Collusive Backdoor Attack', 'Federated Learning']
/pdf/46e2e33e68d0e01ade1992996aca8725809aab39.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25274/Authors']
BeLwO47iNn
25,270
BeLwO47iNn
A Function Centric Perspective on Flat and Sharp Minima
Flat minima are widely believed to correlate with improved generalisation in deep neural networks. However, this connection has proven more nuanced in recent studies, with both theoretical counterexamples and empirical exceptions emerging in the literature. In this paper, we revisit the role of sharpness in model performance, proposing that sharpness is better understood as a function-dependent property rather than a reliable indicator of poor generalisation. We conduct extensive empirical studies, from single-objective optimisation to modern image classification tasks, showing that sharper minima often emerge when models are regularised (e.g., via SAM, weight decay, or data augmentation), and that these sharp minima can coincide with better generalisation, calibration, robustness, and functional consistency. Across a range of models and datasets, we find that baselines without regularisation tend to converge to flatter minima yet often perform worse across all safety metrics. Our findings demonstrate that function complexity, rather than flatness alone, governs the geometry of solutions, and that sharper minima can reflect more appropriate inductive biases (especially under regularisation), calling for a function-centric reappraisal of loss landscape geometry.
We investigate flat and sharp minima through a function-centric lens; characterising global minima in single-objective optimisation and scaling to large-scale tasks, we find that sharp minima can, counterintuitively, improve both generalisation and safety.
['Flat Minima', 'Sharp Minima', 'Generalisation', 'Function', 'Robustness', 'Calibration', 'Safety']
/pdf/d709d634f6f3f9f9e6abb63113271495565ae0cb.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission25270/Authors']
e3XLWHFrnr
25,264
e3XLWHFrnr
From Text to Talk: Audio-Language Model Needs Non-Autoregressive Joint Training
Recent advances in large language models (LLMs) have attracted significant interest in extending their capabilities to multimodal scenarios, particularly for speech-to-speech conversational systems. However, existing multimodal models handling interleaved audio and text rely on autoregressive methods, overlooking that text depends on target-target relations whereas audio depends mainly on source-target relations. In this work, we propose Text-to-Talk (TtT), a unified audio-text framework that integrates autoregressive (AR) text generation with non-autoregressive (NAR) audio diffusion in a single Transformer. By leveraging the any-order autoregressive property of absorbing discrete diffusion, our approach provides a unified training objective for text and audio. To support this hybrid generation paradigm, we design a modality-aware attention mechanism that enforces causal decoding for text while allowing bidirectional modeling within audio spans, and further introduce three training strategies that reduce train-test discrepancies. During inference, TtT employs block-wise diffusion to synthesize audio in parallel while flexibly handling variable-length outputs. Extensive experiments across Audio-QA and ASR tasks demonstrate the effectiveness of our approach, with detailed ablation studies validating each proposed component. We will open-source our models, data and code to facilitate future research in this direction.
null
['Large Multimodal Models', 'Multi-token Prediction', 'Non-Autoregressive Learning']
/pdf/77f85376e5ef0aad208b40e86d4c896e89495109.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25264/Authors']
h0xG4JmGOP
25,261
h0xG4JmGOP
GDEGAN: Gaussian Dynamic Equivariant Graph Attention Network for Ligand Binding Site Prediction
Accurate prediction of the binding sites of a given protein, to which ligands can bind, is a critical step in structure-based computational drug discovery. Recently, Equivariant Graph Neural Networks (GNNs) have emerged as a powerful paradigm for binding site identification due to the large-scale availability of 3D protein structures via protein databases and AlphaFold predictions. State-of-the-art equivariant GNN methods implement dot-product attention, disregarding the variation in the chemical and geometric properties of the neighboring residues. To capture this variation, we propose GDEGAN (Gaussian Dynamic Equivariant Graph Attention Network), which replaces simple dot-product attention with adaptive kernels that recognize binding sites. The proposed attention mechanism captures variation in neighboring residues using statistics of their characteristic local feature distributions. Our mechanism dynamically computes neighborhood statistics at each layer, using local variance as an adaptive bandwidth parameter with learnable per-head temperatures, enabling each protein region to determine its own context-specific importance. Our model shows better predictive performance, outperforming existing methods with relative improvements of 37-66\% in DCC and 7-19\% in DCA success rates across the COACH420, HOLO4k, and PDBBind2020 datasets. These advances have direct application in accelerating protein-ligand docking by identifying potential binding sites for therapeutic target identification.
By recognizing that binding pockets have distinct statistical signatures, GDEGAN improves ligand binding site prediction by 37% and runs up to 20× faster than current methods at inference.
['equivariant gnns', 'protein ligand interaction', 'binding site identification', 'statistical attention']
/pdf/f36d94798ae3cbfc035572b8d7ffce9bb5f9bd89.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25261/Authors']
dGZYYishs0
25,260
dGZYYishs0
TopoGuide: A Finetuning Framework for Topologically-Consistent 3D Molecule Generation
Equivariant diffusion models can generate high-quality 3D molecular geometries but often struggle with chemical validity due to a lack of explicit guidance from the 2D molecular graph. While prior works have addressed this by adding graph-based information to the model's input, this often increases architectural complexity and slows inference. We propose a new finetuning framework that instills 2D topological awareness into pre-trained 3D generative models without altering their core architecture. Our method enforces consistency between the representations of a target 2D graph and a generated 3D structure within a shared embedding space, guided by a consistency loss. By applying our framework to state-of-the-art models, we demonstrate a significant improvement in topological accuracy and chemical validity while preserving the original model's high-quality geometry and inference efficiency.
null
['Molecule Generation', 'Diffusion Models', 'Equivariant Neural Networks', 'Drug Discovery']
/pdf/cf9d698d8c2fd2d6aa216ab7a5c70970f1f72d92.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25260/Authors']
QQp11zpm8M
25,258
QQp11zpm8M
Character Beyond Speech: Leveraging Role-Playing Evaluation in Large Audio Language Models via Reinforcement Learning
The advancement of multimodal large model technology has propelled the simulation of diverse characters in speech dialogue systems, establishing a novel interactive paradigm. Character attributes are manifested not only in textual responses but also through vocal features, with speech containing non-semantic information that is challenging to quantify. This poses significant difficulties in evaluating the character embodiment capabilities of role-playing agents. In response to these issues, we present the RoleJudge evaluation framework, which leverages audio large language models to systematically assess the alignment between speech and character across multiple modalities and dimensions. Furthermore, we introduce RoleChat, the first role-playing speech evaluation dataset, comprising both authentic speech samples and detailed reasoning annotations for evaluation. Utilizing this dataset, we implement a multi-stage training paradigm and incorporate standard alignment in reinforcement learning to mitigate reward misalignment during the optimization process. Experimental results on both accuracy and subjective assessment demonstrate that RoleJudge outperforms various baseline models, thereby validating the effectiveness of our multidimensional evaluation framework.
null
['Role-Playing Language Agents', 'Large Audio Language Models', 'Reinforcement Learning']
/pdf/49cf336bfa6ae012a5aeeb23ba14939ef0ad62e0.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25258/Authors']
cqNAjXUBOV
25,257
cqNAjXUBOV
Tables2Traces: Distilling Tabular Data to Improve LLM Reasoning in Healthcare
Large language models (LLMs) excel at reasoning when fine-tuned on curated text corpora, but many domains, such as medicine, primarily store knowledge in structured tabular data. Despite its richness, tabular data has been largely overlooked as a source of reasoning supervision. Interpreting such data requires structured, relational reasoning across features and outcomes, not just surface-level pattern matching. In practice, this mirrors clinical decision making, where doctors often compare patients with similar characteristics and reason about why their outcomes diverge. We introduce Tables2Traces, the first framework to enable improved reasoning from raw tabular data by generating contrastive, case-based reasoning traces for model fine-tuning. This establishes a new supervision paradigm: converting tabular records, traditionally used only for prediction, into structured reasoning signals that can serve as an effective new source of supervision for LLMs. Crucially, this paradigm is orthogonal to text-based QA supervision: rather than competing with curated corpora, it unlocks an abundant and low-cost modality that complements existing approaches. Using only cardiovascular patient records, Tables2Traces yields relative gains of 17.2% on in-domain MedQA questions and 8.4% out-of-domain, improving accuracy in 15 of 17 clinical categories. On MedMCQA, it achieves a 7.2% relative improvement and outperforms the base model in 17 of 21 specialties. These gains are driven by a lightweight, domain-agnostic pipeline that elicits structured reasoning via contrastive and counterfactual prompts. Compared to training on narrative patient descriptions, Tables2Traces generalizes more effectively across question types and medical specialties, showing that even limited tabular data can serve as a scalable and complementary source of reasoning supervision for LLMs.
We convert tabular clinical data into reasoning traces that improve LLM medical question answering across domains.
['large language models', 'tabular data', 'healthcare', 'medicine']
/pdf/ece73eadbb7d312ff9edc26b94ef3ddb0be07036.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25257/Authors']
rdf9BRHNql
25,253
rdf9BRHNql
TowerVision: Understanding and Improving Multilinguality in Vision-Language Models
Despite significant advances in vision-language models (VLMs), most existing work follows an English-centric design process, limiting their effectiveness in multilingual settings. In this work, we provide a comprehensive empirical study analyzing the impact of several multilingual design choices, such as training data composition, encoder selection, and text backbones. The result is TowerVision, a family of open multilingual VLMs for both image-text and video-text tasks, built upon the multilingual text-only model Tower+. TowerVision achieves competitive performance on multiple multilingual benchmarks and shows particular strength in culturally grounded tasks and multimodal translation. By incorporating visual and cultural context during fine-tuning, our models surpass existing approaches trained on substantially larger datasets, as demonstrated on ALM-Bench and Multi30K (image tasks) and ViMUL-Bench (video tasks). Alongside the models, we release VisionBlocks, a high-quality, curated vision-language dataset. Our findings highlight that multilingual vision-language training data substantially improves cross-lingual generalization---both from high-resource to underrepresented languages and vice versa---and that instruction-tuned LLMs are not always the optimal initialization point. To support further research, we publicly release all models, data, and training recipes.
We introduce TowerVision, a VLM supporting both image and video, with improved multilingual capabilities explored via several ablations on data, base models, and vision encoders.
['multilinguality', 'large language model', 'vision language models', 'multimodal models', 'image', 'video', 'cultural']
/pdf/1f559577292de0e5fa2bc621e877fff325aca1e2.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25253/Authors']
170GODIkgT
25,252
170GODIkgT
SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences
Speculative decoding is a widely used technique for accelerating inference in large language models (LLMs), but its performance degrades as input length grows, with significant drops even at moderate lengths. Yet, this early degradation has remained largely underexplored. We introduce SpecExtend, a drop-in enhancement that improves speculative decoding on long sequences without additional training. SpecExtend integrates efficient attention mechanisms such as FlashAttention and Hybrid Tree Attention to accelerate prefill and verification steps. To improve both draft accuracy and speed on long inputs without retraining, we propose Cross-model Retrieval, a novel KV cache eviction strategy that leverages the target model’s attention scores to dynamically select relevant context for the smaller draft model. Extensive evaluations show that SpecExtend accelerates speculative decoding by up to 2.84× on 16K-token long summarization and up to 3.86× on long reasoning, while preserving the short-input performance of state-of-the-art frameworks.
We propose SpecExtend, a drop-in enhancement that improves the performance of speculative decoding on long sequences without additional training.
['Efficient LLM', 'LLM Inference', 'Speculative Decoding', 'Long-context Inference']
/pdf/f5a61bd460e7eac4062df32c7c959658139fc749.pdf
generative models
/attachment/6c3536be6ff4b3a4f33edc58386bc7937538fbe8.zip
['ICLR.cc/2026/Conference/Submission25252/Authors']
GDA1yB6yDP
25,245
GDA1yB6yDP
Not Search, But Scan: Benchmarking MLLMs on Scan-Oriented Academic Paper Reasoning
With the rapid progress of multimodal large language models (MLLMs), AI already performs well at literature retrieval and certain reasoning tasks, serving as a capable assistant to human researchers, yet it remains far from autonomous research. The fundamental reason is that current work on scholarly paper reasoning is largely confined to a search-oriented paradigm centered on pre-specified targets, with reasoning grounded in relevance retrieval, which struggles to support researcher-style full-document understanding, reasoning, and verification. To bridge this gap, we propose ScholScan, a new benchmark for scholarly paper reasoning. ScholScan introduces a scan-oriented task setting that asks models to read and cross-check entire papers like human researchers, scanning the document to identify consistency issues. The benchmark comprises 1,800 carefully annotated questions drawn from 9 error families across 13 natural-science domains and 715 papers, and provides detailed annotations for evidence localization and reasoning traces, together with a unified evaluation protocol. We assess 15 models across 24 input configurations and conduct a fine-grained analysis of MLLM capabilities across error families. Across the board, retrieval-augmented generation (RAG) methods yield no significant improvements, revealing systematic deficiencies of current MLLMs on scan-oriented tasks and underscoring the challenge posed by ScholScan. We expect ScholScan to serve as a representative benchmark for the scan-oriented task paradigm.
We present ScholScan, a scan-oriented benchmark for full-paper scholarly reasoning that requires models to build a paper-level evidence view; spanning 1,800 questions from 715 papers, it exposes MLLM gaps and shows RAG to be ineffective.
['Multimodal Large Language Models', 'Academic Paper Reasoning', 'Scan-Oriented Reasoning']
/pdf/49cc34d84563f82a0411d2ea1c053215d0925474.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25245/Authors']
bld5GVRad0
25,243
bld5GVRad0
InfoBlend: Storing and Reusing KV Caches of Multimodal Information without Positional Restriction
The context caching technique is currently employed by prevailing serving platforms to accelerate Multimodal Large Language Model (MLLM) inference. However, this approach merely reuses the Key-Value (KV) cache of the initial prompt sequence, resulting in full KV cache recomputation even if the prefix differs slightly. This becomes particularly inefficient in the context of interleaved text and images, as well as multimodal retrieval-augmented generation. This paper proposes position-independent caching as a more effective approach for multimodal information management. We have designed and implemented a caching system, named InfoBlend, to address both system-level and algorithm-level challenges. InfoBlend stores the KV cache on local disks when receiving multimodal data, and computes and loads the KV cache in parallel during inference. To mitigate accuracy degradation, we incorporate an integrated reuse-and-recompute mechanism within the system. The experimental results demonstrate that InfoBlend can achieve up to a 54\% reduction in response time and a 2$\times$ improvement in throughput compared to existing context caching systems, while maintaining negligible or no accuracy loss.
The KV cache can be reused without positional restriction through partial recomputation.
['Multimodal Large Language Model', 'AI System', 'Position-Independent Caching']
/pdf/b0d06f97f9c6ad80b2f9594560c7ee8d676679a3.pdf
infrastructure, software libraries, hardware, systems, etc.
/attachment/7c1cdb642495362f339e608259469e73183144d6.zip
['ICLR.cc/2026/Conference/Submission25243/Authors']
WHVk2qoCIY
25,240
WHVk2qoCIY
Exposing Weak Links in Multi-Agent Systems under Adversarial Prompting
LLM-based agents are increasingly deployed in multi-agent systems (MAS). As these systems move toward real-world applications, their security becomes paramount. Existing research largely evaluates single-agent security, leaving a critical gap in understanding the vulnerabilities introduced by multi-agent design; moreover, existing evaluations lack unified frameworks and metrics that capture the rejection modes unique to MAS. We present SafeAgents, a unified and extensible framework for fine-grained security assessment of MAS. SafeAgents systematically exposes how design choices such as plan construction strategies, inter-agent context sharing, and fallback behaviors affect susceptibility to adversarial prompting. We introduce DHARMA, a diagnostic measure that helps identify weak links within multi-agent pipelines. Using SafeAgents, we conduct a comprehensive study across five widely adopted multi-agent architectures (centralized, decentralized, and hybrid variants) on four datasets spanning web tasks, tool use, and code generation. Our findings reveal that common design patterns carry significant vulnerabilities. For example, centralized systems that delegate only atomic instructions to sub-agents obscure harmful objectives, reducing robustness. Our results highlight the need for security-aware design in MAS. Code is available at https://anonymous.4open.science/r/SafeAgents/ .
We introduce SafeAgents, a framework for evaluating security vulnerabilities in multi-agent LLM systems, revealing that popular architectures contain significant security flaws stemming from design choices like autonomy levels and context sharing.
['Multi-agent systems', 'Vulnerability Attacks', 'Security']
/pdf/4d6aaacd05f8e26949a2377c2ded7905fe48cbd5.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25240/Authors']
CVqYCYpq75
25,238
CVqYCYpq75
Dem-HEC: High-Entropy Contrastive Fine-Tuning for Countering Natural Corruptions
Neural networks are highly susceptible to natural image corruptions such as noise, blur, and weather distortions, limiting their reliability in real-world deployment. Maintaining integrity against natural corruptions is critical because these distortions are a primary source of distribution shift, whether introduced intentionally (e.g., compression) or unintentionally (e.g., blur or weather artifacts). Through this work, we observe for the first time that such corruptions often collapse the network's internal feature space into a high-entropy state, causing predictions to rely on a small subset of fragile features. Inspired by this, we propose a simple yet effective entropy-guided fine-tuning framework, Dem-HEC, that strengthens corruption robustness while maintaining clean accuracy. Our method generates high-entropy samples within a bounded perturbation region to simulate corruption-induced uncertainty and aligns them with clean embeddings using a contrastive loss. In parallel, cross-entropy on both clean and high-entropy samples, combined with knowledge distillation from a teacher snapshot, ensures stable predictions. Dem-HEC is evaluated with numerous neural networks trained on multiple benchmark datasets, demonstrating consistent gains across diverse corruption types and severities (noise strengths), with strong transferability across backbones, including CNNs and Transformers. Our approach highlights entropy regularisation as a scalable pathway to bridging the gap between clean accuracy and real-world robustness.
null
['Corruption', 'Convolution', 'Transformer', 'Robustness', 'Explainability']
/pdf/a783c6effb58647d3f7a801d502180adcc642fb8.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25238/Authors']