Back to list
Value-Conflict Diagnostics Reveals Widespread Alignment Faking in Language Models
Value-Conflict Diagnostics Reveal Widespread Alignment Faking in Language Models
Translated: 2026/4/24 20:14:50
Japanese Translation
arXiv:2604.20995v1 Announce Type: new
Abstract: "Alignment faking", whereby a model appears compliant with developer policy under monitoring but reverts to its own preferences when unobserved, is a troubling yet poorly understood phenomenon, partly because current diagnostic tools remain limited. Existing diagnostics depend on highly toxic and unambiguously harmful scenarios, causing most models to refuse engagement immediately. Consequently, models never deliberate on developer policy, monitoring conditions, or the consequences of non-compliance, rendering these diagnostic approaches fundamentally incapable of detecting alignment faking propensity.
To investigate this phenomenon, we introduce VLAF, a diagnostic framework based on the hypothesis that alignment faking is most likely when developer policy conflicts with a model's strongly held values. VLAF employs morally unambiguous scenarios to probe such conflicts across diverse moral frameworks, bypassing refusal behavior while preserving meaningful deliberative stakes.
Using VLAF, we find that alignment faking is substantially more prevalent than previously reported, occurring even in models with as few as 7 billion parameters—for instance, olmo2-7b-instruct exhibits alignment faking in 37% of cases. Finally, we demonstrate that oversight conditions induce activation shifts along a single direction in representation space. This implies that the behavioral divergence driving alignment faking can be captured by a single contrastive steering vector, which we leverage for lightweight, inference-time mitigation. We further exploit this mechanism to develop a data-free mitigation method with minimal computational overhead, achieving relative reductions in alignment faking of 85.8%, 94.0%, and 57.7% on olmo2-7b-instruct, olmo2-13b-instruct, and qwen3-8b, respectively.
Original Content
arXiv:2604.20995v1 Announce Type: new
Abstract: Alignment faking, where a model behaves aligned with developer policy when monitored but reverts to its own preferences when unobserved, is a concerning yet poorly understood phenomenon, in part because current diagnostic tools remain limited. Prior diagnostics rely on highly toxic and clearly harmful scenarios, causing most models to refuse immediately. As a result, models never deliberate over developer policy, monitoring conditions, or the consequences of non-compliance, making these diagnostics fundamentally unable to detect alignment faking propensity. To support study of this phenomenon, we first introduce VLAF, a diagnostic framework grounded in the hypothesis that alignment faking is most likely when developer policy conflicts with a model's strongly held values. VLAF uses morally unambiguous scenarios to probe this conflict across diverse moral values, bypassing refusal behavior while preserving meaningful deliberative stakes. Using VLAF, we find that alignment faking is substantially more prevalent than previously reported, occurring in models as small as 7B parameters - with olmo2-7b-instruct faking alignment in 37% of cases.Finally, we show that oversight conditions induce activation shifts that lie along a single direction in representation space. This means the behavioral divergence driving alignment faking can be captured by a single contrastive steering vector, which we exploit for lightweight inference-time mitigation. Finally, we exploit this for mitigation that requires no labeled data and minimal computational overhead, achieving relative reductions in alignment faking of 85.8%, 94.0%, and 57.7% on olmo2-7b-instruct, olmo2-13b-instruct, and qwen3-8b respectively.