arxiv_cs_lg February 10, 2026


Tighter Information-Theoretic Generalization Bounds via a Novel Class of Change of Measure Inequalities

generalization-bounds, information-theory, change-of-measure, f-divergence, machine-learning


Original Content

arXiv:2602.07999v1 Announce Type: cross Abstract: In this paper, we propose a novel class of change of measure inequalities via a unified framework based on the data processing inequality for $f$-divergences, which is surprisingly elementary yet powerful enough to yield tighter inequalities. We provide change of measure inequalities in terms of a broad family of information measures, including $f$-divergences (with Kullback-Leibler divergence and $\chi^2$-divergence as special cases), R\'enyi divergence, and $\alpha$-mutual information (with maximal leakage as a special case). We then embed these inequalities into the analysis of generalization error for stochastic learning algorithms, yielding novel and tighter high-probability information-theoretic generalization bounds, while also recovering several best-known results via simplified analyses. A key advantage of our framework is its flexibility: it readily adapts to a range of settings, including the conditional mutual information framework, PAC-Bayesian theory, and differential privacy mechanisms, for which we derive new generalization bounds.
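As background for the framework the abstract describes (this is the standard data processing inequality, not the paper's new inequalities): applying any stochastic kernel to two distributions can only shrink the f-divergence between them. A minimal numerical sketch with illustrative distributions, using the KL and $\chi^2$ divergences named in the abstract as special cases:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence between discrete distributions p and q
    return float(np.sum(p * np.log(p / q)))

def chi2(p, q):
    # chi-squared divergence, another f-divergence special case
    return float(np.sum((p - q) ** 2 / q))

# Two illustrative distributions on a 3-point alphabet
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

# A row-stochastic channel K mapping the 3-point input to a 2-point output
K = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])

# Push both distributions through the channel
pK = p @ K
qK = q @ K

# Data processing inequality: divergence can only decrease under K
assert kl(pK, qK) <= kl(p, q)
assert chi2(pK, qK) <= chi2(p, q)
```

The change-of-measure inequalities the paper derives are obtained by choosing the kernel and the f-divergence appropriately in this data-processing step; the code above only checks the underlying monotonicity property on a toy example.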