arxiv_cs_lg — February 10, 2026


Enhancing Bandit Algorithms with LLMs for Time-varying User Preferences in Streaming Recommendations

bandit-algorithms · large-language-models · streaming-recommendations · reinforcement-learning · time-series-analytics


Original Content

arXiv:2602.08067v1 Announce Type: new Abstract: In real-world streaming recommender systems, user preferences evolve dynamically over time. Existing bandit-based methods treat time merely as a timestamp, neglecting its explicit relationship with user preferences and leading to suboptimal performance. Moreover, online learning methods often suffer from inefficient exploration-exploitation during the early online phase. To address these issues, we propose HyperBandit+, a novel contextual bandit policy that integrates a time-aware hypernetwork to adapt to time-varying user preferences and employs a large language model-assisted warm-start mechanism (LLM Start) to enhance exploration-exploitation efficiency in the early online phase. Specifically, HyperBandit+ leverages a neural network that takes time features as input and generates parameters for estimating time-varying rewards by capturing the correlation between time and user preferences. Additionally, the LLM Start mechanism employs multi-step data augmentation to simulate realistic interaction data for effective offline learning, providing warm-start parameters for the bandit policy in the early online phase. To meet real-time streaming recommendation demands, we adopt low-rank factorization to reduce hypernetwork training complexity. Theoretically, we rigorously establish a sublinear regret upper bound that accounts for both the hypernetwork and the LLM warm-start mechanism. Extensive experiments on real-world datasets demonstrate that HyperBandit+ consistently outperforms state-of-the-art baselines in terms of accumulated rewards.
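To make the core mechanism concrete, the following is a minimal sketch of a time-aware hypernetwork with a low-rank output factorization, as described in the abstract: a small network maps time features to the parameters of a linear reward estimator, and the bandit policy then scores arms with those generated parameters. All names, dimensions, and the network architecture here are illustrative assumptions, not the authors' implementation of HyperBandit+.

```python
import numpy as np

rng = np.random.default_rng(0)

d_time = 4    # time-feature dimension (e.g., an hour-of-day encoding)
d_ctx = 8     # arm/context feature dimension
d_hid = 16    # hidden width of the hypernetwork
rank = 2      # low-rank factor, rank << d_ctx, to cut training cost

# Hypernetwork weights: time features -> hidden -> low-rank factors.
W1 = rng.normal(scale=0.1, size=(d_hid, d_time))
U = rng.normal(scale=0.1, size=(d_ctx, rank))    # shared left factor
W2 = rng.normal(scale=0.1, size=(rank, d_hid))   # generates right factor

def reward_params(time_feat):
    """Generate time-varying linear reward parameters from time features."""
    h = np.tanh(W1 @ time_feat)   # hidden representation of the time step
    theta = U @ (W2 @ h)          # low-rank parameter generation
    return theta                  # shape: (d_ctx,)

def estimate_reward(theta, arm_feat):
    """Linear reward estimate for one arm under the current time."""
    return float(theta @ arm_feat)

# Example: greedily pick the arm with the highest estimated reward now.
time_feat = rng.normal(size=d_time)
arms = rng.normal(size=(5, d_ctx))
theta = reward_params(time_feat)
best_arm = int(np.argmax([estimate_reward(theta, a) for a in arms]))
```

In the paper's setting this greedy step would be replaced by a contextual-bandit exploration rule, and the hypernetwork weights would be warm-started from LLM-augmented offline interactions rather than initialized randomly as above.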