Assessing Reproducibility in Evolutionary Computation: A Case Study using Human- and LLM-based Assessment
arXiv:2602.07059v1 Announce Type: cross
Abstract: Reproducibility is an important requirement in evolutionary computation, where results largely depend on computational experiments. In practice, reproducibility relies on how algorithms, experimental protocols, and artifacts are documented and shared. Despite growing awareness, there is still limited empirical evidence on the actual reproducibility levels of published work in the field. In this paper, we study the reproducibility practices in papers published in the Evolutionary Combinatorial Optimization and Metaheuristics track of the Genetic and Evolutionary Computation Conference over a ten-year period. We introduce a structured reproducibility checklist and apply it through a systematic manual assessment of the selected corpus. In addition, we propose RECAP (REproducibility Checklist Automation Pipeline), an LLM-based system that automatically evaluates reproducibility signals from paper text and associated code repositories. Our analysis shows that papers achieve an average completeness score of 0.62, and that 36.90% of them provide additional material beyond the manuscript itself. We demonstrate that automated assessment is feasible: RECAP achieves substantial agreement with human evaluators (Cohen's κ of 0.67). Together, these results highlight persistent gaps in reproducibility reporting and suggest that automated tools can effectively support large-scale, systematic monitoring of reproducibility practices.
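
The two headline numbers in the abstract are both simple statistics: a completeness score is the fraction of checklist items a paper satisfies, and Cohen's κ corrects raw human-vs-RECAP agreement for the agreement expected by chance. A minimal sketch in Python follows; the checklist items, ratings, and function names are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch (hypothetical checklist items and ratings, not the
    # paper's implementation) of the two statistics reported above.

    def completeness_score(checklist: dict[str, bool]) -> float:
        """Fraction of reproducibility checklist items a paper satisfies."""
        return sum(checklist.values()) / len(checklist)

    def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(rater_a)
        labels = set(rater_a) | set(rater_b)
        # Observed agreement: fraction of items both raters labeled identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: product of the raters' marginal label frequencies.
        p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical checklist for one paper, and item-level judgments by a
    # human assessor and by an automated assessor such as RECAP.
    paper = {"code_available": True, "data_available": True,
             "parameters_reported": True, "seeds_reported": False,
             "statistical_tests": False}
    human = ["yes", "yes", "yes", "no", "no"]
    auto  = ["yes", "yes", "no",  "no", "no"]

    print(f"completeness = {completeness_score(paper):.2f}")  # 0.60
    print(f"kappa        = {cohens_kappa(human, auto):.2f}")  # ~0.62

Under the common Landis-Koch interpretation, the paper's reported κ of 0.67 falls in the 0.61-0.80 "substantial agreement" band.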