From Classroom to Cutting-Edge Research: Agentic AI & LLM Search Agents Explained
By [Your Name] — BS-CS Student, FAST University
Published: March 2026 | 5 min read
Introduction
When I first started studying Artificial Intelligence at FAST University under Dr. Bilal Jan, concepts like search algorithms, agent types, and constraint satisfaction problems felt abstract. Then I read two research papers that completely changed how I see these topics. Suddenly, everything we study in class is not just theory — it is powering real, cutting-edge AI systems being built right now in 2025 and 2026.
In this blog, I will break down two papers I analyzed, explain their core ideas in simple language, and show you exactly how they connect to what we learn in our AI course.
Paper 1: "The Rise of Agentic AI" (2025)
What is the paper about?
This paper reviews the emerging field of Agentic AI — a major shift in how AI systems work. Traditional AI answers one question at a time. Agentic AI is different. It can plan, take actions, use tools, and complete multi-step goals — all on its own.
Think about it this way. A basic AI is like a calculator. You give it input, it gives you output. An Agentic AI is more like an employee. You give it a goal, and it figures out the steps, uses tools, makes decisions, and gets it done.
Key ideas from the paper:
The paper defines three core properties of Agentic AI systems:
Autonomy — The agent acts without constant human guidance. It decides what to do next based on its current state and goal.
Tool Use — Modern agents do not just generate text. They can search the web, write and run code, read files, send emails, and interact with APIs.
Multi-Agent Collaboration — Multiple AI agents can work together, each specializing in a different task, coordinating like a team.
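The three properties above can be sketched as a toy loop. Everything here (the agent tuples, the tool functions, the state dictionary) is my own illustration, not code from the paper:

```python
def search_web(state):
    # Tool use: a "tool" is just a function the agent may call.
    state["facts"] = [f"note about {state['goal']}"]
    return state

def write_answer(state):
    # A second specialist turns gathered facts into a final answer.
    if "facts" in state:
        state["answer"] = "; ".join(state["facts"])
        state["done"] = True
    return state

def run_team(agents, goal, max_steps=10):
    # Autonomy: the loop decides on its own when the goal is met.
    state = {"goal": goal, "done": False}
    for _ in range(max_steps):
        if state["done"]:
            return state
        # Multi-agent collaboration: each specialist takes a turn.
        for name, tool in agents:
            state = tool(state)
    return state

agents = [("researcher", search_web), ("writer", write_answer)]
result = run_team(agents, "agentic AI")
print(result["answer"])  # note about agentic AI
```

The point is the shape, not the contents: the loop runs unattended, calls tools, and stops itself once the goal state is reached.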
How does this connect to our AI course?
This is where it gets exciting. In class, we study agent types:
Simple Reflex Agent
Model-Based Reflex Agent
Goal-Based Agent
Utility-Based Agent
Learning Agent
The paper maps directly to this! Agentic AI systems are essentially Goal-Based and Utility-Based agents scaled up with language models. The "planning" that Agentic AI does is exactly what a Goal-Based agent does — it looks at the current state, defines what actions are possible, and selects the path toward the goal.
The multi-agent collaboration the paper describes is exactly the Multi-Agent environment we classify in our environment dimensions table. When I was doing Part A of Assignment 1 — classifying the GB flood rescue robot environment — I realized the robot operates in a Multi-Agent environment because other robots share its path. That is Agentic AI in a disaster zone.
What I found most interesting
What struck me most was the section on challenges. The paper is honest that Agentic AI systems still struggle with:
Hallucination during multi-step planning
Getting stuck in loops
Not knowing when to stop and ask a human
These are basically the same problems we discuss in class when comparing complete vs incomplete search algorithms. An agent that loops forever is like DFS going down an infinite path.
Paper 2: "A Survey of LLM-based Deep Search Agents" (2026)
What is the paper about?
This is one of the most relevant papers I have ever read as an AI student. It surveys how Large Language Models are now being used as intelligent search agents — not just to answer questions, but to search deeply and iteratively to find answers to complex questions.
A normal search engine gives you ten blue links. An LLM-based Deep Search Agent does something far more powerful. It:
Breaks your complex question into smaller sub-questions
Searches for each sub-question separately
Reads and reasons over the results
Generates follow-up searches based on what it learned
Synthesizes everything into one final, comprehensive answer
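The five steps above can be sketched roughly as one loop. Here `search` is a hypothetical stub standing in for a real retrieval call, and the decomposition is hard-coded rather than produced by an LLM:

```python
def search(query):
    # Hypothetical retrieval stub: returns one "document" per query.
    return f"evidence for: {query}"

def deep_search(question, sub_questions, max_rounds=3):
    findings = []
    queue = list(sub_questions)          # 1. break into sub-questions
    for _ in range(max_rounds):
        if not queue:
            break
        next_round = []
        for sub_q in queue:              # 2. search each one separately
            doc = search(sub_q)
            findings.append(doc)         # 3. read/reason over the result
            if "evidence" not in doc:    # 4. follow-up search if it was useless
                next_round.append(f"rephrased: {sub_q}")
        queue = next_round
    return " | ".join(findings)          # 5. synthesize one final answer

answer = deep_search(
    "Why do LLM agents loop?",
    ["What causes loops?", "How are loops detected?"],
)
```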
How does this connect to our AI course?
This is the most direct connection to our course content I have ever seen in a real paper.
Connection to Search Algorithms:
The way these LLM search agents work is almost identical to Iterative Deepening Search (IDDFS) — one of the algorithms we compare in class. Just like IDDFS starts at depth 1 and goes deeper with each iteration, the LLM search agent starts with a simple search, then goes deeper based on what it finds.
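To make the analogy concrete, here is textbook iterative deepening on a tiny hand-made graph; each iteration re-runs a depth-limited DFS one level deeper:

```python
def depth_limited_dfs(graph, node, goal, limit):
    # Plain DFS that refuses to go deeper than `limit` edges.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_dfs(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(graph, start, goal, max_depth=10):
    # Restart from depth 0 and deepen until the goal is found.
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["F"]}
print(iddfs(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

The depth limit is also what saves DFS from the infinite-path problem mentioned earlier: the search can never loop forever at any single depth.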
The paper also describes how the agent uses a heuristic to decide which sub-questions are most worth exploring — exactly like A* Search uses a heuristic h(n) to prioritize which nodes to expand.
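For comparison, here is a minimal A* showing how the heuristic orders the frontier by f(n) = g(n) + h(n); the graph, costs, and heuristic values are invented for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    # Frontier entries: (f, g, node, path); heapq pops the lowest f first.
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 4, "B": 1, "G": 0}  # admissible guesses of distance to G
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'B', 'G'] 5
```

A search agent ranking sub-questions by expected usefulness is doing the same thing: expand the most promising option first, as judged by a heuristic.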
Connection to Agent Types:
The search agent described in this paper is a perfect example of a Goal-Based Agent. It has:
A clear goal (answer the user's complex question)
Knowledge of current state (what has been found so far)
Actions (search, read, reason, synthesize)
A plan (the order of sub-searches)
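Those four ingredients can be written down as a small data structure; the class and field names are mine, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class GoalBasedSearchAgent:
    goal: str                                   # the user's complex question
    state: list = field(default_factory=list)   # what has been found so far
    actions: tuple = ("search", "read", "reason", "synthesize")
    plan: list = field(default_factory=list)    # order of sub-searches

agent = GoalBasedSearchAgent(
    goal="Compare IDDFS and A* for web research",
    plan=["define IDDFS", "define A*", "compare"],
)
print(agent.actions[0])  # search
```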
Connection to CSPs:
Interestingly, some of the search agents in the paper use constraint-like logic to decide when to stop searching. They have constraints like "search budget = 10 queries" and "confidence threshold = 80%." This is very similar to the battery and risk constraints in the CSP formulation we did in Assignment 1 Part C.
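Those two stopping constraints can be made explicit in a few lines; the budget and threshold come from the paper's example, but the fixed per-query confidence gain below is an illustrative stand-in for a real confidence estimate:

```python
SEARCH_BUDGET = 10          # constraint: at most 10 queries
CONFIDENCE_THRESHOLD = 80   # constraint: stop once confidence >= 80%

def run_until_constrained(gain_per_query=15):
    # Keep searching until EITHER constraint is violated, like a CSP
    # pruning branches that break a constraint.
    queries, confidence = 0, 0
    while queries < SEARCH_BUDGET and confidence < CONFIDENCE_THRESHOLD:
        queries += 1
        confidence = min(100, confidence + gain_per_query)
    return queries, confidence

print(run_until_constrained())   # (6, 90) -- confidence threshold hit first
print(run_until_constrained(5))  # (10, 50) -- budget exhausted first
```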
What I found most interesting
The paper describes a concept called "search reflection" — where the agent evaluates its own search results and decides if it needs to search differently. This is like Simulated Annealing in our course. Just like SA accepts a worse solution temporarily to escape a local optimum, the search agent sometimes abandons a promising search path and tries a completely different approach to find a better answer.
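The SA analogy rests on its acceptance rule, which sometimes keeps a worse candidate with probability exp(-delta / T). Here is that rule in isolation, with made-up costs:

```python
import math
import random

def accept(current_cost, candidate_cost, temperature):
    # Better candidates are always accepted; worse ones are accepted
    # with probability exp(-delta / T), so "bad moves" still happen.
    if candidate_cost <= current_cost:
        return True
    delta = candidate_cost - current_cost
    return random.random() < math.exp(-delta / temperature)

random.seed(0)
# At high temperature, worse moves are accepted often...
hot = sum(accept(10, 12, temperature=50.0) for _ in range(1000))
# ...at low temperature, almost never.
cold = sum(accept(10, 12, temperature=0.1) for _ in range(1000))
print(hot > cold)  # True
```

This is the escape hatch from local optima, and it mirrors a search agent abandoning a locally promising query to try a different direction.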
My NotebookLM Experience
For this assignment, I used Google NotebookLM to help me understand both papers more deeply. Here is what I found:
When I read the papers manually first, I understood the surface-level ideas. But when I uploaded them to NotebookLM and started asking questions, I discovered connections I had completely missed.
For example, I asked NotebookLM: "How does the search reflection in Paper 2 relate to local optima problems?" — and the response made me realize that both papers are really about the same fundamental AI challenge: how do intelligent systems avoid getting stuck?
Agentic AI avoids getting stuck by having multiple agents with different specializations. LLM Search Agents avoid getting stuck by reflecting and re-searching. Simulated Annealing avoids getting stuck by accepting bad moves temporarily. They are all solving the same problem at different scales.
This was my biggest personal insight from reading and using NotebookLM together.
Summary: Course Connections at a Glance
Paper Concept → Our Course Topic
Agentic AI planning → Goal-Based Agents
Multi-agent coordination → Multi-Agent Environments
Iterative sub-question search → Iterative Deepening Search
Heuristic-guided search priority → A* Search Algorithm
Search budget constraints → CSP Constraints
Search reflection & re-routing → Simulated Annealing
Partial observability handling → Model-Based Reflex Agent
Conclusion
These two papers taught me that everything we study in our AI course is not just textbook theory. It is the foundation of systems being built and deployed right now. The next time you implement A* or model a CSP, remember that the same ideas are inside the most powerful AI agents in the world today.
If you are a CS student, I strongly recommend reading both papers. Start with "The Rise of Agentic AI" for the big picture, then read the LLM Search Agents survey to see how search algorithms come alive inside language models.
And use NotebookLM — it genuinely changes how you understand research papers.
Thanks for reading! This blog was written as part of my AI course assignment at FAST University under Dr. Bilal Jan. Feel free to connect with me on Hashnode and Dev.to.
Tags: #AgenticAI #ArtificialIntelligence #SearchAlgorithms #LLM #CSP #MachineLearning #FAST
📌 Video: Watch my 2-minute explanation of these papers here → [Your YouTube Unlisted Link]
📌 NotebookLM: https://notebooklm.google.com/notebook/87cda8ec-4139-4453-b4f0-9d50748438e9