dev_to — March 21, 2026


OpenCode AI Agent Setup: Production-Ready Workflow Guide

Tags: opencode, ai-agent, workflow, dev-tools, coding-assistant


Original Content

This article was originally published on BuildZn.

We’ve all been there: staring at a codebase, needing to implement a small feature, refactor a troublesome function, or even just set up boilerplate. The traditional README.md for most new tools, especially in the rapidly evolving AI space, often feels like it's written for a different galaxy. It gets you from zero to "hello world" but leaves you stranded when it comes to integrating it into your messy, real-world project. I’m talking about actual, immediate productivity gains for seasoned developers, not just theoretical potential. This post cuts through the noise to deliver a deeply practical OpenCode AI agent setup guide.

The promise of AI in coding has been around for a while, but it's only recently that open-source AI coding assistant tools have matured enough to be genuinely useful. I’m a senior developer, and frankly, I'm skeptical of anything that claims to "revolutionize" my workflow without concrete examples and demonstrable gains. OpenCode is one of the few open-source AI agents that genuinely caught my eye. It's not just another autocomplete tool; it's an agentic system capable of understanding complex tasks, breaking them down, interacting with your codebase, and even executing commands.

The problem, as I mentioned, isn't the lack of these tools, but the lack of practical, opinionated guides for integrating them into a professional AI dev workflow. Many developers are still manually tweaking boilerplate, slogging through repetitive refactors, or writing unit tests from scratch. This is where OpenCode, when properly configured, shines. It addresses the real, daily friction points we encounter. My goal here is to give you a pragmatic OpenCode AI agent setup guide that goes beyond the basics, focusing on turning this powerful tool into a genuinely production-ready part of your toolkit.

Before we dive into the nuts and bolts of the OpenCode AI agent setup, let’s quickly demystify its core architecture.
Understanding these concepts is crucial for effective configuration and troubleshooting. OpenCode operates on an agentic loop, which means it doesn't just respond to a single prompt; it plans, acts, observes, and refines its approach.

At its heart, OpenCode consists of several key components:

- **Orchestrator:** This is the brain. It takes your high-level goal, breaks it down into sub-tasks, and decides which tools to use. It manages the overall execution flow.
- **Models:** You can plug in various Large Language Models (LLMs), both proprietary (OpenAI, Anthropic) and local open-source models (Llama, Mixtral via Ollama or vLLM). The choice of model heavily influences performance, cost, and output quality. For a robust coding agent integration, a capable model like GPT-4 or Claude 3 is often preferred for complex tasks due to their larger context windows and reasoning abilities.
- **Tools:** These are the agent's "hands and feet." OpenCode ships with a suite of built-in tools for common developer tasks:
  - File system access: reading, writing, and creating files and directories.
  - Code execution: running shell commands (e.g., npm install, pytest, eslint).
  - Git integration: staging, committing, checking out branches.
  - Linting/testing: running project-specific linters or test suites to validate changes.

  You can also extend OpenCode with custom tools tailored to your project’s specific needs (e.g., interacting with an internal API, deploying to a staging environment).
- **Memory:** OpenCode needs to remember past interactions, code changes, and observations to maintain context. This memory can range from simple conversation history to more advanced vector databases for retrieving relevant code snippets or documentation. This is critical for longer-running tasks and for maintaining a consistent AI dev workflow.
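To make the cycle concrete before we configure anything, here is a deliberately minimal Python sketch of a plan-act-observe-refine loop. Everything in it (`run_agent`, the stub model, the stub tools) is an illustrative stand-in, not OpenCode's actual API; it only shows the shape of the loop the orchestrator runs.

```python
# A toy plan-act-observe-refine loop. run_agent, stub_model, and stub_tools
# are hypothetical names for illustration only -- not OpenCode's real API.

def run_agent(goal, model, tools, max_iterations=15):
    """Drive the plan -> act -> observe -> refine cycle until validated."""
    memory = []  # simplest possible memory: a transcript of actions and results
    plan = model({"role": "plan", "goal": goal})
    for _ in range(max_iterations):
        action = model({"role": "act", "plan": plan, "memory": memory})
        observation = tools[action["tool"]](*action["args"])   # act
        memory.append((action, observation))                   # observe
        if observation.get("done"):                            # task validated
            return memory
        plan = model({"role": "refine", "last": observation})  # refine the plan
    return memory

# Stub "LLM": always proposes running the test suite; stub tool reports success.
def stub_model(request):
    if request["role"] == "act":
        return {"tool": "run_tests", "args": []}
    return "run the test suite until it passes"

stub_tools = {"run_tests": lambda: {"done": True, "output": "2 passed"}}

transcript = run_agent("Refactor calculate_sum in my_module.py", stub_model, stub_tools)
print(len(transcript))  # the loop exits after a single validated step
```

A real orchestrator adds tool schemas, validation, and error handling on top of this, but the thinking-acting-validating output OpenCode prints in your terminal follows this same basic cycle.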
When you give OpenCode a task, say "Implement a new API endpoint for user profiles," the orchestrator will:

1. **Plan:** Formulate a step-by-step approach (e.g., "identify relevant files," "create new route," "add controller logic," "write tests").
2. **Act:** Use its tools (e.g., read routes.py, create profile_controller.py).
3. **Observe:** Execute commands (e.g., run tests, check linting) and analyze the output.
4. **Refine:** Based on observations (e.g., "tests failed," "linting errors"), adjust the plan and repeat the cycle until the task is complete and validated.

Understanding this loop helps you debug when things go wrong and optimize your prompts to guide the agent more effectively. It’s the foundational concept behind maximizing developer productivity AI with OpenCode.

Alright, enough theory. Let's get our hands dirty with a practical OpenCode AI agent setup. I'll walk you through setting it up locally and integrating it into an existing project.

First things first, ensure you have these installed:

- **Python 3.9+:** Essential for OpenCode itself. I recommend using pyenv or conda for environment management.
- **Docker:** While not strictly mandatory for all OpenCode functions, it's invaluable for running local LLMs (like Ollama) or ensuring a consistent execution environment for certain tools.
- **API keys:** If you plan to use proprietary models (which I highly recommend for initial testing and complex tasks), you'll need API keys for providers like OpenAI, Anthropic, or Google.

Getting OpenCode installed is straightforward using pip. I always recommend installing into a virtual environment to avoid dependency conflicts:

```bash
python3 -m venv opencode_env
source opencode_env/bin/activate
pip install opencode-agent
```

**Configuration (.env and config.yaml):** OpenCode relies on environment variables for sensitive information (API keys) and a config.yaml file for defining its operational parameters.
Create a .env file in your project root where you'll run OpenCode:

```bash
# .env file in your project root
OPENAI_API_KEY="sk-YOUR_OPENAI_KEY_HERE"
ANTHROPIC_API_KEY="sk-YOUR_ANTHROPIC_KEY_HERE"

# If using a local model via Ollama:
OLLAMA_BASE_URL="http://localhost:11434"

# Specify a preferred default model
OPENCODE_DEFAULT_MODEL="gpt-4o"  # or "claude-3-opus-20240229", or "llama3" if local
```

Pro-tip: For production environments or team setups, use a proper secrets management solution instead of .env files directly in source control. This is just for local dev.

Create an opencode_config.yaml file (you can name it anything, but this is a good convention) in the same directory. This is where you configure models, tools, and general agent behavior:

```yaml
# opencode_config.yaml
models:
  - id: gpt-4o
    provider: openai
    name: gpt-4o
    api_key_env: OPENAI_API_KEY
  - id: claude-3-opus
    provider: anthropic
    name: claude-3-opus-20240229
    api_key_env: ANTHROPIC_API_KEY
  - id: llama3
    provider: ollama
    name: llama3  # Ensure 'llama3' is pulled in Ollama: `ollama pull llama3`
    base_url_env: OLLAMA_BASE_URL

agents:
  default:
    model: gpt-4o        # Set your preferred default agent model
    max_iterations: 15   # Prevent infinite loops
    temperature: 0.2
    # Define the tools available to this agent
    tools:
      - id: filesystem
      - id: shell
      - id: git
      - id: python_interpreter  # Useful for quick code-execution snippets

  # Optional: define specific agent types for different tasks
  refactor_agent:
    model: gpt-4o
    temperature: 0.1
    max_iterations: 10
    tools:
      - id: filesystem
      - id: shell
      - id: git
      - id: eslint_linter  # Example of a custom or project-specific tool

  test_writer_agent:
    model: claude-3-opus
    temperature: 0.3
    max_iterations: 8
    tools:
      - id: filesystem
      - id: shell
      - id: python_pytest  # Example of another custom tool
```

Personal experience: I found that defining specific agents for different tasks (e.g., refactor_agent, test_writer_agent) with tailored models and toolsets significantly improved performance and reduced token usage. A refactoring task benefits from a lower temperature (more deterministic), while test generation might benefit from slightly higher creativity. This fine-grained AI dev workflow configuration is where OpenCode truly shines for seasoned developers.

Navigate to your existing project's root directory in your terminal. This is crucial because OpenCode, by default, operates within the current working directory, granting it access to your files and the ability to execute commands in your project's context.

Let's run a simple task. For this example, ensure you have some Python code in your directory:

```python
# Example: a file 'my_module.py' in your current directory
def calculate_sum(a, b):
    return a + b
```

```bash
# Run OpenCode with a task.
# Use the --config flag to specify your config file.
# Use the --agent flag to pick a particular agent, or omit it for the default.
opencode --config opencode_config.yaml --agent refactor_agent \
  "Refactor the calculate_sum function in my_module.py to handle an arbitrary number of arguments and add a docstring. Make sure to update its callers if necessary. Provide a git diff."
```

OpenCode will now start its agentic loop: planning, reading files, proposing changes, potentially running tests (if configured), and finally presenting a git diff of its suggested changes. You'll observe it thinking, acting, and validating its steps in real time in your terminal. This immediate feedback loop is critical for a productive coding agent integration.

Even with a robust OpenCode AI agent setup guide, you'll encounter issues. Here are some of the most common ones I’ve run into and their solutions.

**API_KEY_NOT_FOUND or AuthenticationError**

Error message example: `Error: Model 'gpt-4o' failed to initialize. Reason: Missing API key. Ensure OPENAI_API_KEY is set in your environment or .env file.`

Problem: OpenCode can't find the necessary API key for your chosen LLM.
Fix:

- Double-check your .env file for typos in the key name (e.g., OPENAI_API_KEY vs OPENAI_APIKEY).
- Ensure the .env file is in the same directory from which you are running the opencode command.
- If you're using a different environment variable name in your config.yaml (the api_key_env field), make sure it matches.
- Verify that the API key itself is correct and hasn't expired.

**MODEL_NOT_AVAILABLE or InvalidModelError**

Error message example: `Error: Model 'llama3' not found or not available. Please check model configuration or ensure Ollama server is running and model is pulled.`

Problem: The specified LLM is either misspelled, not accessible, or not pulled/running.

Fix:

- Proprietary models: verify the name in config.yaml matches the provider's exact model identifier (e.g., gpt-4o vs gpt-4-turbo).
- Local models (Ollama): ensure your Ollama server is running (`ollama serve`) and the model is pulled (`ollama pull llama3`). Verify OLLAMA_BASE_URL in your .env is correct (default http://localhost:11434).
- Check that the id you're using in your agents section points to a valid models entry.

Personal experience: I ran into MODEL_NOT_AVAILABLE when I first tried to use a local LLM with an incorrect OLLAMA_BASE_URL after moving my Ollama instance to a Docker container. Always double-check your network settings!

**CONTEXT_WINDOW_EXCEEDED or InputTooLongError**

Error message example: `Error: The agent tried to send a prompt exceeding the model's context window. Please try to simplify the task or use a model with a larger context. (Token count: 128000, Max: 120000)`

Problem: Your task description, combined with the codebase context OpenCode gathers, is too large for the chosen model's context window.

Fix:

- Simplify the task: break down complex tasks into smaller, more focused sub-tasks. Instead of "Refactor entire codebase," try "Refactor ModuleA."
- Use specific prompts: guide the agent to focus on relevant files or sections of code. For example, "Refactor functionX in fileY.py" is better than a generic refactor command.
- Exclude irrelevant files: OpenCode often has configuration options to exclude certain directories or file types from being read (e.g., node_modules, dist folders). This can dramatically reduce context. In your config.yaml, you might add a workspace section with exclude_patterns.
- Choose a larger-context model: models like GPT-4o and Claude 3 Opus have significantly larger context windows (up to 200k tokens) compared to older models. If possible, upgrade your model for large projects.
- Optimize your memory strategy: for very long-running tasks, consider how OpenCode's memory is managed. Advanced setups might involve a vector database for smarter context retrieval, though this is usually beyond basic opencode-agent usage.

**WORKSPACE_ACCESS_DENIED or PermissionError**

Error message example: `Error: Failed to read file '/path/to/your/project/src/some_module.py'. Permission denied.`

Problem: OpenCode, or the user it's running as, doesn't have the necessary file system permissions to read or write files in your project directory.

Fix:

- Check user permissions: ensure the user running opencode has read/write access to the project directory.
- Verify OpenCode's working directory: make sure you're running OpenCode from the project's root, or that you've explicitly configured its working directory.
- SELinux/AppArmor: on some Linux systems, security modules might restrict access. Temporarily disable them for testing, then configure rules if necessary.

Getting OpenCode running is one thing; making it a force multiplier for developer productivity AI is another. Here are my battle-tested tips for getting the most out of your coding agent integration.

Unlike simple chatbots, agentic systems thrive on structured prompts that encourage planning and validation.

Task decomposition: instead of a single, monolithic task, break it down.
You can do this implicitly in your prompt or by running OpenCode multiple times for sequential sub-tasks.

- Bad: "Fix all bugs and add features to this app."
- Good: "1. Refactor user_service.py for better error handling. 2. Write unit tests for user_service.py. 3. Implement the GET /users/{id}/profile endpoint."

Explicit constraints and success criteria: tell the agent exactly what success looks like. "Refactor calculate_order_total in cart.py. Ensure it's idempotent, handles edge cases for discounts, and passes all existing tests. Linting must pass."

Role-play/persona: sometimes, asking the agent to act as a "senior Python architect" or "rigorous test engineer" can nudge its output quality.

Few-shot examples: if you have specific code styles or patterns, provide examples in your prompt (or point to them in your codebase) that the agent should emulate.

Your config.yaml defines the tools. Don't just enable everything; curate them based on the agent's role.

Linting and testing tools: integrate project-specific linters (ESLint, Black, Prettier) and test runners (Pytest, Jest, Mocha). Configure them as shell commands or custom tools:

```yaml
tools:
  - id: shell
    name: run_eslint
    command: "npx eslint --fix"
  - id: shell
    name: run_pytest
    command: "pytest"
    # Add a success_criteria to interpret output if needed
```

This allows OpenCode to self-correct based on real project feedback, mimicking a human developer's workflow. I've found this to be the single biggest boost to code quality from an AI coding assistant.

Custom tools: if your project has unique build steps, deployment scripts, or internal APIs, wrap them in simple Python scripts and expose them as custom tools to OpenCode. This truly integrates the agent into your unique AI dev workflow.

**Cost optimization and performance tuning**

Running LLMs, especially powerful ones, can get expensive.

Model tiering: use cheaper, faster models (e.g., GPT-3.5-turbo or a small local LLM) for initial planning, code review, or simple modifications.
Switch to a more capable but costlier model (GPT-4o, Claude 3 Opus) for complex refactoring, critical bug fixes, or test generation. Your agents configuration allows this.

Token management: be ruthless about what context you give the agent. Use .opencodeignore (similar to .gitignore) to exclude large, irrelevant files or directories. Refine prompts to be concise. Consider models with better token efficiency.

Caching: for long-running sessions, investigate OpenCode's options for caching LLM responses or intermediate agent thoughts to reduce redundant calls.

Parallelism (for advanced tasks): while OpenCode usually runs a single agent, you might build wrapper scripts that run multiple OpenCode instances in parallel for independent sub-tasks, if your hardware and API limits allow.

Benchmark: in my own setup, by defining a "lightweight_linter" agent using gpt-3.5-turbo for initial checks and a "heavy_refactor" agent using gpt-4o, I reduced my average token cost per task by approximately 30% while maintaining code quality.

**Q: Does OpenCode replace my IDE?**

A: No, absolutely not. OpenCode is an intelligent coding agent designed to augment your capabilities, automate repetitive tasks, and assist with complex problems. It's a powerful tool in your AI dev workflow, but it doesn't offer the visual interface, extensive plugins, or direct real-time interaction that a modern IDE provides. Think of it as a highly capable pair programmer that you control via the command line.

**Q: Which model should I use?**

A: For maximum developer productivity AI on complex, real-world tasks, I strongly recommend top-tier proprietary models like GPT-4o (OpenAI) or Claude 3 Opus (Anthropic). Their superior reasoning, larger context windows, and instruction following are unmatched for agentic workflows. For simpler tasks or cost-sensitive operations, you can tier down to models like gpt-3.5-turbo or a well-tuned local LLM like Llama3 via Ollama. It's crucial to experiment and balance cost with quality for your specific use cases.
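If you want to experiment with that balance, a tiered agents section makes switching easy. This is only a sketch: the agent names come from my benchmark setup, and the field names mirror the earlier config example rather than any guaranteed schema.

```yaml
# Sketch: cheap-first model tiering in opencode_config.yaml (field names
# assumed to match the earlier config example)
agents:
  lightweight_linter:      # cheap first pass: lint checks, small edits
    model: gpt-3.5-turbo
    max_iterations: 5
    tools:
      - id: filesystem
      - id: shell
  heavy_refactor:          # escalate only when the cheap pass isn't enough
    model: gpt-4o
    max_iterations: 15
    tools:
      - id: filesystem
      - id: shell
      - id: git
```

Run the cheap agent by default and reach for the expensive one explicitly (`--agent heavy_refactor`) when the task warrants it.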
**Q: How can I integrate OpenCode into my CI/CD pipeline?**

A: Integrating OpenCode into CI/CD can automate pre-commit checks, code generation, or even automated bug fixes.

- Pre-commit hooks: you can set up a Git pre-commit hook that runs OpenCode with a specific task (e.g., "ensure all new Python files have docstrings and pass Black formatting") before a commit is allowed.
- Dedicated CI jobs: create a CI job that triggers OpenCode on certain events (e.g., a PR being opened on a specific branch). The agent could then automatically suggest improvements, run additional tests, or even generate boilerplate for new components.
- Environment variables: ensure your CI/CD environment securely provides the necessary API keys and configurations to OpenCode. This requires careful secrets management in your CI system (e.g., GitHub Actions Secrets, GitLab CI/CD Variables).

**Q: Is my proprietary code safe with OpenCode?**

A: The security of your proprietary code depends heavily on your OpenCode AI agent setup:

- Cloud models (OpenAI, Anthropic): when using cloud-based LLMs, your code snippets and task descriptions are sent to the model provider's servers. While these providers have strong data privacy policies, many organizations prefer not to send proprietary code off-premise. Always review their data usage policies.
- Local models (Ollama, vLLM): running OpenCode with a local LLM ensures that your code never leaves your local machine or internal network. This is the most secure option for sensitive proprietary code, and a strong reason why the open-source AI coding assistant approach is gaining traction.
- Tool execution: OpenCode executes shell commands. Ensure that the agent's permissions are scoped correctly and that you trust the tasks it's performing, especially in automated CI/CD scenarios. Always review its proposed changes before applying them.

The era of truly useful, open-source AI coding assistant tools is here, and OpenCode is leading the charge.
This OpenCode AI agent setup guide has walked you through moving beyond the README.md to a production-ready integration, tackling common pitfalls, and optimizing for real-world developer productivity AI. It's not about replacing you; it's about giving you superpowers – automating the tedious, generating the boilerplate, and assisting with complex challenges, freeing you to focus on higher-level design and innovation. Embrace this coding agent integration, configure it thoughtfully, and watch your AI dev workflow transform. Don't let your valuable time be consumed by tasks an agent can handle. Now, go configure OpenCode for your next project. The future of coding is collaborative, and your new AI assistant is waiting.