Ansible for DevOps: Automate Server Configuration in 30 Minutes (Not 30 Days)
You have 15 servers. Each one needs the same packages, the same users, the same firewall rules, the same monitoring agent, and the same application configuration.
You can SSH into each one and run the same commands 15 times. Or you can write an Ansible playbook once and apply it to all 15 in parallel.
That's Ansible in one sentence: define what your servers should look like, and Ansible makes them look like that.
Shell scripts work. Until they don't.
# This shell script installs nginx... maybe
apt-get update
apt-get install -y nginx
systemctl start nginx
systemctl enable nginx
Problems:
Not idempotent. Run it twice and apt-get install shows warnings. Run it after a partial failure and you might be in an unknown state.
No error handling. If apt-get update fails, the script continues and tries to install from stale package lists.
OS-specific. This script only works on Debian/Ubuntu. CentOS uses yum. Alpine uses apk.
No inventory. Which servers to run this on? Hard-coded IPs? SSH in a loop?
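Making even one of these steps safe to re-run means guarding it by hand. A minimal shell sketch of an idempotent "ensure this line exists" step — roughly what Ansible's lineinfile module automates (the temp file stands in for /etc/ssh/sshd_config):

```shell
CONF=$(mktemp)                 # stand-in for /etc/ssh/sshd_config
LINE='PermitRootLogin no'

# Running this body any number of times leaves exactly one copy of the line
for run in 1 2 3; do
  grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"
done

grep -cx "$LINE" "$CONF"       # prints 1, no matter how many runs
```

Now multiply that guard by every package, user, and config line in your script — that's the boilerplate Ansible modules carry for you.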
Ansible solves all four:
# This Ansible task installs nginx — correctly, every time
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:  # Works on apt, yum, apk, etc.
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
Idempotent: Run it 100 times — if nginx is already installed and running, Ansible reports "OK" and changes nothing.
Cross-platform: ansible.builtin.package detects the OS and uses the right package manager.
Inventory-driven: hosts: webservers pulls from your inventory file — no hard-coded IPs.
# macOS
brew install ansible
# Ubuntu/Debian
sudo apt-get install ansible
# pip (any OS)
pip install ansible
# inventory.ini
[webservers]
web-1 ansible_host=10.0.1.10
web-2 ansible_host=10.0.1.11
web-3 ansible_host=10.0.1.12
[databases]
db-1 ansible_host=10.0.2.10
db-2 ansible_host=10.0.2.11
[all:vars]
ansible_user=deploy
ansible_ssh_private_key_file=~/.ssh/deploy_key
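The same inventory can also be written in YAML, which some teams find easier to extend with nested groups and per-group variables; an equivalent sketch:

```yaml
# inventory.yml (same hosts as inventory.ini above)
all:
  vars:
    ansible_user: deploy
    ansible_ssh_private_key_file: ~/.ssh/deploy_key
  children:
    webservers:
      hosts:
        web-1: { ansible_host: 10.0.1.10 }
        web-2: { ansible_host: 10.0.1.11 }
        web-3: { ansible_host: 10.0.1.12 }
    databases:
      hosts:
        db-1: { ansible_host: 10.0.2.10 }
        db-2: { ansible_host: 10.0.2.11 }
```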
# Ping all hosts
ansible all -i inventory.ini -m ping
# Output:
# web-1 | SUCCESS => {"ping": "pong"}
# web-2 | SUCCESS => {"ping": "pong"}
# ...
# Check uptime on all webservers
ansible webservers -i inventory.ini -m command -a "uptime"
# Check disk space on databases
ansible databases -i inventory.ini -m command -a "df -h /"
# Install a package across all servers
ansible all -i inventory.ini -m package -a "name=htop state=present" --become
A playbook is a YAML file describing the desired state of your servers.
# playbooks/setup-server.yml
---
- name: Base Server Configuration
  hosts: all
  become: true

  vars:
    admin_users:
      - name: deploy
        ssh_key: "ssh-rsa AAAA..."
      - name: sanjay
        ssh_key: "ssh-rsa BBBB..."
    required_packages:
      - curl
      - wget
      - git
      - htop
      - jq
      - unzip
      - net-tools
      - vim

  tasks:
    # System updates
    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600  # Don't update if cached within 1 hour
      when: ansible_os_family == "Debian"

    - name: Install required packages
      ansible.builtin.package:
        name: "{{ required_packages }}"
        state: present

    # User management
    - name: Create admin users
      ansible.builtin.user:
        name: "{{ item.name }}"
        groups: sudo
        shell: /bin/bash
        create_home: true
      loop: "{{ admin_users }}"

    - name: Add SSH keys for admin users
      ansible.posix.authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
        state: present
      loop: "{{ admin_users }}"

    # Security hardening
    - name: Disable root SSH login
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart SSH

    - name: Disable password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart SSH

    # Firewall
    - name: Install UFW
      ansible.builtin.apt:
        name: ufw
        state: present
      when: ansible_os_family == "Debian"

    - name: Allow SSH
      community.general.ufw:
        rule: allow
        port: "22"
        proto: tcp

    - name: Allow HTTP/HTTPS
      community.general.ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop: ["80", "443"]
      when: "'webservers' in group_names"

    - name: Enable UFW with default deny
      community.general.ufw:
        state: enabled
        default: deny
        direction: incoming

    # Time synchronization
    - name: Install chrony for NTP
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Enable chrony
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true

  handlers:
    - name: Restart SSH
      ansible.builtin.service:
        name: sshd
        state: restarted
# Dry run (check mode) — shows what WOULD change
ansible-playbook -i inventory.ini playbooks/setup-server.yml --check --diff
# Apply
ansible-playbook -i inventory.ini playbooks/setup-server.yml
# Apply to specific hosts only
ansible-playbook -i inventory.ini playbooks/setup-server.yml --limit web-1,web-2
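Typing -i inventory.ini on every command gets old. An ansible.cfg in the project root sets project-wide defaults; a sketch (values are assumptions — adjust to your layout):

```ini
# ansible.cfg (project root)
[defaults]
inventory = inventory.ini
remote_user = deploy
private_key_file = ~/.ssh/deploy_key
```

With this in place, `ansible all -m ping` and `ansible-playbook playbooks/setup-server.yml` work without the -i flag.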
When your playbook grows beyond 100 lines, break it into roles. A role is a self-contained unit of configuration.
roles/
├── common/                  # Base server config (every server)
│   ├── tasks/main.yml
│   ├── handlers/main.yml
│   ├── templates/
│   ├── files/
│   └── defaults/main.yml    # Default variables (overridable)
├── nginx/                   # Web server config
│   ├── tasks/main.yml
│   ├── handlers/main.yml
│   ├── templates/
│   │   └── nginx.conf.j2
│   └── defaults/main.yml
├── postgresql/              # Database config
│   ├── tasks/main.yml
│   ├── handlers/main.yml
│   ├── templates/
│   │   └── postgresql.conf.j2
│   └── defaults/main.yml
└── monitoring/              # Node exporter + Promtail
    ├── tasks/main.yml
    └── defaults/main.yml
# roles/nginx/defaults/main.yml
nginx_worker_processes: auto
nginx_worker_connections: 1024
nginx_server_name: "_"
nginx_root: /var/www/html
nginx_ssl_enabled: false
# roles/nginx/tasks/main.yml
---
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: '0644'
    validate: nginx -t -c %s  # Validate before applying
  notify: Reload nginx

- name: Deploy site configuration
  ansible.builtin.template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/default
    owner: root
    group: root
    mode: '0644'
  notify: Reload nginx

- name: Start and enable nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
# roles/nginx/templates/nginx.conf.j2
worker_processes {{ nginx_worker_processes }};

events {
    worker_connections {{ nginx_worker_connections }};
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/sites-available/*;
}
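The tasks above also deploy a site.conf.j2 that isn't shown; a minimal sketch wired to the role's defaults (SSL handling omitted, since nginx_ssl_enabled would add a second server block):

```
# roles/nginx/templates/site.conf.j2
server {
    listen 80;
    server_name {{ nginx_server_name }};
    root {{ nginx_root }};

    location / {
        try_files $uri $uri/ =404;
    }
}
```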
# roles/nginx/handlers/main.yml
---
- name: Reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded
# playbooks/webservers.yml
---
- name: Configure Web Servers
  hosts: webservers
  become: true
  roles:
    - common
    - role: nginx
      vars:
        nginx_worker_connections: 4096
        nginx_ssl_enabled: true
    - monitoring
Never put passwords or API keys in plain text YAML:
# Create an encrypted variables file
ansible-vault create group_vars/all/vault.yml
# Edit an existing encrypted file
ansible-vault edit group_vars/all/vault.yml
# Run a playbook with vault (prompts for password)
ansible-playbook -i inventory.ini playbooks/deploy.yml --ask-vault-pass
# Or use a password file (for CI/CD)
ansible-playbook -i inventory.ini playbooks/deploy.yml --vault-password-file ~/.vault_pass
# group_vars/all/vault.yml (encrypted)
vault_db_password: "super-secret-password"
vault_api_key: "sk-1234567890"
vault_ssl_cert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
# Reference in playbooks (Ansible decrypts automatically)
- name: Configure database connection
  ansible.builtin.template:
    src: db-config.j2
    dest: /etc/app/database.yml
  vars:
    db_password: "{{ vault_db_password }}"
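The db-config.j2 template referenced here isn't shown; a hypothetical sketch of what it might contain (keys and values are illustrative — match them to whatever format your application expects):

```yaml
# templates/db-config.j2 (illustrative)
production:
  host: db-1
  port: 5432
  username: app
  password: "{{ db_password }}"  # resolved from vault_db_password at deploy time
```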
Hard-coded IPs don't work in cloud environments where VMs come and go. Use dynamic inventory to query your cloud provider:
# Azure dynamic inventory
pip install azure-mgmt-compute azure-identity
# inventory_azure.yml
plugin: azure.azcollection.azure_rm
auth_source: auto
include_vm_resource_groups:
  - rg-production
  - rg-staging
keyed_groups:
  - prefix: tag
    key: tags.role  # Group VMs by the 'role' tag
# Now Ansible groups VMs by their Azure tags
ansible tag_webserver -i inventory_azure.yml -m ping
ansible tag_database -i inventory_azure.yml -m ping
1. Start with ad-hoc commands, then graduate to playbooks, then roles. Don't over-engineer from day one.
2. Always use --check --diff first. See what would change before applying. This builds confidence and catches mistakes.
3. Keep playbooks idempotent. Every task should be safe to run multiple times. Use state: present instead of install commands.
4. Group variables by environment. group_vars/production/, group_vars/staging/ — same playbook, different configs per environment.
5. Version control everything. Playbooks, roles, inventory, vault files — all in Git. Your server configuration is code; treat it like code.
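Tip 4 in practice looks like this — the directory names must match inventory group names, and the variable names stay identical across environments (values here are placeholders):

```
group_vars/
├── all/
│   └── vault.yml        # encrypted secrets shared by every host
├── production/
│   └── main.yml         # e.g. app_replicas: 4, log_level: warn
└── staging/
    └── main.yml         # e.g. app_replicas: 1, log_level: debug
```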
Ansible won't replace your cloud-native tools (Terraform for provisioning, Kubernetes for orchestration). But for the servers, VMs, and bare-metal machines that still exist in every organization, Ansible is the fastest path from "manually configured" to "fully automated."
What's your go-to configuration management tool? Ansible, Chef, Puppet, or something else? Share your preference in the comments.
Follow me for more DevOps automation content.