Build a Kubernetes Cluster at Home with Raspberry Pis
Translated: 2026/3/15 18:00:48
Most people interact with Kubernetes through the cloud. They are probably as close to the cluster as they are to their laptop. Using a cluster in the cloud through a web browser or terminal is fine, but there is something more intimate and rewarding about deploying a Kubernetes cluster on bare metal. Even more rewarding is physically unplugging a node and watching Kubernetes rebalance the workloads. The total cost is roughly €300, depending on what you already own, how many nodes you choose to have, and whether you get a case to wrap everything up nicely.
4x Raspberry Pi 4 (or Pi 5) (4GB RAM or more). With 4GB per board, there's enough headroom to run workloads on top. The 8GB model is nice to have but not necessary.
4x microSD cards (32GB). The OS, container images, and Kubernetes binaries all live on these. 32GB gives you comfortable room.
A multi-port USB-C charger (at least 15W per port). Each Pi 4 draws up to 15W under load. A 4-port USB-C PD GaN charger powers the whole cluster from a single wall outlet.
4x USB-C cables. An ethernet switch. 4x ethernet cables. A microSD card reader, a monitor (the Pi 4 has two micro-HDMI ports), and a USB keyboard. You'll only need the monitor and keyboard briefly during initial setup; once SSH is configured, everything happens remotely from your laptop.
A cluster case. Something like the UCTRONICS Raspberry Pi cluster enclosure keeps the four boards stacked neatly with proper airflow. Optional but recommended, and it looks great on a desk.
Before we start plugging things in, let's understand what we're building and why it's shaped this way. Pi 1 has a dual role: its Wi-Fi interface connects to your home router and the internet, while its ethernet interface connects to a private switch where the three worker nodes live. Pi 1 acts as a gateway. It performs NAT (Network Address Translation) so the workers can reach the internet through it, the same way your home router lets your devices reach the internet through its single public IP.
Why this topology? The workers live on an isolated 10.0.0.0/24 subnet. This is deliberate: it mirrors how production clusters work, where nodes sit in a private network and communicate over a dedicated fabric rather than over the same Wi-Fi your phone uses. It also means your cluster won't interfere with other devices on your home network.
We'll use Ubuntu Server (64-bit, aarch64 edition) on all four Pis. Use the Raspberry Pi Imager to flash each card.
Before writing, click the gear icon to pre-configure each node.
For every node:
- Enable SSH with public-key authentication.
- Generate an ed25519 key pair on your laptop (ssh-keygen -t ed25519) if you don't have one, and paste the public key here.
- Set the username.
- Set a unique hostname for each node:
- Pi 1: k8s-control (this will be our control plane and gateway)
- Pi 2: worker-1
- Pi 3: worker-2
- Pi 4: worker-3
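The key-generation step above can be tried end to end before you ever open the Imager. A minimal sketch — the /tmp path is purely illustrative; in practice accept the default ~/.ssh/id_ed25519:

```shell
# Generate a passphrase-less ed25519 key pair.
# /tmp/demo_ed25519 is a throwaway path for illustration only;
# for real use, keep the default ~/.ssh/id_ed25519.
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N "" -q -f /tmp/demo_ed25519

# This single line is what you paste into the Imager's SSH settings:
cat /tmp/demo_ed25519.pub
```

The private key never leaves your laptop; only the .pub line goes onto the Pis.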
Flash each card and insert them into the Pis. This phase is the foundation everything else depends on. If the nodes can't reliably talk to each other, nothing that follows will work.
Boot Pi 1 with a monitor and keyboard attached. First, set up the Wi-Fi interface so it connects to your home router, and give the ethernet interface a static IP that will serve as the gateway for the cluster network.
Create a netplan configuration at /etc/netplan/01-cluster.yaml:
network:
  version: 2
  wifis:
    wlan0:
      dhcp4: true
      access-points:
        "YOUR_WIFI_SSID":
          password: "your-wifi-password"
  ethernets:
    eth0:
      addresses:
        - 10.0.0.1/24
Apply it:
sudo netplan apply
Now Pi 1 has two interfaces: wlan0 gets an IP from your home router (internet access), and eth0 is statically set to 10.0.0.1, the cluster's gateway address.
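To confirm that eth0 really picked up the static address, you can filter the output of ip -4 addr show. The snippet below runs the filter against a canned sample so the pipeline itself is visible anywhere; on Pi 1 you would pipe the real ip -4 addr show eth0 into the same awk:

```shell
# Canned `ip -4 addr show eth0` output, for illustration only
cat > /tmp/ip_addr_sample.txt <<'EOF'
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.0.0.1/24 brd 10.0.0.255 scope global eth0
EOF

# Pull out the address/prefix field
awk '/inet /{print $2}' /tmp/ip_addr_sample.txt   # prints 10.0.0.1/24
```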
Why 10.0.0.0/24?
We make sure the range doesn't collide with the home network (which is probably 192.168.x.x). The workers will live on the 10.0.0.0/24 network, but that network has no direct route to the internet. Pi 1 needs to act as a router.
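To make the sizing concrete: a /24 mask leaves 8 host bits, far more than four Pis need. A quick sketch using only shell arithmetic — the in_cluster helper is illustrative and not part of the setup:

```shell
# Usable host addresses in a /24: 2^8 minus network and broadcast
echo $(( (1 << (32 - 24)) - 2 ))   # prints 254

# Cheap membership test for the cluster subnet (illustrative helper)
in_cluster() { case "$1" in 10.0.0.*) echo yes ;; *) echo no ;; esac; }
in_cluster 10.0.0.11     # prints yes
in_cluster 192.168.1.20  # prints no
```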
# Enable IP forwarding
# This tells the Linux kernel to route packets
# between interfaces instead of dropping them
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/ip-forward.conf
sudo sysctl --system

# Set up NAT masquerade
# Rewrite source IPs from 10.0.0.x to Pi 1's Wi-Fi address
# so the internet sees them as coming from Pi 1
sudo apt-get install -y iptables-persistent
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
sudo netfilter-persistent save

Without this step, the workers could talk to each other and to Pi 1, but any attempt to reach the internet (which they'll need to pull container images) would silently fail.

Instead of manually configuring a static IP on each worker, we'll run a DHCP server on Pi 1 that hands out addresses automatically. More importantly, we'll bind specific IPs to specific MAC addresses so that each worker always gets the same IP. Kubernetes nodes register with their IP, so you don't want those changing.

sudo apt-get install -y isc-dhcp-server

Edit /etc/dhcp/dhcpd.conf:

# The subnet the DHCP server manages
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.10 10.0.0.50;                     # dynamic range for any new devices
  option routers 10.0.0.1;                       # point clients to Pi 1 as gateway
  option domain-name-servers 8.8.8.8, 1.1.1.1;   # public DNS servers
}

# Static leases. Each worker always gets the same IP.
# Find your Pi's MAC with: ip link show eth0
host worker-1 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 2's eth0 MAC
  fixed-address 10.0.0.11;
}
host worker-2 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 3's eth0 MAC
  fixed-address 10.0.0.12;
}
host worker-3 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 4's eth0 MAC
  fixed-address 10.0.0.13;
}

Tell the DHCP server to listen only on eth0 (we don't want it interfering with your home network's Wi-Fi).
Edit /etc/default/isc-dhcp-server:

INTERFACESv4="eth0"

Now, there's a common boot-order problem: the DHCP server can start before the ethernet interface has its static IP assigned. When that happens, the server sees no valid address on eth0 and crashes. To fix this, create a systemd drop-in that makes the DHCP service wait for eth0 to be ready:

sudo mkdir -p /etc/systemd/system/isc-dhcp-server.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/isc-dhcp-server.service.d/wait-for-eth0.conf
[Unit]
After=sys-subsystem-net-devices-eth0.device
Wants=sys-subsystem-net-devices-eth0.device

[Service]
ExecStartPre=/bin/sh -c 'i=0; while [ $i -lt 30 ]; do ip -4 addr show eth0 | grep -q inet && exit 0; sleep 1; i=$((i+1)); done; exit 1'
Restart=on-failure
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now isc-dhcp-server

The ExecStartPre script polls for up to 30 seconds for eth0 to have an IPv4 address. Restart=on-failure is a safety net: if something else goes wrong, systemd will retry instead of leaving you with no DHCP.

Plug all three workers into the switch, power them on, and give them a minute to boot and request DHCP leases. From Pi 1, verify:

# Check DHCP leases were assigned
cat /var/lib/dhcp/dhcpd.leases

# Ping each worker
ping -c 2 10.0.0.11
ping -c 2 10.0.0.12
ping -c 2 10.0.0.13

# Verify workers can reach the internet through Pi 1's NAT
ssh ubuntu@10.0.0.11 'ping -c 2 8.8.8.8'

You'll be managing this cluster from your laptop, not from a monitor plugged into Pi 1. The workers aren't directly reachable from your laptop (they're on the 10.0.0.0/24 subnet, which your laptop doesn't know about), so we use Pi 1 as a jump host.
Add this to ~/.ssh/config on your laptop:

Host k8s-control
    HostName
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519

Host worker-1
    HostName 10.0.0.11
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyJump k8s-control

Host worker-2
    HostName 10.0.0.12
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyJump k8s-control

Host worker-3
    HostName 10.0.0.13
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyJump k8s-control

Now ssh worker-2 from your laptop will automatically tunnel through Pi 1. The ProxyJump directive tells SSH to first connect to k8s-control and then hop to the worker from there.

Before installing Kubernetes, the worker nodes need a few changes.

sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab

Why: Swap lets the OS spill memory to disk when RAM is full. That's useful in general, but it undermines Kubernetes resource management. Disabling swap means a container that exceeds its memory limit gets OOMKilled immediately, which is easier to debug and more honest. Recent Kubernetes versions (1.28+) do support swap, but configuring it properly doesn't belong in a getting-started guide.

sudo kubeadm join 10.0.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

What happens during a join: the worker contacts the API server at 10.0.0.1:6443, verifies the server's TLS certificate against the hash you provided, sends a certificate signing request which the control plane auto-approves, and then starts its kubelet with the signed certificate. The kubelet registers the node with the API server, and Kubernetes begins scheduling the Flannel DaemonSet pod onto it.

Back on Pi 1:

kubectl get nodes -o wide

After about a minute (while Flannel pulls the container images), all four nodes should show Ready. Check that all system pods are running:

kubectl get pods -A

Note: the control plane node has a taint (node-role.kubernetes.io/control-plane:NoSchedule) that prevents regular workloads from being scheduled on it.

Congratulations! You have a functioning Kubernetes cluster.
Specifically:
- A control plane (Pi 1) running etcd, the API server, the controller manager, and the scheduler.
- Three worker nodes (Pi 2–4) running kubelet and kube-proxy, ready to accept workloads.
- An overlay network (Flannel) that gives every pod a unique IP and routes traffic between them, even across nodes.
- Cluster DNS (CoreDNS) so pods can find each other by name instead of by IP.

This cluster is a foundation. Some next steps to consider:
- Add a metrics server. Then you'll be able to check node and pod metrics with kubectl top nodes.
- Deploy a real application. Applications like Amazon's retail store demo or Google's microservices demo are a good place to start.
- Try breaking things. Drain a node (kubectl drain), scale a deployment to more replicas than the cluster can handle, set resource limits and watch the OOMKills, delete a pod and watch its controller recreate it.
- Set up an Ingress controller. Install something like Traefik or ingress-nginx to route external HTTP traffic to services inside the cluster. This is how production clusters expose web applications.
- Install a GitOps tool. Tools like ArgoCD let you declare your desired cluster state in a Git repository and have it applied automatically.
- Add persistent storage. Explore local-path-provisioner or NFS to give your pods storage that survives pod restarts. Stateful workloads (databases, message queues) need this.

It may look overwhelming, but in reality it's nothing more than a weekend project. It's the closest you'll get to hands-on Kubernetes experience. Pun intended. Have fun!