How to Set Up a Simple Kubernetes Cluster with GCE
Chapter 1: Introduction
As a Site Reliability Engineer at a modern software company, I can never keep my hands from tinkering with and operating Kubernetes clusters. If you're a junior like me, you mostly do operational tasks with Kubernetes: creating new Deployments, making sure ConfigMap values are correct, or investigating why a CRD is not working. I rarely get the opportunity to deploy a Kubernetes cluster from scratch, so in this article I want to show you how I deploy my own staging Kubernetes cluster.
We’ll start with the tools needed to spin up a minimal Kubernetes cluster. I will deploy a Kubernetes cluster on Google Cloud Platform using Compute Engine instances. Roughly, we need these resources in Google Compute Engine:
- A VPC with at least one subnet.
- A public IP address for the control plane node.
- One Cloud Router and one Cloud NAT in the same region.
- Several firewall rules.
- An instance template.
- One Compute Engine instance as the control plane node.
- Two Compute Engine instances as regular (worker) nodes.
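If you want a feel for these resources before touching the IaC code, here is a rough sketch of how most of them could be created with the gcloud CLI. This is only an illustration: the names, region, and CIDR ranges below are assumptions, and the actual provisioning in this article is done with Terraform and Terragrunt in Chapter 2.
# Illustrative gcloud equivalents (names, region, and CIDRs are assumptions)
gcloud compute networks create k8s-vpc --subnet-mode=custom
gcloud compute networks subnets create k8s-subnet \
  --network=k8s-vpc --region=asia-southeast1 --range=10.10.0.0/24
gcloud compute addresses create k8s-control-plane-ip --region=asia-southeast1
gcloud compute routers create k8s-router --network=k8s-vpc --region=asia-southeast1
gcloud compute routers nats create k8s-nat \
  --router=k8s-router --region=asia-southeast1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
gcloud compute firewall-rules create k8s-allow-api-server \
  --network=k8s-vpc --allow=tcp:6443 --source-ranges=0.0.0.0/0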
Chapter 2: Prepare Infrastructure Resources
To set up the infrastructure resources, we'll use Terraform and Terragrunt, wrapped as task subcommands, as our Infrastructure as Code tooling. You can see the code repository here.
Before applying infra resources in this folder, please read 000-main-infrastructure.
The task subcommands you'll need to execute:
# Plan or Dry run all terraform manifest
task plan-all -- 001-how-to-setup-simple-kubernetes-cluster-with-gce/infrastructure
# Apply all terraform manifest. Will create all infra resources
task apply-all -- 001-how-to-setup-simple-kubernetes-cluster-with-gce/infrastructure
# Destroy all infra resources. Will delete all resources. Use with caution!
task destroy-all -- 001-how-to-setup-simple-kubernetes-cluster-with-gce/infrastructure
Chapter 3: Bootstrap Kubernetes Cluster
Let’s start with your control plane or master node.
1. Check the MAC address of the node's network interface with one of these commands (it must be unique across nodes):
ip link
ifconfig -a
2. Check the product UUID with this command:
sudo cat /sys/class/dmi/id/product_uuid
3. (Optional) Check that the ports you need are open with nc. If nc is not installed, install it with sudo apt install netcat.
4. Install a container runtime. For containerd on Debian, use these commands [1]:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg gpg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install containerd.io -y
5. Install kubeadm, kubectl, and kubelet. For Debian:
# Get Kubernetes stable v1.28
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
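If you want to double-check that the tools are installed and held at the expected version, these standard commands print the versions (optional):
kubeadm version
kubectl version --client
kubelet --version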
6. Load the required kernel modules and set the sysctl parameters so iptables can see bridged traffic [2]:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
7. Verify that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
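If the modules are loaded, each command prints a matching line. The exact module sizes vary by kernel, so treat the output below only as an illustrative example:
# example output (values will differ per kernel)
br_netfilter           32768  0
bridge                311296  1 br_netfilter
overlay               147456  0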
8. Verify the sysctl variables are set to 1:
sudo sysctl \
  net.bridge.bridge-nf-call-iptables \
  net.bridge.bridge-nf-call-ip6tables \
  net.ipv4.ip_forward
9. Check which init system the node uses, since that determines the cgroup driver:
ps -p 1
# Result. This node uses systemd
#     PID TTY          TIME CMD
#       1 ?        00:00:01 systemd
10. Configure the systemd cgroup driver for containerd and restart it [3]:
cat <<EOT | sudo tee /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOT
sudo systemctl restart containerd
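Note that this heredoc replaces the whole /etc/containerd/config.toml with just the cgroup setting, which is enough for this staging cluster. If you would rather keep the full default configuration, an alternative (not what the snippet above does) is to generate the defaults first and then flip SystemdCgroup to true:
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd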
11. Do steps 1 - 10 for all nodes.
12. Bootstrap the cluster on the master node. Save the join command printed in the last lines of stdout [4]:
export INT_IP_ADDR="CONTROL_PLANE_INTERNAL_IP"
export EXT_IP_ADDR="CONTROL_PLANE_EXTERNAL_IP"
export POD_CIDR="10.244.0.0/16"

sudo kubeadm init \
  --apiserver-cert-extra-sans=$EXT_IP_ADDR \
  --apiserver-advertise-address $INT_IP_ADDR \
  --pod-network-cidr=$POD_CIDR
13. Set up kubeconfig after the cluster bootstrap has finished:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
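At this point kubectl can already talk to the API server from the master node. A quick sanity check (the node will report NotReady until a CNI is installed in the next step):
kubectl get nodes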
14. Install Cilium or Weave Net (deprecated) as the CNI from the master node. Pick one of the two options below.
- Option A: Install Weave Net (deprecated) [5]
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
- Update the IPALLOC_RANGE environment variable in the weave-net DaemonSet to match the pod network CIDR [6]:
containers:
  - name: weave
    env:
      - name: IPALLOC_RANGE
        value: 10.244.0.0/16
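One straightforward way to apply this change, assuming the DaemonSet keeps the name weave-net from the manifest above, is to edit it in place:
kubectl -n kube-system edit daemonset weave-net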
- Join the other nodes to the master node. The command is in the step 12 stdout:
sudo kubeadm join INT_IP_ADDR:6443 --token TOKEN \
    --discovery-token-ca-cert-hash CA_CERT_HASH
- Option B: Install Cilium with Helm
- Make sure the Linux kernel version is >= 4.9.17:
uname -r
- Taint the master node [7]:
kubectl taint nodes k8s-master-001 node.cilium.io/agent-not-ready=true:NoExecute
- Install Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Add the Cilium Helm repository and install the Cilium chart [8]:
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.15.1 --namespace kube-system
- Install the Cilium CLI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
- Join the other nodes to the master node. The command is in the step 12 stdout:
sudo kubeadm join INT_IP_ADDR:6443 --token TOKEN \
    --discovery-token-ca-cert-hash CA_CERT_HASH
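If you no longer have the output from step 12 at hand, you can regenerate an equivalent join command on the master node; it prints a fresh token together with the CA cert hash:
kubeadm token create --print-join-command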
- (Optional) Restart unmanaged pods:
kubectl get pods --all-namespaces \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork \
  --no-headers=true \
  | grep '<none>' \
  | awk '{print "-n "$1" "$2}' \
  | xargs -L 1 -r kubectl delete pod
- Start the verification process:
cilium status
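For a more thorough (and optional) check, the Cilium CLI also ships a connectivity test; it deploys test workloads and takes a few minutes:
cilium connectivity test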
Your Kubernetes cluster is ready 🥳
Chapter 4: Access the Kubernetes Cluster
After your Kubernetes cluster is ready, you can find your kubeconfig at ~/.kube/config on the control plane node.
Copy that config file and you can access your Kubernetes cluster from your local device.
Don't forget to change the internal control plane IP to the external one:
-- server: https://CONTROL_PLANE_INTERNAL_IP:6443
++ server: https://CONTROL_PLANE_EXTERNAL_IP:6443
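Because step 12 passed the external IP via --apiserver-cert-extra-sans, the API server certificate is already valid for that address. With the copied file in place (for example saved as ~/.kube/config on your laptop, or referenced with --kubeconfig), a quick sanity check from your local device looks like this:
kubectl cluster-info
kubectl get nodes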
References
[1] Install using the apt repository | Docker Docs
[2] Forwarding IPv4 and letting iptables see bridged traffic
[3] Configuring the systemd cgroup driver
[4] Initializing your control-plane node
[6] Installing Weave
[7] Taint Effects