Kubernetes Cluster on Ubuntu 22.04: A Step-by-Step Guide
Hey everyone! So, you’re looking to set up a Kubernetes cluster on Ubuntu 22.04, huh? Awesome! You’ve come to the right place, guys. Getting Kubernetes up and running can seem a bit daunting at first, especially if you’re new to the whole container orchestration scene. But don’t sweat it! This guide is designed to break down the process into manageable steps, making it super clear and, dare I say, even fun. We’ll be diving deep into everything you need to know, from prepping your servers to deploying your first application. So, grab your favorite beverage, get comfortable, and let’s get this cluster built!
Table of Contents
- Why Ubuntu 22.04 for Your Kubernetes Cluster?
- Prerequisites: What You’ll Need
- Step 1: Preparing Your Nodes (Master & Workers)
- Step 2: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
- Step 3: Initializing the Control Plane (Master Node)
- Step 4: Installing a Pod Network (CNI Plugin)
- Step 5: Joining Worker Nodes to the Cluster
- Step 6: Deploying Your First Application
Why Ubuntu 22.04 for Your Kubernetes Cluster?
First off, why Ubuntu 22.04, also known as ‘Jammy Jellyfish’? Well, this LTS (Long Term Support) release from Canonical is a fantastic choice for running your Kubernetes cluster. It’s got solid stability, a massive community backing, and is generally a breeze to work with. Plus, it supports the latest software versions, which is crucial when you’re dealing with cutting-edge tech like Kubernetes. When you’re building the foundation for your containerized applications, you want a stable, well-supported OS underneath. Ubuntu 22.04 delivers just that. It’s packed with newer kernel versions, enhanced security features, and excellent hardware compatibility, all of which contribute to a robust Kubernetes environment. Think of it as building a skyscraper – you need a *really* solid foundation, and Jammy Jellyfish is just that. For anyone serious about deploying scalable, reliable applications, choosing the right operating system is paramount, and Ubuntu 22.04 definitely ticks all the boxes.

We’re going to assume you have at least two Ubuntu 22.04 machines ready to go – one for the control plane (master node) and at least one for the worker nodes. The more worker nodes you have, the more resilient and scalable your cluster will be, but let’s start simple with one of each. Ensure these machines have static IP addresses; this is super important for reliable cluster communication. You can set these up via your network configuration files or through your router’s DHCP reservation. Don’t forget to update your system packages on all nodes before we start! Run `sudo apt update && sudo apt upgrade -y` on each machine. This ensures you’re starting with the latest security patches and software versions, which is always a good practice in the sysadmin world.
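For completeness, here’s that update step as a copy-paste block. The reboot check at the end is an optional extra beyond what’s listed above – Ubuntu drops a marker file when an upgrade wants a restart:

```bash
# run on every node before you begin
sudo apt update && sudo apt upgrade -y

# optional: if the upgrade pulled in a new kernel, Ubuntu leaves this marker file
[ -f /var/run/reboot-required ] && sudo reboot
```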
Prerequisites: What You’ll Need
Before we jump into the nitty-gritty of setting up our Kubernetes cluster on Ubuntu 22.04, let’s make sure you’ve got all your ducks in a row. Having the right prerequisites will save you a ton of headaches down the line. Seriously, guys, don’t skip this part! We’re talking about having at least two machines (virtual or physical) running Ubuntu 22.04 LTS. One will be your control plane node (the brain of the operation), and the others will be your worker nodes (where your applications actually run). For a serious setup, you’ll want multiple worker nodes, but for learning, one of each is perfect.

Make sure each machine has a unique hostname and a static IP address. This is non-negotiable for stable cluster communication. You can set static IPs via your network interface configuration files (e.g., `/etc/netplan/` – there’s a sketch just below) or by reserving IPs in your router’s DHCP settings. Trust me, relying on dynamic IPs for cluster nodes is asking for trouble. Also, you’ll need SSH access to all these machines, preferably with passwordless SSH key authentication set up between your master and worker nodes. This makes running commands across your machines way easier. On the software side, you’ll need `curl` and `gnupg` installed on all nodes, as we’ll use them to add repositories and download necessary packages. You can install them with `sudo apt install -y curl gnupg`. Finally, and this is a big one, you need to disable swap on all nodes. Kubernetes doesn’t play nicely with swap enabled. To do this, run `sudo swapoff -a` and then comment out the swap line in your `/etc/fstab` file. This ensures swap doesn’t get re-enabled after a reboot. It sounds like a lot, but these steps are crucial for a stable and performant Kubernetes cluster. Get these sorted, and you’re golden!
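If you’re unsure about the netplan side, here’s what a minimal static-IP file might look like. Treat it as a sketch: the interface name (`enp0s3`), addresses, gateway, and DNS servers are all placeholders you’ll need to swap for your own:

```yaml
# /etc/netplan/01-k8s-static.yaml -- a sketch; every value below is a placeholder
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses: [192.168.1.100/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```

Apply it with `sudo netplan apply` and confirm the address stuck with `ip addr show`.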
Step 1: Preparing Your Nodes (Master & Workers)
Alright, folks, let’s get our Ubuntu 22.04 machines ready for the Kubernetes fiesta! This preparation step is *super* important. Think of it like prepping your kitchen before you start cooking – you want everything clean, organized, and ready to go. We need to make sure all our nodes (master and workers) are speaking the same language and have the necessary tools installed. First things first, let’s update our package lists and upgrade existing packages on *all* your nodes. Run this command on each machine: `sudo apt update && sudo apt upgrade -y`. This ensures you’re running the latest software and security patches, which is always a good idea, especially when setting up critical infrastructure like a Kubernetes cluster.

Next up, we need to enable some kernel modules and configure sysctl parameters. These settings help optimize network traffic and ensure Kubernetes components can function correctly. On each node, create a new sysctl configuration file with `sudo nano /etc/sysctl.d/kubernetes.conf` and add the following lines:
```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
```
Save the file (Ctrl+X, then Y, then Enter in nano) and apply these settings immediately with `sudo sysctl --system`, which loads the new configuration.
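One thing the paragraph above mentions but doesn’t show: the kernel modules themselves. The bridge sysctls only exist once `br_netfilter` is loaded, so the standard kubeadm prerequisite is to load `overlay` and `br_netfilter` both now and on every boot, then re-apply the sysctl settings:

```bash
# load the required modules on every boot...
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# ...and right now
sudo modprobe overlay
sudo modprobe br_netfilter

# re-apply the sysctl settings once the modules are loaded
sudo sysctl --system
```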
Now, let’s disable swap. Kubernetes requires swap to be disabled to function optimally; if swap is enabled, it can lead to performance issues and unexpected behavior. To disable it temporarily, run `sudo swapoff -a`. To make the change permanent, open `/etc/fstab` with `sudo nano /etc/fstab`, find the line that refers to swap (it usually looks something like `/dev/sdaX none swap sw 0 0`), and add a ‘#’ at the beginning of the line to comment it out. Save the file.

Finally, we need to install `containerd`, the container runtime that Kubernetes will use. It’s a lightweight, powerful option. Run the following commands on *each* node to install and configure containerd:
```bash
sudo apt update
sudo apt install -y containerd

# Create the default configuration file if it doesn't exist
if [ ! -f /etc/containerd/config.toml ]; then
  sudo mkdir -p /etc/containerd
  sudo containerd config default | sudo tee /etc/containerd/config.toml
fi

# Enable the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart containerd to apply the changes
sudo systemctl restart containerd
sudo systemctl enable containerd
```
This installs containerd, generates its default configuration, ensures the `SystemdCgroup` option is set to `true` (which is important for compatibility with systemd, the init system used by Ubuntu), and then restarts and enables the service. Phew! That’s a lot, but getting these nodes prepped correctly is the bedrock of a successful Kubernetes setup. You’ve just laid a solid foundation, guys!
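Before moving on, it’s worth a ten-second sanity check that containerd came up with the cgroup driver we expect:

```bash
# quick sanity check on each node
sudo systemctl is-active containerd                  # should print "active"
sudo containerd config dump | grep SystemdCgroup     # should print "SystemdCgroup = true"
```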
Step 2: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Now that our nodes are prepped and ready to party, it’s time to install the core Kubernetes tools: `kubeadm`, `kubelet`, and `kubectl`. These are the essential pieces of software that will allow us to bootstrap and manage our cluster. We’ll install these on *all* nodes, including the master and workers, because they all need to understand how to interact with the cluster. Let’s start by adding the official Kubernetes package repository to your system. This ensures you get the latest stable versions. First, download the Kubernetes package repository’s public signing key and add it to your system’s keyring:

```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
```
(Note: replace `v1.28` with the Kubernetes minor version you intend to use; with the pkgs.k8s.io repositories, each minor version lives in its own repository. Using the latest stable is generally recommended.)

Next, add the Kubernetes APT repository itself. This tells your system where to find the Kubernetes packages:

```bash
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

(Again, adjust the version in the URL if necessary.)

Now that the repository is added, update your package list again so your system knows about the new packages available:

```bash
sudo apt update
```
Now we can install the actual packages:

```bash
sudo apt install -y kubeadm kubelet kubectl
```

There’s one crucial follow-up step: *hold* the versions of `kubeadm`, `kubelet`, and `kubectl`. This prevents them from being silently upgraded by a future `apt upgrade`, which could break your cluster if you’re not ready for it. (Do this after installing – holding the packages first can make `apt install -y` refuse to proceed, since apt won’t change held packages without an extra flag.) Run this command:

```bash
sudo apt-mark hold kubeadm kubelet kubectl
```
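If you’d rather pin an exact version than take whatever is newest in the repo, something like the following works. The version string below is purely illustrative – pick a real one from the `madison` output:

```bash
# list the exact package versions the repo offers
apt-cache madison kubeadm

# install a matching trio (version string is illustrative)
sudo apt install -y kubeadm=1.28.2-1.1 kubelet=1.28.2-1.1 kubectl=1.28.2-1.1
sudo apt-mark hold kubeadm kubelet kubectl
```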
Once the installation is complete, it’s a good idea to verify that everything is installed correctly. You can check the version with:

```bash
kubeadm version
```

And check the state of the `kubelet` service:

```bash
sudo systemctl status kubelet
```

Don’t worry if `kubelet` shows as activating or keeps restarting at this point – that’s expected, because it doesn’t yet have a cluster configuration to start up with. It will settle down once the cluster is initialized. We’ve now installed the brains and the communication tools for our Kubernetes cluster. High five, guys! Next up, we initialize the control plane.
Step 3: Initializing the Control Plane (Master Node)
Alright, team, this is where the magic happens! We’re going to initialize the control plane on our designated master node using `kubeadm`. This command sets up all the necessary components like the API server, scheduler, and controller manager, turning our Ubuntu 22.04 machine into the heart of our Kubernetes cluster. Make sure you’re logged into your master node for this step. Run the following command, replacing `[MASTER_NODE_IP]` with the actual IP address of your master node. The `--pod-network-cidr` flag sets the IP range used for pods; `10.244.0.0/16` is the range Flannel expects by default, and Flannel is the CNI plugin we’ll install later:

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=[MASTER_NODE_IP]
```
This command might take a few minutes to complete. `kubeadm` will pull the necessary container images, set up the control plane components, and generate some important configuration files. Once it’s finished, you’ll see a success message, and critically, you’ll get a `kubeadm join` command. *Copy this command down and save it somewhere safe!* It contains a token and a discovery hash needed to join worker nodes to your cluster. It will look something like this:

```bash
kubeadm join [MASTER_NODE_IP]:6443 --token [TOKEN] \
    --discovery-token-ca-cert-hash sha256:[HASH]
```
After `kubeadm init` completes, you need to configure `kubectl` for your regular user so you can interact with the cluster. Run these commands:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
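(If you’re working as root instead, `kubeadm init`’s own output suggests the simpler alternative of pointing `KUBECONFIG` at the admin config directly:)

```bash
# root-user alternative, as printed by kubeadm init itself
export KUBECONFIG=/etc/kubernetes/admin.conf
```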
Now you can test whether `kubectl` is working by running:

```bash
kubectl get nodes
```

You should see your master node listed, but it will likely be in a `NotReady` status. That’s normal for now, because we haven’t set up a Container Network Interface (CNI) plugin yet, which is essential for pod networking. Also, note that control-plane nodes carry a `NoSchedule` taint by default, so regular workloads won’t be scheduled on them unless you explicitly remove that taint. We’ll fix the networking in the next step. You’ve officially bootstrapped your Kubernetes control plane, guys! Give yourselves a pat on the back – and take a quick peek at the control-plane pods below before moving on. The next crucial step is to get your worker nodes connected.
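Here’s that quick confidence check that the control-plane components actually came up:

```bash
# control-plane components run as static pods in the kube-system namespace
kubectl get pods -n kube-system
# expect kube-apiserver, kube-controller-manager, kube-scheduler and etcd to be Running;
# the coredns pods will stay Pending until a CNI is installed in the next step
```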
Step 4: Installing a Pod Network (CNI Plugin)
Okay, so we’ve got our control plane up and running, but our node is still `NotReady`. Why? Because Kubernetes needs a way for pods on different nodes to talk to each other – that’s where a Container Network Interface (CNI) plugin comes in. Think of it as setting up the roads and highways for your container traffic. Without it, pods can’t communicate effectively across the cluster. There are several CNI options available, like Calico, Flannel, Weave Net, and Cilium. For this guide, we’ll use Flannel, as it’s simple to set up and works great for most use cases, especially for getting started.

We’ll install Flannel using a YAML manifest file. First, make sure you’re still on your master node and that `kubectl` is configured correctly (you should be able to run `kubectl get nodes`). Now, apply the Flannel manifest. You can find the manifests on the Flannel GitHub repository; this URL tracks the latest release:

```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
This command downloads the YAML definition from the specified URL and applies it to your cluster. `kubectl` will create the necessary `DaemonSet` and `ConfigMap` resources for Flannel. It might take a minute or two for the Flannel pods to start running on all your nodes (including the master). You can monitor their status with:

```bash
kubectl get pods -n kube-flannel
```
Once the Flannel pods are running, your nodes should transition from `NotReady` to `Ready`. Let’s verify this by running `kubectl get nodes` again:

```bash
kubectl get nodes
```

You should now see your node(s) listed with a `Ready` status! 🎉 This means your cluster’s network is properly configured and ready to accept workloads. If a node is still not ready, double-check the sysctl settings (`net.bridge.bridge-nf-call-iptables` and `net.ipv4.ip_forward`) and ensure containerd is running correctly on all nodes. Sometimes a reboot of the nodes after applying the sysctl settings can also help; a few quick diagnostic commands follow below. With a working CNI, our cluster is much closer to being fully functional. We’ve got networking sorted, so now it’s time to bring our worker nodes into the fold!
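Here are those quick checks if a node stays `NotReady` (the node name is a placeholder):

```bash
# run from the master
kubectl get pods -n kube-flannel -o wide   # is a flannel pod Running on every node?
kubectl describe node <node-name>          # read the Conditions section for the reason

# run on the affected node itself
sudo systemctl status containerd
sudo journalctl -u kubelet --no-pager -n 50
```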
Step 5: Joining Worker Nodes to the Cluster
Alright, guys, the moment of truth! We’ve set up the control plane, installed the network, and now it’s time to add our worker nodes to the party. Remember that `kubeadm join` command we saved earlier from the `kubeadm init` output on the master node? This is where we use it. If you lost it, don’t panic! You can generate a fresh join command – new token, correct CA certificate hash and all – with a single command on the master node:

```bash
# run on the master node; the token it creates expires after 24 hours by default
kubeadm token create --print-join-command
```
This will output a command similar to the one you got initially. Copy that *entire* command. Now, SSH into each of your worker nodes one by one. Once you’re logged into a worker node, paste and run the `kubeadm join` command you copied. It will look something like this (don’t use this exact one – use the one generated for *your* cluster!):

```bash
sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
```

Run this command with `sudo` on each worker node. It tells the worker node how to find the master node (using its IP address and port 6443) and how to authenticate itself using the token and CA certificate hash. If the command succeeds, you’ll see a message indicating that the node has joined the cluster successfully and is now ready to accept pods.
Now, head back over to your master node and run `kubectl get nodes` again:

```bash
kubectl get nodes -o wide
```

(The `-o wide` flag gives you more information, including the internal and external IP addresses of your nodes.)

You should now see your newly added worker node(s) listed, and they should have a `Ready` status! If a node doesn’t become `Ready` within a few minutes, check the `kubelet` service status on that specific worker node (`sudo systemctl status kubelet`) and ensure it can reach the master node’s IP address on port 6443. Also, verify that the CNI pods (Flannel, in our case) are running on the worker node (`kubectl get pods -n kube-flannel -o wide`). Congratulations, you’ve successfully added worker nodes to your Kubernetes cluster on Ubuntu 22.04! You now have a functioning, multi-node Kubernetes environment ready for deployment.
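(One optional, purely cosmetic touch: freshly joined workers show `<none>` in the ROLES column of `kubectl get nodes`. You can give them a friendlier label – the node name here is a placeholder:)

```bash
kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker
```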
Step 6: Deploying Your First Application
Woohoo! You’ve made it! Your Kubernetes cluster on Ubuntu 22.04 is up, running, and ready to host some applications. It’s time for the fun part: deploying something! Let’s deploy a simple Nginx web server. We’ll create a Kubernetes `Deployment` to manage the Nginx pods and a `Service` of type `LoadBalancer` to expose it to the outside world. You can do this using `kubectl` commands or by writing a YAML manifest file. Using a YAML file is the recommended approach for managing your applications in Kubernetes, as it provides a declarative way to define your desired state. Let’s create a file named `nginx-app.yaml` and add the following content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3  # We want 3 instances of our Nginx pod
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest  # Use the latest Nginx image
        ports:
        - containerPort: 80
---  # Separator for the next resource
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx  # Selects pods with the label 'app: nginx'
  ports:
  - protocol: TCP
    port: 80        # The port the service will be available on
    targetPort: 80  # The port on the pods to forward traffic to
  type: LoadBalancer  # Exposes the service externally via a cloud provider's load balancer, or MetalLB if configured
```
Save this file. Now, apply it to your cluster using `kubectl`:

```bash
kubectl apply -f nginx-app.yaml
```

Kubernetes will now create a Deployment named `nginx-deployment` with 3 replicas of the Nginx container. It will also create a Service named `nginx-service`. Since we’ve specified `type: LoadBalancer`, Kubernetes will try to provision an external load balancer.
*Important note:* if you’re running this on bare metal or in a lab environment like this one, without a cloud provider, the `LoadBalancer` type won’t automatically work unless you have a load-balancing solution like MetalLB configured. For a basic setup, you’ll likely see the service’s external IP stuck in a `Pending` state. If you want to access it immediately without MetalLB, you can change the service type to `NodePort` and access it via `http://<any-node-ip>:<node-port>` – there’s a one-liner for that just below. To check the status of your deployment and service, run:

```bash
kubectl get deployments
kubectl get pods
kubectl get services
```
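Here’s that `NodePort` switch as a quick patch, in case you’d rather not edit the YAML (a sketch; it assumes you kept the service name `nginx-service`):

```bash
# flip the service to NodePort if the LoadBalancer IP stays Pending
kubectl patch service nginx-service -p '{"spec": {"type": "NodePort"}}'
kubectl get service nginx-service   # the mapping shows up as 80:3xxxx/TCP under PORT(S)
```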
You should see your `nginx-deployment` with 3 available pods, and `nginx-service` with an assigned external IP address (if using MetalLB or a cloud provider) or a `NodePort`. If you used `NodePort`, find the assigned port (e.g., 3xxxx) and access Nginx via `http://<your-master-node-ip>:<nodeport>` or `http://<your-worker-node-ip>:<nodeport>`. And there you have it! Your first application running on your very own Kubernetes cluster on Ubuntu 22.04. You’ve come a long way, guys, from setting up nodes to deploying live applications. This is just the beginning of your Kubernetes journey!
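(If you want one more thing to try right away, scaling the deployment makes the cluster’s job visible – both commands below are standard `kubectl`:)

```bash
kubectl scale deployment nginx-deployment --replicas=5   # scale out to 5 pods
kubectl rollout status deployment/nginx-deployment       # watch the rollout complete
kubectl get pods -o wide                                 # see pods spread across your workers
```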