Kubernetes Installation on Ubuntu 20.04: A Step-by-Step Guide
What’s up, everyone! Today, we’re diving deep into something super exciting for all you budding DevOps gurus and cloud enthusiasts: installing Kubernetes on Ubuntu 20.04. If you’ve been hearing all the buzz about container orchestration and want to get your hands dirty with the de facto standard, then you’ve come to the right place, guys. We’ll be walking through the entire process, step by step, making sure that even if you’re new to this, you’ll be able to get a working Kubernetes cluster up and running. So, grab your favorite beverage, settle in, and let’s get this Kubernetes party started!
Table of Contents
- Why Kubernetes, Anyway?
- Prerequisites: Getting Ready for the Big Show
- Step 1: Preparing Your Nodes (Master & Worker)
- Step 2: Installing Kubernetes Components (Master Node)
- Step 3: Initializing the Kubernetes Control Plane (Master Node)
- Step 4: Installing a Pod Network Add-on (CNI)
- Step 5: Joining Worker Nodes to the Cluster
- What’s Next?
- Troubleshooting Common Issues
- Conclusion
Why Kubernetes, Anyway?
Before we jump headfirst into the installation, let’s quickly chat about why Kubernetes is such a big deal. Think of it as the ultimate conductor for your containerized applications. You’ve got your apps all packaged up in containers (like Docker), which is awesome for consistency and portability. But when you start running lots of these containers, managing them can become a real headache. You need to deploy them, scale them up or down based on traffic, handle failures, update them without downtime, and so much more. This is where Kubernetes swoops in like a superhero. It automates all these tasks, making it way easier to manage your applications at scale. It ensures your applications are available, resilient, and can grow with your needs. Basically, Kubernetes helps you build and run applications that are robust, scalable, and easy to manage, which is exactly what you want in today’s fast-paced digital world.
Prerequisites: Getting Ready for the Big Show
Alright, team, before we start typing commands like there’s no tomorrow, let’s make sure we’ve got everything we need. For this guide, we’ll be building a small two-node cluster (one master and one worker), which is perfect for learning and testing. If you’re planning a production environment, you’ll need more nodes, but the core concepts we cover here will still apply. So, what do you need?
- Two Ubuntu 20.04 Machines: These can be physical servers, virtual machines (like VirtualBox or VMware), or even cloud instances. For simplicity, we’ll refer to them as `master` (your control plane node) and `worker` (the node where your applications will run). Make sure they have at least 2GB of RAM and 2 CPUs each. Less than that, and Kubernetes might get a bit grumpy.
- A Stable Internet Connection: You’ll be downloading quite a few packages, so a good connection is key.
- SSH Access: You should be able to SSH into both your `master` and `worker` machines. It’s also super handy if you can set up passwordless SSH between them (see the sketch right after this list). This will make running commands across nodes a breeze.
- Basic Linux Command-Line Familiarity: You should be comfortable with using the terminal, editing files, and running basic commands.
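If you do want that passwordless SSH, a minimal sketch looks like this, run from the master node (the `ubuntu` user and `worker` hostname are just placeholders for whatever your setup actually uses):
# Generate a key pair on the master if you don't already have one
ssh-keygen -t ed25519
# Copy the public key to the worker (replace user and host with your own)
ssh-copy-id ubuntu@worker
# Test it: this should print the worker's hostname without asking for a password
ssh ubuntu@worker hostname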
Important Note: Kubernetes doesn’t play nicely with machines that use swap memory. It can cause issues with how pods are scheduled and managed. So, for both your `master` and `worker` nodes, you’ll need to disable swap. We’ll cover how to do this in the next section.
Step 1: Preparing Your Nodes (Master & Worker)
This is where the real preparation begins, folks. We need to get both our `master` and `worker` nodes in tip-top shape before we install Kubernetes. These steps are crucial, so pay close attention!
Disabling Swap Memory
As I mentioned, Kubernetes really doesn’t like swap memory. To disable it, SSH into both your `master` and `worker` machines and run the following command:
sudo swapoff -a
This command will immediately disable swap. However, it will be re-enabled after a reboot. To make this change permanent, we need to edit the `/etc/fstab` file. Open it with your favorite text editor (like `nano` or `vim`):
sudo nano /etc/fstab
Now, find the line that refers to swap (it will likely contain `/swap` or `swapfile`) and comment it out by adding a `#` at the beginning of the line. It should look something like this:
# /swapfile none swap sw 0 0
Save the file and exit the editor. You can verify that swap is disabled by running `sudo swapon --show` – it should return nothing.
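If you’d rather not open an editor at all, a one-liner along these lines should comment out the swap entry on a typical Ubuntu 20.04 install (a sketch: it assumes the swap line has `swap` in its filesystem-type column, so give `/etc/fstab` a quick look afterwards):
# Comment out any fstab entry whose type column is swap (safe to re-run)
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
# Verify: this should print nothing
sudo swapon --show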
Enabling Kernel Modules and System Settings
Kubernetes needs certain kernel modules to function correctly, and we need to configure some system settings to ensure smooth operation. Run these commands on both your `master` and `worker` nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.ipv4.ip_forward=1
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Let’s break down what these commands do, guys:
- `sudo modprobe overlay` and `sudo modprobe br_netfilter`: These load the `overlay` and `br_netfilter` kernel modules. The `overlay` module is used by container runtimes like Docker and containerd for layered filesystems, and `br_netfilter` is crucial for bridged network traffic, which Kubernetes relies on heavily.
- `sudo tee /etc/modules-load.d/k8s.conf <<EOF ... EOF`: This ensures that the `overlay` and `br_netfilter` modules are loaded automatically every time your system boots.
- `sudo sysctl net.bridge.bridge-nf-call-iptables=1` and `sudo sysctl net.ipv4.ip_forward=1`: These commands enable IP forwarding and ensure that network packets are correctly processed by `iptables` when they cross bridge network interfaces. This is essential for Kubernetes networking.
- `sudo tee /etc/sysctl.d/k8s.conf <<EOF ... EOF`: Similar to the previous `tee` command, this makes the IP forwarding and `iptables` settings persistent across reboots.
- `sudo sysctl --system`: This applies all the system-wide `sysctl` settings from configuration files, including the ones we just added (a quick verification sketch follows right after this list).
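A quick sanity check after running the above never hurts: both modules should show up in `lsmod`, and both sysctls should report 1.
# The kernel modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# Both values should be 1
sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward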
Installing Containerd (The Container Runtime)
Kubernetes needs a container runtime to actually run your containers. While Docker is popular, modern Kubernetes talks to a CRI-compatible runtime such as `containerd` directly, so that’s what we’ll use. Let’s get it installed on both nodes.
First, install the necessary packages:
sudo apt update
sudo apt install -y containerd
Now, we need to configure `containerd`. It can generate a default configuration file, but we need to modify it slightly to ensure it uses the `systemd` cgroup driver, which is generally recommended for Kubernetes.
Create the `containerd` configuration directory if it doesn’t exist:
sudo mkdir -p /etc/containerd
Generate the default configuration and then edit it:
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo nano /etc/containerd/config.toml
Inside the `config.toml` file, find the `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]` section and make sure `SystemdCgroup` is set to `true`:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
... # other options
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Save and exit the file. Finally, restart the `containerd` service to apply the changes:
sudo systemctl restart containerd
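If you’d rather skip the manual edit, this substitution usually does the trick on a freshly generated config (it assumes `SystemdCgroup = false` appears exactly once, so double-check the file if in doubt), and it’s worth making sure the service is enabled and healthy afterwards:
# Flip the cgroup driver to systemd without opening an editor
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Ensure containerd starts on boot and is running right now
sudo systemctl enable --now containerd
sudo systemctl status containerd --no-pager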
Step 2: Installing Kubernetes Components (Master Node)
Now that our nodes are prepped, let’s focus on the master node. This is where the Kubernetes control plane lives – the brains of the operation. One heads-up before you start: the worker node also needs `kubeadm` and `kubelet` installed so it can run the join command in Step 5, so repeat the package installation below on the worker as well (only the `kubeadm init` in Step 3 is master-only).
Installing kubeadm, kubelet, and kubectl
We’ll use `kubeadm` to bootstrap our cluster, `kubelet` to run on each node and manage pods, and `kubectl` to interact with the cluster. These tools need to be installed from a Kubernetes-specific APT repository.
First, update your package list and install `apt-transport-https` plus a few helper packages, which allow APT to retrieve packages from repositories over HTTPS:
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
Next, create the keyrings directory (it doesn’t exist by default on Ubuntu 20.04) and download the public signing key for the Kubernetes package repositories:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes APT repository to your sources list:
# For Kubernetes v1.29
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Note: If you’re aiming for a different Kubernetes version, replace `v1.29` with your desired version (e.g., `v1.28`) in both the key URL and the repository line above. Always check the official Kubernetes documentation for the latest stable versions and repository configurations.
Update your package list again to include the new repository:
sudo apt update
Finally, install `kubeadm`, `kubelet`, and `kubectl`. We’ll also pin the versions to prevent automatic upgrades to potentially incompatible releases:
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The `apt-mark hold` command is super important here. It tells APT not to automatically upgrade these packages, which can save you from breaking your cluster with an unexpected update.
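A quick check that everything landed and is pinned (entirely optional):
# Each tool should print a version
kubeadm version
kubectl version --client
kubelet --version
# kubelet, kubeadm, and kubectl should all be listed as held
apt-mark showhold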
Step 3: Initializing the Kubernetes Control Plane (Master Node)
This is the moment of truth, guys! We’re going to initialize the Kubernetes control plane on our `master` node using `kubeadm`. This command sets up all the necessary components for your cluster’s brain.
First, we need to know the IP address of our master node. You can find this using:
ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1
(replace `eth0` with your network interface if different). Let’s assume your master’s IP is `192.168.1.100` for this example.
Now, run the `kubeadm init` command. We’ll specify the Pod network CIDR, which is a crucial setting for your cluster’s networking. A common choice is `10.244.0.0/16` for Flannel or `192.168.0.0/16` for Calico. We’ll use `10.244.0.0/16` as it’s commonly used with Flannel, a popular network plugin.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100
- `--pod-network-cidr=10.244.0.0/16`: This tells Kubernetes that the IP addresses for your pods will come from this range. It’s essential for the CNI (Container Network Interface) plugin you’ll install later.
- `--apiserver-advertise-address=192.168.1.100`: This is the IP address that the Kubernetes API server will advertise on. Make sure this is the correct IP of your master node.
This command can take a few minutes to complete. If it’s successful, you’ll see a message indicating that the control plane has been initialized. Crucially, it will also provide you with commands to run to join your worker nodes to the cluster. Make sure you copy and save these commands securely! They contain a token and a discovery hash.
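A small convenience: if you run the init like this, the full output, join command included, is saved to a file you can come back to later (the log filename is arbitrary):
# Same init as above, but also keep a copy of the output for later reference
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.100 | tee kubeadm-init.log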
Setting up kubectl for the Current User
After `kubeadm init` completes, you won’t be able to use `kubectl` as a regular user yet. You need to configure `kubectl` to point to your new cluster. Run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now, you should be able to run `kubectl` commands. Try this to see if your control plane node shows up:
kubectl get nodes
You should see your `master` node listed, but it will likely be in a `NotReady` state. That’s perfectly normal because we haven’t set up networking yet!
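The output will look roughly like this (node name, age, and exact version will of course differ on your machine):
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   2m    v1.29.x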
Step 4: Installing a Pod Network Add-on (CNI)
Kubernetes needs a network plugin (a CNI, or Container Network Interface) to allow pods to communicate with each other across different nodes. Without it, your pods won’t be able to talk, and your cluster won’t function correctly. A popular and simple choice is Flannel. Let’s get it installed on your master node.
Apply the Flannel manifest file. You can download the latest version directly from the Flannel GitHub repository. Check their documentation for the most up-to-date YAML file. Usually, it looks something like this:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the Flannel configuration and applies it to your cluster. It will create the necessary pods (like a `kube-flannel-ds` DaemonSet) that run on each node to manage the network.
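You can watch the Flannel pods come up with something like this (recent Flannel manifests install into the `kube-flannel` namespace; older ones used `kube-system`, so try both if one comes back empty):
# One kube-flannel-ds pod per node should reach Running
kubectl get pods -n kube-flannel
# CoreDNS and the other control plane pods should be Running too
kubectl get pods -n kube-system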
After applying the Flannel manifest, wait a minute or two and then check your nodes again:
kubectl get nodes
You should now see your `master` node in the `Ready` state! 🎉
Step 5: Joining Worker Nodes to the Cluster
It’s time to bring our `worker` node into the fold! Remember those commands that `kubeadm init` gave you? They contain a token and a discovery hash needed to join the cluster. If you didn’t save them, don’t sweat it. You can generate a new token on the master node with:
sudo kubeadm token create --print-join-command
This command will output something like:
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234...xyz
Copy this entire command.
Now, SSH into your worker node and run the copied command (with `sudo`):
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Replace `<master-ip>`, `<token>`, and `<hash>` with the actual values you got. For instance:
sudo kubeadm join 192.168.1.100:6443 --token a1b2c3d4e5f6a7b8.1c2d3e4f5a6b7c8d --discovery-token-ca-cert-hash sha256:97e5a2413652b04f9e61c75c81319102d9b32c1a652d0152159f8a094e707d4f
Once the command finishes on the worker node, head back to your master node and run `kubectl get nodes` again:
kubectl get nodes
You should now see both your `master` and `worker` nodes listed, and they should both be in the `Ready` state! Congratulations, you’ve successfully installed a Kubernetes cluster on Ubuntu 20.04!
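Before you start deploying workloads, a final sanity check across the whole cluster doesn’t hurt:
# Every system pod should end up Running (or Completed)
kubectl get pods --all-namespaces
# Wider node view: internal IPs, OS image, and container runtime
kubectl get nodes -o wide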
What’s Next?
So, you’ve got a working Kubernetes cluster – awesome! But this is just the beginning, guys. What can you do now?
- Deploy an Application: Try deploying a simple application like Nginx or a multi-tier web app. You’ll use `kubectl create deployment` and `kubectl expose deployment` to get it running (see the sketch right after this list).
- Explore `kubectl`: Get familiar with `kubectl` commands for managing deployments, services, pods, and more. Commands like `kubectl get pods`, `kubectl logs <pod-name>`, and `kubectl describe <resource-name>` will become your best friends.
- Learn about Kubernetes Objects: Dive into Deployments, Services, Pods, Namespaces, ConfigMaps, Secrets, and PersistentVolumes. Understanding these core objects is key to building resilient applications.
- Consider Other CNI Plugins: While Flannel is great for getting started, explore other CNI plugins like Calico, Cilium, or Weave Net, which offer different features and performance characteristics.
- Set up Ingress: To access your applications from outside the cluster, you’ll need an Ingress controller.
- Explore Helm: For easier management of complex applications, learn about Helm, the package manager for Kubernetes.
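For that first deployment, a minimal Nginx example could look like this (the deployment name and the NodePort-style Service are just one reasonable choice):
# Run the stock nginx image as a Deployment
kubectl create deployment nginx --image=nginx
# Expose it outside the cluster via a NodePort Service
kubectl expose deployment nginx --port=80 --type=NodePort
# Note the assigned NodePort, then browse to http://<any-node-ip>:<node-port>
kubectl get service nginx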
Troubleshooting Common Issues
Even with the best guides, sometimes things don’t go as planned. Here are a few common hiccups and how to fix them:
- Node Not Ready: If a node is stuck in `NotReady` status, the most common culprits are:
  - Networking: Ensure your CNI plugin (like Flannel) is installed correctly and its pods are running (`kubectl get pods -n kube-flannel`, or `-n kube-system` with older manifests). Check the logs of the Flannel pods.
  - `kubelet` service: Make sure the `kubelet` service is running on the node (`sudo systemctl status kubelet`). Check its logs (`sudo journalctl -u kubelet`).
  - Firewall: Ensure your firewall isn’t blocking necessary Kubernetes ports (especially between nodes).
  - `containerd`: Verify that `containerd` is running and configured correctly.
- `kubeadm join` fails:
  - Token Expired: Tokens generated by `kubeadm token create` expire. Generate a new one and try joining again.
  - Incorrect Hash: Double-check the `discovery-token-ca-cert-hash`.
  - Network Connectivity: Ensure the worker node can reach the master node on port 6443.
  - `swap` enabled: Make sure swap is disabled on the worker node.
- `kubectl` command not found: Ensure `kubectl` is installed correctly and that your `$PATH` environment variable includes the directory where it’s installed.
If you’re still stuck, the quick diagnostic sweep sketched below is a good place to start.
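When a node misbehaves and the cause isn’t obvious, a sweep like this usually narrows things down (run the `kubectl` command from the master and the other two on the affected node):
# Node conditions and recent events
kubectl describe node <node-name>
# Last 50 lines of kubelet logs on the affected node
sudo journalctl -u kubelet --no-pager | tail -n 50
# Container runtime health on the affected node
sudo systemctl status containerd --no-pager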
Conclusion
And there you have it, legends! You’ve successfully navigated the installation of Kubernetes on Ubuntu 20.04. Getting a cluster up and running is a huge step, and it opens up a world of possibilities for deploying and managing modern applications. Remember, practice makes perfect, so keep experimenting, keep learning, and don’t be afraid to break things and fix them. This is how we grow, right? Happy containerizing, and I’ll catch you in the next one!