Kubernetes Cluster Setup on Ubuntu 24.04 LTS: A Simple Guide
Hey everyone! So you’re looking to get your hands dirty with Kubernetes, huh? Awesome choice, guys! Setting up a Kubernetes cluster on Ubuntu 24.04 LTS might sound a bit daunting at first, but trust me, it’s totally doable and incredibly rewarding. This guide is all about breaking down that Kubernetes cluster setup on Ubuntu 24.04 LTS server into bite-sized, manageable steps. We’re going to walk through everything from the initial server prep to getting your first pods running, making sure you understand each part of the process. Whether you’re a seasoned sysadmin or just dipping your toes into the world of container orchestration, this tutorial is designed for you. We’ll focus on a practical, hands-on approach, so get ready to type some commands and see your cluster come to life! Let’s dive in and get this Kubernetes adventure started.
Table of Contents
- Pre-flight Checks: Getting Your Ubuntu 24.04 Servers Ready for Kubernetes
- Installing Kubernetes Components: kubeadm, kubelet, and kubectl on Ubuntu 24.04
- Initializing the Control Plane: Bootstrapping Your Kubernetes Cluster
- Joining Worker Nodes to the Cluster: Expanding Your Kubernetes Power
- Installing a Pod Network Add-on: Enabling Pod-to-Pod Communication
- Deploying Your First Application: Testing Your Kubernetes Cluster
- Conclusion: Your Kubernetes Journey on Ubuntu 24.04 LTS Has Just Begun!
Pre-flight Checks: Getting Your Ubuntu 24.04 Servers Ready for Kubernetes
Alright, before we even think about installing Kubernetes, we gotta make sure our Ubuntu 24.04 LTS servers are in tip-top shape. Think of this as prepping your race car before hitting the track – no shortcuts here, guys! First things first, you’ll need at least two Ubuntu 24.04 servers, each with at least 2 CPUs and 2 GB of RAM (kubeadm’s documented minimums for a control plane machine). One will act as your control plane (the brain of the operation), and the others will be your worker nodes (the muscle). Make sure they have stable network connectivity and can talk to each other. Kubernetes cluster setup on Ubuntu 24.04 LTS server really hinges on good network communication.
Now, let’s get these machines hardened up. We need to disable swap. Kubernetes doesn’t play well with swap enabled, as it can lead to unexpected performance issues and scheduling problems. So, on all your nodes (control plane and workers), run sudo swapoff -a and then comment out the swap line in /etc/fstab by adding a # at the beginning.
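If you’d rather do both steps in one go, here’s a minimal sketch (the sed pattern is just one common way to comment out swap entries in /etc/fstab – it assumes the word swap appears with whitespace around it, so double-check the file afterwards):
# Turn swap off right now
sudo swapoff -a
# Comment out any swap entries so the change survives a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab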
This ensures swap stays off after a reboot. Next up, we need to configure some kernel modules and sysctl parameters. Kubernetes needs certain network configurations to function correctly. Run these commands on all nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
These commands load essential modules and set up IP forwarding and iptables bridging, which are crucial for how Kubernetes handles network traffic between pods. Seriously, don’t skip this part! It’s a common stumbling block for beginners when doing a Kubernetes cluster setup on Ubuntu 24.04 LTS.
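To double-check that everything took effect (just a sanity check with standard tools), you can run:
# The modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# All three values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward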
Another critical step is ensuring that your nodes can be identified uniquely. Kubernetes uses hostnames to identify nodes. Make sure each of your servers has a unique hostname. You can check your current hostname with hostnamectl and set it using sudo hostnamectl set-hostname <your-new-hostname>. You might need to update your /etc/hosts file as well to reflect these changes, especially if you’re not using DNS.
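For example, on a small three-node lab cluster you might do something like this (the hostnames and IP addresses here are purely illustrative placeholders – substitute your own):
# On the control plane node (repeat with the appropriate name on each worker)
sudo hostnamectl set-hostname k8s-control
# On every node, map the hostnames to their IPs in /etc/hosts (example addresses only)
echo "10.0.0.10 k8s-control" | sudo tee -a /etc/hosts
echo "10.0.0.11 k8s-worker1" | sudo tee -a /etc/hosts
echo "10.0.0.12 k8s-worker2" | sudo tee -a /etc/hosts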
Finally, let’s talk about container runtimes. Kubernetes needs a way to run your containers, and the most common choice is containerd. We need to install and configure it. On all nodes, run:
# Install containerd
sudo apt-get update && sudo apt-get install -y containerd
# Configure containerd to use systemd as the cgroup driver
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Restart containerd
sudo systemctl restart containerd
Using systemd as the cgroup driver is important because Ubuntu 24.04 LTS uses systemd, and the kubelet and the container runtime need to agree on the cgroup driver; a mismatch is a classic source of node instability. Getting these preliminary steps right is key to a smooth Kubernetes cluster setup on Ubuntu 24.04 LTS.
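A quick way to confirm the change took hold (a simple sanity check, nothing more):
# Should print: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml
# containerd should report active
sudo systemctl is-active containerd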
It might seem like a lot, but it lays a super solid foundation. Now that our servers are prepped, we’re ready to install the actual Kubernetes components!
Installing Kubernetes Components: kubeadm, kubelet, and kubectl on Ubuntu 24.04
Alright, guys, we’ve prepped our Ubuntu 24.04 servers, and now it’s time to install the stars of the show: kubeadm, kubelet, and kubectl. These are the essential tools you need to bootstrap and manage your Kubernetes cluster. Think of kubeadm as the installer, kubelet as the agent that runs on each node, and kubectl as your command-line remote control. The process is pretty similar across all your nodes, but we’ll focus on the control plane first.
First, we need to add the Kubernetes package repositories. This ensures you get the latest stable versions. On all your nodes (yes, even the worker nodes!), run the following commands:
# Update package list and install dependencies
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repositories
# Note: the minor version in the URL (v1.30 here) pins the release series. Check the official Kubernetes documentation for the latest version and adjust accordingly.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
The only thing you might need to change is the Kubernetes minor version in the repository URL (v1.30 in the example above) – use the same version in both the key URL and the repository line. After adding the repository, update your package list again:
sudo apt-get update
Now, we can install the Kubernetes components. On all your nodes, run:
sudo apt-get install -y kubelet kubeadm kubectl
This command installs the necessary binaries. However, we need to prevent them from being automatically upgraded or uninstalled. So, run this on all nodes:
sudo apt-mark hold kubelet kubeadm kubectl
This apt-mark hold command is super important. It tells the package manager not to touch these packages, which is vital for maintaining a stable cluster. If you ever need to upgrade them later, you’ll need to unhold them first.
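For reference, the later upgrade dance looks roughly like this – a sketch of the general apt pattern only; for an actual cluster upgrade you should follow the official kubeadm upgrade procedure (kubeadm upgrade plan / apply) in between:
# Release the hold, upgrade the packages, then pin them again
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl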
Important Note: kubelet will try to start, but it will fail because it’s not yet configured. This is expected behavior! We’ll configure it during the cluster initialization. So, don’t worry if you see errors related to kubelet starting up.
With these packages installed and held, your nodes are now ready to participate in a Kubernetes cluster. This stage is crucial for any Kubernetes cluster setup on Ubuntu 24.04 LTS server. We’ve got the core tools in place. The next big step is initializing the control plane, which is where the magic really begins!
Initializing the Control Plane: Bootstrapping Your Kubernetes Cluster
Alright, we’ve installed our Kubernetes tools, and now it’s time for the main event: initializing the control plane! This is where kubeadm shines. We’ll run this command only on our designated control plane node. This command bootstraps the cluster, sets up the control plane components (like the API server, scheduler, and controller manager), and prepares the node to act as the master. For a successful Kubernetes cluster setup on Ubuntu 24.04 LTS server, this step is absolutely critical.
Before we run kubeadm init, we need to decide on a pod network. Kubernetes needs a network plugin (CNI - Container Network Interface) to allow pods to communicate with each other. Popular choices include Calico, Flannel, and Weave Net. For this guide, let’s assume we’ll use Calico, as it’s widely used and offers great features. You’ll need to know the IP address of your control plane node that will be reachable by your worker nodes. You can find this using ip addr show.
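If you just want the address without scrolling through the full interface list, either of these works (standard tools, pick whichever you prefer):
# All IPv4 addresses assigned to this host
hostname -I
# Or only the addresses with global scope (i.e., not loopback or link-local)
ip -4 addr show scope global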
Now, let’s run the kubeadm init command. We’ll pass a few important flags to configure our cluster. Here’s a typical command:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=<your-control-plane-ip>:6443 --upload-certs
Let’s break down these flags:
- --pod-network-cidr=192.168.0.0/16: This specifies the IP address range for your pods. Make sure this CIDR does not overlap with your existing network ranges. Calico uses this range by default, so 192.168.0.0/16 is a good choice.
- --control-plane-endpoint=<your-control-plane-ip>:6443: This is crucial. Replace <your-control-plane-ip> with the actual IP address of your control plane node. This endpoint is how worker nodes will find and communicate with the control plane. Port 6443 is the default Kubernetes API server port.
- --upload-certs: This flag uploads the necessary certificates to a cluster secret, making it easier for other control plane nodes (if you add them later) to join securely. It’s a good practice for production-like setups.
Once kubeadm init completes successfully, you’ll see some important output. Pay close attention to it! It will give you two main pieces of information:
- Commands to run as a regular user: You’ll see instructions like mkdir -p $HOME/.kube and sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config. These commands are necessary so that your regular user can interact with the cluster using kubectl. Execute these commands immediately (the full sequence is shown below).
- The kubeadm join command: This is the command you’ll use to join your worker nodes to the cluster. It will look something like sudo kubeadm join <your-control-plane-ip>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>. Save this command securely! You’ll need it for each worker node you want to add.
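The exact kubeconfig commands are printed by kubeadm init itself; they typically look like this (including a chown so the copied file belongs to your user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config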
If kubeadm init fails, don’t panic! Usually, it’s because of one of the pre-flight checks we discussed earlier (like swap not being disabled or incorrect network configurations). Review the error messages, fix the underlying issue, and then run sudo kubeadm reset on the control plane node before trying kubeadm init again.
After running the commands to configure kubectl for your user, you should be able to verify that the control plane components are running. Try running:
kubectl get pods -n kube-system
You should see pods like etcd, kube-apiserver, kube-controller-manager, and kube-scheduler running. This confirms that your control plane is up and running! The Kubernetes cluster setup on Ubuntu 24.04 LTS is well underway. Now, let’s get those worker nodes joining the party.
Joining Worker Nodes to the Cluster: Expanding Your Kubernetes Power
We’ve got our control plane humming along beautifully, and now it’s time to bring our worker nodes into the fold. This is where the kubeadm join command we saved earlier comes into play. Remember that command? It’s the golden ticket for your worker nodes to become part of your Kubernetes cluster setup on Ubuntu 24.04 LTS server. Each worker node needs to run this command to register itself with the control plane.
Make sure each of your worker nodes has gone through the initial server preparation steps we covered in the first section (disabling swap, configuring kernel modules, installing containerd, and installing kubelet, kubeadm, and kubectl with apt-mark hold). If you haven’t done this, go back and do it now on each worker node. It’s crucial for them to have these components installed and configured correctly.
Now, SSH into each of your worker nodes. You’ll need root privileges (or use sudo) to run the kubeadm join command. Paste the command you saved from the kubeadm init output. It will look something like this:
sudo kubeadm join <your-control-plane-ip>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>
Replace <your-control-plane-ip>, <your-token>, and <your-hash> with the actual values from your control plane node.
What if you lost the join command or the token expired? No worries, guys! You can generate a new token on the control plane node with:
# Generate a new token
sudo kubeadm token create
And you can get the CA certificate hash again with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Then, construct your kubeadm join command using the new token and the hash. Remember to run the kubeadm join command with sudo on the worker node.
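Even simpler, kubeadm can generate a fresh token and print the complete join command in one step (run this on the control plane node):
# Prints a ready-to-paste kubeadm join command, hash included
sudo kubeadm token create --print-join-command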
Once the kubeadm join command runs successfully on a worker node, it will register itself with the cluster. You should see a confirmation message indicating that the node has joined.
Now, head back to your control plane node (or any machine where you have kubectl configured). You can verify that the worker nodes have joined by running:
kubectl get nodes
You should see your control plane node listed, along with your worker nodes. Initially, the worker nodes might show up with a NotReady status. This is perfectly normal because we haven’t installed a network plugin yet!
This is a critical part of the Kubernetes cluster setup on Ubuntu 24.04 LTS server process. Having your nodes listed and eventually showing Ready status confirms that your cluster is forming correctly. The final piece of the puzzle is installing that network plugin.
Installing a Pod Network Add-on: Enabling Pod-to-Pod Communication
We’ve successfully initialized our control plane and joined our worker nodes. The kubectl get nodes command shows our nodes, but they’re probably still in a NotReady state. Why? Because Kubernetes needs a pod network add-on (also known as a CNI plugin) to enable communication between pods running on different nodes. Without this, your pods can’t talk to each other, and your cluster isn’t truly functional.
This is a super important step for any Kubernetes cluster setup on Ubuntu 24.04 LTS server. Let’s stick with Calico, which we mentioned earlier. Calico is a powerful and widely-used CNI that provides networking and network policy features. To install Calico, you’ll apply a YAML manifest file directly to your cluster using kubectl.
First, ensure you are on your control plane node, or a machine where kubectl is configured to talk to your cluster. Then, run the following command to download and apply the Calico manifest:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
Note: The URL provided is for a specific version of Calico. It’s always a good idea to check the official Calico documentation for the latest stable version and the correct manifest URL. You can usually find these on their GitHub repository or official website. For example, if a newer version like v3.29.0 is available and recommended, you’d replace the URL accordingly.
This command tells Kubernetes to create all the necessary resources (Deployments, DaemonSets, ConfigMaps, etc.) for Calico to run. It will pull the Calico container images and deploy them across your cluster. This process might take a few minutes to complete.
Once the command finishes, you can monitor the progress by checking the pods, especially the ones related to Calico, and the status of your nodes:
# Check Calico pods (with the calico.yaml manifest above, they land in kube-system)
kubectl get pods -n kube-system | grep calico
# Check all pods to see if everything is coming up
kubectl get pods --all-namespaces
# Check the status of your nodes again
kubectl get nodes
After a short while, you should see the Calico pods (calico-node on every node, plus calico-kube-controllers) running in the kube-system namespace; if you ever switch to Calico’s operator-based install, they’ll live in calico-system instead. Crucially, your worker nodes should transition from NotReady to Ready status when kubectl get nodes is run again. This means they have successfully joined the network and can communicate properly.
If your nodes remain NotReady, double-check the Calico pod logs for any errors. Common issues include network connectivity problems between nodes or incorrect configuration of the pod network CIDR during kubeadm init. The kubectl logs <pod-name> -n kube-system command is your best friend here.
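A couple of extra commands that help narrow things down (standard kubectl; the names in angle brackets are placeholders):
# Watch nodes flip from NotReady to Ready in real time
kubectl get nodes -w
# Detailed conditions and recent events for a specific node
kubectl describe node <node-name>
# Logs from a specific Calico pod (copy the name from kubectl get pods)
kubectl logs <calico-pod-name> -n kube-system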
Congratulations! You’ve now completed the core Kubernetes cluster setup on Ubuntu 24.04 LTS server. You have a fully functional Kubernetes cluster ready to deploy your applications. In the next section, we’ll quickly cover how to deploy a simple application to test your new setup.
Deploying Your First Application: Testing Your Kubernetes Cluster
Boom! You’ve done it! You’ve successfully set up a Kubernetes cluster on Ubuntu 24.04 LTS server. But how do you know it’s actually working? The best way to find out is to deploy a simple application and see it in action. This is the moment of truth, guys!
Let’s deploy a classic: a simple Nginx web server. We’ll create a Kubernetes Deployment, which is an object that manages a set of identical pods, and then expose it with a Service so we can access it from outside the cluster.
First, let’s create a deployment.yaml file. You can use your favorite text editor (like nano or vim) for this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # Let's run two instances of Nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Save this file. This manifest tells Kubernetes to create a Deployment named nginx-deployment, which will ensure that there are always 2 pods running, each containing an Nginx container listening on port 80.
Now, apply this deployment to your cluster using kubectl:
kubectl apply -f deployment.yaml
Kubernetes will now create the pods for your Nginx deployment. You can check the status:
kubectl get deployments
kubectl get pods
You should see your nginx-deployment and the pods it created, all in a Running state.
Next, we need to expose this deployment so we can access Nginx. You could use a Service of type LoadBalancer, but a true LoadBalancer typically needs an external cloud provider or a dedicated load balancer solution (like MetalLB on bare metal) integrated with Kubernetes. For a local setup like this, NodePort is often more practical, so for simplicity, let’s create a NodePort service.
Create a service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx # This selects pods with the label app=nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080 # You can choose a port in the range 30000-32767
  type: NodePort
Apply this service definition:
kubectl apply -f service.yaml
Now, you can access your Nginx server by opening a web browser and navigating to http://<your-node-ip>:30080. Replace <your-node-ip> with the IP address of any of your nodes (control plane or worker). You should see the default Nginx welcome page!
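If your servers are headless and you don’t have a browser handy, curl from any machine that can reach the nodes works just as well (the IP is a placeholder):
# Should return the HTML of the default Nginx welcome page
curl http://<your-node-ip>:30080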
This successful deployment and access confirms that your Kubernetes cluster setup on Ubuntu 24.04 LTS server is working as expected. You’ve got pods running, and you can access services. This is just the beginning of what you can do with Kubernetes. Keep exploring, keep learning, and happy containerizing!
Conclusion: Your Kubernetes Journey on Ubuntu 24.04 LTS Has Just Begun!
And there you have it, folks! You’ve successfully navigated the process of Kubernetes cluster setup on Ubuntu 24.04 LTS server. From prepping your machines and installing the core components like kubeadm, kubelet, and kubectl, to initializing the control plane, joining worker nodes, and finally enabling inter-pod communication with a network add-on, you’ve tackled it all. Deploying that simple Nginx application was the cherry on top, proving your cluster is alive and kicking.
Remember, this is just the starting point. The world of Kubernetes is vast and incredibly powerful. You’ve laid a solid foundation on a stable platform – Ubuntu 24.04 LTS. This Kubernetes cluster setup on Ubuntu 24.04 LTS server guide was designed to be practical and straightforward, cutting through the jargon to get you up and running efficiently.
What’s next? Dive deeper into Kubernetes concepts like Deployments, StatefulSets, Persistent Volumes, and Ingress controllers. Explore different CNI plugins, experiment with different container runtimes, and learn about security best practices. The possibilities are truly endless. Whether you’re building for development, staging, or production, mastering Kubernetes will open up a whole new level of efficiency and scalability for your applications.
Keep experimenting, keep learning, and don’t be afraid to break things and fix them – that’s how we all learn! Thanks for following along, and happy orchestrating!