Kafka, Zookeeper & Docker Compose: A Quick Setup Guide
Hey everyone! Ever wanted to dive into the world of Kafka but got bogged down in the setup? Well, you’re in the right place. This guide will walk you through setting up Kafka and Zookeeper using Docker Compose. It’s a super handy way to get a local development environment up and running quickly. So, let’s get started!
Table of Contents
- Why Docker Compose for Kafka and Zookeeper?
- Benefits of Using Docker Compose
- Prerequisites
- Creating the docker-compose.yml File
- Breaking Down the docker-compose.yml File
- Important Configuration Details
- Starting the Kafka and Zookeeper Cluster
- Monitoring the Startup Process
- Accessing Kafka and Zookeeper
- Kafka
- Kafka UI
- Zookeeper
- Testing the Kafka Setup
- Creating a Topic
- Producing a Message
- Consuming a Message
- Stopping and Removing the Cluster
- Conclusion
Why Docker Compose for Kafka and Zookeeper?
Before we dive into the how-to, let’s quickly cover the why.
Docker Compose is a tool that allows you to define and run multi-container Docker applications. This means you can define all the services your application needs (like Kafka and Zookeeper) in a single docker-compose.yml file. Then, with a single command, you can start all those services. It’s like magic, but with more YAML.
Benefits of Using Docker Compose
- Simplified Setup: Instead of manually configuring Kafka and Zookeeper, Docker Compose automates the process, reducing the risk of errors and saving you valuable time. This is especially useful when you’re just trying to learn or test things out.
- Isolation: Docker containers provide isolation, meaning your Kafka and Zookeeper instances won’t interfere with other services or environments on your machine. This keeps everything clean and organized.
- Reproducibility: With a docker-compose.yml file, you can easily recreate the same environment on different machines. This is great for collaboration and ensuring everyone is working with the same configuration.
- Easy Cleanup: When you’re done experimenting, you can easily stop and remove all the containers with a single command. No more hunting down processes and deleting files manually.
Prerequisites
Before we get our hands dirty, make sure you have the following installed:
- Docker: You’ll need Docker installed on your machine. If you don’t have it yet, head over to the official Docker website and follow the installation instructions for your operating system.
- Docker Compose: Docker Compose usually comes bundled with Docker Desktop. If you’re using Docker Engine on its own, you might need to install Docker Compose separately. Check the Docker documentation for details.
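As a quick sanity check before moving on, you can confirm both tools are available from your terminal (exact version numbers will vary by installation):

```shell
# Verify Docker is installed
docker --version

# Verify Docker Compose is available, either as the standalone
# binary or as the plugin bundled with recent Docker releases
docker-compose --version || docker compose version
```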
Once you have these prerequisites in place, you’re ready to start building your docker-compose.yml file.
Creating the docker-compose.yml File
Alright, let’s create a docker-compose.yml file in a directory of your choice. This file will define our Kafka and Zookeeper services. Open your favorite text editor and paste the following configuration:
version: '3.7'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0  # pinned so the ZooKeeper-based setup keeps working
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:7.5.0  # pinned: newer (KRaft-only) images drop ZooKeeper support
    hostname: kafka
    ports:
      - "29092:29092"  # host-facing listener
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,BROKER://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,BROKER://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,BROKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    ports:
      - "8080:8080"
    depends_on:
      - kafka
    environment:
      KAFKA_CLUSTERS_0_NAME: Local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
Breaking Down the docker-compose.yml File
Let’s take a closer look at what’s happening in this file:
- version: '3.7': Specifies the version of the Docker Compose file format.
- services: Defines the different services that make up your application.
  - zookeeper: This service defines the Zookeeper container.
    - image: Specifies the Docker image to use for Zookeeper. Here we’re using the Confluent Platform Zookeeper image.
    - hostname: zookeeper: Sets the hostname for the Zookeeper container.
    - ports: Maps port 2181 on the host machine to port 2181 in the container, so you can access Zookeeper from your host.
    - environment: Sets environment variables that configure Zookeeper’s client port and tick time.
  - kafka: This service defines the Kafka container.
    - image: Specifies the Docker image to use for Kafka, the Confluent Platform Kafka image.
    - hostname: kafka: Sets the hostname for the Kafka container.
    - ports: Publishes the broker’s host-facing listener port so clients on your machine can reach Kafka.
    - depends_on: Specifies that the Kafka container depends on the Zookeeper container, so Zookeeper is started before Kafka.
    - environment: Sets environment variables that point Kafka at Zookeeper, set the broker ID, and configure the listeners.
  - kafka-ui: This service defines the Kafka UI container.
    - image: Specifies the Docker image to use for Kafka UI (the provectuslabs Kafka UI image).
    - ports: Maps port 8080 on the host machine to port 8080 in the container, so you can open Kafka UI from your host.
    - depends_on: Specifies that the Kafka UI container depends on the Kafka container, so Kafka is started before Kafka UI.
    - environment: Sets environment variables that tell Kafka UI how to connect to the Kafka broker.
Important Configuration Details
- KAFKA_ADVERTISED_LISTENERS: This is a crucial setting. It tells Kafka how to advertise its listeners to clients. In this example, PLAINTEXT://kafka:9092 is used for communication within the Docker network and BROKER://localhost:29092 for access from your host machine. You might need to adjust this depending on your specific setup.
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: This maps the listener names to security protocols. Here, we’re using PLAINTEXT for both internal and external communication. For production environments, you’d likely want something more secure like SASL_SSL.
- KAFKA_INTER_BROKER_LISTENER_NAME: This specifies the listener name for communication between brokers. We’re using BROKER here.
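For context, the protocol map is what you would extend when hardening the setup later. A hypothetical production-leaning variant might look like the sketch below; it is deliberately incomplete, since SASL_SSL also requires keystore, truststore, and JAAS settings that are not shown here:

```yaml
# Hypothetical hardened variant (incomplete sketch; TLS/SASL credentials omitted)
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,BROKER:SASL_SSL
KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
```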
Starting the Kafka and Zookeeper Cluster
Now that you have your docker-compose.yml file, you can start the Kafka and Zookeeper cluster. Open a terminal, navigate to the directory where you saved the file, and run the following command:
docker-compose up -d
This command tells Docker Compose to start all the services defined in the docker-compose.yml file in detached mode (-d), meaning they’ll run in the background. Docker Compose will pull the necessary images, create the containers, and start them in the correct order.
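Before moving on, it’s worth a quick check that all three services actually came up. One way, assuming you’re still in the directory containing the docker-compose.yml file:

```shell
# List this project's containers; all three services should report "Up"
docker-compose ps
```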
Monitoring the Startup Process
You can monitor the startup process by running the following command:
docker-compose logs -f
This will show you the logs from all the containers. Look for any errors or warnings. It usually takes a few minutes for everything to start up completely.
Accessing Kafka and Zookeeper
Once everything is up and running, you can access Kafka and Zookeeper from your host machine.
Kafka
Kafka is accessible at kafka:9092 from within the Docker network and at localhost:29092 from your host machine.
You can use the Kafka command-line tools (like kafka-console-producer.sh and kafka-console-consumer.sh) to interact with Kafka. These tools are usually included in Kafka distributions. Alternatively, you can use a Kafka client library in your programming language of choice.
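For example, if you happen to have the kcat utility (formerly kafkacat) installed on your host, you can sanity-check external connectivity through the advertised listener. This assumes the host-facing listener is reachable at localhost:29092:

```shell
# Print cluster metadata from the host: brokers, topics, and partitions
kcat -b localhost:29092 -L
```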
Kafka UI
Kafka UI is accessible at localhost:8080 from your web browser.
You can use Kafka UI to inspect the Kafka cluster, view topics, browse messages, and manage consumers.
Zookeeper
Zookeeper is accessible at localhost:2181. You typically don’t interact with Zookeeper directly unless you’re doing advanced configuration or troubleshooting. You can use the Zookeeper command-line client (zkCli.sh) to connect to Zookeeper.
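As a quick sketch of what that looks like, you can open zkCli.sh inside the Zookeeper container and ask which brokers have registered. Replace <zookeeper_container_id> with the ID shown by docker ps:

```shell
# Open an interactive Zookeeper shell inside the container
docker exec -it <zookeeper_container_id> zkCli.sh -server localhost:2181

# Then, inside that shell, list registered broker IDs;
# a healthy single-broker cluster shows [1]
ls /brokers/ids
```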
Testing the Kafka Setup
Let’s test the Kafka setup by creating a topic, producing a message, and consuming that message.
Creating a Topic
You can create a topic using the kafka-topics tool. First, you’ll need to either download a Kafka distribution or use a Docker container that includes the Kafka command-line tools. Conveniently, the confluentinc/cp-kafka image our broker runs on already ships with them.
Run the following command to create a topic named test-topic:
docker exec -it <kafka_container_id> kafka-topics --create --topic test-topic --partitions 1 --replication-factor 1 --if-not-exists --bootstrap-server kafka:9092
Replace <kafka_container_id> with the actual ID of your Kafka container. You can find the container ID using docker ps. Note that modern Kafka versions removed the old --zookeeper flag from this tool, so the command talks to the broker directly via --bootstrap-server.
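Once the command returns, you can verify the topic actually exists, again replacing the placeholder with your container ID:

```shell
# List all topics on the broker
docker exec -it <kafka_container_id> kafka-topics --list --bootstrap-server kafka:9092

# Show partitions, replication factor, and leader assignment for the new topic
docker exec -it <kafka_container_id> kafka-topics --describe --topic test-topic --bootstrap-server kafka:9092
```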
Producing a Message
Now, let’s produce a message to the test-topic using the kafka-console-producer tool:
docker exec -it <kafka_container_id> kafka-console-producer --topic test-topic --bootstrap-server kafka:9092
This will open a console where you can type messages. Type a message and press Enter, and it will be sent to the test-topic. Press Ctrl+C when you’re done.
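If you’d rather not open an interactive console, you can also pipe a single message straight into the producer (note -i instead of -it, since stdin comes from the pipe):

```shell
# Send one message non-interactively
echo "hello from the host" | docker exec -i <kafka_container_id> kafka-console-producer --topic test-topic --bootstrap-server kafka:9092
```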
Consuming a Message
Finally, let’s consume the message from the test-topic using the kafka-console-consumer tool:
docker exec -it <kafka_container_id> kafka-console-consumer --topic test-topic --from-beginning --bootstrap-server kafka:9092
This will start a consumer that reads messages from the beginning of the test-topic. You should see the message you produced earlier. Press Ctrl+C to stop the consumer.
Stopping and Removing the Cluster
When you’re done experimenting, you can stop and remove the Kafka and Zookeeper cluster by running the following command in the same directory as your docker-compose.yml file:
docker-compose down
This will stop and remove all the containers and networks associated with the Docker Compose project (add the -v flag if you also want to remove any associated volumes). It’s a clean way to tear down the environment when you’re finished.
Conclusion
And there you have it! You’ve successfully set up a local Kafka and Zookeeper environment using Docker Compose. This setup is perfect for development, testing, and learning about Kafka. Remember to adjust the configuration to suit your specific needs, especially when moving to production environments. Using Docker Compose simplifies the whole process, allowing you to focus on what really matters: building awesome applications with Kafka.
Hope this helps you get started with Kafka! Happy coding, and feel free to reach out if you have any questions. Cheers!