How Does a Dockerfile Work? Write Your First Dockerfile Today!

Today, we’ll be diving deep into Dockerfiles, exploring how they work and breaking down the syntax step by step. By the end of this article, you’ll have a clear understanding of how to create and optimize a basic Dockerfile for your applications.

Welcome to Day 4 of the Docker Simplified Series! If you’re new to Docker or need a refresher on the basics, I recommend checking out this previous article before continuing to ensure you have a solid understanding of Docker’s core concepts.

Now, let’s jump into today’s topic—learning Dockerfile end-to-end and gaining clarity on how it helps automate your application deployment.

What is a Dockerfile?

In simple terms, a Dockerfile is a text document that outlines the steps needed to create a Docker image. It serves as a blueprint for Docker, specifying how to package and execute your application within a container.

A Simple Dockerfile Example for a Python Application

Let’s take a look at a Dockerfile example for a basic Python app:

FROM python:3.11-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]

You might be wondering—what exactly is this Dockerfile telling the container to do? Let’s break it down step by step.

FROM: Choosing a Base Image

FROM python:3.11-slim

This is where everything begins. The FROM command specifies the base image, in this case python:3.11-slim, an official image that already includes Python and pip. (A bare OS image such as ubuntu:latest would also work, but you’d first have to install Python and pip yourself with an extra RUN apt-get step.) A base image is like the foundation for your app. Docker pulls this image from Docker Hub, where you can find a variety of pre-configured images. It’s not a full operating system but a lightweight environment that sets the groundwork for your application.

WORKDIR: Setting the Working Directory

WORKDIR /app

The WORKDIR command sets the working directory inside the container. Think of it like navigating to a folder in your terminal. Here, we’re telling Docker to create and switch to /app.
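
To make this concrete: later instructions resolve relative paths against the working directory, so once WORKDIR is set, the two COPY destinations in this small illustrative sketch end up in the same place:

```dockerfile
WORKDIR /app
# These two lines are equivalent once WORKDIR is /app:
COPY requirements.txt /app/requirements.txt
COPY requirements.txt .
```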

COPY: Moving Your Code Inside the Container

COPY . /app

The COPY command copies your app’s code from your local machine into the container. Here, we’re copying everything from the current directory into /app inside the container.
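
One thing to watch with COPY . is that it grabs everything in the build context, including files you probably don’t want baked into the image. A .dockerignore file in the project root filters those out; here’s an illustrative sketch for a typical Python project:

```
__pycache__/
*.pyc
.git
.env
venv/
```

Docker skips anything matching these patterns when it sends the build context to the daemon, which keeps images smaller and builds faster.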

If your code is hosted on GitHub instead, you can replace the COPY command with this:

RUN git clone https://github.com/your-username/your-repo.git .

This will clone your repository directly into the container. Note that git isn’t included in most base images, so you’d need to install it first (for example, RUN apt-get update && apt-get install -y git on Debian- or Ubuntu-based images). In practice, cloning on your machine and using COPY is usually the cleaner option.

RUN: Installing Dependencies

RUN pip install -r requirements.txt

The RUN command executes a command at image build time; each RUN adds a new layer to the image. Here, we’re installing the Python dependencies listed in requirements.txt.

If you’re using Node.js, the command would look like this:

RUN npm install

EXPOSE: Defining the Container’s Port

EXPOSE 5000

The EXPOSE command documents which port your app listens on inside the container (in this case, port 5000). It’s metadata rather than an actual network rule: it doesn’t publish the port on its own. To make the port accessible from your host machine, you map it when you run the container:

docker run -p 5000:5000 your-image

CMD: Running the Application

CMD ["python", "app.py"]

The CMD command sets the default command that runs when a container starts from the image. Here, it launches your Python app with app.py. Unlike RUN, which executes at build time, CMD executes at container runtime.

If you're ready to see this Dockerfile in action, try creating the image and running the Docker container yourself. Give it a go now!
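
Assuming the Dockerfile above sits in your project folder next to app.py and requirements.txt, the build and run commands would look like this (the tag my-python-app is just an example name):

```sh
# Build an image from the Dockerfile in the current directory
docker build -t my-python-app .

# Start a container, mapping host port 5000 to the container's port 5000
docker run -p 5000:5000 my-python-app
```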

Writing Your First Dockerfile: A Node.js Example

Let’s walk through a scenario where you need to write a Dockerfile for a Node.js application. Here’s what the project structure looks like:

my-node-app/
├── app.js
├── package.json
└── package-lock.json

To containerize this app, create a Dockerfile in the root directory with this structure:

FROM node:latest

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

Breaking down the unique steps in this Dockerfile:

You might notice we copy package.json and package-lock.json before copying the rest of the application code. This is an important practice in Dockerfile creation: it takes advantage of Docker’s layer caching.

Here’s why it’s beneficial:

  • Efficiency: Docker can cache the layer that installs dependencies, so when only your code changes, it doesn’t have to re-install everything.

  • Minimizing Builds: This reduces the number of times you need to run npm install, leading to faster builds.
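
To see why the order matters, here’s a sketch of the less efficient alternative, where dependencies get reinstalled after any code change:

```dockerfile
# Anti-pattern: copying everything first means any edit to app.js
# invalidates this layer...
COPY . .
# ...so Docker can't reuse the cached layer below, and npm install
# re-runs on every build even when package.json hasn't changed.
RUN npm install
```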

Note: This is the basic structure of a Dockerfile. Advanced topics will be covered in future articles.

Conclusion

Congratulations! You’ve just written your first Dockerfile for a Node.js project. By following these steps, you can easily package your application into a container, ensuring consistency across different environments.

Feel free to comment down below if you have any questions, and don’t forget to subscribe so you don’t miss the next part of this series. Stay tuned for more Docker goodness!

Next Article Preview: We’ll dive into how a Docker image works, how it stores your application, and what layers in the Docker image are. It’s one of the most exciting topics, so stay tuned!

Did you find this article valuable?

Support Hemanth by becoming a sponsor. Any amount is appreciated!