Docker is a platform that uses containerization to make it easier to create, deploy, and run applications. This guide covers the fundamentals you need to get started with Docker.
What is Docker? #
Docker is a containerization platform that packages your application and all its dependencies together in containers. These containers can run consistently across different environments.
Why Use Docker? #
Traditional Deployment Problems #
- “It works on my machine” syndrome
- Complex setup and configuration
- Dependency conflicts
- Inconsistent environments between development and production
Docker Solutions #
- Consistency - Same environment everywhere
- Isolation - Each container is independent
- Portability - Run anywhere Docker is installed
- Efficiency - Lightweight compared to virtual machines
- Scalability - Easy to scale horizontally
Docker vs Virtual Machines #
Virtual Machines:
- Include full operating system
- Heavy (GBs in size)
- Slow to start (minutes)
- Resource intensive
Docker Containers:
- Share host OS kernel
- Lightweight (MBs in size)
- Fast to start (seconds)
- Efficient resource usage
Core Concepts #
Images #
An image is a read-only template with instructions for creating a Docker container. Think of it as a snapshot of your application and its environment.
Containers #
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container.
Dockerfile #
A text file with instructions on how to build a Docker image.
Docker Hub #
A registry where you can find and share Docker images.
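For example, docker pull fetches images from Docker Hub by default, and docker push publishes your own once you are logged in. A minimal sketch (your-username/my-node-app is a placeholder image name):
docker pull nginx                                   # Fetch an image from Docker Hub
docker login                                        # Authenticate with your Docker Hub account
docker tag my-node-app your-username/my-node-app:1.0
docker push your-username/my-node-app:1.0           # Publish the tagged image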
Installing Docker #
Windows and Mac #
Download Docker Desktop from docker.com and run the installer. Docker Desktop bundles the Docker Engine, the docker CLI, and Docker Compose.
Linux (Ubuntu) #
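On a stock Ubuntu system the docker-ce packages come from Docker's own apt repository, which has to be configured first. The steps below are a sketch of the official instructions at the time of writing; check docs.docker.com/engine/install/ubuntu for the current version:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
With the repository configured, install and start Docker: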
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
Verify installation:
docker --version
docker run hello-world
Basic Docker Commands #
Working with Images #
Pull an image:
docker pull nginx
docker pull node:18
docker pull python:3.11
List images:
docker images
Remove an image:
docker rmi nginx
docker rmi image_id
Working with Containers #
Run a container:
docker run nginx
docker run -d nginx # Run in background (detached)
docker run -d -p 8080:80 nginx # Map port 8080 to container port 80
docker run -d --name my-nginx nginx # Give container a name
List running containers:
docker ps
docker ps -a # Include stopped containers
Stop a container:
docker stop container_id
docker stop my-nginx
Start a stopped container:
docker start container_id
docker start my-nginx
Remove a container:
docker rm container_id
docker rm my-nginx
docker rm -f my-nginx # Force remove running container
View container logs:
docker logs container_id
docker logs -f container_id # Follow logs
Execute a command in a running container:
docker exec -it container_id bash
docker exec -it my-nginx sh
Creating Your First Dockerfile #
Simple Node.js Application #
app.js:
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Docker!');
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
package.json:
{
  "name": "docker-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  },
  "scripts": {
    "start": "node app.js"
  }
}
Dockerfile:
# Use official Node.js image
FROM node:18
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Start application
CMD ["npm", "start"]Build and Run #
# Build the image
docker build -t my-node-app .
# Run the container
docker run -d -p 3000:3000 --name node-app my-node-app
# Test it
curl http://localhost:3000
Dockerfile Instructions #
FROM #
Specifies the base image:
FROM node:18
FROM python:3.11
FROM nginx:latest
WORKDIR #
Sets the working directory:
WORKDIR /app
COPY #
Copies files from host to container:
COPY package.json .
COPY . .
COPY src/ /app/src/
RUN #
Executes commands during build:
RUN npm install
RUN apt-get update && apt-get install -y curl
RUN pip install -r requirements.txt
CMD #
The default command to run when the container starts (only the last CMD in a Dockerfile takes effect):
CMD ["npm", "start"]
CMD ["python", "app.py"]
CMD ["node", "server.js"]EXPOSE #
Documents which ports the container listens on. EXPOSE by itself does not publish the port; use -p when running the container to publish it:
EXPOSE 3000
EXPOSE 8080
ENV #
Sets environment variables:
ENV NODE_ENV=production
ENV PORT=3000
Docker Compose #
Docker Compose is a tool for defining and running multi-container applications. Newer Docker releases ship Compose as a CLI plugin invoked as docker compose; the standalone docker-compose command used below behaves the same way.
docker-compose.yml Example #
version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=secretpassword
      - POSTGRES_DB=myapp
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
Docker Compose Commands #
# Start all services
docker-compose up
# Start in background
docker-compose up -d
# Stop all services
docker-compose down
# View logs
docker-compose logs
# View running services
docker-compose ps
# Rebuild images
docker-compose build
# Restart services
docker-compose restart
Volumes - Persisting Data #
Containers are ephemeral: anything written to a container's writable layer is lost when the container is removed. Volumes persist data independently of any container's lifecycle.
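Volumes are objects that Docker manages for you, so they can be created, listed, and inspected directly with the CLI:
docker volume create my-data
docker volume ls
docker volume inspect my-data   # Shows the mountpoint on the host, among other details
docker volume rm my-data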
Named Volumes #
docker run -d -v my-data:/app/data nginx
Bind Mounts (Mount host directory) #
docker run -d -v /path/on/host:/app/data nginx
docker run -d -v $(pwd):/app nginx # Mount current directory
In docker-compose.yml #
services:
  web:
    image: nginx
    volumes:
      - ./src:/app/src                  # Bind mount
      - node_modules:/app/node_modules  # Named volume

volumes:
  node_modules:
Networking #
Default Networks #
Docker creates three networks automatically (commands for listing and creating networks follow the list):
- bridge - Default network
- host - Use host network
- none - No networking
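You can list these networks and create your own user-defined bridge networks; containers attached to the same user-defined network can reach each other by container name. A quick sketch (my-network and the image names are placeholders):
docker network ls                     # Show available networks
docker network create my-network      # Create a user-defined bridge network
docker run -d --name db --network my-network -e POSTGRES_PASSWORD=secret postgres:15
docker run -d --name api --network my-network my-node-app
# Inside the 'api' container, the database is now reachable at the hostname 'db'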
Container Communication #
Containers on the same network can communicate using container names:
services:
  web:
    image: nginx
    networks:
      - app-network

  api:
    image: node:18
    networks:
      - app-network
    environment:
      - DB_HOST=db # Can reference 'db' by name

  db:
    image: postgres
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
Best Practices #
1. Use Official Images #
FROM node:18-alpine # Smaller, more secure
2. Minimize Layers #
# Bad - Multiple layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
# Good - Single layer
RUN apt-get update && apt-get install -y \
curl \
git \
&& rm -rf /var/lib/apt/lists/*3. Use .dockerignore #
node_modules
npm-debug.log
.git
.env
*.md
4. Don’t Run as Root #
RUN useradd -m appuser
USER appuser
5. Multi-stage Builds #
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm install --production
CMD ["node", "dist/server.js"]Common Use Cases #
Development Environment #
# $(pwd) is bind-mounted to /app for live code edits; the anonymous /app/node_modules
# volume keeps the host directory from shadowing the modules installed in the container.
# -w sets the working directory so npm runs against the mounted project.
docker run -d \
  -p 3000:3000 \
  -w /app \
  -v $(pwd):/app \
  -v /app/node_modules \
  node:18 \
  npm run dev
Running Databases #
# PostgreSQL
docker run -d \
--name postgres \
-e POSTGRES_PASSWORD=mysecret \
-p 5432:5432 \
-v pgdata:/var/lib/postgresql/data \
postgres:15
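# Quick check (optional): open a psql shell inside the container started above;
# the default superuser is 'postgres'
docker exec -it postgres psql -U postgres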
# MongoDB
docker run -d \
--name mongo \
-p 27017:27017 \
-v mongodata:/data/db \
mongo:6
# Redis
docker run -d \
--name redis \
-p 6379:6379 \
redis:alpine
Troubleshooting #
View container details: #
docker inspect container_id
Check container resource usage: #
docker stats
Clean up: #
docker system prune # Remove unused data
docker system prune -a # Remove all unused images
docker volume prune # Remove unused volumes
Debug container startup: #
docker run -it my-image sh # Interactive mode
docker logs container_id # Check logs
Useful Commands Cheatsheet #
# Images
docker pull image_name
docker images
docker rmi image_id
docker build -t tag_name .
# Containers
docker run -d -p 8080:80 image_name
docker ps
docker stop container_id
docker start container_id
docker rm container_id
docker logs container_id
docker exec -it container_id bash
# System
docker system df # Check disk usage
docker system prune # Clean up
docker version # Check version
docker info # System information
# Compose
docker-compose up -d
docker-compose down
docker-compose logs
docker-compose ps
Docker simplifies application deployment and development workflows. Master these basics and you’ll be able to containerize any application.