Day 11: Deploying the ELK Stack on a Docker Swarm Cluster

Introduction

In modern DevOps workflows, monitoring and logging play a crucial role in diagnosing issues and analyzing system performance. The ELK Stack (Elasticsearch, Logstash, and Kibana) is a popular choice for log management. In this guide, we will deploy the ELK Stack on a Docker Swarm cluster to achieve scalable, fault-tolerant centralized logging.

Prerequisites

Before starting, ensure you have:

  • A Docker Swarm cluster (at least 1 manager and 2 worker nodes)

  • Docker & Docker Compose installed

  • At least 4GB RAM per node for optimal performance

  • Ports 9200 (Elasticsearch) and 5601 (Kibana) open in your firewall settings

Step 1: Create an Overlay Network

To allow communication between the ELK services, create an overlay network. The --attachable flag is needed so that standalone containers, such as the Filebeat shipper in Step 6, can join it:

$ docker network create --driver overlay --attachable elk-network
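
You can verify the network before deploying anything onto it; this should print overlay true:

$ docker network inspect elk-network --format '{{.Driver}} {{.Attachable}}'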

Step 2: Deploy Elasticsearch

Create a file named elasticsearch.yml:

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    environment:
      # Single-node discovery: fine for a demo, skips production bootstrap checks
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      # Security is disabled here for simplicity; enable it in production
      - xpack.security.enabled=false
      # Cap the JVM heap so Elasticsearch fits comfortably on a 4GB node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      # Publish the REST API so it is reachable from outside the cluster
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - elk-network
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
volumes:
  elasticsearch-data:
networks:
  elk-network:
    external: true

Deploy Elasticsearch:

$ docker stack deploy -c elasticsearch.yml elk
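
Give the task a moment to start (watch it with docker service ps elk_elasticsearch), then hit the published port for a quick smoke test; Elasticsearch should answer with its JSON banner:

$ curl http://<manager-node-ip>:9200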

Step 3: Deploy Logstash

Create a logstash.yml file:

version: '3.8'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    volumes:
      # Bind mounts resolve on whichever node runs the task, so pin the
      # service to the manager where logstash.conf lives (or use Docker configs)
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    networks:
      - elk-network
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
networks:
  elk-network:
    external: true

Create a logstash.conf pipeline configuration:

input {
  # Receive events from Beats shippers (e.g. Filebeat) on port 5044
  beats {
    port => 5044
  }
}

filter {
  # Drop the @version metadata field from every event
  mutate {
    remove_field => [ "@version" ]
  }
}

output {
  # Index events into Elasticsearch, one index per day
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
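
Before deploying, you can syntax-check the pipeline with a throwaway container; this relies on the official image's entrypoint passing flags straight through to the logstash binary:

$ docker run --rm \
    -v "$(pwd)/logstash.conf":/tmp/logstash.conf:ro \
    docker.elastic.co/logstash/logstash:8.5.0 \
    -f /tmp/logstash.conf --config.test_and_exit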

Deploy Logstash:

$ docker stack deploy -c logstash.yml elk
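
If the replica stays up, tail the service logs and look for the beats input reporting that it is listening on port 5044:

$ docker service logs -f elk_logstash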

Step 4: Deploy Kibana

Create kibana.yml:

version: '3.8'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    environment:
      # The service name resolves via DNS on the shared overlay network
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - elk-network
    deploy:
      replicas: 1
networks:
  elk-network:
    external: true

Deploy Kibana:

$ docker stack deploy -c kibana.yml elk
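
Kibana usually takes a minute or two to become ready; its status endpoint makes a handy health check:

$ curl http://<manager-node-ip>:5601/api/status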

Step 5: Verify the Deployment

  1. Check running services:

     $ docker service ls
    
  2. Access Kibana at: http://<manager-node-ip>:5601

  3. In Kibana, open Stack Management → Index Management and verify your indices (they appear once logs start flowing in Step 6); you can also check from the command line, as shown below.
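
To list indices from the command line, use Elasticsearch's _cat API via the port published in Step 2:

$ curl "http://<manager-node-ip>:9200/_cat/indices?v"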

Step 6: Sending Logs to ELK

To ship logs from Docker containers to Logstash, run Filebeat as a container on the same overlay network. The hosts value is quoted so the shell does not mangle the brackets, and the image's default Elasticsearch output is disabled so that Logstash is the only active output:

$ docker run --rm --network=elk-network \
    -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker.elastic.co/beats/filebeat:8.5.0 \
    -E 'output.logstash.hosts=["logstash:5044"]' \
    -E output.elasticsearch.enabled=false
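
Whether logs are picked up out of the box depends on the input configuration baked into the image's default filebeat.yml. A minimal config you could bind mount over it instead is sketched below (the container input reads the JSON log files mounted above; this is a sketch, not the image's stock config):

# Minimal Filebeat config: ship all container logs to Logstash
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["logstash:5044"]

Mount it by adding -v "$(pwd)/filebeat.yml":/usr/share/filebeat/filebeat.yml:ro to the docker run command above; the -E overrides then become unnecessary.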

Conclusion

You have successfully deployed the ELK Stack on a Docker Swarm cluster for centralized logging. This setup aggregates logs from multiple containers and lets you visualize them in Kibana. For production, consider re-enabling the X-Pack security features, using durable persistent storage, and adding load balancing.


Next Steps

  • Integrate with Traefik for Ingress control

  • Set up Loki and Promtail for advanced log management

  • Automate deployment using Terraform

Stay tuned for more DevOps tutorials! 🚀
