Full NGINX Tutorial: Demo Project with Node.js & Docker
This tutorial covers the basics of NGINX, its use cases, and a hands-on demo: setting up a simple Node.js application behind NGINX and securing it with SSL.
Overview
What is NGINX?
- Learn the fundamentals of NGINX and its functionalities.
NGINX Use Cases
- Discover how NGINX acts as a web server, load balancer, caching server, and more.
How to Configure NGINX?
- Explore configuration examples for various scenarios.
What is NGINX?
NGINX is a fast, lightweight, and high-performance open-source web server. Initially designed for HTTP web serving, it has evolved to include functionalities such as:
Reverse proxying
Load balancing
Caching
SSL/TLS termination
NGINX Use Cases
1. NGINX as a Web Server
NGINX serves static files like HTML, CSS, and JavaScript efficiently, handling concurrent requests to ensure maximum performance under heavy loads.
2. NGINX as a Load Balancer
NGINX distributes incoming web traffic across multiple servers to prevent bottlenecks. Common algorithms include:
Round-robin
Least connections
3. NGINX as a Caching Server
NGINX caches responses from backend servers, storing them temporarily for faster delivery of repeated requests.
4. NGINX for Security
As a single entry point, NGINX minimizes the exposure of backend servers. It also handles SSL/TLS termination, ensuring secure communication.
5. NGINX for Compression
NGINX reduces bandwidth consumption by compressing responses and sending data in chunks, optimizing scenarios like video streaming.
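As an illustrative sketch (values here are reasonable defaults, not from this tutorial's config), compression is typically enabled with the gzip family of directives inside the http block:

```nginx
http {
    # Enable gzip compression for text-based responses
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    # Skip tiny responses where compression overhead outweighs the savings
    gzip_min_length 1024;

    # Compression level 1-9; higher saves more bandwidth at more CPU cost
    gzip_comp_level 5;
}
```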
How to Configure NGINX?
NGINX configuration is managed through its configuration file (nginx.conf), typically located at /etc/nginx/nginx.conf. Key sections include:
Common Configuration Blocks
http: Manages web traffic settings.
server: Defines virtual hosts and ports.
location: Specifies file paths or proxy rules.
Common Directives
listen: Specifies the IP address and port for incoming requests.
server_name: Defines the domain name or IP address.
root: Sets the directory for static files.
index: Customizes the default index files.
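Putting the blocks and directives together, a minimal nginx.conf skeleton showing how they nest might look like the sketch below (paths and names are placeholders):

```nginx
# The events block is required, even if left empty
events {}

http {
    server {
        # Virtual host listening on port 80 for example.com
        listen 80;
        server_name example.com;

        location / {
            # Serve static files from this directory
            root /var/www/example.com;
            index index.html;
        }
    }
}
```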
Examples
1. NGINX as a Web Server
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/example.com;
        index index.html index.htm;
    }
}
```
Explanation:
listen 80: Listens for HTTP traffic on port 80.
root: Points to the website’s files.
index: Specifies the default files to serve.
2. NGINX as a Reverse Proxy
```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://backend_server_address;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Explanation:
proxy_pass: Forwards client requests to a backend server.
proxy_set_header: Adds headers so the backend can identify the original client request.
3. Redirecting HTTP to HTTPS
```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        root /var/www/example.com;
        index index.html index.htm;
    }
}
```
Explanation:
Redirects HTTP traffic to HTTPS.
Configures SSL certificates for secure communication.
4. NGINX as a Load Balancer
```nginx
http {
    upstream myapp {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp;
        }
    }
}
```
Explanation:
upstream: Defines the group of backend servers.
least_conn: Routes traffic to the server with the fewest active connections.
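Besides least_conn, NGINX supports other balancing strategies in the upstream block. As a sketch (server names are placeholders), weighted round-robin and IP-hash sticky sessions look like this:

```nginx
upstream myapp_weighted {
    # Round-robin with weights: srv1 receives roughly 3x the traffic
    server srv1.example.com weight=3;
    server srv2.example.com;
}

upstream myapp_sticky {
    # Hash on the client IP so a given client keeps hitting the same server
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
}
```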
5. NGINX as a Caching Server
```nginx
http {
    proxy_cache_path /data/nginx/cache keys_zone=mycache:10m;

    server {
        listen 80;

        location / {
            proxy_cache mycache;
            proxy_pass http://backend_server_address;
        }
    }
}
```
Explanation:
proxy_cache: Enables response caching for this location.
keys_zone: Allocates shared memory (here, 10 MB) for the cache keys and metadata.
NGINX as Kubernetes Ingress Controller
In Kubernetes, the NGINX Ingress Controller handles external access to services by managing routing rules defined in ingress resources. Unlike cloud load balancers, it operates internally within the cluster, providing advanced traffic management without exposing the cluster publicly.
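As an illustrative sketch (names and hosts here are hypothetical, not from this tutorial), an Ingress resource routing external traffic to a Service might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  # Tells the cluster's NGINX Ingress Controller to handle this resource
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```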
Build a Simple Node.js Application: Dockerize, Configure NGINX, Load Balance, and Secure with HTTPS
Prerequisites
Ensure the following are installed on your system:
Node.js and npm
Docker Desktop
Step 1: Run the Web Application Locally
1. Install Dependencies
Clone the repository and navigate to the project directory.
git clone https://github.com/imkiran13/NGINX-Tutorial-Demo-Project-with-Node.js-Docker.git
cd NGINX-Tutorial-Demo-Project-with-Node.js-Docker
Run the following command to install required packages:
npm install
2. Start the Application
Run the application using:
node server.js
This will start the backend server, serving static files and assets locally.
Step 2: Dockerize the Node.js Application
1. Dockerfile
Create a Dockerfile to containerize the Node.js application:
```dockerfile
FROM node:14
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY package.json .
RUN npm install

# Copy the application code and static assets
COPY server.js .
COPY index.html .
COPY images ./images

EXPOSE 3000
CMD ["node", "server.js"]
```
2. Build Docker Image
Run the following command to build the Docker image:
docker build -t myapp:1.1 .
3. Run the Docker Container
Start the container and expose it on port 3000:
docker run -it -p 3000:3000 myapp:1.1
Step 3: Run Multiple Services with Docker Compose
To run multiple instances of the app for load balancing, create a docker-compose.yml file:
```yaml
version: '3'
services:
  app1:
    build: .
    environment:
      - APP_NAME=App1
    ports:
      - "3001:3000"
  app2:
    build: .
    environment:
      - APP_NAME=App2
    ports:
      - "3002:3000"
  app3:
    build: .
    environment:
      - APP_NAME=App3
    ports:
      - "3003:3000"
```
1. Start Services
Run all services with:
docker-compose up --build -d
2. Stop Services
To stop and remove containers, use:
docker-compose down
Next, configure NGINX so that we don’t access each app instance separately. Instead, we want a single entry point that forwards each request to one of the backend servers.
Step 1: Launch an AWS EC2 Instance
1. Access the AWS Management Console
Navigate to the EC2 Dashboard.
Launch a new EC2 instance.
2. Choose Configuration
OS: Ubuntu (latest version recommended).
Instance Type: t2.micro (sufficient for small-scale testing).
Security Groups:
Allow inbound traffic on:
HTTP (80)
HTTPS (443)
SSH (22)
Custom TCP (8080, 3000, 3001, 3002, 3003)
3. Connect to Your Instance
Use SSH to connect:
ssh -i your-key.pem ubuntu@your-instance-public-ip
Step 2: Update System and Install Dependencies
1. Update Packages
sudo apt update && sudo apt upgrade -y
2. Install Docker and Docker Compose
sudo apt install docker.io -y
sudo usermod -aG docker $USER
sudo systemctl enable docker
sudo systemctl start docker
sudo apt install jq -y   # jq is used below to read the latest Compose release tag
sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r .tag_name)/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
3. Install NGINX
sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx
4. Install Node.js and npm
sudo apt install nodejs npm -y
Step 3: Clone the Node.js Repository
git clone https://github.com/imkiran13/NGINX-Tutorial-Demo-Project-with-Node.js-Docker.git
cd NGINX-Tutorial-Demo-Project-with-Node.js-Docker
sudo npm install
Test the application locally:
sudo node server.js
Access the app at http://ec2-public-ip:3000.
Step 4: Configure NGINX for Load Balancing
1. Create a Configuration File
Create a new NGINX configuration file:
sudo vim /etc/nginx/sites-available/nodejs
Paste the following configuration:
```nginx
upstream nodejs_cluster {
    least_conn;
    server 13.126.218.233:3001;  # EC2 public IP and application port
    server 13.126.218.233:3002;
    server 13.126.218.233:3003;
}

server {
    listen 8080;
    server_name 13.126.218.233;

    location / {
        proxy_pass http://nodejs_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
2. Enable the Configuration
Create a symbolic link in sites-enabled:
sudo ln -s /etc/nginx/sites-available/nodejs /etc/nginx/sites-enabled/nodejs
Remove the default configuration to prevent conflicts:
sudo rm /etc/nginx/sites-enabled/default
3. Verify and Reload NGINX
sudo nginx -t
sudo systemctl reload nginx
Let us check which server the request is being forwarded to.
Open the EC2 public IP on port 8080 and open the browser’s developer tools. Clear the cache, then check the Network tab: the Server response header reads nginx, which means the request is going through NGINX.
If you instead open the public IP on port 3001, the Network tab shows no such server header, which means the request is going directly to the first app instance.
Step 5: Add HTTPS with Self-Signed Certificates
Configure HTTPS for a Secure Connection
In today’s world, security is paramount, and configuring HTTPS on your web server is a crucial step in protecting sensitive data. With a certificate in place, all communication between the client and the server is encrypted.
There are two common ways to obtain a certificate:
1. Use Let’s Encrypt, a certificate authority that issues free certificates via Certbot. This requires a domain name and is the usual choice for production.
2. Use a self-signed certificate. Browsers will warn that the certificate was not issued by a trusted certificate authority, so this option is typically used for testing and development.
In this article, we use the second option, since this setup is for testing purposes only.
1. Generate SSL Certificates
Create a directory to store SSL files:
mkdir ~/nginx-certs
cd ~/nginx-certs
Generate a self-signed certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nginx-selfsigned.key -out nginx-selfsigned.crt
Verify Certificate Files
Check that both the certificate and private key have been created successfully:
nginx-selfsigned.crt (Certificate)
nginx-selfsigned.key (Private key)
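To double-check what was generated, openssl can print the certificate’s subject and validity period. The snippet below is a self-contained sketch: it generates a throwaway certificate in a temporary directory, made non-interactive with -subj (which skips the usual prompts), then inspects it.

```shell
# Work in a throwaway directory so this does not touch ~/nginx-certs
certdir=$(mktemp -d)

# Same generation command as above, made non-interactive via -subj
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$certdir/nginx-selfsigned.key" \
  -out "$certdir/nginx-selfsigned.crt" \
  -subj "/CN=13.126.218.233" 2>/dev/null

# Print who the certificate identifies and when it expires
openssl x509 -in "$certdir/nginx-selfsigned.crt" -noout -subject -dates
```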
2. Update NGINX Configuration
Edit /etc/nginx/nginx.conf to include HTTPS:
sudo vim /etc/nginx/nginx.conf
Paste the following configuration (note that when editing nginx.conf directly, the upstream and server blocks must sit inside the http { } block):
```nginx
upstream nodejs_cluster {
    least_conn;
    server 13.126.218.233:3001;
    server 13.126.218.233:3002;
    server 13.126.218.233:3003;
}

server {
    listen 443 ssl;
    server_name 13.126.218.233;

    ssl_certificate /home/ubuntu/nginx-certs/nginx-selfsigned.crt;
    ssl_certificate_key /home/ubuntu/nginx-certs/nginx-selfsigned.key;

    location / {
        proxy_pass http://nodejs_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 8080;
    server_name 13.126.218.233;

    location / {
        return 301 https://$host$request_uri;
    }
}
```
3. Test and Restart NGINX
sudo nginx -t
sudo systemctl restart nginx
Step 6: Verify the Setup
Access Application via HTTPS:
Open https://13.126.218.233 in your browser. (Accept the warning for the self-signed certificate.)
Test HTTP to HTTPS Redirection:
Navigate to http://13.126.218.233:8080, which should redirect to HTTPS.
Verify Load Balancing:
Use the browser's developer tools or refresh multiple times to observe requests being distributed among servers.
As you can see, the site loads over HTTPS, but since it uses a self-signed certificate the browser flags it as not valid. You can also try to access the site on public-ip:8080, which will redirect to the secured HTTPS site.
By the way, the default port for HTTP is port 80. You can change it in the config file and observe the difference.
This is how you can use NGINX as a reverse proxy server, no matter which tech stack you’re using.
Clean Up
Stop NGINX:
sudo systemctl stop nginx
Stop Docker Containers:
docker-compose down
Conclusion
This guide demonstrated how to:
Set up NGINX as a reverse proxy and load balancer.
Secure connections using self-signed SSL certificates.
Redirect HTTP traffic to HTTPS.
With NGINX’s extensive features like load balancing and SSL/TLS support, it is an ideal solution for modern web infrastructure. If you have any questions, feel free to ask! 🚀