Hostwinds Tutorials

Table of Contents


What Is a Reverse Proxy, and Why Use One?
Prerequisites
Step 1: Set Up the Nginx Reverse Proxy
Why This Matters
Create a New Server Block
What This Configuration Does
Enable the Configuration
Step 2: Add SSL with Let's Encrypt and Certbot
Why HTTPS Is Important
Request the Certificate
Verify the Changes
Optional: Force HTTPS
Step 3: Improve SSL Settings (Recommended for Production)
What These Settings Do
Step 4: (Optional) Add Diffie-Hellman Parameters
Why Add This?
Step 5: Set Up Auto-Renewal for SSL Certificates
Final Steps and Good Habits
Advanced Use Cases for Nginx Reverse Proxy with SSL
Hosting Multiple Applications on One Server
Path-Based Proxying
Adding Rate Limiting to Protect Your App
Load Balancing Across Multiple Backend Servers
Logging and Debugging
Custom Headers and Security Enhancements

Nginx Reverse Proxy with SSL

Tags: Cloud Servers, SSL, VPS


If you're running a web application on a private port (like localhost:3000), it's not directly accessible over the internet. One of the most effective ways to expose that app securely is to put a reverse proxy in front of it.

Nginx is a lightweight, well-known tool that can do exactly that — receive incoming traffic and forward it to your app — while also handling HTTPS with a free SSL certificate from Let's Encrypt.

In this guide, you'll learn how to:

  • Set up Nginx as a reverse proxy for an internal web service
  • Secure it with an SSL certificate using Certbot
  • Understand each part of the configuration so you know what it's doing and why

New to web servers? Check out our explanation on how web servers work.

What Is a Reverse Proxy, and Why Use One?

A reverse proxy is a server that sits between your users and your backend services. Instead of your app listening publicly on a port (like 3000), Nginx receives the traffic first, and then passes it to the app running in the background.

Here's why this approach is so useful:

  • Hides Internal Ports and Services
    Your app doesn't need to be exposed directly to the public. That reduces attack surface and helps you control access.
  • Handles HTTPS for You
    Many web frameworks can serve HTTPS directly, but it's often easier and more reliable to let Nginx do it — especially when using free SSL certificates from Let's Encrypt.
  • Enables Hosting Multiple Services on One Server
    You can run multiple apps on different ports (like 3000, 4000, 5000) and route traffic based on domain or path using just one public IP.
  • Improves Logging and Monitoring
    Nginx gives you centralized access and error logs, so it's easier to monitor performance or investigate problems.
  • Provides Optional Caching, Load Balancing, and Rate Limiting
    You can optimize traffic flow and protect backend services with just a few extra lines in your Nginx config.

Even if your app can already handle web traffic, using Nginx as a reverse proxy often simplifies the setup, improves flexibility, and increases control.

Prerequisites

Before we get started, let's make sure you have everything you need:

  • A VPS or cloud server running Ubuntu 20.04 or later: Most commands and package versions used in this tutorial assume a Debian-based system. While Nginx and Certbot work on other distributions, the setup process may differ.
  • Root or sudo access: You'll be installing packages, editing system files, and restarting services — all of which require elevated privileges.
  • A registered domain name: You'll need this to request an SSL certificate. Let's Encrypt validates ownership of your domain before issuing a certificate. Without DNS pointed to your server, validation will fail.
  • DNS pointing to your server's public IP: Make sure your domain's DNS records are updated. A simple A record pointing to your server's IP is enough:
A yourdomain.com → 123.123.123.123
A www.yourdomain.com → 123.123.123.123

Propagation can take a few minutes to a few hours.

Don't know how to configure your DNS? Here's how to add an A record with most domain hosts.

  • An application running on localhost (e.g., http://localhost:3000): This is the app you'll be proxying to. It could be anything — Node.js, Python Flask, Ruby on Rails, etc. As long as it's listening on a local port, you can proxy to it.

Note: If your app isn't running yet, that's okay — you can still go through the setup and test later.

  • Nginx installed: Nginx will act as the public-facing server. If it's not installed yet:
sudo apt update
sudo apt install nginx

Then check that it's running:

sudo systemctl status nginx

You should see "active (running)."

  • Certbot installed with the Nginx plugin: Certbot automates the process of obtaining and renewing SSL certificates from Let's Encrypt. Install it like this:
sudo apt install certbot python3-certbot-nginx

This plugin lets Certbot modify your Nginx configuration automatically when you request a certificate — no manual editing required for basic setups.

Have another operating system? Follow this guide on how to install Let's Encrypt on Fedora and Debian

Step 1: Set Up the Nginx Reverse Proxy

Now that your system is ready, the first real step is to configure Nginx to listen for traffic on your domain and forward it to your internal application — this is what makes Nginx act as a reverse proxy.

Why This Matters

Without this setup, users trying to visit your website would hit an empty page or the default Nginx welcome screen. You need to explicitly tell Nginx:

  • Which domain(s) it should respond to
  • What to do with incoming requests
  • Where to send the traffic behind the scenes

Create a New Server Block

You'll create a config file for your domain in Nginx's sites-available directory. This keeps configurations organized and makes it easy to enable or disable individual sites.

sudo nano /etc/nginx/sites-available/yourdomain.com

Paste in the following block, adjusting the domain and app port as needed:

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;

        # Pass important headers to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

What This Configuration Does

  • listen 80;
    Tells Nginx to listen for HTTP traffic on port 80.
  • server_name yourdomain.com www.yourdomain.com;
    Matches this block to requests made to your domain. You can add or remove subdomains as needed.
  • location /
    Catches all requests to the root and forwards them to your app.
  • proxy_pass http://localhost:3000;
    This is the heart of the reverse proxy — it sends the request to your backend app running on port 3000.
  • proxy_set_header lines
    These preserve details from the original client request, like:
    • The user's IP address (X-Real-IP)
    • The original protocol (HTTP or HTTPS)
    • The original hostname
    This info is useful for logging, analytics, or when your app needs to generate URLs that match the visitor's experience.
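One thing this basic block does not handle is WebSocket connections. If your backend app uses them (many Node.js apps do), you'd also need to pass the upgrade headers. A minimal sketch, building on the block above:

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;

    # Forward the WebSocket handshake headers so upgraded
    # connections are passed through instead of dropped
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # (keep the Host / X-Real-IP / X-Forwarded-* headers shown earlier)
}
```

If your app is plain HTTP only, you can skip this.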

Enable the Configuration

Nginx uses symbolic links in the sites-enabled directory to activate sites. So now you'll create a link and reload Nginx:

sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/
sudo nginx -t

Check for syntax errors. If everything looks good:

sudo systemctl reload nginx

Your reverse proxy is now live — requests to http://yourdomain.com will be passed to your app on port 3000.

Step 2: Add SSL with Let's Encrypt and Certbot

With the reverse proxy working over HTTP, the next step is to secure it with HTTPS. This adds encryption to all communication between your users and your server — protecting login credentials, API requests, personal data, and more.

You'll use Let's Encrypt, a free certificate authority, and Certbot, which automates the process.

Why HTTPS Is Important

  • Encrypts traffic so no one can intercept or tamper with it
  • Improves SEO — search engines prefer secure sites
  • Builds trust — users expect to see the padlock icon
  • Required for many APIs, logins, and payment systems

Request the Certificate

Run this command, replacing the domains with your actual values:

sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

What this does:

  • Tells Certbot to use the Nginx plugin
  • Specifies which domains you're requesting a certificate for

Certbot will:

  • Perform domain validation by temporarily adjusting your Nginx configuration to serve a verification challenge
  • Contact Let's Encrypt to verify domain ownership
  • Download your SSL certificate and private key
  • Modify your Nginx config to use HTTPS
  • Optionally redirect all HTTP traffic to HTTPS

Tip: If your DNS isn't fully propagated or your server firewall blocks port 80, validation will fail. You can test this with:

curl -I http://yourdomain.com

To better understand ports, check out our guide on How Web Server Ports work.

Verify the Changes

After Certbot completes, your Nginx config should include something like this:

listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

You should now be able to visit https://yourdomain.com and see your site with a valid SSL certificate.

Optional: Force HTTPS

If you didn't choose the redirect option during Certbot setup, you can add this manually:

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 301 https://$host$request_uri;
}

This forces all HTTP traffic to be redirected to HTTPS, which ensures users don't accidentally use the insecure version of your site.

Step 3: Improve SSL Settings (Recommended for Production)

Once your SSL certificate is in place, you can fine-tune Nginx to improve security and compatibility. These settings go inside the HTTPS server block.

Here's an improved example:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;

What These Settings Do

  • ssl_protocols TLSv1.2 TLSv1.3
    Disables older, less secure protocols like TLS 1.0 and 1.1.
  • ssl_prefer_server_ciphers on
    Lets your server choose the cipher suite, rather than deferring to the browser — which can reduce exposure to weak cipher attacks.
  • ssl_ciphers HIGH:!aNULL:!MD5
    Specifies strong cipher suites and excludes weak or broken ones (like MD5 and null ciphers).
  • ssl_session_cache and ssl_session_timeout
    Control SSL session reuse, which can slightly improve performance without compromising security.
  • ssl_session_tickets off
    Disables session tickets, which can be a security concern if not rotated regularly.

These changes improve your SSL security score and protect visitors against downgrade attacks or insecure encryption choices.

Optional: You can test your site with SSL Labs to see how your configuration performs and get specific improvement suggestions.
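For reference, after Certbot's edits and the hardening above, the HTTPS server block typically ends up looking something like this (certificate paths follow Certbot's defaults; substitute your own domain and backend port):

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.com www.yourdomain.com;

    # Paths written by Certbot
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # Hardened settings from Step 3
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Your actual file may differ slightly depending on which options Certbot applied, so treat this as a reference shape rather than a drop-in replacement.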

Step 4: (Optional) Add Diffie-Hellman Parameters

For even stronger encryption, you can generate custom Diffie-Hellman (DH) parameters. This step is optional, but it's often recommended for production environments.

Run this command to generate a 2048-bit DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Then add the following line to your SSL server block:

ssl_dhparam /etc/ssl/certs/dhparam.pem;

Why Add This?

Diffie-Hellman parameters strengthen forward secrecy, which means even if your private key is somehow compromised in the future, past encrypted sessions will still be secure.

It takes a few minutes to generate the DH group, but it's a one-time step and worth doing for better security posture.

Step 5: Set Up Auto-Renewal for SSL Certificates

Let's Encrypt certificates are valid for 90 days. Fortunately, Certbot installs a systemd timer that checks twice a day for certificates nearing expiry and renews them automatically.

You can confirm the timer is active with:

sudo systemctl list-timers | grep certbot

You should see something like this:

NEXT                         LEFT    LAST                         PASSED  UNIT           ACTIVATES
2025-06-19 04:00:00 UTC      12h     2025-06-18 04:00:00 UTC       11h ago certbot.timer  certbot.service

To test the renewal process manually (without making changes), run:

sudo certbot renew --dry-run

This simulates the full renewal process and confirms that your system is ready to handle it automatically.

If there are no errors, your certificates will renew quietly in the background going forward.

Final Steps and Good Habits

Now that your reverse proxy is set up and secured with SSL, it's a good idea to wrap up with a few practical checks and best practices.

These simple habits can help prevent issues down the line, make your configuration easier to maintain, and make sure everything keeps running the way you expect.

Even if everything appears to be working, spending a few extra minutes here can save you time and trouble later.

Restart your app if it doesn't automatically detect changes
Some apps need to be restarted to work correctly behind a proxy.

Check logs
You can monitor Nginx logs for errors or unusual traffic:

sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log

Keep Nginx and Certbot updated
Use sudo apt update && sudo apt upgrade regularly. Updated packages fix bugs, improve compatibility, and patch security issues.

Advanced Use Cases for Nginx Reverse Proxy with SSL

Once you've mastered the basics of setting up a secure reverse proxy, you can extend your configuration to support more complex needs. Here are some common scenarios that can help you get more out of your server.

Hosting Multiple Applications on One Server

If you run several web apps on different ports, Nginx can route requests to each app based on domain or URL path.

Example: Different domains

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://localhost:3001;
        # proxy headers here
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://localhost:3002;
        # proxy headers here
    }
}

This setup lets you serve multiple apps using separate subdomains, all via Nginx on standard ports.

Using Docker? Learn how to proxy multiple Docker apps with Nginx.

Path-Based Proxying

Alternatively, you can proxy based on URL paths, which is useful if you want all apps under a single domain:

server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://localhost:3001/;
        # proxy headers here
    }

    location /app2/ {
        proxy_pass http://localhost:3002/;
        # proxy headers here
    }
}

Note: When using path-based proxying, trailing slashes and URL rewriting can get tricky — make sure your backend app can handle being served under a sub-path.
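The trailing slash on proxy_pass is what controls most of this. As a quick illustration of Nginx's behavior:

```nginx
# With a URI ("/") on proxy_pass, the matched location prefix is replaced,
# so a request for /app1/users is forwarded to the backend as /users:
location /app1/ {
    proxy_pass http://localhost:3001/;
}

# Without one, the original path is forwarded unchanged,
# so /app2/users arrives at the backend as /app2/users:
location /app2/ {
    proxy_pass http://localhost:3002;
}
```

Pick whichever matches what your backend expects — apps that think they live at the root usually want the first form.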

Adding Rate Limiting to Protect Your App

You can limit how many requests a client can make in a given time frame to protect your backend from abuse or accidental overload.

Add this in the http block in /etc/nginx/nginx.conf:

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

Then in your server or location block:

limit_req zone=mylimit burst=20 nodelay;

This configuration allows 10 requests per second, with bursts of up to 20 requests served immediately; anything beyond the burst is rejected so excess traffic can't overwhelm your app.
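By default, Nginx answers throttled requests with a 503. If you'd rather signal rate limiting explicitly, you can return 429 "Too Many Requests" instead (supported since Nginx 1.3.15):

```nginx
location / {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 429;   # return 429 instead of the default 503 when throttled
    proxy_pass http://localhost:3000;
}
```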

Load Balancing Across Multiple Backend Servers

If you have several instances of your app running (for example, multiple containers or VPSs), Nginx can distribute traffic among them:

upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        # proxy headers here
    }
}

Nginx balances requests round-robin by default, but you can configure it for other methods like least connections or IP hash.
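Switching methods is a one-line change in the upstream block. For example:

```nginx
upstream backend {
    least_conn;    # route each request to the server with the fewest active connections
    # ip_hash;     # or: pin each client IP to the same backend (useful for session stickiness)
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
```

Use only one balancing directive at a time; with neither present, Nginx falls back to round-robin.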

To learn more, check out our guide on DNS Load Balancing.

Logging and Debugging

You can customize logging to include important proxy info for troubleshooting or analytics:

log_format proxy '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 'upstream_response_time $upstream_response_time '
                 'request_time $request_time';

access_log /var/log/nginx/proxy_access.log proxy;

This logs upstream response times and total request times, helping identify slow backend responses.

Custom Headers and Security Enhancements

You might want to add or modify HTTP headers for security or functionality:

add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy no-referrer-when-downgrade;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

These headers help protect against clickjacking and MIME sniffing, and the Strict-Transport-Security header tells browsers to always use HTTPS for your site.

Written by Hostwinds Team  /  June 14, 2019