Tags: Cloud Servers, SSL, VPS
If you're running a web application on a private port (like localhost:3000), it's not directly accessible over the internet. One of the most effective ways to expose that app securely is to put a reverse proxy in front of it.
Nginx is a lightweight, well-known tool that can do exactly that. It receives incoming traffic and forwards it to your app, while also handling HTTPS with a free SSL certificate from Let's Encrypt.
In this guide, you'll learn how to:
- Configure Nginx as a reverse proxy for an app running on a local port
- Secure your domain with a free Let's Encrypt SSL certificate using Certbot
- Redirect HTTP traffic to HTTPS and harden your SSL settings
- Keep your certificates renewing automatically
New to web servers? Check out our explanation on how web servers work.
A reverse proxy is a server that sits between your users and your backend services. Instead of your app listening publicly on a port (like 3000), Nginx receives the traffic first, and then passes it to the app running in the background.
Here's why this approach is so useful:
- Your app can keep listening on a private port while Nginx serves traffic on the standard ports 80 and 443
- Nginx handles SSL/TLS, so your app doesn't have to
- One server can host multiple apps behind different domains or paths
- You get a single place to add logging, rate limiting, caching, and security headers
Even if your app can already handle web traffic, using Nginx as a reverse proxy often simplifies the setup, improves flexibility, and increases control.
Before we get started, let's make sure you have everything you need:
- A cloud server or VPS with sudo access, running a Debian-based distribution such as Ubuntu
- A web application listening on a local port (for example, localhost:3000)
- A registered domain with DNS A records pointing at your server's IP address:
A record: yourdomain.com → 123.123.123.123
A record: www.yourdomain.com → 123.123.123.123
Propagation can take a few minutes to a few hours.
Don't know how to configure your DNS? Here's how to add an A record with most domain hosts.
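Once your records are saved, you can check whether they've propagated by querying DNS from your server. This assumes the dig tool is available (on Ubuntu and Debian it's part of the dnsutils package):
dig +short yourdomain.com
dig +short www.yourdomain.com
Both commands should print your server's IP address (123.123.123.123 in the example above).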
Note: If your app isn't running yet, that's okay — you can still go through the setup and test later.
First, install Nginx from the default package repositories:
sudo apt update
sudo apt install nginx
Then check that it's running:
sudo systemctl status nginx
You should see "active (running)."
Next, install Certbot along with its Nginx plugin:
sudo apt install certbot python3-certbot-nginx
This plugin lets Certbot modify your Nginx configuration automatically when you request a certificate — no manual editing required for basic setups.
Have another operating system? Follow this guide on how to install Let's Encrypt on Fedora and Debian.
Now that your system is ready, the first real step is to configure Nginx to listen for traffic on your domain and forward it to your internal application — this is what makes Nginx act as a reverse proxy.
Without this setup, users trying to visit your website would hit an empty page or the default Nginx welcome screen. You need to explicitly tell Nginx:
- Which domain names it should respond to
- Where to forward incoming requests (your app's local address and port)
You'll create a config file for your domain in Nginx's sites-available directory. This keeps configurations organized and makes it easy to enable or disable individual sites.
sudo nano /etc/nginx/sites-available/yourdomain.com
Paste in the following block, adjusting the domain and app port as needed:
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;

        # Pass important headers to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Nginx uses symbolic links in the sites-enabled directory to activate sites. So now you'll create a link and reload Nginx:
sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/
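One thing worth checking at this point: on Ubuntu and Debian, the Nginx package ships with a default site that can catch requests meant for your domain. Assuming the stock layout, you can disable it:
sudo rm /etc/nginx/sites-enabled/default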
sudo nginx -t
This checks for syntax errors. If everything looks good, reload Nginx:
sudo systemctl reload nginx
Your reverse proxy is now live — requests to http://yourdomain.com will be passed to your app on port 3000.
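If the domain doesn't respond as expected, it helps to confirm the backend itself is up before debugging Nginx. A quick check, assuming your app listens on port 3000:
ss -tlnp | grep 3000
curl -I http://localhost:3000
The first command should show a process listening on the port, and the second should return your app's response headers.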
With the reverse proxy working over HTTP, the next step is to secure it with HTTPS. This adds encryption to all communication between your users and your server — protecting login credentials, API requests, personal data, and more.
You'll use Let's Encrypt, a free certificate authority, and Certbot, which automates the process.
Run this command, replacing the domains with your actual values:
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
When you run this command, Certbot will:
- Verify that you control the domains by completing an HTTP challenge on port 80
- Request and install certificates for yourdomain.com and www.yourdomain.com
- Update your Nginx configuration to reference the new certificate files
- Offer to redirect all HTTP traffic to HTTPS
Tip: If your DNS isn't fully propagated or your server firewall blocks port 80, validation will fail. You can test this with:
curl -I http://yourdomain.com
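If that request times out because of a firewall, make sure ports 80 and 443 are open. For example, if you're using UFW with the application profile the Nginx package registers:
sudo ufw allow 'Nginx Full'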
To better understand ports, check out our guide on How Web Server Ports work.
After Certbot completes, your Nginx config should include something like this:
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
You should now be able to visit https://yourdomain.com and see your site with a valid SSL certificate.
If you didn't choose the redirect option during Certbot setup, you can add this manually:
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 301 https://$host$request_uri;
}
This forces all HTTP traffic to be redirected to HTTPS, which ensures users don't accidentally use the insecure version of your site.
Once your SSL certificate is in place, you can fine-tune Nginx to improve security and compatibility. These settings go inside the HTTPS server block.
Here's an improved example:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;
These changes improve your SSL security score and protect visitors against downgrade attacks or insecure encryption choices.
Optional: You can test your site with SSL Labs to see how your configuration performs and get specific improvement suggestions.
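For a quick check from the command line, you can confirm the server accepts TLS 1.2 by forcing it as the minimum version (exact output varies by curl and OpenSSL version):
curl -sI --tlsv1.2 https://yourdomain.com | head -n 1
openssl s_client -connect yourdomain.com:443 -tls1_2 </dev/null
A successful status line from the first command and a completed handshake from the second indicate TLS 1.2 is working.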
For even stronger encryption, you can generate a custom Diffie-Hellman (DH) key. This step is optional, but it's often recommended for production environments.
Run this command to generate a 2048-bit DH group:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
Then add the following line to your SSL server block:
ssl_dhparam /etc/ssl/certs/dhparam.pem;
Diffie-Hellman parameters strengthen forward secrecy, which means even if your private key is somehow compromised in the future, past encrypted sessions will still be secure.
It takes a few minutes to generate the DH group, but it's a one-time step and worth doing for better security posture.
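Remember to test and reload Nginx after adding the ssl_dhparam line so the new parameters are actually used:
sudo nginx -t
sudo systemctl reload nginx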
Let's Encrypt certificates expire every 90 days. Fortunately, Certbot installs a system timer that checks twice a day for certificates due to expire and renews them automatically.
You can confirm the timer is active with:
sudo systemctl list-timers | grep certbot
You should see something like this:
NEXT LEFT LAST PASSED UNIT ACTIVATES
2025-06-19 04:00:00 UTC 12h 2025-06-18 04:00:00 UTC 11h ago certbot.timer certbot.service
To test the renewal process manually (without making changes), run:
sudo certbot renew --dry-run
This simulates the full renewal process and confirms that your system is ready to handle it automatically.
If there are no errors, your certificates will renew quietly in the background going forward.
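The Certbot Nginx plugin normally reloads Nginx on its own after a successful renewal. If you need an extra command to run only when a certificate is actually renewed, Certbot supports a deploy hook, for example:
sudo certbot renew --deploy-hook "systemctl reload nginx"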
Now that your reverse proxy is set up and secured with SSL, it's a good idea to wrap up with a few practical checks and best practices.
These simple habits can help prevent issues down the line, make your configuration easier to maintain, and make sure everything keeps running the way you expect.
Even if everything appears to be working, spending a few extra minutes here can save you time and trouble later.
Restart your app if it doesn't automatically detect changes
Some apps need to be restarted to work correctly behind a proxy.
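How you restart it depends on how the app is managed. For example, if it runs as a systemd service (here a hypothetical unit named myapp):
sudo systemctl restart myapp
If it's managed by a tool like pm2 or Docker, use that tool's restart command instead.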
Check logs
You can monitor Nginx logs for errors or unusual traffic:
sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log
Keep Nginx and Certbot updated
Use sudo apt update && sudo apt upgrade regularly. Updated packages fix bugs, improve compatibility, and patch security issues.
Once you've mastered the basics of setting up a secure reverse proxy, you can extend your configuration to support more complex needs. Here are some common scenarios that can help you get more out of your server.
If you run several web apps on different ports, Nginx can route requests to each app based on domain or URL path.
Example: Different domains
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://localhost:3001;
        # proxy headers here
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://localhost:3002;
        # proxy headers here
    }
}
This setup lets you serve multiple apps using separate subdomains, all via Nginx on standard ports.
Using Docker? Learn how to proxy multiple Docker apps with Nginx.
Alternatively, you can proxy based on URL paths, which is useful if you want all apps under a single domain:
server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://localhost:3001/;
        # proxy headers here
    }

    location /app2/ {
        proxy_pass http://localhost:3002/;
        # proxy headers here
    }
}
Note: When using path-based proxying, trailing slashes and URL rewriting can get tricky — make sure your backend app can handle being served under a sub-path.
You can limit how many requests a client can make in a given time frame to protect your backend from abuse or accidental overload.
Add this in the http block in /etc/nginx/nginx.conf:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
Then in your server or location block:
limit_req zone=mylimit burst=20 nodelay;
This configuration allows an average of 10 requests per second per client IP, with bursts of up to 20 requests; anything beyond that is rejected so your app isn't overwhelmed.
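By default, Nginx answers requests rejected by the rate limit with a 503 status. If you'd rather signal rate limiting explicitly, you can override that in the same context:
limit_req_status 429;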
If you have several instances of your app running (for example, multiple containers or VPSs), Nginx can distribute traffic among them:
upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        # proxy headers here
    }
}
Nginx balances requests round-robin by default, but you can configure it for other methods like least connections or IP hash.
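As a minimal sketch, switching the example above to least-connections balancing only takes one extra directive in the upstream block (ip_hash works the same way):
upstream backend {
    least_conn;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}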
To learn more, check out our guide on DNS Load Balancing.
You can customize logging to include important proxy info for troubleshooting or analytics:
log_format proxy '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'upstream_response_time $upstream_response_time '
'request_time $request_time';
access_log /var/log/nginx/proxy_access.log proxy;
This logs upstream response times and total request times, helping identify slow backend responses.
You might want to add or modify HTTP headers for security or functionality:
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy no-referrer-when-downgrade;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
These headers protect against clickjacking, MIME sniffing, and enforce HTTPS usage.
Written by Hostwinds Team / June 14, 2019