Nginx Cheat Sheet - The Configuration Guide That Actually Works
Essential Nginx configurations learned from deploying applications at Intelity, Bhoos Games, and Tathyakar. From basic setup to production-ready configurations that don't break at 3 AM.
After configuring Nginx for conversational AI platforms at Intelity, game servers at Bhoos, and loyalty systems at Tathyakar, I’ve collected a set of configurations that actually work in production. This isn’t another basic tutorial - it’s the configurations I wish I had when starting out.
Why This Cheat Sheet?
Most Nginx tutorials show you how to serve a static HTML file. This guide covers real-world scenarios: load balancing ML models, handling WebSocket connections for games, SSL termination, and configurations that survive production traffic.
Basic Setup & Structure
Let’s start with a solid foundation. Here’s the Nginx structure I use across all projects:
# /etc/nginx/nginx.conf - Main configuration
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
# Performance optimizations
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 50M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/javascript application/xml+rss
application/json application/xml;
# Include server configurations
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Pro Tip: Worker Processes
Use worker_processes auto; to let Nginx automatically set the number of worker processes based on available CPU cores. I learned this after manually setting it wrong and wondering why performance was terrible.
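To sanity-check what auto will resolve to, compare the machine's core count against the workers Nginx actually spawns. A minimal sketch (the pgrep line assumes a running Nginx, so it's left commented out):

```shell
# worker_processes auto spawns one worker per core; this is what "auto" sees
nproc

# With Nginx running, count the spawned workers and compare:
# pgrep -c -f "nginx: worker process"
```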
Reverse Proxy Configurations
Basic Node.js App (Like Our Chatbot API)
# /etc/nginx/sites-available/chatbot-api
server {
listen 80;
server_name api.yourapp.com;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always; # permissive baseline; tighten per application
# API routes
location /api/ {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts for ML model inference
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Health check endpoint
location /health {
proxy_pass http://127.0.0.1:3000/health;
access_log off;
}
}
Load Balancing Multiple Instances
At Intelity, we run multiple instances of our chatbot service for high availability. Here’s the load balancing configuration:
# Upstream configuration
upstream chatbot_backend {
least_conn; # Use least connections algorithm
server 127.0.0.1:3001 weight=3 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3002 weight=3 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3003 weight=2 max_fails=3 fail_timeout=30s; # Lower weight, receives fewer requests (use the "backup" parameter for a true standby)
# Keep idle connections to upstreams open for reuse
keepalive 32;
}
server {
listen 80;
server_name api.intelity.com;
location / {
proxy_pass http://chatbot_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Enable connection pooling
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
}
Load Balancing Methods
- Round robin (default, no directive needed): requests distributed across servers in turn
- least_conn: route to the server with the fewest active connections
- ip_hash: route based on client IP (session persistence)
- hash: route based on a custom key (add "consistent" for consistent hashing)
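Switching methods is a one-line change in the upstream block. A sketch (server addresses are placeholders):

```nginx
upstream example_backend {
    # Pick at most one; round robin applies when no directive is given:
    # least_conn;
    # ip_hash;
    hash $request_uri consistent;  # consistent hashing on a custom key

    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
```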
WebSocket Configuration
At Bhoos Games, our multiplayer games required WebSocket connections. Here’s the configuration that handles thousands of concurrent connections:
# WebSocket proxy configuration
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream game_websocket {
server 127.0.0.1:8080;
server 127.0.0.1:8081;
server 127.0.0.1:8082;
}
server {
listen 80;
server_name ws.bhoosgames.com;
location /socket.io/ {
proxy_pass http://game_websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket specific timeouts
proxy_read_timeout 86400; # 24 hours
proxy_send_timeout 86400;
proxy_connect_timeout 60s;
# Disable buffering for real-time communication
proxy_buffering off;
}
}
SSL/TLS Configuration
Production applications need SSL. Here’s a secure SSL configuration that gets an A+ rating on SSL Labs:
# SSL configuration
server {
listen 443 ssl http2; # on nginx 1.25.1+, use a separate "http2 on;" directive instead of the listen parameter
server_name api.yourapp.com;
# SSL certificates
ssl_certificate /etc/letsencrypt/live/api.yourapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.yourapp.com/privkey.pem;
# SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/api.yourapp.com/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Security headers
add_header Strict-Transport-Security "max-age=63072000" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
# Your application
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name api.yourapp.com;
return 301 https://$server_name$request_uri;
}
Static File Serving & Caching
For the Vhoye web dashboard at Tathyakar, we needed efficient static file serving with proper caching:
# Static file serving with caching
server {
listen 80;
server_name dashboard.vhoye.com;
root /var/www/vhoye-dashboard/build;
index index.html;
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
add_header X-Cache-Status "STATIC";
# Enable compression
gzip_static on;
# Security headers for static files
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
}
# Cache HTML files for shorter period
location ~* \.(html)$ {
expires 1h;
add_header Cache-Control "public";
add_header X-Cache-Status "HTML";
}
# API routes
location /api/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Disable caching for API responses
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Pragma "no-cache";
add_header Expires "0";
}
# React Router fallback
location / {
try_files $uri $uri/ /index.html;
}
}
Rate Limiting & Security
Protecting your APIs from abuse is crucial. Here are the rate limiting configurations that saved us during traffic spikes:
# Rate limiting configuration
http {
# Define rate limiting zones
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
limit_req_zone $binary_remote_addr zone=general:10m rate=100r/s;
# Connection limiting
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
}
server {
listen 80;
server_name api.yourapp.com;
# General rate limiting
limit_req zone=general burst=20 nodelay;
limit_conn conn_limit_per_ip 20;
# Strict rate limiting for login
location /api/auth/login {
limit_req zone=login burst=5 nodelay;
proxy_pass http://127.0.0.1:3000;
# ... other proxy settings
}
# API rate limiting
location /api/ {
limit_req zone=api burst=20 nodelay;
proxy_pass http://127.0.0.1:3000;
# ... other proxy settings
}
# Block common attack patterns
location ~* \.(php|asp|aspx|jsp)$ {
return 444; # Close connection without response
}
# Block access to sensitive files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
Rate Limiting Gotcha
Be careful with rate limiting behind load balancers. If you're using $binary_remote_addr, all requests might appear to come from the load balancer IP, so one noisy client can exhaust the limit for everyone. Use $http_x_forwarded_for or $http_x_real_ip instead, but validate these headers first: only trust them when they're set by infrastructure you control, since clients can spoof them.
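One common fix is the realip module, which rewrites $remote_addr from the forwarded header so $binary_remote_addr keys on the real client. A sketch, assuming your load balancer lives in a known subnet (10.0.0.0/8 below is a placeholder):

```nginx
# Trust X-Forwarded-For only when it arrives from the load balancer's range
set_real_ip_from 10.0.0.0/8;     # placeholder: your LB subnet
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# limit_req_zone $binary_remote_addr ... now keys on the real client IP
```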
Monitoring & Logging
Proper logging saved us countless hours of debugging. Here’s how to set up comprehensive Nginx monitoring:
# Custom log formats for different needs
http {
# Detailed API logging
log_format api_log '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time" '
'cs=$upstream_cache_status';
# JSON format for log aggregation
log_format json_log escape=json '{"timestamp":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"method":"$request_method",'
'"uri":"$request_uri",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"request_time":$request_time,'
'"upstream_response_time":"$upstream_response_time",'
'"user_agent":"$http_user_agent"}';
}
server {
listen 80;
server_name api.yourapp.com;
# Separate logs for different endpoints
access_log /var/log/nginx/api.access.log api_log;
error_log /var/log/nginx/api.error.log warn;
location /api/v1/ {
access_log /var/log/nginx/api-v1.access.log json_log;
proxy_pass http://127.0.0.1:3000;
# ... proxy settings
}
# Don't log health checks
location /health {
access_log off;
proxy_pass http://127.0.0.1:3000/health;
}
}
# Enable Nginx status page for monitoring
server {
listen 8080;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
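The stub_status output is plain text, so a quick awk pass pulls out the numbers you'd feed to monitoring. Shown here against a sample response rather than a live server:

```shell
# Sample stub_status response (live: curl -s http://127.0.0.1:8080/nginx_status)
status='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

# Extract active connections and total requests served
echo "$status" | awk '
  /Active connections/ { print "active=" $3 }
  NR == 3              { print "requests=" $3 }
'
```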
Performance Tuning
These performance optimizations helped us handle high traffic at Intelity without breaking the bank on server costs:
# Performance tuning in nginx.conf
worker_processes auto;
worker_rlimit_nofile 65535;
events {
worker_connections 4096;
use epoll;
multi_accept on;
accept_mutex off;
}
http {
# Connection keepalive
keepalive_timeout 65;
keepalive_requests 1000;
# Buffer sizes
client_body_buffer_size 128k;
client_max_body_size 50m;
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;
output_buffers 1 32k;
postpone_output 1460;
# Timeouts
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
# TCP optimizations
tcp_nopush on;
tcp_nodelay on;
sendfile on;
sendfile_max_chunk 512k;
# Compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/xml+rss
application/json;
# Open file cache
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
Common Nginx Commands
These are the commands I use daily for managing Nginx:
# Test configuration before reloading
sudo nginx -t
# Reload configuration without downtime
sudo nginx -s reload
# Stop Nginx gracefully
sudo nginx -s quit
# Stop Nginx immediately
sudo nginx -s stop
# Check Nginx status
sudo systemctl status nginx
# View error logs in real-time
sudo tail -f /var/log/nginx/error.log
# View access logs with filtering
sudo tail -f /var/log/nginx/access.log | grep "POST"
# Check which process is using port 80
sudo ss -tlnp | grep :80  # or "sudo netstat -tlnp | grep :80" on older systems
# Dump the full effective configuration (shows every included file)
sudo nginx -T | head -20
# Check Nginx version and modules
nginx -V
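One more I reach for when triaging: a status-code breakdown of the access log. Run here against a two-line sample in the default combined/main format, where field 9 is the status code (adjust the field if you use a custom format):

```shell
# Build a small sample access log in the combined format
printf '%s\n' \
  '1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET /api/x HTTP/1.1" 200 512 "-" "curl"' \
  '1.2.3.4 - - [01/Jan/2025:00:00:01 +0000] "GET /api/y HTTP/1.1" 502 0 "-" "curl"' \
  > /tmp/sample.access.log

# Count responses by status code
awk '{print $9}' /tmp/sample.access.log | sort | uniq -c | sort -rn

# On a real box: awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
```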
Troubleshooting Common Issues
502 Bad Gateway
Usually means your upstream server is down or unreachable.
# Check if your app is running
curl http://127.0.0.1:3000/health
# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log
413 Request Entity Too Large
File upload too large. Increase client_max_body_size.
# In server block or http block
client_max_body_size 50M;
504 Gateway Timeout
Upstream server taking too long to respond.
# Increase proxy timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
Production Deployment Checklist
Pre-deployment Checklist
- Test configuration with nginx -t
- Set up SSL certificates (Let’s Encrypt recommended)
- Configure proper security headers
- Set up rate limiting for APIs
- Configure log rotation
- Set up monitoring and alerts
- Test load balancing with multiple backends
- Configure proper cache headers
- Set up backup configuration files
- Document your configuration changes
Conclusion
Nginx is incredibly powerful, but with great power comes great configuration complexity. These configurations have been battle-tested across multiple production environments, handling everything from chatbot APIs to multiplayer game servers.
Remember: start simple, monitor everything, and always test your configurations before deploying to production. The 5 minutes you spend testing can save you hours of downtime debugging at 3 AM.
References
- Official Nginx Documentation - Comprehensive official docs
- Nginx Wiki - Community-driven wiki
- HTML5 Boilerplate Nginx Config - Best practices
- Mozilla SSL Configuration Generator - SSL config tool