Nginx Config Generator
Generate production-ready nginx configuration files with SSL, reverse proxy, caching, gzip compression, rate limiting, and security headers. Download your .conf file instantly.
Generate Your Nginx Configuration
Add SSL with Let's Encrypt certificate paths and HTTP to HTTPS redirect
Strict-Transport-Security header to force HTTPS for future visits
Compress text responses (HTML, CSS, JS, JSON, XML) to reduce bandwidth
Set browser cache headers for images, CSS, JS, and fonts
X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, Referrer-Policy
Limit requests per IP to protect against abuse and DDoS
301 redirect www.example.com to example.com
Add Upgrade and Connection headers for WebSocket proxying
Configure custom log file paths for this server block
Understanding Nginx Configuration
Nginx (pronounced "engine X") is one of the most widely used web servers in the world. As of early 2026, it powers a significant portion of all websites on the internet, serving as both a web server and a reverse proxy. The nginx configuration file follows a hierarchical structure based on directives and contexts (blocks), with the main contexts being http, server, and location (Wikipedia: Nginx).
The main configuration file is typically located at /etc/nginx/nginx.conf on Linux systems. This file contains global settings in the main context, then an http block that contains server blocks. Each server block represents a virtual host, handling requests for a specific domain or IP address. Inside server blocks, location blocks define how specific URL paths should be handled.
A key concept in nginx configuration is directive inheritance. Directives set in an outer context (like http) are inherited by inner contexts (like server and location) unless they are explicitly overridden. This means you can set gzip compression once in the http block and it applies to all server blocks, or you can override it in a specific location block that should not use compression.
Nginx processes requests by first determining which server block should handle the request (based on the Host header and the server_name directive), then finding the best matching location block within that server. Location matching follows specific rules: exact matches (=) are checked first, then prefix matches (^~), then regular expression matches (~ and ~*), and finally general prefix matches. Understanding this matching order is critical for writing configurations that behave as expected.
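The matching order described above can be sketched in a single server block (paths and the backend address are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # 1. Exact match (=): checked first, only matches /status exactly
    location = /status {
        return 200 "ok";
    }

    # 2. Prefix match with ^~: if this is the longest prefix match,
    #    regex locations are skipped entirely for /static/... requests
    location ^~ /static/ {
        root /var/www/example;
    }

    # 3. Case-insensitive regex (~*): wins over general prefix matches
    location ~* \.(png|jpe?g|gif)$ {
        expires 30d;
    }

    # 4. General prefix: used when nothing more specific matches
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```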
Server Blocks Explained
Server blocks (sometimes called virtual hosts by people coming from Apache) are the fundamental building blocks of nginx configuration. Each server block tells nginx how to handle requests for a particular domain name or IP address combination. You can have multiple server blocks in a single nginx installation, allowing one server to host many different websites.
A minimal server block needs three things: a listen directive specifying the port, a server_name directive specifying the domain, and either a root directive (for serving static files) or a proxy_pass directive (for reverse proxying). The listen directive usually specifies port 80 for HTTP or 443 for HTTPS. You can listen on multiple ports by including multiple listen directives.
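A minimal static-file server block, following the three requirements above (domain and paths are examples):

```nginx
server {
    listen 80;                          # port to accept connections on
    server_name example.com;            # domain this block handles
    root /var/www/example.com/html;     # directory to serve files from
    index index.html;                   # default file for directory requests
}
```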
The server_name directive accepts exact names, wildcard names, and regular expressions. The order of precedence is: exact name match, longest wildcard name starting with an asterisk, longest wildcard name ending with an asterisk, then the first matching regular expression in the order they appear in the configuration. If no server block matches the requested domain, nginx uses the default server, which is the first server block in the configuration unless one is explicitly marked with the default_server parameter.
For organizing server blocks, the standard practice on Debian and Ubuntu systems is to create individual configuration files in /etc/nginx/sites-available/ and create symbolic links to /etc/nginx/sites-enabled/ for the ones that should be active. This pattern makes it easy to enable and disable sites without deleting configuration files. On CentOS and Fedora, configuration files go in /etc/nginx/conf.d/ with a .conf extension.
SSL and HTTPS Setup
Configuring SSL on nginx has become straightforward thanks to Let's Encrypt, which provides free TLS certificates with automated renewal. The recommended approach is to use Certbot, the official Let's Encrypt client, which can automatically configure nginx for you. On Ubuntu 22.04 or later, install it with: sudo apt install certbot python3-certbot-nginx, then run: sudo certbot --nginx -d yourdomain.com.
For manual configuration, you need the ssl_certificate and ssl_certificate_key directives pointing to your certificate and private key files. With Let's Encrypt, these are typically at /etc/letsencrypt/live/yourdomain.com/fullchain.pem and /etc/letsencrypt/live/yourdomain.com/privkey.pem respectively.
Modern SSL configuration should only allow TLS 1.2 and TLS 1.3, as older versions (TLS 1.0 and 1.1) have known vulnerabilities and have been deprecated by all major browsers. The ssl_protocols directive controls this. For cipher suites, the recommended approach in 2026 is to use ssl_ciphers with a modern cipher string that prioritizes ECDHE key exchange and AES-GCM or ChaCha20 encryption, or to let the server select the strongest cipher the client supports by enabling ssl_prefer_server_ciphers (Mozilla SSL Configuration Generator).
Always include an HTTP to HTTPS redirect in a separate server block on port 80. Without this, users who type your domain without the https:// prefix will either get a connection refused error or be served an unencrypted version of your site.
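Putting the pieces together, a minimal HTTPS setup pairs a redirect block with an SSL server block (the domain and Let's Encrypt paths below are examples and must match your own certificate):

```nginx
# Plain-HTTP block: redirect everything to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# HTTPS block: terminates TLS
server {
    listen 443 ssl;
    server_name example.com;

    # Typical Let's Encrypt certificate paths
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern protocols only; TLS 1.0/1.1 are deprecated
    ssl_protocols TLSv1.2 TLSv1.3;

    root /var/www/example.com/html;
}
```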
Reverse Proxy Configuration
Using nginx as a reverse proxy is one of its most common deployment patterns. In this setup, nginx sits in front of one or more backend application servers (Node.js, Python, Ruby, Java, Go, etc.) and handles incoming HTTP/HTTPS connections on their behalf. This architecture provides several advantages: SSL termination at the nginx layer, load balancing across multiple backend instances, static file serving without hitting the application server, and connection management that protects backends from slow clients.
The core directive for reverse proxying is proxy_pass, which specifies the address of the backend server. Along with proxy_pass, you should always include several proxy_set_header directives to pass important information to the backend. The Host header tells the backend which domain was requested. X-Real-IP passes the client's IP address, since without it the backend only sees nginx's IP. X-Forwarded-For provides the full chain of proxy addresses. X-Forwarded-Proto tells the backend whether the original request was HTTP or HTTPS.
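A minimal location block carrying all four headers described above (the backend address on port 3000 is an example):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_set_header Host              $host;                       # requested domain
    proxy_set_header X-Real-IP         $remote_addr;                # client's IP
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;  # full proxy chain
    proxy_set_header X-Forwarded-Proto $scheme;                     # http or https
}
```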
For applications that use WebSockets (chat applications, real-time dashboards, collaborative editors), you need additional proxy headers: proxy_http_version 1.1, proxy_set_header Upgrade $http_upgrade, and proxy_set_header Connection "upgrade". Without these, WebSocket connections will fail because HTTP/1.0 (the default proxy protocol) does not support the connection upgrade mechanism that WebSockets require.
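A sketch of a WebSocket-capable location block (the /ws/ path and backend address are examples):

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:3000;

    # Required for the WebSocket upgrade handshake;
    # the default proxied protocol (HTTP/1.0) cannot upgrade connections
    proxy_http_version 1.1;
    proxy_set_header Upgrade    $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```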
Timeouts are another critical aspect of reverse proxy configuration. The proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout directives control how long nginx waits for different phases of backend communication. The defaults (60 seconds each) are reasonable for most applications, but long-running API requests or file uploads may need higher values. Setting timeouts too high can cause nginx to hold connections open unnecessarily, while setting them too low will cause legitimate requests to fail (Nginx proxy module documentation).
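As one possible tuning for a long-running endpoint (the path and values are illustrative, not recommendations for every workload):

```nginx
location /api/upload {
    proxy_pass http://127.0.0.1:3000;

    proxy_connect_timeout 10s;   # max time to establish the backend connection
    proxy_send_timeout    120s;  # max gap between successive writes to the backend
    proxy_read_timeout    120s;  # max gap between successive reads from the backend
}
```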
Performance Optimization
Nginx is fast out of the box, but several configuration tweaks can significantly improve performance for high-traffic sites. The most impactful optimizations are gzip compression, static asset caching, and connection handling tuning.
Gzip Compression
Enabling gzip compression typically reduces the size of text-based responses by 60-80%, which translates directly to faster page loads and lower bandwidth costs. The gzip_types directive should include all text-based content types: text/plain, text/css, application/javascript, application/json, text/xml, application/xml, and image/svg+xml. (Do not add WOFF2 fonts: the format is already compressed internally, so gzipping it wastes CPU for no gain.) Set gzip_comp_level to 4 or 5 for a good balance between CPU usage and compression ratio. The gzip_min_length directive should be set to around 256 bytes, as compressing very small responses is counterproductive.
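The settings above translate to an http-block fragment like this (text/html is omitted from gzip_types because nginx compresses it by default once gzip is on):

```nginx
# In the http block
gzip on;
gzip_comp_level 5;       # balance CPU cost against compression ratio
gzip_min_length 256;     # skip responses too small to benefit
gzip_vary on;            # emit Vary: Accept-Encoding for caches
gzip_types
    text/plain
    text/css
    application/javascript
    application/json
    text/xml
    application/xml
    image/svg+xml;
```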
Browser Caching
Setting appropriate Cache-Control headers for static assets (images, CSS, JavaScript, fonts) tells browsers to store these files locally instead of re-downloading them on every visit. For assets with content hashes in their filenames (common with modern build tools), set max-age to one year (31536000 seconds). For assets without hashes, use shorter cache periods (1 hour to 1 day) combined with ETag headers so browsers can check for updates efficiently.
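A sketch of both strategies using regex location blocks (the file extensions chosen for each tier are assumptions about a typical build setup):

```nginx
# Hashed build assets: cache for a year and mark immutable
location ~* \.(?:css|js|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# Unhashed images: one day, revalidated via ETag (on by default)
location ~* \.(?:png|jpe?g|gif|webp)$ {
    add_header Cache-Control "public, max-age=86400";
}
```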
Connection Handling
The worker_processes directive should typically be set to auto, which creates one worker process per CPU core. The worker_connections directive in the events block controls how many simultaneous connections each worker can handle. The default of 512 is often too low for busy sites. Setting it to 1024 or 2048 is common. The keepalive_timeout directive controls how long idle connections remain open. The default of 75 seconds is reasonable, but lowering it to 30-60 seconds can free up connections faster on high-traffic servers.
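The connection-handling directives above live in different contexts; a minimal sketch of where each goes (the specific values are examples for a busy server, not universal defaults):

```nginx
# Main context, top of nginx.conf
worker_processes auto;         # one worker per CPU core

events {
    worker_connections 2048;   # simultaneous connections per worker
}

http {
    keepalive_timeout 30s;     # close idle keep-alive connections sooner
}
```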
Community Questions
How to configure nginx to proxy WebSocket connections?
One of the most frequently asked questions about nginx proxying. The answers cover the Upgrade and Connection headers needed for WebSocket support, plus common pitfalls with proxy timeouts that cause WebSocket disconnections.
View on Stack Overflow
How to fix "502 Bad Gateway" errors with nginx reverse proxy?
The 502 error is one of the most common issues when setting up nginx as a reverse proxy. Answers cover backend server not running, incorrect proxy_pass address, SELinux blocking connections, and buffer size mismatches.
View on Stack Overflow
Optimal nginx configuration for serving a React/Vue SPA
Covers the try_files directive for client-side routing, caching strategies for hashed assets, and how to handle API proxying alongside the SPA. Includes a complete working configuration example.
View on Stack Overflow
Video Tutorials
Helpful Videos on Nginx Configuration
Complete walkthrough of setting up nginx as a reverse proxy for Node.js, Python, and other backend applications with SSL.
Step-by-step guide to obtaining and configuring Let's Encrypt SSL certificates with automatic renewal on nginx.
Covers gzip, caching, buffer sizes, worker processes, and load balancing configuration for production deployments.
Frequently Asked Questions
How do I set up nginx as a reverse proxy?
Setting up nginx as a reverse proxy involves creating a server block that listens on port 80 or 443 and forwards requests to a backend application. The key directive is proxy_pass, which tells nginx where to send the request. A basic configuration includes listen 80, a server_name for your domain, and a location block with proxy_pass http://localhost:3000. Include proxy_set_header directives for Host, X-Real-IP, and X-Forwarded-For so your backend correctly identifies the original client. Without these headers, your backend sees all requests as coming from localhost. For production, add SSL termination and proxy_set_header X-Forwarded-Proto $scheme so your backend knows whether the request was HTTP or HTTPS. This generator creates all of these directives automatically.
How do I configure SSL with Let's Encrypt?
Configuring SSL with Let's Encrypt is a two-step process. Install Certbot with sudo apt install certbot python3-certbot-nginx, then run sudo certbot --nginx -d yourdomain.com. Certbot automatically modifies your nginx config to include SSL certificate paths and redirect HTTP to HTTPS. For manual configuration, add ssl_certificate and ssl_certificate_key directives pointing to the fullchain.pem and privkey.pem files in /etc/letsencrypt/live/yourdomain.com/. Use ssl_protocols TLSv1.2 TLSv1.3 and a modern cipher suite. Add a separate server block on port 80 that redirects to HTTPS with return 301 https://$server_name$request_uri. Let's Encrypt certificates expire after 90 days, so ensure certbot renew runs automatically.
Which security headers should I add to my nginx config?
The most important security headers are: X-Frame-Options SAMEORIGIN (clickjacking protection), X-Content-Type-Options nosniff (prevents MIME type sniffing), X-XSS-Protection "1; mode=block" (a legacy XSS filter, deprecated in modern browsers but harmless to include), Referrer-Policy strict-origin-when-cross-origin (controls referrer information), Content-Security-Policy (controls allowed resource sources), and Strict-Transport-Security with a max-age of at least 31536000 (forces HTTPS). Add these using the add_header directive. Be careful with Content-Security-Policy, as an overly restrictive policy can break site functionality. Test thoroughly before deploying CSP in production. This generator includes all standard security headers when you check the Security Headers option.
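The headers listed above map to add_header directives like these (the always parameter makes nginx emit them on error responses too):

```nginx
add_header X-Frame-Options           "SAMEORIGIN" always;
add_header X-Content-Type-Options    "nosniff" always;
add_header X-XSS-Protection          "1; mode=block" always;
add_header Referrer-Policy           "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```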
How does gzip compression work in nginx?
Gzip compression reduces HTTP response sizes by 60-80% for text-based content. When a browser sends Accept-Encoding: gzip, nginx compresses the response before sending it. Enable with gzip on in your http or server block; text/html is then compressed by default. Use gzip_types to add further content types: text/css, application/javascript, application/json, text/xml, and image/svg+xml benefit the most. Binary formats like images are already compressed and should not be gzip'd. Set gzip_comp_level to 4 or 5 for a good CPU-to-compression balance. Set gzip_min_length to 256 to skip tiny responses where gzip overhead exceeds the savings. The generator configures all of these settings when you enable gzip compression.
How is nginx different from Apache?
Nginx uses an event-driven, asynchronous architecture where a small number of worker processes each handle thousands of connections simultaneously. Apache uses a process-based or thread-based model where each connection gets its own process or thread. This makes nginx significantly more memory-efficient under high concurrency. Apache has .htaccess files for per-directory configuration without restarts, popular with shared hosting. Nginx does not support .htaccess but is faster because it does not check for these files on every request. Many production deployments use nginx as a reverse proxy in front of Apache or application servers, combining nginx's connection handling with backend application features.
How do I set up rate limiting in nginx?
Rate limiting uses the ngx_http_limit_req_module with two parts. First, define a zone in the http block: limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s. This creates a 10MB shared memory zone allowing 10 requests per second per IP. Second, apply it in a location block: limit_req zone=mylimit burst=20 nodelay. The burst parameter allows short spikes above the rate, and nodelay processes burst requests immediately rather than queuing them. Requests exceeding both rate and burst get a 503 response. For APIs, use stricter limits (2-5 req/s) while allowing higher rates for static assets. You can define multiple zones and apply them selectively to different location blocks.
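In context, the two parts look like this (the zone name, /api/ path, and backend address are examples):

```nginx
http {
    # 10 MB shared zone keyed by client IP, steady rate of 10 requests/second
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;

        location /api/ {
            # Allow bursts of up to 20 extra requests, served without queuing
            limit_req zone=mylimit burst=20 nodelay;
            proxy_pass http://127.0.0.1:3000;
        }
    }
}
```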
How do I configure nginx for a single-page application?
Single-page applications need a specific configuration for client-side routing. When a user navigates to /about in a React or Vue app, nginx needs to serve index.html instead of looking for a file at /about. The fix is: location / { try_files $uri $uri/ /index.html; }. This tells nginx to check for the requested file, then a directory, then fall back to index.html where your JavaScript handles the routing. Set long cache lifetimes for hashed assets (JS and CSS with content hashes) using a regex location block. For index.html itself, use Cache-Control: no-cache so browsers always check for the latest version when you deploy updates. This generator creates the correct SPA configuration when you select the SPA server type.
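A complete SPA server block combining these pieces (the domain and build output path are examples):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/spa/dist;   # build output directory

    # Client-side routing: fall back to index.html for unknown paths
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Hashed JS/CSS assets: safe to cache aggressively
    location ~* \.(?:js|css)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Entry point: always revalidate so deploys are picked up
    location = /index.html {
        add_header Cache-Control "no-cache";
    }
}
```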
How do I debug nginx configuration errors?
Start with nginx -t, which tests configuration syntax without restarting. It tells you exactly which file and line has the error. Common syntax errors include missing semicolons, unmatched braces, and directive typos. If syntax passes but nginx misbehaves, check /var/log/nginx/error.log and /var/log/nginx/access.log. The error log shows permission denied, upstream failures, and SSL certificate problems. For detailed debugging, temporarily set the error_log level to debug. Another common issue is forgetting to reload after config changes: use sudo nginx -s reload to apply changes without dropping connections. If reload fails, the previous configuration stays active, so you do not lose your running server.
Quick Facts
- 100% free, no registration required
- All processing happens locally in your browser
- No data sent to external servers
- Works offline after initial page load
- Mobile-friendly responsive design