Advanced Nginx Hardening
Nginx is one of the most widely used web servers and has become a key part of modern enterprise architecture. In this article, we'll explore configuration options that simplify monitoring, enhance performance, and strengthen security, ultimately improving the resilience of your infrastructure.
JSON Logging
JSON is an excellent format for Nginx log files for two key reasons. First, it is much more human-readable. Second, transferring logs to systems like OpenSearch for further monitoring or SIEM purposes becomes significantly easier.
Here's a simple example from nginx.conf:
log_format json-logger escape=json '{'
    '"type": "access-log",'
    '"time": "$time_iso8601",'
    '"remote-ip": "$remote_addr",'
    '"x-forward-for": "$proxy_add_x_forwarded_for",'
    '"request-id": "$request_id",'
    '"request-length": "$request_length",'
    '"response-bytes": "$bytes_sent",'
    '"response-body-size": "$body_bytes_sent",'
    '"status": "$status",'
    '"vhost": "$host",'
    '"protocol": "$server_protocol",'
    '"path": "$uri",'
    '"query": "$args",'
    '"duration": "$request_time",'
    '"backend-duration": "$upstream_response_time",'
    '"backend-status": "$upstream_status",'
    '"method": "$request_method",'
    '"referer": "$http_referer",'
    '"user-agent": "$http_user_agent",'
    '"active-connections": "$connections_active"'
'}';
access_log /var/log/nginx/access.log json-logger;
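Because each log entry is one JSON object, individual fields can be pulled out with standard tooling even without jq. A minimal sketch using only POSIX grep (the sample line is abbreviated; in practice you would read from /var/log/nginx/access.log):

```shell
# A sample line in the shape produced by the json-logger format above
line='{"type": "access-log", "status": "404", "path": "/index.php"}'
# Extract the "status" value with grep alone (no jq dependency)
status=$(printf '%s' "$line" | grep -o '"status": *"[0-9][0-9]*"' | grep -o '[0-9][0-9]*')
echo "$status"   # prints 404
```

For real analysis, a JSON-aware tool such as jq or an OpenSearch pipeline is of course more robust.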
This leads to the following entry in the access.log file (shown pretty-printed here for readability; on disk each entry is a single line):
{
"type": "access-log",
"time": "2025-02-25T16:02:54+00:00",
"remote-ip": "130.61.78.239",
"x-forward-for": "130.61.78.239",
"request-id": "38750f2a1a51b196fa0a76025b0d1be9",
"request-length": "258",
"response-bytes": "353",
"response-body-size": "167",
"status": "404",
"vhost": "3.69.78.187",
"protocol": "HTTP/1.1",
"path": "/lib/phpunit/Util/PHP/eval-stdin.php",
"query": "",
"duration": "0.016",
"backend-duration": "0.016",
"backend-status": "404",
"method": "GET",
"referer": "",
"user-agent": "Custom-AsyncHttpClient",
"active-connections": "1"
}
Request Parameterization
Large request bodies, long timeouts, and excessively long KeepAlive settings can significantly impact performance. To optimize efficiency, it's best to keep these parameters as low as possible while still meeting the needs of the application.
Example from nginx.conf:
client_max_body_size 10M;
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
client_max_body_size
Defines the maximum size of the HTTP request body that a client can send. If the limit is exceeded, Nginx returns a 413 Request Entity Too Large error.
client_body_timeout
Specifies the maximum time Nginx waits for the complete request body. If the body is not received within this time, the connection is closed.
client_header_timeout
Sets the maximum time Nginx waits for the client to send the full HTTP header. If this limit is exceeded, the connection is closed.
keepalive_timeout
Defines how long a keep-alive connection remains open after the last request. The first value (e.g., 5s) sets the timeout duration. The second value (optional) is sent to the client as a suggestion for how long it should keep the connection open.
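These directives can also be tightened globally and relaxed only where genuinely needed. A sketch of this pattern (the /upload endpoint and the upstream name are hypothetical):

```nginx
# Strict default for everything
client_max_body_size 1M;

server {
    listen 443 ssl;
    server_name app.devlab.intern;

    # Hypothetical upload endpoint that legitimately needs larger bodies
    location /upload {
        client_max_body_size 50M;
        proxy_pass http://app.localhost;
    }
}
```

This keeps the attack surface small everywhere except the one place that actually accepts large payloads.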
Limiting Requests
If a client attempts to flood a web server with requests, Nginx provides the option to configure "limit request zones" to limit traffic based on various parameters.
Here's an example (reverse proxy with a limit request zone):
limit_req_zone $binary_remote_addr zone=limitreqsbyaddr:20m rate=15r/s;
limit_req_status 429;
upstream app.localhost {
server localhost:8080;
}
server {
listen 443 ssl;
server_name app.devlab.intern;
location / {
limit_req zone=limitreqsbyaddr burst=10;
proxy_pass http://app.localhost;
}
}
$binary_remote_addr
The key for the rate limit: the client's IP address in a compact binary form, so each tracked address consumes less memory in the shared zone.
zone=limitreqsbyaddr:20m
Creates a shared memory zone named limitreqsbyaddr with a size of 20 MB. This zone stores rate-limiting data for different IP addresses.
rate=15r/s
Limits each IP to an average of 15 requests per second. Requests beyond this rate (plus any configured burst) are rejected.
limit_req_status 429;
Returns a 429 Too Many Requests status code when the rate limit is exceeded, indicating that too many requests were made within a defined time period.
This configuration helps protect services from abuse and overload.
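The burst parameter deserves a note: by default, requests within the burst are queued and delayed so traffic is smoothed to the configured rate. Adding nodelay serves burst requests immediately while still enforcing the average limit, which is often friendlier to legitimate clients. A sketch reusing the zone from above:

```nginx
location / {
    # Up to 10 requests above the rate are served immediately;
    # anything beyond rate + burst receives the configured 429
    limit_req zone=limitreqsbyaddr burst=10 nodelay;
    proxy_pass http://app.localhost;
}
```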
Limiting Requests to Necessary HTTP Methods
In my opinion, limiting the allowed HTTP methods to only the necessary or supported ones is a clean way to synchronize the web server settings with the application (e.g., REST API). This not only helps protect against API misuse or the use of unwanted methods but also blocks potentially dangerous requests like TRACE. Additionally, it prevents unnecessary server load by eliminating unsupported or irrelevant requests.
# HEAD is implicitly allowed whenever GET is;
# note: limit_except is only valid inside a location block
limit_except GET {
    deny all;
}
Another example that allows all methods except TRACE and PATCH:
if ($request_method ~ ^(PATCH|TRACE)$) {
return 405;
}
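For a typical REST API that only reads and writes resources, the same idea can be expressed with limit_except (the /api/ path and upstream are hypothetical examples):

```nginx
location /api/ {
    # GET (and the implicit HEAD) plus POST are allowed; everything else is denied
    limit_except GET POST {
        deny all;
    }
    proxy_pass http://app.localhost;
}
```

Note that limit_except rejects disallowed methods with 403 by default, whereas the if-based variant returns an explicit 405.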
Simple protection against bots
If bots or poorly configured scanners are detected, often recognizable by a telltale user agent, we can cause maximum confusion by returning an internal Nginx HTTP status. To achieve this, we create a file named bot.protection.conf in the /etc/nginx/snippets folder and add the following content:
map $http_user_agent $blacklist_user_agents {
~*wpscan 1;
~*dirbuster 1;
~*gobuster 1;
}
Entries can of course be added as required. Within the VHost configuration, the file can be loaded as follows:
include /etc/nginx/snippets/bot.protection.conf;
if ($blacklist_user_agents) {
return 444;
}
HTTP 444 can also be "funny" for common file-guessing probes, such as requests for .env and similar dotfiles:
# <your-domain>/.bash_history for example ends with HTTP 444.
location ~ /\. {
return 444;
}
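One caveat: the regex above also matches /.well-known/, which Let's Encrypt (ACME) HTTP-01 challenges rely on. If you use such challenges, carve out an exception with a higher-priority prefix match first (the webroot path is an assumption; adjust it to your ACME client):

```nginx
# ^~ takes precedence over the regex location below, so ACME challenges still work
location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
}

location ~ /\. {
    return 444;
}
```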
What Does HTTP 444 Mean?
HTTP 444 is a non-standardized status code that instructs the Nginx web server to close the connection without sending a response header to the client. It is most commonly used to reject malicious or incorrectly formatted requests. An interesting side effect of using this status code is that some scanners fail to handle it properly, adding an extra layer of protection.
Enabling TCP Fast Open
TCP Fast Open is a significant enhancement in Nginx, offering a more efficient way to establish TCP connections. This feature allows data transmission during the initial handshake, notably accelerating the connection process. It is particularly beneficial in reducing latency and optimizing performance, especially in high-latency network environments.
You can check whether your Linux kernel supports TCP Fast Open by running:
cat /proc/sys/net/ipv4/tcp_fastopen
The value is a bitmask: 1 enables Fast Open for outgoing (client) connections, 2 for listening (server) sockets, and 3 for both. Since Nginx acts as a server here, make sure the server bit is set:
echo 3 > /proc/sys/net/ipv4/tcp_fastopen
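Writing to /proc only lasts until the next reboot. To persist the setting, drop it into a sysctl configuration file (the file name is an arbitrary choice) and reload with sysctl --system:

```
# /etc/sysctl.d/90-tcp-fastopen.conf
# Bitmask: 1 (client) + 2 (server) = 3; Nginx needs the server bit
net.ipv4.tcp_fastopen = 3
```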
You can now enable the feature in your listen directives:
listen [::]:443 ssl http2 fastopen=500;
listen 443 ssl http2 fastopen=500;
GZip Compression
GZip is a data compression method that shrinks files without losing information. The original data can be fully restored by decompressing (or "unzipping") the compressed file.
For web apps and websites, GZip is important because the HTTP protocol supports compressing data before it's sent over the network.
When GZip is enabled, the size of the files served to visitors is reduced. This means lower bandwidth usage and, as a result, reduced costs for hosting and serving the site. Visitors benefit too, as they download smaller files, leading to faster loading times.
Sample config:
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
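A few companion directives are worth considering; the values below are reasonable starting points rather than universal recommendations:

```nginx
gzip on;
# Skip very small responses, where gzip overhead outweighs the savings
gzip_min_length 1024;
# Compression level 1 (fastest) to 9 (smallest); a middle value balances CPU and ratio
gzip_comp_level 5;
# Send "Vary: Accept-Encoding" so caches keep compressed and plain variants apart
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
```

Already-compressed formats (images, video, archives) gain little from gzip and are best left out of gzip_types.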