
Nginx Configuration (Understanding Configuration)

Reverse Proxy and SSL Configuration

A reverse proxy is a type of proxy server that sits in front of one or more web servers and protects their identity. It acts as an intermediate connection point that forwards user/web-browser requests to the web servers, improving performance, security, and reliability.
In the configuration below, a number of nginx directives are used to handle incoming requests and pass them on to our applications.
A location block is defined for each application running on the server; these blocks let nginx decide which client request should be routed to which application.

Let's take an example to illustrate how this works.

Suppose a client requests the /metrics endpoint. Nginx knows where to forward this request (the upstream defined in the location /metrics block) and returns that upstream's response to the client.
server {
    listen 80;
    listen [::]:80;
    server_name 43.204.127.52 racetestnet.io;

    location ^~ /ws {
        proxy_pass http://0.0.0.0:8546/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Nginx-Proxy true;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:8545/;
    }

    location /metrics {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:9100/metrics;
    }

    location /docs {
        # proxy_set_header Host $http_host;
        # proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass https://nitants-organization.gitbook.io/race/;
    }

    location /monitor {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:7000/;
    }

    location /health {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:9100/health;
    }

    location /genesis {
        default_type "application/json";
        alias /home/ubuntu/testnet/op-geth/genesis.json;
        autoindex off;
    }

    location /rollup {
        default_type "application/json";
        alias /home/ubuntu/testnet/optimism/op-node/rollup.json;
        autoindex off;
    }
}
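The location blocks above effectively perform a prefix match on the request path, with longer prefixes winning over the catch-all location /. The routing idea can be sketched in a few lines of Python (the paths and upstreams are copied from the config above; this is an illustration of prefix matching only — nginx's real algorithm also honours modifiers such as ^~, regex locations, and exact matches):

```python
# Map of location prefixes to upstreams, mirroring the server block above.
ROUTES = {
    "/ws": "http://0.0.0.0:8546/",
    "/metrics": "http://0.0.0.0:9100/metrics",
    "/docs": "https://nitants-organization.gitbook.io/race/",
    "/monitor": "http://0.0.0.0:7000/",
    "/health": "http://0.0.0.0:9100/health",
    "/": "http://0.0.0.0:8545/",
}

def route(path: str) -> str:
    """Return the upstream for the longest matching location prefix."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)]

route("/metrics")    # the /metrics block wins over the catch-all /
route("/eth/block")  # no longer prefix matches, so / (port 8545) is used
```

So a request for /metrics is forwarded to the node-exporter-style service on port 9100, while anything without a more specific prefix falls through to the RPC endpoint on port 8545.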

server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name 43.204.127.52 racetestnet.io;

    location ^~ /ws {
        proxy_pass http://0.0.0.0:8546/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Nginx-Proxy true;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:8545/;
    }

    location /metrics {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:7500/metrics;
    }

    location /docs {
        # proxy_set_header Host $http_host;
        # proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass https://nitants-organization.gitbook.io/race/;
    }

    location /genesis {
        default_type "application/json";
        alias /home/ubuntu/testnet/op-geth/genesis.json;
        autoindex off;
    }

    location /rollup {
        default_type "application/json";
        alias /home/ubuntu/testnet/optimism/op-node/rollup.json;
        autoindex off;
    }

    ssl_certificate /etc/letsencrypt/live/racetestnet.io/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/racetestnet.io/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}


Rate limiting

Explanation of NGINX rate limiting using the leaky bucket algorithm
NGINX rate limiting uses the leaky bucket algorithm, which is widely used in telecommunications and packet‑switched computer networks to deal with burstiness when bandwidth is limited. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows.
In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first‑in‑first‑out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
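The leaky bucket can be sketched in a few lines of Python (a toy model for intuition, not nginx's implementation — request identifiers, capacity, and leak rate here are made up for the example):

```python
from collections import deque

class LeakyBucket:
    """Toy leaky bucket: requests queue up to `capacity` (the bucket);
    `leak_rate` requests per tick leak out for processing (FIFO);
    arrivals that would overflow the bucket are discarded."""

    def __init__(self, capacity: int, leak_rate: int):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.queue = deque()

    def arrive(self, request) -> bool:
        """Pour a request into the bucket; False means it overflowed."""
        if len(self.queue) >= self.capacity:
            return False  # overflow: request is discarded, never serviced
        self.queue.append(request)
        return True

    def leak(self) -> list:
        """Let up to leak_rate requests exit the buffer for processing."""
        served = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            served.append(self.queue.popleft())
        return served

bucket = LeakyBucket(capacity=3, leak_rate=1)
accepted = [bucket.arrive(i) for i in range(5)]  # burst of 5 at once
# → the first 3 are queued; the last 2 overflow and are rejected
served = bucket.leak()  # one request leaks out per tick, oldest first
```

Because requests leak out at a fixed rate regardless of how fast they arrive, the upstream server sees a smooth stream even under a bursty client.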
Rate limiting also helps protect against brute-force, DoS (denial-of-service), and DDoS (distributed denial-of-service) attacks.

Configuring Basic Rate Limiting

Rate limiting is configured with two main directives, limit_req_zone and limit_req, as in the example below.
The burst parameter defines how many requests a client can make in excess of the rate specified by the zone.
The delay parameter defines the point within the burst size at which excess requests start being throttled (delayed) to comply with the defined rate limit.
limit_req_zone $binary_remote_addr zone=limitbyaddr:10m rate=1r/s;
limit_req_status 429;

server {
    listen 80;
    listen [::]:80;
    server_name 43.204.127.52 racetestnet.io;
    limit_req zone=limitbyaddr burst=10 delay=5;
    ...
}
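The combined effect of rate, burst, and delay can be sketched with a small classifier (a toy model of the semantics of burst=10 delay=5 above, not nginx's actual code; "excess" means requests arriving above the 1 r/s rate):

```python
def classify_excess(n: int, burst: int = 10, delay: int = 5) -> str:
    """Classify the n-th excess request in a burst, mirroring
    `limit_req zone=limitbyaddr burst=10 delay=5`:
    the first `delay` excess requests are proxied immediately,
    the rest of the burst is throttled to the configured rate,
    and anything beyond `burst` is rejected (here, with status 429)."""
    if n <= delay:
        return "immediate"
    if n <= burst:
        return "delayed"
    return "rejected"

# A sudden burst of 12 excess requests from one client:
results = [classify_excess(n) for n in range(1, 13)]
# → 5 forwarded immediately, 5 delayed, 2 rejected with 429
```

In other words, small bursts feel instant to the client, sustained bursts get smoothed to 1 r/s, and floods are cut off.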