A reverse proxy is a proxy server that sits in front of web servers and shields their identity. It acts as an intermediate connection point, forwarding client (web browser) requests to the web servers, which can improve performance, security, and reliability.
The configuration below uses a number of NGINX directives to receive client requests and pass them on to our applications. A separate location directive for each application running on the server lets us control which requests are routed to which application.
Let's take an example to illustrate how this works. When a client requests the /metrics endpoint, NGINX matches the corresponding location /metrics block, so it knows where to forward the request, and it returns the application's response to the client.
alias /home/ubuntu/testnet/optimism/op-node/rollup.json;
autoindex off;
}
ssl_certificate /etc/letsencrypt/live/racetestnet.io/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/racetestnet.io/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
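For instance, a location block for the /metrics endpoint mentioned above might look like the following sketch (the upstream address and port are placeholders, not taken from the actual server):

```nginx
# Forward requests for /metrics to the application that exposes them.
# 127.0.0.1:7300 is an assumed address; use the port your app listens on.
location /metrics {
    proxy_pass http://127.0.0.1:7300/metrics;
    # Preserve the original host and client address for the backend.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```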
Rate limiting
Explanation of NGINX rate limiting using the leaky bucket algorithm
NGINX rate limiting uses the leaky bucket algorithm, which is widely used in telecommunications and packet‑switched computer networks to deal with burstiness when bandwidth is limited. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows.
In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first‑in‑first‑out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
Rate limiting helps protect against brute-force attacks as well as DoS (denial-of-service) and DDoS (distributed denial-of-service) attacks.
Configuring Basic Rate Limiting
Rate limiting is configured with two main directives, limit_req_zone and limit_req, as in this example:
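The example referred to here is missing from the text; a typical minimal configuration (the zone name, rate, path, and upstream are illustrative) looks like this:

```nginx
# Define a 10 MB shared memory zone keyed by client IP address,
# limiting each client to 10 requests per second.
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        # Apply the rate limit defined above to this location.
        limit_req zone=mylimit;
        proxy_pass http://my_upstream;
    }
}
```

limit_req_zone defines the shared state (key, zone size, and rate) at the http level, while limit_req applies that zone to a particular server or location.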
The burst parameter defines how many requests a client can make in excess of the rate specified by the zone.
The delay parameter defines the point at which, within the burst size, excessive requests are throttled (delayed) to comply with the defined rate limit.
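Putting burst and delay together, a sketch of two-stage rate limiting (the values are illustrative; two-stage limiting requires NGINX 1.15.7 or later):

```nginx
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        # Allow up to 20 requests beyond the defined rate. The first 8
        # excess requests are forwarded without delay; the remaining ones
        # (up to the burst size) are delayed to conform to the 10 r/s rate.
        # Requests beyond the burst are rejected.
        limit_req zone=mylimit burst=20 delay=8;
        proxy_pass http://my_upstream;
    }
}
```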