- Types of Rate Limiting
- How it Works
- Algorithms
- Challenges
- Implementation
Using Rate Limits to Prevent DDoS Attacks
Rate limiting can be an effective method for preventing DDoS attacks. It works by capping the number of requests a user (or bot) can make to a service or server within a given time period, preventing attackers from flooding systems and rendering them unavailable. Rate limiting is a versatile defensive technique that protects against a variety of attack types by blocking traffic that would overwhelm servers while allowing legitimate traffic through.
Types of Rate Limiting
There are three key types of rate limiting, outlined below:
- User-Based Rate Limiting: Restricts access according to the specific user making the request, identified by IP address or another identifier. This helps prevent credential attacks in addition to DDoS attacks, though reliably identifying a user across multiple sessions can be difficult. (A minimal per-IP sketch appears after this list.)
- Geographic Rate Limiting: Limits the number of requests that can come from a certain location or region. It helps blunt attacks originating in one or a few geographic areas and can make applications more secure.
- Time-Based Rate Limiting: Uses request timestamps to limit request frequency. If too many requests arrive in a short period, further traffic is blocked.
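As an illustration of the user-based approach, here is a minimal sketch of a limiter keyed by client IP. It assumes a simple in-memory counter that resets each minute; the class name, quota, and IP address shown are illustrative, not part of any particular product.

```python
# Minimal sketch: user-based rate limiting keyed by client IP.
# Assumes an in-memory counter that resets every minute (illustrative values).
import time
from collections import defaultdict

class PerClientLimiter:
    def __init__(self, max_requests_per_minute: int):
        self.max_requests = max_requests_per_minute
        self.counts = defaultdict(int)          # client identifier -> requests this minute
        self.minute = int(time.time() // 60)

    def allow(self, client_ip: str) -> bool:
        """Return True if this client is still under its per-minute quota."""
        current_minute = int(time.time() // 60)
        if current_minute != self.minute:
            # A new minute has started; reset every client's counter.
            self.minute = current_minute
            self.counts.clear()
        if self.counts[client_ip] < self.max_requests:
            self.counts[client_ip] += 1
            return True
        return False

# Example: each client IP may make at most 100 requests per minute.
limiter = PerClientLimiter(max_requests_per_minute=100)
print(limiter.allow("203.0.113.7"))  # True until this IP exceeds its quota
```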
How Does Rate Limiting Work to Prevent DDoS Attacks?
Rate limiting is deployed to limit the level of abuse an attacker can inflict on an application, asset, or server, keeping it available for all users. Malicious traffic can overwhelm a resource, preventing legitimate users from accessing it. Rate limiting achieves this by preventing certain users, or groups of users, from monopolizing access to the asset and making it unavailable.
If attackers bombard a website with malicious traffic in a volumetric DDoS attack, the resulting server overload can take the website offline. That prevents legitimate users from accessing the site, costing the business revenue and damaging its reputation, which leads to further losses. Rate limiting helps cap the number of requests the malicious traffic generates by filtering on IP address or location, allowing legitimate traffic through so business can continue as usual.
Common Rate Limiting Algorithms
Token Bucket Algorithm
The bucket holds a fixed number of tokens, and each incoming request consumes one, so traffic is metered to a tolerable level. This prevents too many requests from getting through in too short a time and overwhelming network assets. When the bucket runs out of tokens, requests are throttled until tokens are replenished, allowing traffic to keep flowing without overwhelming the network environment.
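A minimal sketch of the token bucket idea in Python, assuming tokens refill continuously at a fixed rate; the capacity and refill rate shown are illustrative.

```python
# Minimal token bucket sketch: tokens refill at a fixed rate, each request costs one.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity          # maximum tokens the bucket can hold
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise the request is throttled."""
        now = time.monotonic()
        # Add tokens earned since the last check, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: bursts of up to 20 requests, refilling at 5 tokens per second.
bucket = TokenBucket(capacity=20, refill_rate=5)
```

Because the bucket can hold up to its full capacity, short bursts are tolerated while the long-term rate stays bounded by the refill rate.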
Leaky Bucket Algorithm
Stores a fixed amount of capacity, much like the Token Bucket Algorithm stores a fixed number of tokens. Incoming requests fill the bucket and drain out at a consistent rate, so networks and applications can continue to run smoothly and maintain availability. When the bucket is full, additional requests are throttled until enough capacity drains to become available again.
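A minimal sketch of the leaky bucket idea, assuming incoming requests fill a fixed-capacity bucket that drains at a constant rate; the capacity and drain rate are illustrative.

```python
# Minimal leaky bucket sketch: requests fill the bucket, which drains at a constant rate.
import time

class LeakyBucket:
    def __init__(self, capacity: int, leak_rate: float):
        self.capacity = capacity        # maximum queued requests
        self.leak_rate = leak_rate      # requests drained per second
        self.level = 0.0
        self.last_check = time.monotonic()

    def allow(self) -> bool:
        """Accept the request if the bucket has room; otherwise throttle it."""
        now = time.monotonic()
        # Drain the bucket at a constant rate since the last check.
        self.level = max(0.0, self.level - (now - self.last_check) * self.leak_rate)
        self.last_check = now
        if self.level < self.capacity:
            self.level += 1
            return True
        return False

# Example: the bucket holds 10 requests and drains 2 per second.
bucket = LeakyBucket(capacity=10, leak_rate=2)
```

Unlike the token bucket, the leaky bucket smooths traffic to a steady outflow rather than permitting bursts.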
Fixed Window Algorithm
Allows requests through based on fixed time intervals, or windows. Once a defined number of requests pass through during a single window, requests are throttled until the next window begins.
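A minimal fixed window sketch, assuming a single counter that resets at the start of each interval; the limit and window length are illustrative.

```python
# Minimal fixed window sketch: a counter that resets at the start of each interval.
import time

class FixedWindowLimiter:
    def __init__(self, max_requests: int, window_seconds: int):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        """Allow requests until the per-window limit is reached."""
        now = time.monotonic()
        if now - self.window_start >= self.window_seconds:
            # A new window has begun; reset the counter.
            self.window_start = now
            self.count = 0
        if self.count < self.max_requests:
            self.count += 1
            return True
        return False

# Example: at most 1,000 requests per 60-second window.
limiter = FixedWindowLimiter(max_requests=1000, window_seconds=60)
```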
Sliding Window Log Algorithm
This algorithm essentially combines aspects of the Fixed Window and Leaky Bucket algorithms, throttling requests by dividing time into overlapping windows, each allowing a set number of requests in a set time period. The sliding window algorithm allows more flexibility than the fixed window algorithm because the size and duration of the windows can be adjusted based on the rate at which requests arrive.
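A minimal sliding window log sketch, assuming every recent request timestamp is retained and counted over a window that moves with the clock; the limit and window length are illustrative.

```python
# Minimal sliding window log sketch: count timestamps inside a rolling window.
import time
from collections import deque

class SlidingWindowLogLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.log = deque()  # timestamps of recent requests

    def allow(self) -> bool:
        """Allow the request if fewer than max_requests fall inside the window."""
        now = time.monotonic()
        # Evict timestamps that have slid out of the window.
        while self.log and now - self.log[0] > self.window_seconds:
            self.log.popleft()
        if len(self.log) < self.max_requests:
            self.log.append(now)
            return True
        return False

# Example: at most 100 requests in any rolling 60-second span.
limiter = SlidingWindowLogLimiter(max_requests=100, window_seconds=60)
```

Keeping the full log is memory-hungry at high request rates, which is why counter-based sliding window variants are often preferred in practice.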
Rate Limiting Challenges
There are several challenges that arise when deploying rate limiting on a network. These include:
- False Positives: The algorithms may automatically block legitimate traffic when an acceptable traffic rate is exceeded, leaving legitimate users unable to access the network. This can be combated by tuning the algorithms to avoid these false positives.
- Traffic Bursts: Bursts of legitimate traffic, often driven by events or seasonality, can trigger the algorithm to block traffic, good or bad.
- Identifying the Correct Rate Limit: Choosing the right limit can be difficult. A miscalibrated rate limit can block legitimate traffic too often or let malicious users through. Teams need to monitor and adjust the rate limit over time to dial in the correct settings.
Implementing Rate Limiting Techniques
When implementing rate limiting, several aspects must be taken into account. First, audit the network and applications to determine their specific needs; once you identify those needs, you can tailor a solution to secure your network. Next, choose the right algorithm and its limits (several common algorithms are outlined above) so requests are filtered appropriately. With those core elements in place, ongoing monitoring is key to honing the configuration and keeping the network secure over time. It likely won't be perfect right away, so continual adjustment is essential to getting the best DDoS protection from rate limiting.
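As a rough illustration of how these pieces fit together, the sketch below keeps per-endpoint limits in a plain config dict and logs every throttling decision so the limits can be reviewed and retuned as monitoring data comes in. The endpoints, limits, and one-minute window are illustrative assumptions, not recommendations.

```python
# Minimal sketch: per-endpoint limits kept in a config dict so they can be tuned,
# with throttling decisions logged to support ongoing monitoring and adjustment.
import logging
import time
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rate-limit")

# Limits chosen after auditing each endpoint; expected to be adjusted over time.
LIMITS = {
    "/login":  5,     # requests per minute: tight, a common abuse target
    "/search": 300,   # requests per minute: looser, read-heavy traffic
}

window_start = defaultdict(time.monotonic)
counts = defaultdict(int)

def handle_request(path: str, client_ip: str) -> int:
    """Return 200 if the request is served, 429 if it is throttled."""
    limit = LIMITS.get(path)
    if limit is None:
        return 200  # no limit configured for this endpoint
    now = time.monotonic()
    if now - window_start[path] >= 60:
        # Start a new one-minute window for this endpoint.
        window_start[path] = now
        counts[path] = 0
    counts[path] += 1
    if counts[path] > limit:
        # Log throttling so operators can spot false positives and retune limits.
        log.info("throttled %s on %s (limit %d/min)", client_ip, path, limit)
        return 429
    return 200
```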