The 429 error—"Too Many Requests"—shows up when something sends your site too many requests in a short amount of time. At first, it might seem like a small issue or just your server trying to manage traffic.
But in many cases, it's not a rush of real visitors causing the problem—it's bots. Some are helpful, like Googlebot. Others, like scrapers or aggressive tools, can overload your site without meaning to. And sometimes, the culprit isn't external at all—it's your own software or monitoring systems triggering the error.
A 429 error is your server's way of saying:
"You're sending too many requests too quickly. Back off for a bit."
This response is usually tied to rate limiting, a method websites and APIs use to control how many requests a single client (like a browser, crawler or script) can send over a period of time.
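To make that concrete, here is a minimal sketch of a fixed-window rate limiter in Python. The 60-requests-per-minute threshold and the in-memory counter are illustrative assumptions, not settings any particular server uses.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # length of each counting window (assumed value)
MAX_REQUESTS = 60     # requests allowed per client per window (assumed value)

# Request counts keyed by (client identifier, window number)
_counters = defaultdict(int)

def allow_request(client_id: str) -> bool:
    """Return True if the client is still under its limit, False if it should get a 429."""
    window = int(time.time()) // WINDOW_SECONDS
    _counters[(client_id, window)] += 1
    return _counters[(client_id, window)] <= MAX_REQUESTS

# Example: the 61st request inside a single window is rejected
results = [allow_request("203.0.113.10") for _ in range(61)]
print(results[-1])  # False once the limit is exceeded
```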
While a sudden influx of traffic can come from a surge in real users, it's more often the result of automated activity. These bots and tools aren't necessarily malicious, as much of the internet depends on them to handle repetitive tasks without human input. But when they send too many requests too fast, they can unwittingly trigger a 429 error.
It's easy to assume the spike is from a traffic surge or even malicious activity. But in many cases, the cause is automated: search engine crawlers, SEO and audit tools, scrapers, or your own internal scripts and monitoring services.
The bottom line: these aren't people browsing your site—they're automated processes. Some are helpful, some aren't, but either way, they can overload your infrastructure, especially if your server isn't built to handle sudden spikes like those that happen during DDoS attacks.
Before you make changes to your site's rate limits or firewall settings, it helps to know exactly what's causing the problem.
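A quick way to find that out is to tally requests per client and per user agent in your access log. The sketch below is a rough example that assumes the common "combined" log format and a hypothetical log path; adjust both for your server.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # assumed path; change for your setup

# Rough pattern for the "combined" log format: IP ... "request" status size "referer" "user-agent"
LINE_RE = re.compile(r'^(\S+) .*?"[A-Z]+ \S+ \S+" \d{3} \S+ "[^"]*" "([^"]*)"')

ips, agents = Counter(), Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.match(line)
        if match:
            ips[match.group(1)] += 1
            agents[match.group(2)] += 1

print("Top clients by IP:")
for ip, count in ips.most_common(10):
    print(f"  {ip}: {count}")

print("Top user agents:")
for agent, count in agents.most_common(10):
    print(f"  {agent[:60]}: {count}")
```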
Once a pattern emerges, you can decide whether the traffic is good (e.g., Googlebot) or needs to be blocked or slowed down.
Rate limiting helps keep your site from getting overloaded, but if it's too aggressive, it can block useful traffic too. The right configuration can prevent abuse without blocking legitimate traffic.
At the end of the day, it's a balancing act: if your rate limits are too tight, you may block legitimate bots or prevent users from accessing your site. If they're too loose, bad bots can eat up resources or worse.
Search engines and trusted SEO tools are essential for visibility and performance. You want to allow them in—but in a controlled way.
This way, search bots can do their job without overwhelming your infrastructure.
Some bots are clearly abusive. They're not interested in indexing your content—they're trying to scrape it, copy it, or look for vulnerabilities. These need to be blocked or managed more aggressively.
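One common way to tell a genuine search-engine crawler from a bot that merely claims to be one is a reverse-then-forward DNS check. The sketch below shows the idea for Googlebot; it's a starting point under those assumptions, not a complete verification service.

```python
import socket

def is_genuine_googlebot(ip: str) -> bool:
    """Reverse-resolve the IP, check the hostname, then forward-resolve to confirm it matches."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # reverse DNS lookup
    except OSError:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward lookup must include the IP
    except OSError:
        return False

# A client claiming to be Googlebot from an unrelated address fails the check
print(is_genuine_googlebot("203.0.113.50"))  # False for this documentation-range IP
```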
It's easy to focus on external traffic when dealing with 429 errors—but some of the worst offenders might be tools you or your team set up. Internal scripts, SEO audits, uptime monitors, or dashboards can flood your site with requests just as easily as third-party bots.
The difference? You have full control over these.
Even tools that are designed to help can cause problems when misconfigured:
SEO Crawlers (like Screaming Frog, SEMRush and Ahrefs)
These tools crawl your entire site to audit metadata, links, and technical health.
If set to use high concurrency (e.g., 10+ threads) and no crawl delay, they can overwhelm your server, especially on shared or lower-spec environments.
Custom Scripts or Internal Bots
You might have scripts querying your own API endpoints for data analysis, testing, or staging purposes.
If they don't include limits, delays, or caching, they can hammer your application unintentionally—sometimes running every minute via cron.
Site Monitoring Tools
Tools that check uptime, response times, or page performance can be noisy if they're set to check too frequently.
Checking your homepage every 15 seconds might seem harmless—but multiply that by multiple regions or services and it adds up quickly.
The good news is that internal traffic is the easiest to fix—because you control the behavior.
Lower Crawl Speed and Concurrency
In tools like Screaming Frog:
Even dropping from 10 threads to 2 can drastically cut down server strain without losing functionality.
Use Caching Wherever Possible
This reduces the need to repeatedly hit your application for the same results.
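As a rough illustration, here is a small time-based cache for a script that polls the same endpoint repeatedly. The five-minute TTL and the URL are assumptions; swap in whatever your script actually calls.

```python
import time
import urllib.request

CACHE_TTL = 300   # seconds to reuse a cached response (assumed value)
_cache = {}       # url -> (fetched_at, body)

def fetch_cached(url: str) -> bytes:
    """Return a cached response if it is still fresh; otherwise fetch and store it."""
    now = time.time()
    hit = _cache.get(url)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]
    with urllib.request.urlopen(url) as response:
        body = response.read()
    _cache[url] = (now, body)
    return body

# Repeated calls inside the TTL are served from memory instead of hitting the server again
first = fetch_cached("https://example.com/")   # stand-in URL
second = fetch_cached("https://example.com/")  # cache hit, no second request
```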
Run Audits and Scans During Low-Traffic Hours
If your site is global, consider splitting audits across regions or time windows.
Build Retry Logic Into Scripts
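Here is a hedged sketch of what that retry logic might look like: back off exponentially on a 429, and honor the Retry-After header when the server provides one. The URL and attempt count are placeholders.

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, max_attempts: int = 5) -> bytes:
    """Fetch a URL, waiting and retrying whenever the server answers 429."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts:
                raise
            # Prefer the server's Retry-After hint; otherwise fall back to exponential backoff
            retry_after = err.headers.get("Retry-After")
            time.sleep(float(retry_after) if retry_after and retry_after.isdigit() else delay)
            delay *= 2
    raise RuntimeError("unreachable")

data = fetch_with_retries("https://example.com/")   # stand-in URL
```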
Document and Review Your Own Jobs
Once you've tracked down and stopped what's causing the 429 errors, it's smart to think ahead. Fixing the current issue is only part of the work—now it's time to prevent the same problem from showing up again.
Here are some practical steps to help keep things stable over the long haul:
If your server is returning a 429, it's a good idea to include a Retry-After header in the response. This tells bots and automated tools how long to wait before trying again.
It won't stop scrapers or abusive tools that ignore headers, but it does give legitimate services a way to back off automatically without causing further issues.
It can be set at the web server, API gateway, or application level, wherever your 429 responses are generated.
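As an application-level illustration, here is a minimal sketch that returns a 429 with a Retry-After header, using Flask purely as an example framework. The 60-second value and the over_limit() check are placeholders for whatever rate-limiting logic you already have.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def over_limit() -> bool:
    """Placeholder for your real rate-limiting check."""
    return True   # pretend the client has exceeded its limit

@app.route("/api/data")
def data():
    if over_limit():
        response = jsonify(error="We're getting more requests than expected. Please retry shortly.")
        response.status_code = 429
        response.headers["Retry-After"] = "60"   # seconds the client should wait (assumed value)
        return response
    return jsonify(result="ok")
```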
Don't wait for things to break. A little visibility goes a long way.
Server logs, analytics dashboards, and uptime monitors can all help you spot unusual request volume before it becomes a problem.
Rate limits aren't "set it and forget it." As your traffic increases, content changes, or your infrastructure evolves, the thresholds you set earlier might become too aggressive—or too relaxed.
Review your rate-limiting policies regularly:
You might need to increase the limit on some paths or reduce it on others. You can also experiment with using a sliding window algorithm instead of a fixed window to avoid sudden cutoffs.
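For comparison with a fixed window, here is a simple sliding-window check in Python; the limits are again illustrative. Because it counts requests over a rolling period, a burst right at a window boundary can't slip through at double the intended rate.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # rolling look-back period (assumed value)
MAX_REQUESTS = 60     # requests allowed within any rolling window (assumed value)

# Timestamps of each client's recent requests
_history = defaultdict(deque)

def allow_request(client_id: str) -> bool:
    """Allow the request only if the client stays under the limit over the last WINDOW_SECONDS."""
    now = time.time()
    timestamps = _history[client_id]
    # Drop entries that have aged out of the rolling window
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS:
        return False   # this is where a 429 would be returned
    timestamps.append(now)
    return True
```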
Tip for teams: Document your rate limits and who they affect. That makes it easier to debug issues when they pop up later.
A good Content Delivery Network does more than just cache content—it can also help filter or throttle unwanted traffic before it even reaches your server.
Most major CDNs (like Cloudflare, Fastly, or Akamai) offer tools such as bot management, rate limiting at the edge, and web application firewall rules.
Offloading this traffic before it hits your origin server helps reduce load, cut down on bandwidth costs, and prevent issues like 429s from happening in the first place.
If you're already using a CDN, take some time to explore its security or bot protection settings—you might already have the tools you need and just need to turn them on.
If you're returning a 429 error, don't serve a blank screen. Add a short explanation and a friendly message. For example:
"We're getting more requests than expected. If you're using an automated tool, try again in a few minutes."
This helps developers and SEO teams understand what happened and adjust accordingly. You can even include a link to documentation or your site's robots.txt if that applies.
A 429 error doesn't always mean your site is overloaded—it often means someone or something is being too pushy.
By learning to track, identify, and manage these requests, you can reduce problems, protect your resources, and make sure your site remains available to the people—and bots—you actually want to serve.
Written by Hostwinds Team / July 16, 2025