Traffic Throttling
Smooth Bursty Traffic Before It Hits Your APIs
Event-driven systems produce bursty traffic. A batch job kicks off, 500 requests fire in a second, your downstream API returns 429s for the next two minutes. Throttling spreads that load over time — the same work gets done, just without the spike that triggers the rate limiter.
What Causes Burst Traffic
The traffic itself is usually fine — the shape is the problem. Common sources:
- Batch jobs that wake up and process a backlog all at once
- Cron jobs triggering downstream API calls for every record
- Webhook fan-out sending notifications to many endpoints simultaneously
- User actions that fan out into multiple dependent API calls
- Import jobs that need to enrich or validate each row via an external API
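Bursts fail for a predictable reason: the receiving API counts requests per window, and a batch that lands inside a single window blows straight through the limit. A toy fixed-window limiter makes the shape problem concrete (a hypothetical sketch, not any specific provider's implementation — the class name and 10-per-second limit are illustrative):

```python
class FixedWindowLimiter:
    """Toy server-side limiter: allow `limit` requests per 1-second window.

    Hypothetical sketch of why bursts trip rate limiters -- not how any
    particular API actually implements its limiting.
    """

    def __init__(self, limit: int):
        self.limit = limit
        self.window_start = 0.0
        self.count = 0

    def allow(self, now: float) -> bool:
        # New window: reset the counter.
        if now - self.window_start >= 1.0:
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.limit

limiter = FixedWindowLimiter(limit=10)
# A batch job fires 500 requests in the same instant:
results = [limiter.allow(now=0.0) for _ in range(500)]
print(sum(results), "accepted,", 500 - sum(results), "rejected with 429")
# → 10 accepted, 490 rejected with 429
```

The total volume is nothing — 500 requests is well under the limiter's hourly budget. Only the shape kills it.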
Sub-Second Windowing
For finer-grained throttling, use sub-second windows. Instead of 100 requests/minute, configure 2 requests/second. Same throughput rate, but spread evenly across each second rather than potentially all hitting in the first second of the minute.
import ratequeue.aio as rq

# Configure: 2 req/s window on the resource dashboard
# Then acquire normally — RateQueue handles the pacing
async with rq.acquire("sendgrid-email", api_key=RATEQUEUE_API_KEY):
    await send_email(recipient, content)

The window configuration lives on the resource, not in your code. Update the limit on the dashboard and it takes effect instantly — no deployment required.
Before and After
Before RateQueue
500 emails triggered → 500 requests in 1 second → 490 bounced with 429s → retry storm → eventual success with massive latency and noisy logs.
After RateQueue
500 emails triggered → queued → sent at 2/second → 250 seconds to complete, zero errors, zero noise. Same outcome, predictable shape.
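RateQueue applies this pacing server-side, so the mechanics are opaque to your client. But the effect is easy to approximate locally: space calls one interval apart instead of firing them together. A minimal asyncio sketch of even pacing (an illustration of the idea, not RateQueue's implementation; `send_one` is a stand-in for your own coroutine):

```python
import asyncio
import time

async def paced(jobs, per_second: float):
    """Run zero-arg coroutine factories sequentially, spaced 1/per_second apart.

    Client-side sketch of even pacing -- RateQueue does the equivalent
    server-side, so this only illustrates the shape of the output traffic.
    """
    interval = 1.0 / per_second
    results = []
    for job in jobs:
        start = time.monotonic()
        results.append(await job())
        # Sleep off the remainder of this slot so calls stay evenly spaced.
        elapsed = time.monotonic() - start
        await asyncio.sleep(max(0.0, interval - elapsed))
    return results

async def main():
    async def send_one():
        return "sent"  # stand-in for a real API call

    # 50/s keeps the demo fast; 2/s gives the 250-second profile above.
    out = await paced([send_one] * 5, per_second=50)
    print(len(out), "sent, evenly spaced")

asyncio.run(main())
```

The work arrives at the downstream API as a flat line instead of a spike, which is exactly the difference between the two scenarios above.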
Works With Any Throughput Target
Your resource limits are yours to define. RateQueue supports any window shape or throughput target — 100 requests per minute, 2 per second, or anything in between. You're not tied to any particular algorithm: define the limit you need, and RateQueue enforces it.
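To see why "window shape" is a separate knob from throughput, consider a token bucket, one common way to express a limit: the refill rate sets sustained throughput while the capacity sets how much burst headroom is allowed. A generic sketch (this is not a claim about RateQueue's internal algorithm; the numbers are illustrative):

```python
class TokenBucket:
    """Token bucket: `rate` tokens refill per second, `capacity` caps bursts.

    Generic sketch of one way to express a throughput target --
    not how RateQueue enforces limits internally.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# 2 req/s sustained, with burst headroom for 5:
bucket = TokenBucket(rate=2.0, capacity=5.0)
burst = [bucket.allow(now=0.0) for _ in range(10)]
print(sum(burst), "of 10 simultaneous requests accepted")
# → 5 of 10 simultaneous requests accepted
```

Same 2/s sustained rate as the strict window above, but a different shape: short bursts pass immediately, and only sustained overload gets paced.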
Turn spikes into steady throughput
Sign up free and wrap your batch jobs. Define the throughput you want and RateQueue handles the pacing.