Distributed Locking

Distributed Locking, Already Managed

Race conditions in distributed systems are among the most frustrating bugs to debug: two workers read the same record, both reads succeed, and both write back conflicting state. The fix is a distributed lock: before touching shared state, a worker acquires exclusive access. RateQueue can be that lock.

How a Capacity-1 Resource Works as a Lock

A resource with capacity=1 and request scope means only one request can be active at a time. The others queue up. When the active request finishes — the context exits — the next in line acquires it. That's a lock, with queuing instead of rejection.

import ratequeue.aio as rq

# Only one worker can process this user's data at a time
async with rq.acquire(
    f"user-update-{user_id}",
    api_key=RATEQUEUE_API_KEY
):
    user = await db.fetch_user(user_id)
    updated = apply_changes(user, changes)
    await db.save_user(updated)
# Lock released automatically

Why This Is Better Than Redis SETNX

Redis-based locks can fail in awkward ways: a process dies while holding the lock, and the rest of your system blocks until the TTL finally expires. Set the TTL too long and a crashed holder stalls everyone; set it too short and the lock is released while the work is still in progress. You need careful TTL tuning, heartbeat refreshing, and fallback logic.

RateQueue releases slots on context exit, whether the code completes normally or raises an exception. If the process crashes before it can release, request expiry cleans up the stale request automatically. No TTL tuning, no heartbeat threads.
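The release-on-exit behavior is ordinary async context-manager semantics. Here is a minimal in-memory sketch (a toy `SlotLock` class, not the RateQueue client) showing that the slot is freed even when the body raises:

```python
import asyncio

class SlotLock:
    """Toy capacity-1 lock illustrating release-on-context-exit.

    An in-memory stand-in for illustration only, not the RateQueue client.
    """

    def __init__(self):
        self._lock = asyncio.Lock()

    async def __aenter__(self):
        await self._lock.acquire()

    async def __aexit__(self, exc_type, exc, tb):
        # __aexit__ runs whether the body returned or raised,
        # so the slot is always released on context exit.
        self._lock.release()

async def main():
    lock = SlotLock()
    try:
        async with lock:
            raise RuntimeError("worker failed mid-update")
    except RuntimeError:
        pass
    # The lock was released despite the exception, so a second
    # acquisition succeeds immediately instead of deadlocking.
    async with lock:
        print("reacquired")

asyncio.run(main())
```

The one case a context manager cannot cover is the process dying outright, which is exactly the gap request expiry fills.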

Queued Locks vs Mutex Locks

Redis locks reject new callers while the lock is held; each caller has to retry and poll until it eventually succeeds. RateQueue queues them instead: they wait and get served when the lock frees up. This is usually what you actually want: don't drop the work, serialize it.

import { ratequeue } from "@ratequeue/sdk";

// Serialize writes to this external service
await ratequeue.acquire(
  `billing-update-${customerId}`,
  { apiKey: process.env.RATEQUEUE_API_KEY! },
  async () => {
    await updateBillingRecord(customerId, data);
  }
);
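The queue-instead-of-reject behavior can be pictured locally with an `asyncio.Lock`, which also parks waiters rather than failing them. A hypothetical sketch (the names here are illustrative, not RateQueue APIs):

```python
import asyncio

async def serialized_update(lock: asyncio.Lock, worker: int, log: list[int]):
    # Waiters queue on the lock instead of erroring out,
    # so no work is dropped; it is serialized.
    async with lock:
        log.append(worker)          # critical section
        await asyncio.sleep(0.01)   # simulate the write

async def main() -> list[int]:
    lock = asyncio.Lock()
    log: list[int] = []
    # All five workers contend for the same lock; each runs
    # exactly once, one at a time.
    await asyncio.gather(*(serialized_update(lock, i, log) for i in range(5)))
    return log

print(asyncio.run(main()))
```

None of the five workers is rejected or retried; they simply run back to back.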

Named Resources = Granular Locks

Resources are named strings, so you can create a unique lock per entity. Two workers updating different users don't block each other — only workers touching the same entity are serialized. Fine-grained locking without the overhead of a Redis cluster.

user-update-${userId}
invoice-process-${invoiceId}
order-fulfill-${orderId}
account-debit-${accountId}

Each unique resource name is its own independent lock. Resources are created on first use — no pre-provisioning required.
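Per-entity granularity amounts to a map from resource name to an independent lock, created lazily on first use. A toy in-memory sketch of that idea (again, an illustration rather than the real client):

```python
import asyncio
from collections import defaultdict

# One independent lock per resource name, created on first use,
# mirroring how named resources give per-entity locks.
locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def update_user(user_id: str, log: list[str]):
    async with locks[f"user-update-{user_id}"]:
        log.append(user_id)
        await asyncio.sleep(0.01)

async def main() -> list[str]:
    log: list[str] = []
    # "alice" and "bob" use different locks and don't block each
    # other; the two "alice" updates share a lock and serialize.
    await asyncio.gather(
        update_user("alice", log),
        update_user("bob", log),
        update_user("alice", log),
    )
    return log

print(asyncio.run(main()))
```

The `defaultdict` stands in for RateQueue's create-on-first-use behavior: no lock exists until someone names it.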

Distributed locks without the infrastructure

No Redis cluster, no ZooKeeper, no custom lock management. Sign up free and wrap your first critical section.