Retry Decorator with Exponential Backoff
A reusable decorator that adds exponential backoff retry logic to any function without modifying its code.
import time
import random
from functools import wraps

def retry(max_attempts=3, base_delay=1.0, exceptions=(Exception,)):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions as e:
                    last_error = e
                    if attempt == max_attempts:
                        break
                    # Exponential backoff with jitter
                    delay = base_delay * (2 ** (attempt - 1))
                    jitter = delay * 0.25 * random.uniform(-1, 1)
                    time.sleep(delay + jitter)
            raise last_error
        return wrapper
    return decorator
Let us trace what happens when a decorated function fails twice and then succeeds:
@retry(max_attempts=3, base_delay=0.5)
def fetch_data(url): ...
Call fetch_data("https://api.example.com/data")
Attempt 1: fn() → raises ConnectionError
→ attempt 1 < 3, so continue
→ delay = 0.5 * (2^0) = 0.5s + jitter → sleep ~0.5s
Attempt 2: fn() → raises Timeout
→ attempt 2 < 3, so continue
→ delay = 0.5 * (2^1) = 1.0s + jitter → sleep ~1.0s
Attempt 3: fn() → returns {"status": "ok"}
→ return result immediately
Total wait: ~1.5s
The caller never knows retries happened — same interface, built-in resilience
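To make the trace concrete, here is a minimal, self-contained sketch you can run against the decorator defined above. The flaky_fetch function and its hard-coded failure pattern are purely illustrative stand-ins for a real network call:

import time

call_count = 0

@retry(max_attempts=3, base_delay=0.5, exceptions=(ConnectionError, TimeoutError))
def flaky_fetch(url):
    # Fails on the first two calls, succeeds on the third.
    global call_count
    call_count += 1
    if call_count == 1:
        raise ConnectionError("connection refused")
    if call_count == 2:
        raise TimeoutError("request timed out")
    return {"status": "ok"}

start = time.time()
print(flaky_fetch("https://api.example.com/data"))   # {'status': 'ok'}
print(f"waited ~{time.time() - start:.1f}s")          # roughly 1.5s, plus jitter

The two failed attempts cost about 0.5s + 1.0s of sleep, matching the trace above.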
The @wraps(fn) decorator preserves the original function name, docstring, and signature. Without it, debugging becomes confusing because every retried function would appear as "wrapper" in stack traces. The exceptions parameter lets you control exactly which errors trigger a retry — you do not want to retry a ValueError from bad input, only transient failures like network errors.
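As a quick illustration (the functions here are just dummies), you can inspect the preserved metadata directly, and confirm that a non-matching exception escapes on the first call with no sleeps:

@retry(max_attempts=3)
def fetch_data(url):
    """Fetch data from a URL."""
    ...

print(fetch_data.__name__)   # 'fetch_data'; without @wraps this would print 'wrapper'
print(fetch_data.__doc__)    # 'Fetch data from a URL.'

@retry(max_attempts=3, exceptions=(ConnectionError,))
def parse(payload):
    raise ValueError("bad input")

# parse({})  # ValueError is not in `exceptions`, so it propagates immediately with no retries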
Without decorator (retry logic mixed into business code):
def fetch_data(url):
    for attempt in range(3):
        try:
            return requests.get(url)
        except Exception:
            time.sleep(...)
Every function repeats this pattern → duplication everywhere
With decorator (retry logic separated):
@retry(max_attempts=3, exceptions=(requests.ConnectionError, requests.Timeout))
def fetch_data(url):
    return requests.get(url)
Business logic stays clean → retry is a reusable concern
import requests
# Retry only on network-related errors
@retry(max_attempts=4, base_delay=2.0, exceptions=(
    requests.ConnectionError,
    requests.Timeout,
))
def call_api(endpoint, payload):
    response = requests.post(endpoint, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
# Retry database connections
import psycopg2
from psycopg2 import OperationalError

@retry(max_attempts=3, base_delay=1.0, exceptions=(OperationalError,))
def get_db_connection():
    return psycopg2.connect(host="db.internal", dbname="app")
# Use normally — retries are invisible to the caller
data = call_api("/api/ingest", {"records": batch})
conn = get_db_connection()
Use this decorator for any function that interacts with external systems: APIs, databases, file servers, message queues. Specify the exact exception types to retry — broad Exception catches will mask bugs. For production systems, consider adding a logging call inside the except block so retries are visible in your observability pipeline.
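For instance, a logging-aware variant of the decorator might look like this; the logger name and message format are one possible choice, not part of the version above:

import logging
import random
import time
from functools import wraps

logger = logging.getLogger(__name__)

def retry(max_attempts=3, base_delay=1.0, exceptions=(Exception,)):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions as e:
                    last_error = e
                    if attempt == max_attempts:
                        break
                    delay = base_delay * (2 ** (attempt - 1))
                    jitter = delay * 0.25 * random.uniform(-1, 1)
                    # Surface each retry so it shows up in logs and metrics.
                    logger.warning(
                        "Retrying %s (attempt %d/%d) after %s; sleeping %.2fs",
                        fn.__name__, attempt, max_attempts, e, delay + jitter,
                    )
                    time.sleep(delay + jitter)
            raise last_error
        return wrapper
    return decorator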