Rate Limiting
Stay within Omise API rate limits and build efficient integrations. Learn about rate limit headers, handle 429 errors gracefully, and optimize your request patterns.
Overview
To ensure reliable service for all merchants, Omise implements rate limiting on API requests. Rate limits prevent any single integration from overwhelming the API and ensure fair resource allocation. Understanding and respecting these limits is essential for building robust payment integrations.
- Default Limit: 1,000 requests per minute per API key
- Monitor X-RateLimit-* headers in responses
- Handle HTTP 429 with exponential backoff
- Implement request queuing for high-volume operations
- Cache responses when appropriate
Rate Limit Details
Current Limits
| Limit Type | Value | Scope |
|---|---|---|
| Standard Rate Limit | 1,000 requests/minute | Per API key |
| Burst Allowance | ~100 requests | Short bursts allowed |
| Reset Period | 60 seconds | Rolling window |
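The reset period is a rolling 60-second window rather than a fixed clock minute: each request ages out 60 seconds after it was made. Below is a minimal Python sketch of the same accounting on the client side (illustrative only; the server's counters are authoritative, and the RollingWindowLimiter class is our own construction):
# Python - client-side rolling-window counter (illustrative sketch)
import time
from collections import deque

class RollingWindowLimiter:
    def __init__(self, max_requests=1000, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()  # monotonic times of recent requests

    def acquire(self):
        """Block until a request is allowed under the rolling window."""
        now = time.monotonic()
        # Drop requests that have aged out of the 60-second window
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window, then re-check
            time.sleep(self.window_seconds - (now - self.timestamps[0]))
            return self.acquire()
        self.timestamps.append(time.monotonic())

limiter = RollingWindowLimiter()
limiter.acquire()  # call before each API request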
What Counts Toward Limits
✅ Counted:
- All API requests (GET, POST, PATCH, DELETE)
- Successful requests (2xx responses)
- Failed requests (4xx, 5xx responses)
- Authentication failures
❌ Not Counted:
- Requests blocked before reaching API (invalid URLs)
- Static asset requests
- Dashboard access
- Webhook deliveries from Omise
Rate Limit Headers
Every API response includes rate limit information in headers:
Response Headers
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1612137600
Header Descriptions
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed in window | 1000 |
| X-RateLimit-Remaining | Requests remaining in current window | 995 |
| X-RateLimit-Reset | Unix timestamp when limit resets | 1612137600 |
Reading Headers in Code
# Ruby - Check rate limit headers
require 'omise'
Omise.api_key = ENV['OMISE_SECRET_KEY']
response = Omise::Charge.retrieve('chrg_test_...')
# Access headers (library-specific; depends on how your client exposes the raw response)
limit = response.http_headers['X-RateLimit-Limit']
remaining = response.http_headers['X-RateLimit-Remaining']
reset = response.http_headers['X-RateLimit-Reset']
puts "Rate limit: #{remaining}/#{limit}"
puts "Resets at: #{Time.at(reset.to_i)}"
# Python - Check rate limit headers
import os
import omise
from datetime import datetime

omise.api_secret = os.environ['OMISE_SECRET_KEY']
charge = omise.Charge.retrieve('chrg_test_...')
# Access headers (library-specific)
headers = charge.response_headers
limit = headers.get('X-RateLimit-Limit')
remaining = headers.get('X-RateLimit-Remaining')
reset_timestamp = int(headers.get('X-RateLimit-Reset', 0))
print(f"Rate limit: {remaining}/{limit}")
print(f"Resets at: {datetime.fromtimestamp(reset_timestamp)}")
// Node.js - Check rate limit headers
const omise = require('omise')({
  secretKey: process.env.OMISE_SECRET_KEY
});

try {
  const charge = await omise.charges.retrieve('chrg_test_...');
  // Headers available in response (library-specific)
  const headers = charge._response.headers;
  const limit = headers['x-ratelimit-limit'];
  const remaining = headers['x-ratelimit-remaining'];
  const reset = headers['x-ratelimit-reset'];
  console.log(`Rate limit: ${remaining}/${limit}`);
  console.log(`Resets at: ${new Date(reset * 1000)}`);
} catch (error) {
  console.error('Request failed:', error);
}
HTTP 429 Response
When you exceed the rate limit, the API returns HTTP 429 Too Many Requests:
429 Response Format
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1612137660
Retry-After: 60
{
  "object": "error",
  "location": "https://www.omise.co/api-errors#rate-limit-exceeded",
  "code": "rate_limit_exceeded",
  "message": "too many requests, please try again later"
}
Response Fields
| Field | Description |
|---|---|
| code | "rate_limit_exceeded" |
| message | Human-readable error message |
| Retry-After | Seconds to wait before retrying (sent as a response header, not in the body) |
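These fields are visible to any HTTP client, not just the official SDKs. A minimal sketch with Python's requests library (assuming, per the API's basic-auth convention, the secret key as username and an empty password):
# Python - inspect a 429 on a raw API call (sketch; not an official client)
import os
import requests

resp = requests.get(
    'https://api.omise.co/charges',
    auth=(os.environ['OMISE_SECRET_KEY'], ''),
)
if resp.status_code == 429:
    # Prefer the server's hint; fall back to a 60-second wait
    retry_after = int(resp.headers.get('Retry-After', 60))
    print(f"Rate limited: {resp.json()['message']}")
    print(f"Retry in {retry_after}s")
else:
    print(f"Remaining: {resp.headers.get('X-RateLimit-Remaining')}")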
Handling Rate Limits
Strategy 1: Exponential Backoff (Recommended)
Retry with increasing delays:
# Ruby - Exponential backoff
require 'omise'

def create_charge_with_backoff(params, max_attempts: 5)
  attempt = 0
  begin
    attempt += 1
    Omise::Charge.create(params)
  rescue Omise::Error => e
    if e.code == 'rate_limit_exceeded' && attempt < max_attempts
      # Calculate backoff delay: 1s, 2s, 4s, 8s (the final attempt raises instead of retrying)
      delay = 2 ** (attempt - 1)
      # Add jitter (randomness) to prevent thundering herd
      jitter = rand(0..delay * 0.1)
      sleep(delay + jitter)
      retry
    else
      raise
    end
  end
end

# Usage
charge = create_charge_with_backoff(
  amount: 100000,
  currency: 'thb',
  card: token
)
# Python - Exponential backoff with decorator
import time
import random
import omise
from functools import wraps

def exponential_backoff(max_attempts=5, base_delay=1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except omise.errors.BaseError as e:
                    if e.code != 'rate_limit_exceeded':
                        raise
                    if attempt == max_attempts - 1:
                        raise
                    # Calculate delay with jitter
                    delay = base_delay * (2 ** attempt)
                    jitter = random.uniform(0, delay * 0.1)
                    total_delay = delay + jitter
                    print(f"Rate limited. Retrying in {total_delay:.2f}s...")
                    time.sleep(total_delay)
            raise Exception("Max retry attempts exceeded")
        return wrapper
    return decorator

@exponential_backoff(max_attempts=5)
def create_charge(amount, currency, card):
    return omise.Charge.create(
        amount=amount,
        currency=currency,
        card=card
    )

# Usage
charge = create_charge(100000, 'thb', token)
// Node.js - Exponential backoff
async function createChargeWithBackoff(chargeData, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await omise.charges.create(chargeData);
    } catch (error) {
      if (error.code !== 'rate_limit_exceeded' || attempt === maxAttempts - 1) {
        throw error;
      }
      // Calculate delay with jitter
      const baseDelay = Math.pow(2, attempt) * 1000;
      const jitter = Math.random() * baseDelay * 0.1;
      const delay = baseDelay + jitter;
      console.log(`Rate limited. Retrying in ${(delay / 1000).toFixed(2)}s...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage
const charge = await createChargeWithBackoff({
  amount: 100000,
  currency: 'thb',
  card: token
});
Strategy 2: Respect Retry-After Header
Use the server's suggested retry time:
<?php
function createChargeWithRetryAfter($params, $maxAttempts = 5) {
    $attempt = 0;
    while ($attempt < $maxAttempts) {
        try {
            $attempt++;
            return OmiseCharge::create($params);
        } catch (Exception $e) {
            // Library-specific: check the error code however your client exposes it
            if ($e->getCode() !== 'rate_limit_exceeded' || $attempt >= $maxAttempts) {
                throw $e;
            }
            // Get the Retry-After header from the response (also library-specific)
            $retryAfter = $e->getResponse()->getHeader('Retry-After');
            $delay = $retryAfter ? (int)$retryAfter : 60;
            echo "Rate limited. Waiting {$delay} seconds...\n";
            sleep($delay);
        }
    }
    throw new Exception('Max retry attempts exceeded');
}

// Usage
$charge = createChargeWithRetryAfter([
    'amount' => 100000,
    'currency' => 'thb',
    'card' => $token
]);
Strategy 3: Request Queue
Queue requests to control rate:
// Node.js - Request queue with rate limiting
class RateLimitedQueue {
  constructor(requestsPerMinute = 1000) {
    this.queue = [];
    this.requestsPerMinute = requestsPerMinute;
    this.requestsThisMinute = 0;
    this.windowStart = Date.now();
  }

  async enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0) return;

    // Reset window if minute has passed
    const now = Date.now();
    if (now - this.windowStart >= 60000) {
      this.requestsThisMinute = 0;
      this.windowStart = now;
    }

    // Check if we can make a request
    if (this.requestsThisMinute >= this.requestsPerMinute) {
      // Wait until next window
      const waitTime = 60000 - (now - this.windowStart);
      setTimeout(() => this.processQueue(), waitTime);
      return;
    }

    // Process next request
    const { requestFn, resolve, reject } = this.queue.shift();
    this.requestsThisMinute++;
    try {
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      if (error.code === 'rate_limit_exceeded') {
        // Re-queue the request and wait before processing again
        this.queue.unshift({ requestFn, resolve, reject });
        setTimeout(() => this.processQueue(), 5000);
        return; // avoid scheduling a second pass below
      }
      reject(error);
    }

    // Process next in queue
    if (this.queue.length > 0) {
      // Small delay between requests
      setTimeout(() => this.processQueue(), 100);
    }
  }
}
// Usage
const queue = new RateLimitedQueue(1000);
async function createCharge(data) {
return queue.enqueue(() => omise.charges.create(data));
}
// Multiple requests queued automatically
const charge1 = await createCharge({ amount: 100000, currency: 'thb', card: token1 });
const charge2 = await createCharge({ amount: 50000, currency: 'thb', card: token2 });
Strategy 4: Batch Operations
Reduce requests by batching:
# Python - Batch charge retrieval
import time
import omise

def get_charges_batch(charge_ids, batch_size=100):
    """Retrieve multiple charges efficiently"""
    charges = []
    # Use the list endpoint instead of individual retrievals.
    # Caveat: the list endpoint returns pages of recent charges, so this
    # only matches IDs that fall within the pages you fetch.
    for i in range(0, len(charge_ids), batch_size):
        batch_ids = charge_ids[i:i+batch_size]
        # Single list request replaces up to 100 retrieve requests
        page = omise.Charge.list(limit=batch_size)
        # Filter to requested IDs
        batch_charges = [c for c in page['data'] if c.id in batch_ids]
        charges.extend(batch_charges)
        # Rate limit consideration
        time.sleep(0.1)
    return charges

# Bad: 1,000 requests
for charge_id in charge_ids:
    charge = omise.Charge.retrieve(charge_id)  # 1 request each

# Good: 10 requests
charges = get_charges_batch(charge_ids, batch_size=100)
Monitoring Rate Limits
Track Usage in Real-Time
# Ruby - Rate limit monitor
class RateLimitMonitor
  def initialize
    @limit = nil
    @remaining = nil
    @reset_at = nil
  end

  def track_response(response)
    headers = response.http_headers
    @limit = headers['X-RateLimit-Limit'].to_i
    @remaining = headers['X-RateLimit-Remaining'].to_i
    @reset_at = Time.at(headers['X-RateLimit-Reset'].to_i)

    # Log if getting close to limit
    usage_percent = ((@limit - @remaining).to_f / @limit * 100).round(2)
    if usage_percent > 80
      Rails.logger.warn(
        "Rate limit: #{usage_percent}% used (#{@remaining}/#{@limit} remaining)"
      )
    end

    # Alert if very close
    if usage_percent > 95
      alert_high_rate_limit_usage(usage_percent)
    end
  end

  def alert_high_rate_limit_usage(percent)
    # Send alert (email, Slack, PagerDuty, etc.)
    AlertService.notify(
      "⚠️ Rate limit usage: #{percent}%",
      "Only #{@remaining} requests remaining until #{@reset_at}"
    )
  end
end

# Usage in a request wrapper (pass the monitor in so it is in scope)
monitor = RateLimitMonitor.new

def make_request(monitor, &block)
  response = block.call
  monitor.track_response(response)
  response
end

charge = make_request(monitor) { Omise::Charge.retrieve('chrg_test_...') }
Dashboard Metrics
// Node.js - Log metrics to monitoring service
class MetricsCollector {
  constructor(metricsService) {
    this.metrics = metricsService;
  }

  trackRateLimit(headers) {
    const limit = parseInt(headers['x-ratelimit-limit']);
    const remaining = parseInt(headers['x-ratelimit-remaining']);
    const used = limit - remaining;
    const usagePercent = (used / limit) * 100;

    // Send to monitoring service (DataDog, CloudWatch, etc.)
    this.metrics.gauge('omise.rate_limit.remaining', remaining);
    this.metrics.gauge('omise.rate_limit.used', used);
    this.metrics.gauge('omise.rate_limit.usage_percent', usagePercent);

    // Trigger alert if high usage
    if (usagePercent > 90) {
      this.metrics.event('omise.rate_limit.high_usage', {
        alert_type: 'warning',
        text: `Omise rate limit at ${usagePercent.toFixed(2)}%`
      });
    }
  }

  trackRateLimitError() {
    this.metrics.increment('omise.rate_limit.exceeded');
  }
}

// Usage
const metrics = new MetricsCollector(datadogClient);

async function makeOmiseRequest(requestFn) {
  try {
    const response = await requestFn();
    // Track rate limit usage
    if (response._response && response._response.headers) {
      metrics.trackRateLimit(response._response.headers);
    }
    return response;
  } catch (error) {
    if (error.code === 'rate_limit_exceeded') {
      metrics.trackRateLimitError();
    }
    throw error;
  }
}
Optimization Strategies
1. Cache Responses
# Ruby - Cache with Redis
require 'redis'
require 'json'

class OmiseCache
  def initialize
    @redis = Redis.new
  end

  def get_charge(charge_id)
    cache_key = "charge:#{charge_id}"
    # Try cache first (cached entries come back as plain hashes)
    cached = @redis.get(cache_key)
    return JSON.parse(cached) if cached
    # Fetch from API
    charge = Omise::Charge.retrieve(charge_id)
    # Cache for 5 minutes
    @redis.setex(cache_key, 300, charge.to_json)
    charge
  end

  def get_customer(customer_id)
    cache_key = "customer:#{customer_id}"
    cached = @redis.get(cache_key)
    return JSON.parse(cached) if cached
    customer = Omise::Customer.retrieve(customer_id)
    @redis.setex(cache_key, 300, customer.to_json)
    customer
  end
end

cache = OmiseCache.new

# First call - hits API
charge = cache.get_charge('chrg_test_...')

# Subsequent calls - from cache (no API request)
charge = cache.get_charge('chrg_test_...')
2. Use Webhooks Instead of Polling
// ❌ Bad - Polling wastes rate limit
async function waitForChargeComplete(chargeId) {
  let charge;
  // Polls every 2 seconds - wastes requests!
  while (true) {
    charge = await omise.charges.retrieve(chargeId);
    if (charge.status === 'successful' || charge.status === 'failed') {
      return charge;
    }
    await new Promise(resolve => setTimeout(resolve, 2000));
  }
}

// ✅ Good - Use webhooks
app.post('/webhooks/omise', async (req, res) => {
  const event = req.body;
  if (event.key === 'charge.complete') {
    const charge = event.data;
    // Process completed charge
    await processCharge(charge);
  }
  res.sendStatus(200);
});
3. Batch Webhook Processing
# Process webhooks in batch to reduce API calls
from flask import Flask, request
import omise

app = Flask(__name__)

class WebhookProcessor:
    def __init__(self):
        self.batch = []
        self.batch_size = 10

    def add_event(self, event):
        self.batch.append(event)
        if len(self.batch) >= self.batch_size:
            self.process_batch()

    def process_batch(self):
        # Extract IDs
        charge_ids = [e['data']['id'] for e in self.batch if e['key'] == 'charge.complete']
        # Single list request instead of N retrievals
        # (caveat: only finds charges within the most recent page)
        charges = omise.Charge.list(limit=100)
        # Match and process
        for event in self.batch:
            charge = next((c for c in charges['data'] if c.id == event['data']['id']), None)
            if charge:
                process_charge(charge)
        self.batch = []

processor = WebhookProcessor()

@app.route('/webhooks/omise', methods=['POST'])
def webhook():
    event = request.json
    processor.add_event(event)
    return '', 200
4. Optimize List Queries
<?php
// Use filters to reduce data transfer and processing

// ❌ Bad - Fetches everything, then filters into a second array in memory
$charges = OmiseCharge::retrieve(['limit' => 100]);
$successfulCharges = array_filter($charges['data'], function($c) {
    return $c['status'] === 'successful';
});

// ✅ Good - Paginate efficiently and filter as you go
// Note: the API does not currently support server-side status filtering
$charges = OmiseCharge::retrieve([
    'limit' => 100,
    'offset' => 0
]);

// Process efficiently
foreach ($charges['data'] as $charge) {
    if ($charge['status'] === 'successful') {
        processCharge($charge);
    }
}
5. Parallel Requests with Care
// Go - Parallel requests with rate limiting
package main

import (
    "context"
    "log"
    "os"
    "sync"

    "github.com/omise/omise-go"
    "github.com/omise/omise-go/operations"
    "golang.org/x/time/rate"
)

type RateLimitedClient struct {
    client  *omise.Client
    limiter *rate.Limiter
}

func NewRateLimitedClient(client *omise.Client, requestsPerSecond int) *RateLimitedClient {
    return &RateLimitedClient{
        client:  client,
        limiter: rate.NewLimiter(rate.Limit(requestsPerSecond), requestsPerSecond),
    }
}

func (c *RateLimitedClient) CreateCharge(params *operations.CreateCharge) (*omise.Charge, error) {
    // Wait for rate limiter
    if err := c.limiter.Wait(context.Background()); err != nil {
        return nil, err
    }
    // omise-go executes operations through client.Do
    charge := &omise.Charge{}
    if err := c.client.Do(charge, params); err != nil {
        return nil, err
    }
    return charge, nil
}

func main() {
    client, _ := omise.NewClient(
        os.Getenv("OMISE_PUBLIC_KEY"),
        os.Getenv("OMISE_SECRET_KEY"),
    )

    // Limit to 16 requests per second (safe margin under 1,000/min)
    rateLimited := NewRateLimitedClient(client, 16)

    var wg sync.WaitGroup
    // Process 100 charges in parallel (tokens is assumed to hold card tokens)
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(idx int) {
            defer wg.Done()
            charge, err := rateLimited.CreateCharge(&operations.CreateCharge{
                Amount:   100000,
                Currency: "thb",
                Card:     tokens[idx],
            })
            if err != nil {
                log.Printf("Charge %d failed: %v", idx, err)
                return
            }
            log.Printf("Charge %d created: %s", idx, charge.ID)
        }(i)
    }
    wg.Wait()
}
6. Implement Circuit Breaker
// Node.js - Circuit breaker to prevent cascading failures
class CircuitBreaker {
  constructor(threshold = 5, timeout = 60000) {
    this.failureThreshold = threshold;
    this.timeout = timeout;
    this.failureCount = 0;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(requestFn) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }
    try {
      const result = await requestFn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  onFailure() {
    this.failureCount++;
    if (this.failureCount >= this.failureThreshold) {
      this.state = 'OPEN';
      this.nextAttempt = Date.now() + this.timeout;
      console.log(`Circuit breaker OPEN. Retrying after ${this.timeout}ms`);
    }
  }
}

// Usage
const breaker = new CircuitBreaker(5, 60000);

async function createChargeSafe(chargeData) {
  return breaker.execute(() => omise.charges.create(chargeData));
}
Testing Rate Limits
Simulate Rate Limit Responses
# RSpec - Test rate limit handling
require 'webmock/rspec'

RSpec.describe 'Rate Limit Handling' do
  it 'retries on rate limit error' do
    stub_request(:post, 'https://api.omise.co/charges')
      .to_return(
        { status: 429, body: { code: 'rate_limit_exceeded' }.to_json },
        { status: 200, body: { object: 'charge', id: 'chrg_test_123' }.to_json }
      )

    charge = create_charge_with_retry(amount: 100000, currency: 'thb')

    expect(charge.id).to eq('chrg_test_123')
    expect(WebMock).to have_requested(:post, 'https://api.omise.co/charges').twice
  end

  it 'respects Retry-After header' do
    stub_request(:post, 'https://api.omise.co/charges')
      .to_return(
        status: 429,
        headers: { 'Retry-After' => '5' },
        body: { code: 'rate_limit_exceeded' }.to_json
      )

    expect {
      create_charge_with_retry(amount: 100000, currency: 'thb')
    }.to raise_error(Omise::Error)
    # Verify waited appropriate time (mock time if needed)
  end
end
Load Testing
// Node.js - Load test rate limits
async function loadTest() {
  const results = {
    success: 0,
    rateLimited: 0,
    errors: 0
  };

  const requests = [];
  // Send 1,500 requests (should hit the rate limit at 1,000)
  for (let i = 0; i < 1500; i++) {
    const request = omise.charges.list({ limit: 1 })
      .then(() => {
        results.success++;
      })
      .catch((error) => {
        if (error.code === 'rate_limit_exceeded') {
          results.rateLimited++;
        } else {
          results.errors++;
        }
      });
    requests.push(request);
  }

  await Promise.all(requests);

  console.log('Load test results:');
  console.log(`  Success: ${results.success}`);
  console.log(`  Rate limited: ${results.rateLimited}`);
  console.log(`  Other errors: ${results.errors}`);
}

// Run test
loadTest();
Best Practices
1. Always Implement Retry Logic
# ✅ Good - Retry logic built in (using the tenacity library)
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
import omise

@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=60),
    # Library-specific: adjust to the exception your client raises on 429s
    retry=retry_if_exception_type(omise.errors.RateLimitError)
)
def create_charge(amount, currency, card):
    return omise.Charge.create(
        amount=amount,
        currency=currency,
        card=card
    )
2. Monitor Rate Limit Usage
# ✅ Good - Track and alert
after_action :track_rate_limit

def track_rate_limit
  if response.headers['X-RateLimit-Remaining']
    remaining = response.headers['X-RateLimit-Remaining'].to_i
    limit = response.headers['X-RateLimit-Limit'].to_i
    usage_percent = ((limit - remaining).to_f / limit * 100).round(2)

    # Log metrics
    StatsD.gauge('omise.rate_limit.usage', usage_percent)

    # Alert if high
    if usage_percent > 90
      AlertService.notify("High Omise rate limit usage: #{usage_percent}%")
    end
  end
end
3. Use Appropriate Request Patterns
// ✅ Good - Batch and cache
class EfficientOmiseClient {
  constructor() {
    this.cache = new Map();
    this.batchQueue = [];
  }

  async getCharge(chargeId) {
    // Check cache first
    if (this.cache.has(chargeId)) {
      return this.cache.get(chargeId);
    }
    // Fetch from API
    const charge = await omise.charges.retrieve(chargeId);
    // Cache for 5 minutes
    this.cache.set(chargeId, charge);
    setTimeout(() => this.cache.delete(chargeId), 5 * 60 * 1000);
    return charge;
  }

  async getCharges(chargeIds) {
    // Use the list endpoint for multiple charges
    // (caveat: only finds charges within the listed page)
    const charges = await omise.charges.list({ limit: 100 });
    // Cache all charges
    charges.data.forEach(charge => {
      this.cache.set(charge.id, charge);
    });
    return chargeIds.map(id =>
      charges.data.find(c => c.id === id)
    ).filter(Boolean);
  }
}
4. Implement Request Throttling
<?php
class ThrottledOmiseClient {
    private $requestTimes = [];
    private $maxRequestsPerMinute = 900; // Safety margin

    public function makeRequest($callable) {
        $this->cleanOldRequests();

        // Check if we're at the limit
        if (count($this->requestTimes) >= $this->maxRequestsPerMinute) {
            // Wait until the oldest request ages out
            $oldestRequest = min($this->requestTimes);
            $waitTime = 60 - (time() - $oldestRequest);
            if ($waitTime > 0) {
                sleep($waitTime);
            }
            $this->cleanOldRequests();
        }

        // Record this request
        $this->requestTimes[] = time();

        // Make request
        return $callable();
    }

    private function cleanOldRequests() {
        $cutoff = time() - 60;
        $this->requestTimes = array_filter(
            $this->requestTimes,
            function($t) use ($cutoff) { return $t > $cutoff; }
        );
    }
}

// Usage
$client = new ThrottledOmiseClient();
$charge = $client->makeRequest(function() use ($params) {
    return OmiseCharge::create($params);
});
5. Handle Rate Limits Gracefully
# ✅ Good - User-friendly error handling
def create_charge(params)
  Omise::Charge.create(params)
rescue Omise::Error => e
  if e.code == 'rate_limit_exceeded'
    # Don't expose technical details to users
    flash[:error] = "Our payment system is currently busy. Please try again in a moment."
    # Log for monitoring
    Rails.logger.warn("Rate limit exceeded: #{e.message}")
    # Retry in background job
    ChargeCreationJob.perform_later(params)
  else
    raise
  end
end
Rate Limit Checklist
Before going live:
- Implement exponential backoff for retries
- Respect Retry-After header
- Monitor rate limit headers in responses
- Set up alerts for high usage (>80%)
- Cache responses where appropriate
- Use webhooks instead of polling
- Batch operations when possible
- Test rate limit handling in staging
- Document rate limit strategy for team
- Have fallback plan for rate limit errors
- Monitor rate limit metrics in production
- Review code for unnecessary API calls
Quick Reference
Current Limits
1,000 requests per minute per API key
Rate Limit Headers
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1612137600
HTTP 429 Response
{
  "code": "rate_limit_exceeded",
  "message": "too many requests, please try again later"
}
Basic Retry Pattern
begin
  attempts = (attempts || 0) + 1
  omise_request
rescue Omise::Error => e
  if e.code == 'rate_limit_exceeded' && attempts < 5
    sleep(2 ** attempts)
    retry
  end
  raise
end
Exponential Backoff Formula
delay = base_delay * 2^(attempt - 1) + jitter
Example (base_delay = 1s):
- Attempt 1: 1s + jitter
- Attempt 2: 2s + jitter
- Attempt 3: 4s + jitter
- Attempt 4: 8s + jitter
- Attempt 5: 16s + jitter
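To sanity-check those numbers, evaluate the formula directly; a quick Python sketch with base_delay = 1 and the 10% jitter used in the examples above:
# Python - print backoff delays for attempts 1-5
import random

base_delay = 1
for attempt in range(1, 6):
    delay = base_delay * 2 ** (attempt - 1)
    jitter = random.uniform(0, delay * 0.1)
    print(f"Attempt {attempt}: {delay}s + {jitter:.2f}s jitter")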
Related Resources
Ready to integrate? Review all essential guides: Authentication โข Error Handling โข Pagination โข Idempotency โข Versioning