
Rate Limits

Stay within the Omise API rate limits and build an efficient integration. Learn about the rate limit headers, handle 429 errors gracefully, and optimize your request patterns.

Overview

To ensure reliable service for all merchants, Omise enforces rate limits on API requests. Rate limiting prevents any single integration from overwhelming the API and guarantees fair resource allocation. Understanding and respecting these limits is essential for building a robust payment integration.

Quick start
  • Default limit: 1,000 requests per minute per API key
  • Monitor the X-RateLimit-* response headers
  • Handle HTTP 429 with exponential backoff
  • Implement request queueing for high-volume operations
  • Cache responses where appropriate

Rate limit details

Current limits

Limit type           Value                      Scope
Standard rate limit  1,000 requests per minute  Per API key
Burst allowance      ~100 requests              Short bursts permitted
Reset window         60 seconds                 Rolling window
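As a rough sanity check, the table above can be turned into a pacing calculation: the interval you need between evenly spaced requests to stay under the limit. This is a minimal sketch; the 10% safety margin is an illustrative choice, not an Omise recommendation.

```python
# Derive a conservative pacing interval from the documented limits.
REQUESTS_PER_WINDOW = 1000  # standard limit per API key
WINDOW_SECONDS = 60         # rolling window
SAFETY_MARGIN = 0.9         # stay 10% under the limit (illustrative)

def min_interval_seconds() -> float:
    """Minimum delay between evenly spaced requests to stay under the limit."""
    effective_limit = REQUESTS_PER_WINDOW * SAFETY_MARGIN
    return WINDOW_SECONDS / effective_limit

# ~0.067 seconds between requests, i.e. roughly 15 requests per second
interval = min_interval_seconds()
```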

What counts toward the limit

Counted:

  • All API requests (GET, POST, PATCH, DELETE)
  • Successful requests (2xx responses)
  • Failed requests (4xx and 5xx responses)
  • Authentication failures

Not counted:

  • Requests blocked before reaching the API (e.g. invalid URLs)
  • Static asset requests
  • Dashboard access
  • Webhook deliveries from Omise

Rate limit headers

Every API response includes rate limit information in its headers:

Response headers

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1612137600

Header descriptions

Header                 Description                                Example
X-RateLimit-Limit      Maximum requests allowed in the window     1000
X-RateLimit-Remaining  Requests remaining in the current window   995
X-RateLimit-Reset      Unix timestamp at which the limit resets   1612137600

Reading the headers in code

# Ruby - Check rate limit headers
require 'omise'

Omise.api_key = ENV['OMISE_SECRET_KEY']

response = Omise::Charge.retrieve('chrg_test_...')

# Access headers
limit = response.http_headers['X-RateLimit-Limit']
remaining = response.http_headers['X-RateLimit-Remaining']
reset = response.http_headers['X-RateLimit-Reset']

puts "Rate limit: #{remaining}/#{limit}"
puts "Resets at: #{Time.at(reset.to_i)}"
# Python - Check rate limit headers
import os
import omise
from datetime import datetime

omise.api_secret = os.environ['OMISE_SECRET_KEY']

charge = omise.Charge.retrieve('chrg_test_...')

# Access headers (library-specific)
headers = charge.response_headers

limit = headers.get('X-RateLimit-Limit')
remaining = headers.get('X-RateLimit-Remaining')
reset_timestamp = int(headers.get('X-RateLimit-Reset', 0))

print(f"Rate limit: {remaining}/{limit}")
print(f"Resets at: {datetime.fromtimestamp(reset_timestamp)}")
// Node.js - Check rate limit headers
const omise = require('omise')({
  secretKey: process.env.OMISE_SECRET_KEY
});

try {
  const charge = await omise.charges.retrieve('chrg_test_...');

  // Headers available in response
  const headers = charge._response.headers;

  const limit = headers['x-ratelimit-limit'];
  const remaining = headers['x-ratelimit-remaining'];
  const reset = headers['x-ratelimit-reset'];

  console.log(`Rate limit: ${remaining}/${limit}`);
  console.log(`Resets at: ${new Date(reset * 1000)}`);

} catch (error) {
  console.error('Request failed:', error);
}

HTTP 429 responses

When you exceed the rate limit, the API returns HTTP 429 Too Many Requests:

429 response format

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1612137660
Retry-After: 60

{
  "object": "error",
  "location": "https://www.omise.co/api-errors#rate-limit-exceeded",
  "code": "rate_limit_exceeded",
  "message": "too many requests, please try again later"
}

Response fields

Field        Description
code         Always "rate_limit_exceeded"
message      Human-readable error message
Retry-After  Seconds to wait before retrying (sent as a response header)
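The decision logic for a 429 can be expressed independently of any SDK. The sketch below is illustrative (not an Omise library API): given a status code and response headers, it decides whether and how long to wait, falling back to the 60-second window when the Retry-After header is absent.

```python
# Transport-agnostic 429 handling logic (illustrative sketch).
def retry_after_seconds(status_code, headers, default=60):
    """Return seconds to wait before retrying, or None if no retry is needed."""
    if status_code != 429:
        return None
    # Prefer the server-suggested Retry-After header when present.
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

retry_after_seconds(200, {})                     # None: no retry needed
retry_after_seconds(429, {"Retry-After": "60"})  # 60: honor the header
retry_after_seconds(429, {})                     # 60: fall back to the window
```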

Handling rate limits

Strategy 1: Exponential backoff (recommended)

Retry with increasing delays:

# Ruby - Exponential backoff
require 'omise'

def create_charge_with_backoff(params, max_attempts: 5)
  attempt = 0

  begin
    attempt += 1
    Omise::Charge.create(params)

  rescue Omise::Error => e
    if e.code == 'rate_limit_exceeded' && attempt < max_attempts
      # Calculate backoff delay: 1s, 2s, 4s, 8s, 16s
      delay = 2 ** (attempt - 1)

      # Add jitter (randomness) to prevent thundering herd
      jitter = rand(0..delay * 0.1)
      sleep(delay + jitter)

      retry
    else
      raise
    end
  end
end

# Usage
charge = create_charge_with_backoff(
  amount: 100000,
  currency: 'thb',
  card: token
)
# Python - Exponential backoff with decorator
import random
import time
from functools import wraps

import omise

def exponential_backoff(max_attempts=5, base_delay=1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)

                except omise.errors.BaseError as e:
                    if e.code != 'rate_limit_exceeded':
                        raise

                    if attempt == max_attempts - 1:
                        raise

                    # Calculate delay with jitter
                    delay = base_delay * (2 ** attempt)
                    jitter = random.uniform(0, delay * 0.1)
                    total_delay = delay + jitter

                    print(f"Rate limited. Retrying in {total_delay:.2f}s...")
                    time.sleep(total_delay)

            raise Exception("Max retry attempts exceeded")

        return wrapper
    return decorator

@exponential_backoff(max_attempts=5)
def create_charge(amount, currency, card):
    return omise.Charge.create(
        amount=amount,
        currency=currency,
        card=card
    )

# Usage
charge = create_charge(100000, 'thb', token)
// Node.js - Exponential backoff
async function createChargeWithBackoff(chargeData, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await omise.charges.create(chargeData);

    } catch (error) {
      if (error.code !== 'rate_limit_exceeded' || attempt === maxAttempts - 1) {
        throw error;
      }

      // Calculate delay with jitter
      const baseDelay = Math.pow(2, attempt) * 1000;
      const jitter = Math.random() * baseDelay * 0.1;
      const delay = baseDelay + jitter;

      console.log(`Rate limited. Retrying in ${(delay / 1000).toFixed(2)}s...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage
const charge = await createChargeWithBackoff({
  amount: 100000,
  currency: 'thb',
  card: token
});

Strategy 2: Respect the Retry-After header

Use the retry time suggested by the server:

<?php
function createChargeWithRetryAfter($params, $maxAttempts = 5) {
    $attempt = 0;

    while ($attempt < $maxAttempts) {
        try {
            $attempt++;
            return OmiseCharge::create($params);

        } catch (Exception $e) {
            if ($e->getCode() !== 'rate_limit_exceeded' || $attempt >= $maxAttempts) {
                throw $e;
            }

            // Get Retry-After header from response
            $retryAfter = $e->getResponse()->getHeader('Retry-After');
            $delay = $retryAfter ? (int)$retryAfter : 60;

            echo "Rate limited. Waiting {$delay} seconds...\n";
            sleep($delay);
        }
    }

    throw new Exception('Max retry attempts exceeded');
}

// Usage
$charge = createChargeWithRetryAfter([
    'amount' => 100000,
    'currency' => 'thb',
    'card' => $token
]);

Strategy 3: Request queue

Queue requests to control the rate:

// Node.js - Request queue with rate limiting
class RateLimitedQueue {
  constructor(requestsPerMinute = 1000) {
    this.queue = [];
    this.requestsPerMinute = requestsPerMinute;
    this.requestsThisMinute = 0;
    this.windowStart = Date.now();
  }

  async enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0) return;

    // Reset window if minute has passed
    const now = Date.now();
    if (now - this.windowStart >= 60000) {
      this.requestsThisMinute = 0;
      this.windowStart = now;
    }

    // Check if we can make request
    if (this.requestsThisMinute >= this.requestsPerMinute) {
      // Wait until next window
      const waitTime = 60000 - (now - this.windowStart);
      setTimeout(() => this.processQueue(), waitTime);
      return;
    }

    // Process next request
    const { requestFn, resolve, reject } = this.queue.shift();
    this.requestsThisMinute++;

    try {
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      if (error.code === 'rate_limit_exceeded') {
        // Re-queue the request
        this.queue.unshift({ requestFn, resolve, reject });
        // Wait before processing
        setTimeout(() => this.processQueue(), 5000);
      } else {
        reject(error);
      }
    }

    // Process next in queue
    if (this.queue.length > 0) {
      // Small delay between requests
      setTimeout(() => this.processQueue(), 100);
    }
  }
}

// Usage
const queue = new RateLimitedQueue(1000);

async function createCharge(data) {
  return queue.enqueue(() => omise.charges.create(data));
}

// Multiple requests queued automatically
const charge1 = await createCharge({ amount: 100000, currency: 'thb', card: token1 });
const charge2 = await createCharge({ amount: 50000, currency: 'thb', card: token2 });

Strategy 4: Batch operations

Reduce request volume by batching:

# Python - Batch charge retrieval
import time

import omise

def get_charges_batch(charge_ids, batch_size=100):
    """Retrieve multiple charges efficiently."""
    charges = []

    # Use the list endpoint instead of individual retrievals.
    # Note: the list endpoint cannot filter by ID, so this assumes the
    # target charges appear in the pages being listed.
    for i in range(0, len(charge_ids), batch_size):
        batch_ids = charge_ids[i:i+batch_size]

        # Single list request replaces up to 100 retrieve requests
        page = omise.Charge.list(limit=batch_size)

        # Filter to requested IDs
        batch_charges = [c for c in page['data'] if c.id in batch_ids]
        charges.extend(batch_charges)

        # Rate limit consideration
        time.sleep(0.1)

    return charges

# Bad: 1000 requests
for charge_id in charge_ids:
    charge = omise.Charge.retrieve(charge_id)  # 1 request each

# Good: 10 requests
charges = get_charges_batch(charge_ids, batch_size=100)

Monitoring rate limits

Track usage in real time

# Ruby - Rate limit monitor
class RateLimitMonitor
  def initialize
    @limit = nil
    @remaining = nil
    @reset_at = nil
  end

  def track_response(response)
    headers = response.http_headers

    @limit = headers['X-RateLimit-Limit'].to_i
    @remaining = headers['X-RateLimit-Remaining'].to_i
    @reset_at = Time.at(headers['X-RateLimit-Reset'].to_i)

    # Log if getting close to limit
    usage_percent = ((@limit - @remaining).to_f / @limit * 100).round(2)

    if usage_percent > 80
      Rails.logger.warn(
        "Rate limit: #{usage_percent}% used (#{@remaining}/#{@limit} remaining)"
      )
    end

    # Alert if very close
    if usage_percent > 95
      alert_high_rate_limit_usage(usage_percent)
    end
  end

  def alert_high_rate_limit_usage(percent)
    # Send alert (email, Slack, PagerDuty, etc.)
    AlertService.notify(
      "⚠️ Rate limit usage: #{percent}%",
      "Only #{@remaining} requests remaining until #{@reset_at}"
    )
  end
end

# Usage in request wrapper. Pass the monitor in explicitly: a method
# defined with `def` cannot see surrounding local variables.
monitor = RateLimitMonitor.new

def make_request(monitor, &block)
  response = block.call
  monitor.track_response(response)
  response
end

charge = make_request(monitor) { Omise::Charge.retrieve('chrg_test_...') }

Dashboard metrics

// Node.js - Log metrics to monitoring service
class MetricsCollector {
  constructor(metricsService) {
    this.metrics = metricsService;
  }

  trackRateLimit(headers) {
    const limit = parseInt(headers['x-ratelimit-limit']);
    const remaining = parseInt(headers['x-ratelimit-remaining']);
    const used = limit - remaining;
    const usagePercent = (used / limit) * 100;

    // Send to monitoring service (DataDog, CloudWatch, etc.)
    this.metrics.gauge('omise.rate_limit.remaining', remaining);
    this.metrics.gauge('omise.rate_limit.used', used);
    this.metrics.gauge('omise.rate_limit.usage_percent', usagePercent);

    // Trigger alert if high usage
    if (usagePercent > 90) {
      this.metrics.event('omise.rate_limit.high_usage', {
        alert_type: 'warning',
        text: `Omise rate limit at ${usagePercent.toFixed(2)}%`
      });
    }
  }

  trackRateLimitError() {
    this.metrics.increment('omise.rate_limit.exceeded');
  }
}

// Usage
const metrics = new MetricsCollector(datadogClient);

async function makeOmiseRequest(requestFn) {
  try {
    const response = await requestFn();

    // Track rate limit usage
    if (response._response && response._response.headers) {
      metrics.trackRateLimit(response._response.headers);
    }

    return response;

  } catch (error) {
    if (error.code === 'rate_limit_exceeded') {
      metrics.trackRateLimitError();
    }
    throw error;
  }
}

Optimization strategies

1. Cache responses

# Ruby - Cache with Redis
require 'json'
require 'redis'

class OmiseCache
  def initialize
    @redis = Redis.new
  end

  def get_charge(charge_id)
    cache_key = "charge:#{charge_id}"

    # Try cache first (note: cache hits return a parsed Hash,
    # not an Omise::Charge object)
    cached = @redis.get(cache_key)
    return JSON.parse(cached) if cached

    # Fetch from API
    charge = Omise::Charge.retrieve(charge_id)

    # Cache for 5 minutes
    @redis.setex(cache_key, 300, charge.to_json)

    charge
  end

  def get_customer(customer_id)
    cache_key = "customer:#{customer_id}"

    cached = @redis.get(cache_key)
    return JSON.parse(cached) if cached

    customer = Omise::Customer.retrieve(customer_id)
    @redis.setex(cache_key, 300, customer.to_json)

    customer
  end
end

cache = OmiseCache.new

# First call - hits API
charge = cache.get_charge('chrg_test_...')

# Subsequent calls - from cache (no API request)
charge = cache.get_charge('chrg_test_...')

2. Use webhooks instead of polling

// ❌ Bad - Polling wastes rate limit
async function waitForChargeComplete(chargeId) {
  let charge;

  // Polls every 2 seconds - wastes requests!
  while (true) {
    charge = await omise.charges.retrieve(chargeId);

    if (charge.status === 'successful' || charge.status === 'failed') {
      return charge;
    }

    await new Promise(resolve => setTimeout(resolve, 2000));
  }
}

// ✅ Good - Use webhooks
app.post('/webhooks/omise', async (req, res) => {
  const event = req.body;

  if (event.key === 'charge.complete') {
    const charge = event.data;

    // Process completed charge
    await processCharge(charge);
  }

  res.sendStatus(200);
});

3. Process webhooks in batches

# Python - Process webhooks in batch to reduce API calls
import omise
from flask import Flask, request

app = Flask(__name__)

class WebhookProcessor:
    def __init__(self):
        self.batch = []
        self.batch_size = 10

    def add_event(self, event):
        self.batch.append(event)

        if len(self.batch) >= self.batch_size:
            self.process_batch()

    def process_batch(self):
        # Single list request instead of N retrievals (assumes the
        # charges appear in the most recent page)
        charges = omise.Charge.list(limit=100)

        # Match and process (process_charge is your application code)
        for event in self.batch:
            charge = next((c for c in charges['data'] if c.id == event['data']['id']), None)
            if charge:
                process_charge(charge)

        self.batch = []

processor = WebhookProcessor()

@app.route('/webhooks/omise', methods=['POST'])
def webhook():
    event = request.json
    processor.add_event(event)
    return '', 200

4. Optimize list queries

<?php
// Use filters to reduce data transfer and processing

// ❌ Bad - Fetches everything, then filters
$charges = OmiseCharge::retrieve(['limit' => 100]);
$successfulCharges = array_filter($charges['data'], function($c) {
    return $c['status'] === 'successful';
});

// ✅ Good - Paginate efficiently
// Note: the Omise API does not support status filtering yet, so page
// through results and filter client-side as you go
$charges = OmiseCharge::retrieve([
    'limit' => 100,
    'offset' => 0
]);

// Process efficiently
foreach ($charges['data'] as $charge) {
    if ($charge['status'] === 'successful') {
        processCharge($charge);
    }
}

5. Parallelize requests with care

// Go - Parallel requests with rate limiting
package main

import (
	"context"
	"log"
	"os"
	"sync"

	"github.com/omise/omise-go"
	"github.com/omise/omise-go/operations"
	"golang.org/x/time/rate"
)

type RateLimitedClient struct {
	client  *omise.Client
	limiter *rate.Limiter
}

func NewRateLimitedClient(client *omise.Client, requestsPerSecond int) *RateLimitedClient {
	return &RateLimitedClient{
		client:  client,
		limiter: rate.NewLimiter(rate.Limit(requestsPerSecond), requestsPerSecond),
	}
}

func (c *RateLimitedClient) CreateCharge(params *operations.CreateCharge) (*omise.Charge, error) {
	// Wait for rate limiter
	if err := c.limiter.Wait(context.Background()); err != nil {
		return nil, err
	}

	charge := &omise.Charge{}
	if err := c.client.Do(charge, params); err != nil {
		return nil, err
	}
	return charge, nil
}

func main() {
	client, _ := omise.NewClient(
		os.Getenv("OMISE_PUBLIC_KEY"),
		os.Getenv("OMISE_SECRET_KEY"),
	)

	// Limit to 16 requests per second (safe margin under 1000/min)
	rateLimited := NewRateLimitedClient(client, 16)

	var wg sync.WaitGroup

	// Process 100 charges in parallel (tokens is assumed to hold
	// pre-created card tokens)
	for i := 0; i < 100; i++ {
		wg.Add(1)

		go func(idx int) {
			defer wg.Done()

			charge, err := rateLimited.CreateCharge(&operations.CreateCharge{
				Amount:   100000,
				Currency: "thb",
				Card:     tokens[idx],
			})

			if err != nil {
				log.Printf("Charge %d failed: %v", idx, err)
				return
			}

			log.Printf("Charge %d created: %s", idx, charge.ID)
		}(i)
	}

	wg.Wait()
}

6. Implement a circuit breaker

// Node.js - Circuit breaker to prevent cascading failures
class CircuitBreaker {
  constructor(threshold = 5, timeout = 60000) {
    this.failureThreshold = threshold;
    this.timeout = timeout;
    this.failureCount = 0;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(requestFn) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await requestFn();
      this.onSuccess();
      return result;

    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  onFailure() {
    this.failureCount++;

    if (this.failureCount >= this.failureThreshold) {
      this.state = 'OPEN';
      this.nextAttempt = Date.now() + this.timeout;
      console.log(`Circuit breaker OPEN. Retrying after ${this.timeout}ms`);
    }
  }
}

// Usage
const breaker = new CircuitBreaker(5, 60000);

async function createChargeSafe(chargeData) {
  return breaker.execute(() => omise.charges.create(chargeData));
}

Testing rate limits

Simulating rate limit responses

# RSpec - Test rate limit handling
require 'webmock/rspec'

RSpec.describe 'Rate Limit Handling' do
  it 'retries on rate limit error' do
    stub_request(:post, 'https://api.omise.co/charges')
      .to_return(
        { status: 429, body: { code: 'rate_limit_exceeded' }.to_json },
        { status: 200, body: { object: 'charge', id: 'chrg_test_123' }.to_json }
      )

    charge = create_charge_with_retry(amount: 100000, currency: 'thb')

    expect(charge.id).to eq('chrg_test_123')
    expect(WebMock).to have_requested(:post, 'https://api.omise.co/charges').twice
  end

  it 'respects Retry-After header' do
    stub_request(:post, 'https://api.omise.co/charges')
      .to_return(
        status: 429,
        headers: { 'Retry-After' => '5' },
        body: { code: 'rate_limit_exceeded' }.to_json
      )

    expect {
      create_charge_with_retry(amount: 100000, currency: 'thb')
    }.to raise_error(Omise::Error)

    # Verify waited appropriate time (mock time if needed)
  end
end

Load testing

// Node.js - Load test rate limits
async function loadTest() {
  const results = {
    success: 0,
    rateLimited: 0,
    errors: 0
  };

  const requests = [];

  // Send 1500 requests (should hit rate limit at 1000)
  for (let i = 0; i < 1500; i++) {
    const request = omise.charges.list({ limit: 1 })
      .then(() => {
        results.success++;
      })
      .catch((error) => {
        if (error.code === 'rate_limit_exceeded') {
          results.rateLimited++;
        } else {
          results.errors++;
        }
      });

    requests.push(request);
  }

  await Promise.all(requests);

  console.log('Load test results:');
  console.log(`  Success: ${results.success}`);
  console.log(`  Rate limited: ${results.rateLimited}`);
  console.log(`  Other errors: ${results.errors}`);
}

// Run test
loadTest();

Best practices

1. Always implement retry logic

# ✅ Good - Retry logic built in (using the tenacity library)
import omise
from tenacity import (retry, retry_if_exception_type,
                      stop_after_attempt, wait_exponential)

@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=60),
    retry=retry_if_exception_type(omise.errors.RateLimitError)
)
def create_charge(amount, currency, card):
    return omise.Charge.create(
        amount=amount,
        currency=currency,
        card=card
    )

2. Monitor rate limit usage

# ✅ Good - Track and alert
after_action :track_rate_limit

def track_rate_limit
  if response.headers['X-RateLimit-Remaining']
    remaining = response.headers['X-RateLimit-Remaining'].to_i
    limit = response.headers['X-RateLimit-Limit'].to_i

    usage_percent = ((limit - remaining).to_f / limit * 100).round(2)

    # Log metrics
    StatsD.gauge('omise.rate_limit.usage', usage_percent)

    # Alert if high
    if usage_percent > 90
      AlertService.notify("High Omise rate limit usage: #{usage_percent}%")
    end
  end
end

3. Use appropriate request patterns

// ✅ Good - Batch and cache
class EfficientOmiseClient {
  constructor() {
    this.cache = new Map();
    this.batchQueue = [];
  }

  async getCharge(chargeId) {
    // Check cache first
    if (this.cache.has(chargeId)) {
      return this.cache.get(chargeId);
    }

    // Fetch from API
    const charge = await omise.charges.retrieve(chargeId);

    // Cache for 5 minutes
    this.cache.set(chargeId, charge);
    setTimeout(() => this.cache.delete(chargeId), 5 * 60 * 1000);

    return charge;
  }

  async getCharges(chargeIds) {
    // Use list endpoint for multiple charges
    const charges = await omise.charges.list({ limit: 100 });

    // Cache all charges
    charges.data.forEach(charge => {
      this.cache.set(charge.id, charge);
    });

    return chargeIds.map(id =>
      charges.data.find(c => c.id === id)
    ).filter(Boolean);
  }
}

4. Implement request throttling

<?php
class ThrottledOmiseClient {
    private $requestTimes = [];
    private $maxRequestsPerMinute = 900; // Safety margin

    public function makeRequest($callable) {
        $this->cleanOldRequests();

        // Check if we're at limit
        if (count($this->requestTimes) >= $this->maxRequestsPerMinute) {
            // Wait until oldest request ages out
            $oldestRequest = min($this->requestTimes);
            $waitTime = 60 - (time() - $oldestRequest);

            if ($waitTime > 0) {
                sleep($waitTime);
            }

            $this->cleanOldRequests();
        }

        // Record this request
        $this->requestTimes[] = time();

        // Make request
        return $callable();
    }

    private function cleanOldRequests() {
        $cutoff = time() - 60;
        $this->requestTimes = array_filter(
            $this->requestTimes,
            function($t) use ($cutoff) { return $t > $cutoff; }
        );
    }
}

// Usage
$client = new ThrottledOmiseClient();

$charge = $client->makeRequest(function() use ($params) {
    return OmiseCharge::create($params);
});

5. Handle rate limit errors gracefully

# ✅ Good - User-friendly error handling
def create_charge(params)
  Omise::Charge.create(params)

rescue Omise::Error => e
  if e.code == 'rate_limit_exceeded'
    # Don't expose technical details to users
    flash[:error] = "Our payment system is currently busy. Please try again in a moment."

    # Log for monitoring
    Rails.logger.warn("Rate limit exceeded: #{e.message}")

    # Retry in background job
    ChargeCreationJob.perform_later(params)
  else
    raise
  end
end

Rate limit checklist

Verify before going to production:

  • Implement exponential backoff for retries
  • Respect the Retry-After header
  • Monitor rate limit headers on responses
  • Set up alerts for high usage (>80%)
  • Cache responses where appropriate
  • Use webhooks instead of polling
  • Batch operations where possible
  • Test rate limit handling in staging
  • Document your rate limit strategy for the team
  • Have a fallback plan for rate limit errors
  • Monitor rate limit metrics in production
  • Review code for unnecessary API calls

Quick reference

Current limits

1,000 requests per minute per API key

Rate limit headers

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1612137600

HTTP 429 response

{
  "code": "rate_limit_exceeded",
  "message": "too many requests, please try again later"
}

Basic retry pattern

attempt = 0
begin
  attempt += 1
  omise_request()
rescue Omise::Error => e
  if e.code == 'rate_limit_exceeded' && attempt < 5
    sleep(2 ** attempt)
    retry
  end
  raise
end

Exponential backoff formula

delay = base_delay * 2^(attempt - 1) + jitter

Example:

  • Attempt 1: 1s + jitter
  • Attempt 2: 2s + jitter
  • Attempt 3: 4s + jitter
  • Attempt 4: 8s + jitter
  • Attempt 5: 16s + jitter
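The schedule above can be reproduced in a few lines. This is a minimal sketch; the 10%-of-delay jitter bound matches the earlier examples in this guide.

```python
import random

def backoff_delay(attempt, base_delay=1, jitter_ratio=0.1):
    """Delay before retry `attempt` (1-based): base_delay * 2^(attempt - 1) + jitter."""
    delay = base_delay * (2 ** (attempt - 1))
    return delay + random.uniform(0, delay * jitter_ratio)

# Without jitter the schedule is 1s, 2s, 4s, 8s, 16s:
schedule = [1 * 2 ** (a - 1) for a in range(1, 6)]
```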

Related resources


Ready to integrate? Review all the essential guides: Authentication · Error Handling · Pagination · Idempotency · Versioning