Rate Limits & Best Practices

Rate limits and best practices for using the BlueGamma API efficiently.

This guide covers rate limits, caching strategies, and best practices for integrating with the BlueGamma API.


Rate Limits

BlueGamma applies rate limits to ensure fair usage and service reliability.

| Limit | Value |
| --- | --- |
| Requests per second | 20 |
| Burst allowance | 50 requests |

The burst allowance lets you make up to 50 requests in a short burst before rate limiting kicks in. After that, requests are throttled to 20 per second.

If you exceed these limits, you'll receive a 429 Too Many Requests response.
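To stay under these limits proactively, you can throttle on the client side before a 429 ever occurs. A minimal token-bucket sketch matching the 20 req/s rate and 50-request burst (the class and its defaults are illustrative, not part of any BlueGamma SDK):

```python
import time

class TokenBucket:
    """Client-side throttle: refills at `rate` tokens/s up to `capacity`."""

    def __init__(self, rate=20.0, capacity=50):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        # Refill based on elapsed time, then spend one token per request,
        # sleeping briefly when the bucket is empty
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1.0
        self.tokens -= 1
```

Call `bucket.acquire()` immediately before each API request; the first 50 calls pass through unthrottled, after which the bucket paces you at roughly 20 per second.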

Need higher limits? Contact us at [email protected] to discuss your requirements.


Best Practices

1. Cache Responses

Curve data doesn't change every second. Cache responses to reduce API calls:

import requests
from functools import lru_cache
from datetime import date

API_KEY = "your-api-key"  # replace with your key

@lru_cache(maxsize=100)
def _fetch_swap_rate(index: str, tenor: str, cache_date: str):
    # cache_date is part of the cache key, so cached entries roll over daily
    response = requests.get(
        "https://api.bluegamma.io/v1/swap_rate",
        params={"index": index, "tenor": tenor},
        headers={"x-api-key": API_KEY},
    )
    response.raise_for_status()
    return response.json()

def get_swap_rate(index: str, tenor: str):
    # Compute today's date at call time so every call shares the day's cache key
    return _fetch_swap_rate(index, tenor, date.today().isoformat())

Recommended cache durations:

| Data Type | Cache Duration |
| --- | --- |
| Swap rates | 1-5 minutes |
| Forward curves | 1-5 minutes |
| Historical data | 24 hours or longer |
| Government yields (US) | 1-5 minutes |
| Government yields (other) | 24 hours (updated daily) |
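A daily-keyed `lru_cache` like the one above won't honour minute-level durations. A small TTL decorator can; this is a sketch (not part of any SDK), and the 300-second value simply follows the table's 1-5 minute guidance:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Cache results per argument tuple for ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (timestamp, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # still fresh: skip the API call
            value = fn(*args)
            store[args] = (now, value)
            return value

        return wrapper
    return decorator

# Per the table above: cache live swap rates for up to 5 minutes
# @ttl_cache(300)
# def get_swap_rate(index, tenor): ...
```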

2. Batch Your Requests

Instead of making separate calls for each tenor, use curve endpoints that return multiple data points:

❌ Inefficient — 8 separate calls:
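For instance (a sketch: the tenor list, index value, and injected `fetch` helper are illustrative, not documented API behaviour):

```python
# ❌ One round-trip to /swap_rate per tenor: 8 HTTP requests
TENORS = ["1Y", "2Y", "3Y", "4Y", "5Y", "7Y", "10Y", "30Y"]

def rates_one_by_one(fetch, index="SOFR"):
    # fetch(path, params) performs the HTTP GET; it is injected here
    # so the request count is easy to see in isolation
    return {t: fetch("/swap_rate", {"index": index, "tenor": t}) for t in TENORS}
```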

✅ Efficient — 1 call:
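The same data in one request (the exact parameters `/get_swap_curve` accepts are an assumption; check the endpoint reference):

```python
# ✅ One round-trip: /get_swap_curve returns the whole curve
def curve_in_one_call(fetch, index="SOFR"):
    # fetch(path, params) performs the HTTP GET, as above
    return fetch("/get_swap_curve", {"index": index})
```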

3. Use Appropriate Endpoints

| If you need... | Use this endpoint |
| --- | --- |
| Single swap rate | /swap_rate |
| Full swap curve | /get_swap_curve |
| Single forward rate | /forward_rate |
| Full forward curve | /forward_curve |
| Single discount factor | /discount_factor |
| Full discount curve | /discount_curve |

4. Handle Errors Gracefully

Implement retry logic with exponential backoff:
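A minimal generic helper (the attempt count and delays are illustrative defaults, not documented API behaviour):

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Run call(); on failure, retry with exponential backoff plus jitter.

    call should raise on retryable errors (e.g. a 429 response).
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # 1s, 2s, 4s, 8s, ... plus jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.25))
```

In practice, wrap the HTTP call so a 429 raises, for example `with_retries(lambda: fetch_rate(index, tenor))` where `fetch_rate` calls `response.raise_for_status()`.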

5. Use Valuation Time for Consistency

When building a model that requires multiple data points, use the same valuation_time parameter across all requests to ensure consistency:
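For example (a sketch; the ISO-8601 timestamp format for valuation_time is an assumption, so confirm it against the endpoint reference):

```python
from datetime import datetime, timezone

# Pin one snapshot time and reuse it for every request in the model run
VALUATION_TIME = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def snapshot_params(extra):
    # Merge the shared valuation_time into each endpoint's parameters
    return {"valuation_time": VALUATION_TIME, **extra}

swap_params = snapshot_params({"index": "SOFR", "tenor": "5Y"})
curve_params = snapshot_params({"index": "SOFR"})
# requests.get("https://api.bluegamma.io/v1/swap_rate", params=swap_params, ...)
# requests.get("https://api.bluegamma.io/v1/discount_curve", params=curve_params, ...)
```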

This ensures all data points are from the same market snapshot.


Common Mistakes to Avoid

| Mistake | Solution |
| --- | --- |
| Calling API in a tight loop | Batch requests or add delays |
| Not caching historical data | Cache historical queries; they won't change |
| Fetching full curves when you need 1 point | Use single-point endpoints for one-off queries |
| Not handling 429 responses | Implement retry logic with backoff |


Need Higher Limits?

If your use case requires higher rate limits, contact [email protected] to discuss custom arrangements.
