Our API implements rate limiting to ensure fair usage and maintain service stability for all users. Understanding these limits will help you optimize your API usage and avoid disruptions.

Account Endpoints Rate Limits

The following account endpoints are limited to 2 requests per minute:

  • Single Enrichment (/accounts/single_enrich)
  • Headcount Analysis (/accounts/headcount)
  • Profile Count (/accounts/count_profiles)

Leads Endpoints Rate Limits

The leads enrichment endpoint is limited to 2 requests per minute:

  • Single Enrichment (/leads/single_enrich)

Search Endpoints Rate Limits

Search endpoints have stricter rate limits due to their resource-intensive nature:

  • Create Search (/searches/create): 1 request per second
  • Company Search (/searches/search_in_a_company): 1 request per 5 seconds
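
If you call several endpoints from the same process, a small client-side throttle that enforces a minimum interval per endpoint keeps you inside all of the limits above. The sketch below is one conservative way to do this; the interval values follow the limits listed here, and the client object is a placeholder, as in the retry example further down.

# Conservative client-side pacing: enforce a minimum interval between calls
# to each endpoint so no rolling window is exceeded. Add a little margin
# if you still see occasional 429s near window boundaries.
class EndpointThrottle
  MIN_INTERVALS = Hash.new(30)                        # 2-per-minute endpoints: one call every ~30s
  MIN_INTERVALS['/searches/create'] = 1               # 1 request per second
  MIN_INTERVALS['/searches/search_in_a_company'] = 5  # 1 request per 5 seconds

  def initialize(client)
    @client = client      # placeholder for your own HTTP client wrapper
    @last_called = {}
  end

  def call(endpoint)
    interval = MIN_INTERVALS[endpoint]
    elapsed  = Time.now - (@last_called[endpoint] || Time.at(0))
    sleep(interval - elapsed) if elapsed < interval
    @last_called[endpoint] = Time.now
    @client.call(endpoint)
  end
end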

Understanding Rate Windows

Rate limits are enforced over rolling time windows. For example:

  • If you make 2 requests to /accounts/single_enrich at 10:00:30, you’ll need to wait until 10:01:30 before making another request
  • If you make a request to /searches/create at 10:00:00, you’ll need to wait 1 second before making another request
  • If you make a request to /searches/search_in_a_company at 10:00:00, you’ll need to wait 5 seconds before making another request
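
If you record the timestamps of your own requests, you can compute exactly how long to wait before the next call is allowed. The sketch below shows that calculation for a generic limit of N requests per rolling window; it assumes your client records request timestamps elsewhere.

# Compute how long to wait before the next request is allowed, given the
# timestamps (Time objects) of your recent requests to one endpoint.
# limit: max requests per window, window: window length in seconds.
def seconds_until_allowed(timestamps, limit:, window:)
  now = Time.now
  recent = timestamps.select { |t| now - t < window }
  return 0 if recent.size < limit

  # The oldest request in the window must age out before another is allowed.
  window - (now - recent.min)
end

# Example: a 2-requests-per-minute endpoint, both requests made at 10:00:30.
# seconds_until_allowed(recent_calls, limit: 2, window: 60)
# => ~60, i.e. wait until 10:01:30, as described above.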

Rate Limit Responses

When you exceed rate limits, you’ll receive a 429 Too Many Requests response:

{
  "error": 429,
  "message": "Rate limit exceeded. Please try again in X seconds."
}
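
Treat this response as a signal to pause rather than a hard failure. The snippet below is a minimal sketch using Ruby's standard Net::HTTP; the host and HTTP verb are placeholders, and the regex simply pulls the suggested delay out of the message format shown above (falling back to 60 seconds).

require 'net/http'
require 'json'

# Illustrative only: the host and verb here are placeholders, not the real API.
uri = URI('https://api.example.com/accounts/single_enrich')
response = Net::HTTP.get_response(uri)

if response.code == '429'
  payload = JSON.parse(response.body)
  # Pull the suggested delay out of "Please try again in X seconds."
  wait = payload['message'].to_s[/\d+/]&.to_i || 60
  sleep(wait)
  # ...then retry the original request
end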

Best Practices

  1. Implement Endpoint-Specific Retry Logic

    • Add exponential backoff with appropriate intervals for each endpoint (see the backoff sketch after this list)
    • Use shorter retry intervals for search endpoints (1-5 seconds)
    • Use longer retry intervals for enrichment endpoints (60 seconds)
    • Include jitter to prevent thundering herd problems
  2. Optimize Request Patterns

    • Use bulk endpoints when available (e.g., /accounts/bulk_enrich)
    • Space out search requests according to their specific time windows
    • Track rate limit headers for each endpoint separately
    • Queue requests that would exceed rate limits
  3. Handle Rate Limits Gracefully

    • Implement proper error handling for 429 responses
    • Use different queues for different endpoint types
    • Consider using a rate limiting library in your client
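
For the first point, a backoff helper that starts from an endpoint-appropriate base interval and doubles on each attempt can look roughly like this (a sketch only; the base values mirror the retry intervals suggested above):

# Exponential backoff with jitter: the delay doubles on each retry attempt.
# base is the endpoint-appropriate starting interval (1-5s for search
# endpoints, 60s for enrichment endpoints).
def backoff_delay(attempt, base:)
  (base * (2**attempt)) + rand(0.1..0.5) # jitter spreads out synchronized clients
end

# backoff_delay(0, base: 60) => ~60s, backoff_delay(1, base: 60) => ~120s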

Example Retry Implementation

Here’s a simple example of how to implement retry logic with endpoint-specific timing (api_client and RateLimitExceeded are placeholders for your own HTTP client wrapper and its rate-limit error):

def make_request_with_retry(endpoint, max_retries = 3)
  retries = 0
  # Define wait times based on endpoint type
  wait_time = case endpoint
              when /\/searches\/create$/
                1  # 1 second for create search
              when /\/searches\/search_in_a_company$/
                5  # 5 seconds for company search
              else
                60 # 60 seconds for other endpoints
              end

  begin
    response = api_client.call(endpoint)
    return response
  rescue RateLimitExceeded
    raise if retries >= max_retries

    retries += 1
    # Add jitter to prevent thundering herd
    sleep_time = wait_time + rand(0.1..0.5)
    sleep(sleep_time)
    retry
  end
end
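
For example, make_request_with_retry('/searches/search_in_a_company') waits about five seconds between attempts, while make_request_with_retry('/accounts/single_enrich') waits about a minute. Note that this example uses a fixed, endpoint-specific wait on every retry; if you see repeated 429s, combine it with the exponential backoff sketched under Best Practices.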

Additional Considerations

  • Rate limits are applied per API key
  • Each endpoint’s rate limit is tracked separately
  • Different endpoint types have different rate limit windows
  • Consider implementing separate queues for different endpoint types (a minimal sketch follows this list)
  • Monitor your usage patterns and adjust your implementation accordingly
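
For the queue suggestion above, one simple approach is to give each endpoint class its own queue and worker, each paced at that class's interval, so slow enrichment limits never hold up search traffic. The grouping, intervals, and client object below are illustrative assumptions, not part of the API:

# One queue per endpoint class, each drained at its own pace.
QUEUES = {
  enrichment: { interval: 30, queue: Queue.new }, # 2-per-minute endpoints
  search:     { interval: 5,  queue: Queue.new }  # conservative: slowest search limit
}

def start_worker(type, client)
  config = QUEUES.fetch(type)
  Thread.new do
    loop do
      endpoint = config[:queue].pop   # blocks until a request is enqueued
      client.call(endpoint)           # handle/record the response as needed
      sleep(config[:interval])        # pace this endpoint class only
    end
  end
end

# QUEUES[:enrichment][:queue] << '/accounts/single_enrich'
# QUEUES[:search][:queue]     << '/searches/search_in_a_company'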

For questions about rate limits or to request limit increases, please contact our support team.