
Rate Limits Guide

Learn how to work within API rate limits and optimize your request patterns.

Rate Limit Overview

Limit Type         | Value  | Per
Requests           | 1000   | Minute
Bulk emails        | 100    | Request
Total emails (CSV) | 10,000 | Upload

How Rate Limiting Works

GTMAPIs uses a sliding window rate limiter:
  • Tracks requests per API key
  • 1000 requests allowed per 60-second window
  • Window slides continuously (not reset at fixed intervals)

Example Timeline

00:00 - Make 1000 requests ✅
00:01 - Try request #1001 ❌ Rate limited
00:59 - Still rate limited ❌
01:00 - Requests from 00:00 expire, new requests allowed ✅
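
Conceptually, the limiter counts how many of your request timestamps fall inside the trailing 60-second window and only allows a request while that count is below 1000. A minimal sketch of the idea (illustrative only, not GTMAPIs' server-side implementation):
// Illustrative only: count timestamps inside the trailing window
function isAllowed(requestTimestamps, now = Date.now(), limit = 1000, windowMs = 60000) {
  const recent = requestTimestamps.filter(time => now - time < windowMs);
  return recent.length < limit;
}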

Rate Limit Headers

Every response includes rate limit information:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1704196800
Header                | Description
X-RateLimit-Limit     | Total requests allowed per window
X-RateLimit-Remaining | Requests remaining in current window
X-RateLimit-Reset     | Unix timestamp when window resets

Reading Rate Limit Headers

const response = await fetch('https://api.gtmapis.com/v1/validate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': process.env.GTMAPIS_API_KEY
  },
  body: JSON.stringify({ email: 'john@company.com' })
});

// Check rate limit status
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'));
const reset = parseInt(response.headers.get('X-RateLimit-Reset'));

if (remaining < 100) {
  const resetDate = new Date(reset * 1000);
  console.warn(`Low rate limit: ${remaining} requests remaining until ${resetDate}`);
}

Rate Limit Exceeded

When you exceed the rate limit, you’ll receive a 429 response:
{
  "error": "Rate limit exceeded",
  "message": "You have exceeded the rate limit of 1000 requests per minute",
  "retry_after": 60
}

Handling Rate Limits

async function validateWithRateLimit(email) {
  try {
    const response = await fetch('https://api.gtmapis.com/v1/validate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': process.env.GTMAPIS_API_KEY
      },
      body: JSON.stringify({ email })
    });

    if (response.status === 429) {
      // Extract retry-after header
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60');

      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);

      // Wait before retrying
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));

      // Retry the request (in production, cap the number of retries to avoid unbounded recursion)
      return await validateWithRateLimit(email);
    }

    return await response.json();
  } catch (error) {
    console.error('Validation error:', error);
    throw error;
  }
}

Optimization Strategies

1. Use Bulk Endpoint

Process 100 emails per request instead of 1:
// ❌ Bad: 1000 requests for 1000 emails
for (const email of emails) {
  await validateSingle(email);  // 1000 API calls
}

// ✅ Good: 10 requests for 1000 emails
const chunks = chunkArray(emails, 100);
for (const chunk of chunks) {
  await validateBulk(chunk);  // 10 API calls
}
Savings: 100x fewer requests
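
chunkArray and validateBulk above are your own helpers rather than part of the API; a simple chunkArray might look like this:
// Split an array into fixed-size chunks (the last chunk may be smaller than `size`)
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}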

2. Deduplicate Before Validation

Remove duplicates to reduce API calls:
// Remove duplicates (case-insensitive)
const uniqueEmails = [...new Set(
  emails.map(e => e.toLowerCase())
)];

console.log(`Removed ${emails.length - uniqueEmails.length} duplicates`);

// Validate unique emails only
const results = await validateBulk(uniqueEmails);

3. Cache Validation Results

Cache results to avoid re-validating:
const cache = new Map();
const CACHE_TTL = 7 * 24 * 60 * 60 * 1000;  // 7 days

async function validateWithCache(email) {
  const cacheKey = email.toLowerCase();

  // Check cache
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.result;
  }

  // Validate
  const result = await validateEmail(email);

  // Cache result
  cache.set(cacheKey, {
    result,
    timestamp: Date.now()
  });

  return result;
}

4. Implement Request Throttling

Limit concurrent requests:
import pLimit from 'p-limit';

// Limit to 10 concurrent requests
const limit = pLimit(10);

const results = await Promise.all(
  emailBatches.map(batch =>
    limit(() => validateBulk(batch))
  )
);

5. Add Delays Between Batches

Spread requests over time:
async function validateWithDelay(emails, delayMs = 500) {
  const results = [];

  for (let i = 0; i < emails.length; i += 100) {
    const batch = emails.slice(i, i + 100);

    const batchResults = await validateBulk(batch);
    results.push(...batchResults);

    // Delay before next batch (except last)
    if (i + 100 < emails.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  return results;
}

Rate Limiter Library

Use a rate limiting library for automatic management:

Using Bottleneck

import Bottleneck from 'bottleneck';

// Configure limiter
const limiter = new Bottleneck({
  maxConcurrent: 10,      // Max concurrent requests
  minTime: 67,            // Min 67ms between requests (900/min with safety margin)
  reservoir: 900,         // Start with 900 tokens
  reservoirRefreshAmount: 900,
  reservoirRefreshInterval: 60 * 1000  // Refresh every minute
});

// Wrap API function
const validateEmail = limiter.wrap(async (email) => {
  const response = await fetch('https://api.gtmapis.com/v1/validate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.GTMAPIS_API_KEY
    },
    body: JSON.stringify({ email })
  });

  return await response.json();
});

// Use normally - limiter handles rate limiting
const results = await Promise.all(
  emails.map(email => validateEmail(email))
);

Using a Custom Rate Limiter

class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.requests = [];
  }

  async waitForSlot() {
    // Remove expired requests
    const now = Date.now();
    this.requests = this.requests.filter(time => now - time < this.windowMs);

    // Check if we're at limit
    if (this.requests.length >= this.maxRequests) {
      // Wait until oldest request expires
      const oldestRequest = Math.min(...this.requests);
      const waitTime = this.windowMs - (now - oldestRequest);

      console.log(`Rate limit reached. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));

      return this.waitForSlot();  // Recursive check
    }

    // Record this request
    this.requests.push(now);
  }

  async execute(fn) {
    await this.waitForSlot();
    return await fn();
  }
}

// Usage
const limiter = new RateLimiter(900, 60000);  // 900 requests per minute

async function validateWithLimiter(email) {
  return await limiter.execute(() => validateEmail(email));
}

Monitoring Rate Limit Usage

Track Request Counts

class RateLimitMonitor {
  constructor() {
    this.requestCount = 0;
    this.windowStart = Date.now();
  }

  recordRequest() {
    const now = Date.now();

    // Reset window if needed
    if (now - this.windowStart >= 60000) {
      console.log(`Previous minute: ${this.requestCount} requests`);
      this.requestCount = 0;
      this.windowStart = now;
    }

    this.requestCount++;

    // Warn if approaching limit
    if (this.requestCount >= 900) {
      console.warn(`Approaching rate limit: ${this.requestCount}/1000 requests`);
    }
  }

  getStats() {
    const elapsed = Math.max(Date.now() - this.windowStart, 1);  // avoid divide-by-zero in the rate calculation
    const remaining = Math.max(60000 - elapsed, 0);

    return {
      requests: this.requestCount,
      elapsed: Math.floor(elapsed / 1000),
      remaining: Math.ceil(remaining / 1000),
      rate: Math.floor((this.requestCount / elapsed) * 60000)
    };
  }
}

const monitor = new RateLimitMonitor();

async function validateWithMonitoring(email) {
  monitor.recordRequest();

  const result = await validateEmail(email);

  // Log stats every 100 requests
  if (monitor.requestCount % 100 === 0) {
    console.log('Rate limit stats:', monitor.getStats());
  }

  return result;
}

Alert on Rate Limit Issues

let rateLimitErrors = 0;

async function validateWithAlerts(email) {
  try {
    const response = await fetch('https://api.gtmapis.com/v1/validate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': process.env.GTMAPIS_API_KEY
      },
      body: JSON.stringify({ email })
    });

    if (response.status === 429) {
      rateLimitErrors++;

      // Alert if frequent rate limiting
      if (rateLimitErrors >= 5) {
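        // sendAlert is a placeholder for your own notification hook (Slack, email, PagerDuty, etc.)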
        sendAlert({
          message: 'Frequent rate limiting detected',
          count: rateLimitErrors,
          action: 'Increase delays or upgrade plan'
        });
      }

      throw new Error('Rate limited');
    }

    // Reset counter on success
    rateLimitErrors = 0;

    return await response.json();
  } catch (error) {
    console.error('Validation error:', error);
    throw error;
  }
}

Best Practices

For High-Volume Applications

  1. Use bulk endpoint for all multi-email validations
  2. Implement caching to avoid re-validating
  3. Add delays between batches (500ms recommended)
  4. Monitor rate limit headers to adjust dynamically
  5. Implement backoff when approaching limits (points 4 and 5 are sketched after this list)
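
A minimal sketch of points 4 and 5, reading the X-RateLimit-Remaining and X-RateLimit-Reset headers documented above; the 50-request threshold and fallback values are assumptions to tune for your traffic:
async function validateWithHeaderBackoff(email) {
  const response = await fetch('https://api.gtmapis.com/v1/validate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.GTMAPIS_API_KEY
    },
    body: JSON.stringify({ email })
  });

  // Read the remaining request budget from the response headers
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '1000', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10);

  // Back off until the window resets when fewer than 50 requests remain
  if (remaining < 50) {
    const waitMs = Math.max(reset * 1000 - Date.now(), 1000);
    console.warn(`Approaching rate limit, pausing for ${Math.ceil(waitMs / 1000)}s`);
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }

  return await response.json();
}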

For Background Jobs

  1. Process in smaller batches over longer time
  2. Use job queues (BullMQ, Celery) for async processing
  3. Implement retry logic with exponential backoff (sketched after this list)
  4. Track progress to resume on failure
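
A minimal exponential-backoff sketch for point 3, assuming a validateEmail helper (like the ones above) that throws on failure; the one-second base delay and five-attempt cap are illustrative:
async function validateWithExponentialBackoff(email, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await validateEmail(email);
    } catch (error) {
      // Give up after the final attempt
      if (attempt === maxRetries - 1) throw error;

      // Wait 1s, 2s, 4s, 8s, ... between attempts
      const delayMs = 1000 * 2 ** attempt;
      console.log(`Attempt ${attempt + 1} failed, retrying in ${delayMs}ms`);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}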

For Real-Time Validation

  1. Cache aggressively (7-day TTL recommended)
  2. Validate on blur instead of on every keystroke
  3. Debounce input to reduce API calls (see the sketch after this list)
  4. Queue validations instead of parallel requests
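
A small debounce sketch for point 3; emailInput (a text input element) and the validateWithCache helper from the caching section are assumptions, and the 300ms delay is just a starting point:
// Delay a function call until the user has stopped typing for `delayMs`
function debounce(fn, delayMs = 300) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

const debouncedValidate = debounce(email => validateWithCache(email), 300);

// Validate at most once per pause in typing
emailInput.addEventListener('input', event => debouncedValidate(event.target.value));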

Increasing Rate Limits

Need higher limits? Contact us at matt@closedwonleads.com with:
  • Your use case
  • Expected request volume
  • Current rate limit issues
We offer custom enterprise plans with:
  • Higher rate limits (10,000+ req/min)
  • Dedicated infrastructure
  • Priority support
  • Custom SLAs

Next Steps