Overview

The NativeMessage API enforces rate limits to ensure fair usage and system stability. The default limit is 200 requests per minute per tenant.
Rate limits apply per tenant account. Multiple API keys under the same tenant share the same rate limit pool.

Rate Limit Headers

Every API response includes rate limit information in the headers:
Header                   Description                              Example
X-RateLimit-Limit        Maximum requests allowed per minute      200
X-RateLimit-Remaining    Requests remaining in current window     195
X-RateLimit-Reset        Unix timestamp when the limit resets     1708000000
Example response headers:
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 195
X-RateLimit-Reset: 1708000000

Rate Limit Exceeded

When you exceed the rate limit, the API returns HTTP 429 (Too Many Requests) with this response body:
{
  "error": "rate limit exceeded"
}
The X-RateLimit-Reset header tells you when you can resume requests.
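A quick way to act on this (shown in Node.js, assuming a fetch Response object named response, as in the examples below):
Node.js
// Detect a 429 and work out how long to pause before retrying.
if (response.status === 429) {
  const resetAt = Number(response.headers.get('X-RateLimit-Reset')); // Unix seconds
  const waitMs = Math.max(resetAt * 1000 - Date.now(), 0);
  console.warn(`Rate limited; safe to retry in ~${Math.ceil(waitMs / 1000)}s`);
}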

Best Practices

1. Monitor Remaining Requests

Check X-RateLimit-Remaining before making large batches of requests:
Node.js
const response = await fetch('https://api-message.nativehub.live/api/v1/messages', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(message)
});

const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
const reset = Number(response.headers.get('X-RateLimit-Reset'));

console.log(`Remaining: ${remaining}, Resets at: ${new Date(reset * 1000)}`);

if (remaining < 10) {
  console.warn('Approaching rate limit!');
}
Python
import requests
from datetime import datetime

response = requests.post(
    'https://api-message.nativehub.live/api/v1/messages',
    headers={
        'Authorization': 'Bearer YOUR_API_TOKEN',
        'Content-Type': 'application/json'
    },
    json=message
)

remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
reset = int(response.headers.get('X-RateLimit-Reset', 0))

print(f'Remaining: {remaining}, Resets at: {datetime.fromtimestamp(reset)}')

if remaining < 10:
    print('Approaching rate limit!')

2. Use Bulk Endpoints

Send multiple messages in one request instead of individual calls.
❌ Inefficient (100 requests):
for (const recipient of recipients) {
  await fetch('https://api-message.nativehub.live/api/v1/messages', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_TOKEN',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      from: '+8801712345678',
      to: recipient,
      body: 'Hello!'
    })
  });
}
✅ Efficient (1 request):
await fetch('https://api-message.nativehub.live/api/v1/messages/bulk', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    messages: recipients.map(to => ({
      from: '+8801712345678',
      to,
      body: 'Hello!'
    }))
  })
});

3. Implement Exponential Backoff

When you hit rate limits, wait and retry with increasing delays:
Node.js
async function sendWithBackoff(message, maxRetries = 5) {
  let delay = 1000; // Start with 1 second

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://api-message.nativehub.live/api/v1/messages', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_API_TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(message)
    });

    if (response.ok) {
      return await response.json();
    }

    if (response.status === 429) {
      const resetTime = parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10);
      const waitMs = Math.max(resetTime * 1000 - Date.now(), delay);

      console.log(`Rate limited. Waiting ${waitMs}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitMs));
      delay *= 2; // Exponential backoff
      continue;
    }

    throw new Error(`Request failed: ${response.status}`);
  }

  throw new Error('Max retries exceeded');
}
Python
import time
import requests

def send_with_backoff(message, max_retries=5):
    delay = 1  # Start with 1 second

    for attempt in range(max_retries):
        response = requests.post(
            'https://api-message.nativehub.live/api/v1/messages',
            headers={
                'Authorization': 'Bearer YOUR_API_TOKEN',
                'Content-Type': 'application/json'
            },
            json=message
        )

        if response.ok:
            return response.json()

        if response.status_code == 429:
            reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
            wait_seconds = max(reset_time - time.time(), delay)

            print(f'Rate limited. Waiting {wait_seconds}s...')
            time.sleep(wait_seconds)
            delay *= 2  # Exponential backoff
            continue

        raise Exception(f'Request failed: {response.status_code}')

    raise Exception('Max retries exceeded')

4. Queue Requests Client-Side

Implement a request queue to control throughput:
Node.js
class RateLimitedClient {
  constructor(apiToken, maxRequestsPerMinute = 200) {
    this.apiToken = apiToken;
    this.maxRequests = maxRequestsPerMinute;
    this.queue = [];
    this.requestCount = 0;
    this.resetTime = Date.now() + 60000;
  }

  async send(message) {
    return new Promise((resolve, reject) => {
      this.queue.push({ message, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0) return;

    const now = Date.now();
    if (now >= this.resetTime) {
      this.requestCount = 0;
      this.resetTime = now + 60000;
    }

    if (this.requestCount >= this.maxRequests) {
      const waitMs = this.resetTime - now;
      setTimeout(() => this.processQueue(), waitMs);
      return;
    }

    const { message, resolve, reject } = this.queue.shift();
    this.requestCount++;

    try {
      const response = await fetch('https://api-message.nativehub.live/api/v1/messages', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${this.apiToken}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(message)
      });

      const data = await response.json();
      resolve(data);
    } catch (error) {
      reject(error);
    }

    // Process next item
    if (this.queue.length > 0) {
      setImmediate(() => this.processQueue());
    }
  }
}

// Usage
const client = new RateLimitedClient('YOUR_API_TOKEN');

for (const recipient of recipients) {
  client.send({
    from: '+8801712345678',
    to: recipient,
    body: 'Hello!'
  });
}
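Each send() call returns a promise that resolves with the API response, so the loop above fires requests without waiting for results. To collect them (or surface failures), you could, for example, write:
Node.js
const results = await Promise.all(
  recipients.map(to => client.send({ from: '+8801712345678', to, body: 'Hello!' }))
);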

Enterprise Rate Limits

Need higher limits? Contact support for enterprise rate limit increases:
  • Standard: 200 requests/minute
  • Professional: 500 requests/minute
  • Enterprise: 2,000+ requests/minute
Email [email protected] with your use case and estimated traffic.

Rate Limit Calculation

Rate limits reset on a rolling window basis:
Time 00:00 → Requests 1-200 sent (all allowed)
Time 00:01 → Request 201 sent (rate limited)
Time 01:00 → Request 1 from 00:00 expires from the window
Time 01:00 → Request 201 can be retried (199 earlier requests still count, so one slot is free)
Each request “expires” from the count 60 seconds after it was made.
Rate limits are enforced per tenant, not per API key. Using multiple keys under the same tenant does not increase your limit.
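If you want to mirror this behavior on the client, a minimal rolling-window counter could look like the sketch below (illustrative only; the class name and defaults are not part of the API):
Node.js
// Sketch of a client-side rolling-window counter.
// Each recorded timestamp "expires" 60 seconds after it was made,
// matching how the server counts requests.
class RollingWindowCounter {
  constructor(limit = 200, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if another request fits in the current window and records it.
  tryAcquire(now = Date.now()) {
    // Drop timestamps older than the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}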