Rate Limits
The Mockupanda API uses rate limits to ensure fair usage and system stability. This page explains how rate limits work and how to stay within them.
Current Limits
Each API key has these limits:
| Limit Type | Value | Window |
|------------|-------|--------|
| Per Minute | 100 requests | 60 seconds |
| Per Day | 10,000 requests | 24 hours |
These limits apply per API key, not per account. Using multiple API keys increases your total capacity.
How Rate Limiting Works
Rate limits use a sliding window algorithm:
- Every request increments your counter
- The counter resets at the end of each window
- Exceeding the limit returns a 429 Too Many Requests error
Example
If you make 100 requests in 30 seconds:
- First 30s: 100 requests (allowed)
- Next 30s: 0 requests allowed (limit reached)
- After 60s: Counter resets, 100 more requests allowed
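Conceptually, a sliding-window limiter keeps the timestamp of each recent request and counts only those made within the trailing 60 seconds. The sketch below is purely illustrative (it is not Mockupanda's server implementation); the 100-request limit mirrors the per-minute limit above:
// Illustrative sliding-window check, not the API's actual server code
class SlidingWindowCounter {
  constructor(limit = 100, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }
  isAllowed() {
    const now = Date.now();
    // Drop requests that have aged out of the trailing window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false; // the API would answer 429 Too Many Requests
    }
    this.timestamps.push(now);
    return true;
  }
}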
Rate Limit Headers
Every API response includes rate limit headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640995200
| Header | Description |
|--------|-------------|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Remaining requests in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
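If you want to pause until the current window resets, X-RateLimit-Reset can be converted into a wait time. A minimal sketch, assuming response is any fetch() Response from the API:
// Milliseconds until the rate limit window resets (0 if it has already reset)
function msUntilReset(response) {
  const reset = Number(response.headers.get('X-RateLimit-Reset')); // Unix seconds
  const nowSeconds = Math.floor(Date.now() / 1000);
  return Math.max(0, (reset - nowSeconds) * 1000);
}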
Checking Remaining Requests
curl -X GET https://mockupanda.com/api/v1/templates \
-H "Authorization: Bearer YOUR_API_KEY" \
  -I
Response headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640995200
Rate Limit Exceeded
When you exceed the rate limit, you'll receive a 429 error:
{
"error": "Rate limit exceeded",
"code": "RATE_LIMIT_EXCEEDED",
"details": {
"limit": 100,
"window": "60s",
"retry_after": 23
}
}
The retry_after field tells you how many seconds to wait before retrying.
Response Headers
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 23
Handling Rate Limits
1. Monitor Rate Limit Headers
Check headers after each request:
async function generateMockup(params) {
  const response = await fetch('https://mockupanda.com/api/v1/mockups/generate', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(params)
  });
  // Check rate limit headers
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const reset = Number(response.headers.get('X-RateLimit-Reset'));
  console.log(`Remaining: ${remaining}, Resets at: ${new Date(reset * 1000)}`);
  if (remaining < 10) {
    console.warn('Approaching rate limit!');
  }
  return response;
}
2. Implement Exponential Backoff
Retry with increasing delays:
// Used by the retry examples below: resolve after `ms` milliseconds
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function requestWithBackoff(params, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await generateMockup(params);
    // fetch() does not reject on HTTP errors, so check the status directly
    if (response.status !== 429) {
      return response;
    }
    // Prefer the server's retry_after hint, fall back to exponential backoff
    const body = await response.json();
    const retryAfter = body.details?.retry_after || Math.pow(2, attempt);
    console.log(`Rate limited. Retrying in ${retryAfter}s...`);
    await sleep(retryAfter * 1000);
  }
  throw new Error('Max retries exceeded');
}
3. Use a Rate Limiter
Queue requests to stay under the limit:
class RateLimiter {
constructor(maxPerMinute = 100) {
this.maxPerMinute = maxPerMinute;
this.queue = [];
this.requestTimes = [];
}
async schedule(fn) {
// Remove requests older than 60 seconds
const now = Date.now();
this.requestTimes = this.requestTimes.filter(t => now - t < 60000);
// Wait if we're at the limit
while (this.requestTimes.length >= this.maxPerMinute) {
const oldestRequest = this.requestTimes[0];
const waitMs = 60000 - (now - oldestRequest);
await sleep(waitMs);
this.requestTimes.shift();
}
// Execute request
this.requestTimes.push(Date.now());
return await fn();
}
}
const limiter = new RateLimiter(100);
// Use it
await limiter.schedule(() => generateMockup(params));
4. Distribute Load Across Keys
Use multiple API keys for higher throughput:
const API_KEYS = [
'mpa_key1',
'mpa_key2',
'mpa_key3'
];
let keyIndex = 0;
function getNextKey() {
  const key = API_KEYS[keyIndex];
  keyIndex = (keyIndex + 1) % API_KEYS.length;
  return key;
}
async function generateMockup(params) {
  const apiKey = getNextKey();
  return await fetch('https://mockupanda.com/api/v1/mockups/generate', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(params)
  });
}
Best Practices
Cache Responses
Cache mockups to reduce repeated requests:
const cache = new Map();
async function getCachedMockup(params) {
const cacheKey = JSON.stringify(params);
if (cache.has(cacheKey)) {
console.log('Returning cached mockup');
return cache.get(cacheKey);
}
const mockup = await generateMockup(params);
cache.set(cacheKey, mockup);
return mockup;
}
Batch Operations
Process mockups in batches:
async function generateMockupsBatch(paramsArray, batchSize = 10) {
const results = [];
for (let i = 0; i < paramsArray.length; i += batchSize) {
const batch = paramsArray.slice(i, i + batchSize);
const batchResults = await Promise.all(
batch.map(params => generateMockup(params))
);
results.push(...batchResults);
// Pause between batches
if (i + batchSize < paramsArray.length) {
await sleep(1000);
}
}
return results;
}
Use Background Jobs
For large workloads, use a job queue:
// With Bull queue (Node.js)
const Queue = require('bull');
const mockupQueue = new Queue('mockups');
// Add jobs
await mockupQueue.add({ template_id: 'bedroom-poster-01', artwork_url: '...' });
// Process with rate limiting
mockupQueue.process(10, async (job) => { // 10 concurrent workers
return await generateMockup(job.data);
});
Increasing Rate Limits
Higher rate limits are available for high-volume customers. Contact support if you need:
- More than 100 requests/minute per key
- More than 10,000 requests/day per key
- Dedicated infrastructure for enterprise workloads
Include your expected usage pattern and we'll provide a custom quote.
Fair Use Policy
Rate limits ensure fair access for all customers. Violating the fair use policy may result in:
- Temporary rate limit reductions
- API key suspension
- Account termination
Prohibited behaviors:
- Credential sharing: Don't share API keys between companies
- Abuse: Don't intentionally bypass rate limits
- Reselling: Don't resell raw API access (build a product instead)
Monitoring
Track your rate limit usage:
const stats = {
  requests: 0,
  rateLimitHits: 0,
  usedThisWindow: 0
};
function trackRequest(response) {
  stats.requests++;
  // Requests consumed in the current 60-second window
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  stats.usedThisWindow = 100 - remaining;
  if (response.status === 429) {
    stats.rateLimitHits++;
    console.warn('Rate limit hit:', stats);
  }
}