UniLink API Rate Limits (Understand and Handle Throttling)

UniLink enforces per-hour request limits to keep the API fast and fair — understanding the limit headers and retry strategy keeps your integration running smoothly.
- Limits are 100 req/hour (Free), 1,000 req/hour (Pro), and 10,000 req/hour (Business).
- Every response includes X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers.
- When you hit the limit, the API returns 429 — use exponential backoff before retrying.
Rate limiting is how UniLink protects the API's performance and availability for all users. Without limits, a single runaway script could degrade the experience for every other developer using the platform. UniLink's rate limit system is transparent — every response tells you exactly how many requests you have left and when your quota resets — so building integrations that stay inside the limits is straightforward once you know where to look.
What Rate Limiting Does
Every UniLink API key is subject to a rolling hourly request limit. The counter resets once per hour from the time of your first request in that window. Free plan accounts receive 100 requests per hour, which is enough for manual testing and low-frequency automations. Pro plan accounts receive 1,000 requests per hour, covering most CRM syncs, content management scripts, and analytics pulls. Business plan accounts receive 10,000 requests per hour, supporting high-volume agency workflows, bulk imports, and real-time data pipelines.
Every API response — whether successful or an error — includes three rate limit headers. X-RateLimit-Limit shows the total requests allowed in the current window. X-RateLimit-Remaining shows how many requests you can still make before being throttled. X-RateLimit-Reset is a Unix timestamp indicating when the window resets and your full quota is restored. Reading these headers before each request is the right way to implement proactive throttling in your application — slow down or pause when remaining drops below a safety threshold rather than waiting for a 429.
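A proactive throttle needs nothing more than these three headers. The sketch below is illustrative Python, not an official SDK: `headers` stands in for whatever your HTTP client exposes (e.g. `response.headers`), and the `should_pause` helper and 10% safety threshold are choices for this example.

```python
import time

# Illustrative sketch of proactive throttling based on the rate limit
# headers returned on every response. `headers` stands in for your
# HTTP client's response headers.

def should_pause(headers, safety_fraction=0.10):
    """Return seconds to sleep (0 if none needed) based on rate limit headers."""
    limit = int(headers["X-RateLimit-Limit"])
    remaining = int(headers["X-RateLimit-Remaining"])
    reset_at = int(headers["X-RateLimit-Reset"])  # absolute Unix timestamp
    if remaining <= limit * safety_fraction:
        # Pause until the window resets instead of risking a 429.
        return max(0, reset_at - int(time.time()))
    return 0

# Example: 5 of 100 requests left, window resets ~30 seconds from now.
headers = {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "5",
    "X-RateLimit-Reset": str(int(time.time()) + 30),
}
sleep_for = should_pause(headers)  # roughly 30 seconds
```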
When you exceed the rate limit, the API returns HTTP 429 Too Many Requests. The response body contains an error object: {"error": "rate_limit_exceeded", "message": "Too many requests. Retry after X seconds.", "retry_after": 1234567890}. The retry_after field is the same Unix timestamp as X-RateLimit-Reset. Do not retry immediately — sleep until the reset time, then resume with normal request pacing. Retrying too fast while rate limited does not consume additional quota but does generate unnecessary 429 responses.
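Because retry_after is an absolute Unix timestamp rather than a duration (despite the wording of the example message), the wait has to be computed against the current clock. A minimal Python sketch, using a hypothetical `seconds_until_reset` helper:

```python
import time

# `retry_after` in the 429 body is an absolute Unix timestamp (the same
# value as X-RateLimit-Reset), so compute a delta before sleeping.

def seconds_until_reset(error_body, now=None):
    """Seconds to wait before retrying, given the parsed 429 JSON body."""
    now = int(time.time()) if now is None else now
    return max(0, int(error_body["retry_after"]) - now)

body = {
    "error": "rate_limit_exceeded",
    "message": "Too many requests. Retry after X seconds.",
    "retry_after": 1_700_000_120,
}
wait = seconds_until_reset(body, now=1_700_000_000)  # 120 seconds
```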
How to Get Started
- Make any API request and inspect the response headers. Every response includes X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset regardless of the endpoint or HTTP method used.
- Add header-reading logic to your HTTP client. In most libraries this is a one-time setup — store the remaining count and reset timestamp from each response in a shared variable your request function checks before firing.
- Implement a pre-request check: if X-RateLimit-Remaining is 0, sleep until X-RateLimit-Reset before sending the next request.
- Add a 429 handler to your error handling code. When you receive a 429, read retry_after from the response body (or X-RateLimit-Reset from the headers) and wait until that time before retrying.
- Test your backoff logic deliberately by sending more requests than your plan allows in a short period in a development environment. Confirm your code pauses and then resumes correctly.
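The steps above can be sketched as a single request wrapper. This is illustrative Python, not an official UniLink SDK: `send` stands in for any HTTP call that returns an object with `status_code`, `headers`, and `json()`, and the small fake transport at the end exists only to demonstrate the flow without a network.

```python
import time

# Illustrative rate-limit-aware wrapper. Injectable sleep/clock keep
# the sketch testable; in production the defaults are used.

class RateLimitedClient:
    def __init__(self, send, sleep=time.sleep, clock=time.time):
        self.send = send
        self.sleep = sleep
        self.clock = clock
        self.remaining = None  # unknown until the first response arrives
        self.reset_at = 0

    def request(self, *args, **kwargs):
        # Pre-request check: quota exhausted -> sleep until the reset time.
        if self.remaining == 0:
            self.sleep(max(0, self.reset_at - self.clock()))
        resp = self.send(*args, **kwargs)
        # Update shared state from headers on every response.
        self.remaining = int(resp.headers["X-RateLimit-Remaining"])
        self.reset_at = int(resp.headers["X-RateLimit-Reset"])
        if resp.status_code == 429:
            # Wait until retry_after (a Unix timestamp), then retry.
            self.sleep(max(0, int(resp.json()["retry_after"]) - self.clock()))
            self.remaining = None  # quota state unknown after the pause
            return self.request(*args, **kwargs)
        return resp

# Fake transport to demonstrate the flow: the first call is throttled,
# the retry succeeds.
class _Fake:
    def __init__(self, status, remaining, reset_at, retry_after=None):
        self.status_code = status
        self.headers = {"X-RateLimit-Remaining": str(remaining),
                        "X-RateLimit-Reset": str(reset_at)}
        self._retry_after = retry_after

    def json(self):
        return {"retry_after": self._retry_after}

responses = [_Fake(429, 0, 100, retry_after=100), _Fake(200, 99, 100)]
slept = []
client = RateLimitedClient(send=lambda: responses.pop(0),
                           sleep=slept.append, clock=lambda: 40)
result = client.request()  # sleeps 60 fake seconds, then returns the 200
```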
How to Handle Rate Limits in Code
- Read headers after every request. Update a shared remaining counter and reset_at timestamp from the response headers on every successful call, not just on errors.
- Implement exponential backoff for 429 responses. On the first retry, wait 1 second. On the second retry, wait 2 seconds. On the third, wait 4 seconds. Cap at 60 seconds. Add random jitter (±25%) to prevent thundering-herd behavior when multiple instances restart simultaneously.
- Use bulk endpoints for high-volume operations. Instead of creating 50 contacts with 50 individual POST requests, use the bulk contact import endpoint available on Pro and Business plans to send them all in a single request.
- Queue and pace requests in batch jobs. If you are processing a large export, add a small delay between requests (for example, 1 request per 3.6 seconds on Free) to stay comfortably inside the hourly limit without needing to handle 429 errors at all.
- Monitor your usage over time. The dashboard shows request counts per API key. If a key is consistently near its limit, either optimize your integration to make fewer requests or upgrade your plan.
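The backoff schedule described above (1 s, 2 s, 4 s, capped at 60 s, with ±25% jitter) can be expressed as a small helper. The function name and defaults are choices for this sketch, not part of the UniLink API:

```python
import random

# Exponential backoff with multiplicative jitter: doubles each attempt,
# caps at 60 s, and randomizes +/-25% to desynchronize parallel workers.

def backoff_delay(attempt, base=1.0, cap=60.0, jitter=0.25, rng=random):
    """Delay in seconds before retry number `attempt` (0-based)."""
    delay = min(cap, base * (2 ** attempt))
    # Multiplicative jitter in [1 - jitter, 1 + jitter].
    return delay * rng.uniform(1 - jitter, 1 + jitter)

delays = [backoff_delay(a) for a in range(8)]  # attempts 0..7
```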
Key Settings
| Setting | What It Does | Recommended |
|---|---|---|
| Plan Tier | Determines the hourly request limit (100 / 1,000 / 10,000) | Match plan to your integration's peak request rate with 30% headroom |
| X-RateLimit-Remaining | Remaining requests in the current window | Slow down or pause when this drops below 10% of limit |
| X-RateLimit-Reset | Unix timestamp when the window resets | Use as the sleep target when pausing due to exhaustion |
| Bulk Endpoints | Allows sending multiple resources in one request | Use for imports of 10+ items to minimize request count |
| Retry Jitter | Random delay added to backoff intervals | Add ±25% jitter to all retry delays to prevent synchronized retries |
Use webhooks instead of polling GET endpoints in a loop. A webhook costs zero API requests and delivers data instantly — replacing a 1-minute polling loop with a webhook alone can cut your hourly request count by over 90%.
Get the Most Out of Rate Limit Management
Track your rate limit usage per integration, not per account. If you have five API keys and one of them is consuming 80% of your hourly quota, that integration is a candidate for optimization. Look for polling loops that could be replaced with webhooks, requests that fetch more data than needed (over-fetching), or missing pagination — fetching page after page when you only need the first few results.
Exponential backoff with jitter is the industry-standard approach to handling 429 errors, and it matters especially in multi-instance deployments. If you run three worker processes and all of them hit the rate limit at the same time, a fixed retry delay will cause all three to retry simultaneously and immediately exceed the limit again. Adding random jitter spreads the retries across time so the load recovers gracefully.
Request batching is the single most effective optimization for high-volume workflows. The UniLink bulk endpoints accept arrays of resources — contacts, pages, and orders can all be submitted in batches. Sending 100 records in 1 request instead of 100 separate requests cuts that operation's quota consumption by 99% (100× fewer requests). Bulk endpoints are available on Pro and Business plans and are documented in the API reference for each resource type.
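In practice, batching is mostly a matter of chunking your records before sending. The sketch below is illustrative Python; the `/v1/contacts/bulk` route in the comment is a placeholder, not a documented UniLink path — check the API reference for the actual bulk import endpoint for each resource type.

```python
# Chunk records locally, then submit each chunk in one bulk request.

def chunked(items, size=100):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

contacts = [{"email": f"user{i}@example.com"} for i in range(250)]
batches = list(chunked(contacts, size=100))  # 3 requests instead of 250

# for batch in batches:
#     http_post("/v1/contacts/bulk", json={"contacts": batch})  # placeholder route
```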
Consider using different API keys for different priority levels in your system. For example, use one key for your customer-facing real-time features (page data, checkout) and a separate key for background batch jobs (analytics exports, CRM syncs). If the background key gets rate limited, it does not affect the customer-facing key's quota at all, and your end-user experience stays reliable.
Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| Consistent 429 errors despite low request count | Multiple application instances sharing one key | Count requests across all instances combined — use a distributed counter or switch to a higher plan |
| Rate limit resets slower than expected | Misreading X-RateLimit-Reset as relative seconds instead of Unix timestamp | Convert the reset value using new Date(reset * 1000) in JS or datetime.fromtimestamp(reset) in Python |
| Backoff loop never exits | Retry logic does not check remaining correctly after sleep | Re-read the X-RateLimit-Remaining header after waking up — do not assume the quota is full just because the reset time passed |
| Hitting Free limit within minutes | Polling loop or missing pagination causing excessive requests | Replace polling with webhooks; add a delay between paginated requests; upgrade to Pro if the use case genuinely requires more requests |
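The second row of the table deserves a worked example, since misreading X-RateLimit-Reset as relative seconds is a common bug. In Python:

```python
import time
from datetime import datetime, timezone

# X-RateLimit-Reset is an absolute Unix timestamp, not "seconds from
# now". Convert it explicitly before displaying it or sleeping on it.
reset = 1_700_000_000  # example header value

reset_dt = datetime.fromtimestamp(reset, tz=timezone.utc)  # 2023-11-14 22:13:20 UTC
seconds_left = max(0, reset - int(time.time()))  # 0 once the window has passed
```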
Strengths and Limitations
- Transparent headers on every response make it easy to build proactive rate-limit handling
- Business plan limit (10,000 req/hour) is sufficient for most high-volume agency workflows
- Bulk endpoints dramatically reduce request count for batch operations
- Webhook integration eliminates polling-based request overhead entirely
- Free plan limit (100 req/hour) is very low for anything beyond manual testing
- Rate limits are per-account, not per-endpoint — a noisy background job affects all API calls on the same key
- No burst allowance — you cannot temporarily exceed your plan limit even briefly
Frequently Asked Questions
What happens when I exceed the rate limit?
The API returns HTTP 429 Too Many Requests with a JSON body containing retry_after — a Unix timestamp indicating when your quota resets. Stop sending requests until that time, then resume normally.
Do rate limits apply to all endpoints equally?
Yes — every request to any endpoint counts against the same hourly quota for that API key. There is no per-endpoint limit, but bulk endpoints let you accomplish more work per request.
Does the rate limit reset at a fixed time each hour?
No. The window starts from your first request in the period and runs for 60 minutes from that point. The exact reset time is always visible in the X-RateLimit-Reset header.
Can I get a temporary rate limit increase?
Yes. Contact UniLink support with your use case. Temporary increases are available for migrations and large one-time imports. For ongoing high-volume needs, upgrading to Business plan is the recommended path.
Do failed requests (4xx errors) count against the limit?
Yes, all requests count against the rate limit, including those that return 4xx errors. This is why it is important to fix authentication and validation issues quickly rather than retrying in a tight loop.
Key Takeaways
- Rate limits are 100/hour (Free), 1,000/hour (Pro), and 10,000/hour (Business) per API key.
- Read X-RateLimit-Remaining and X-RateLimit-Reset headers after every response to stay ahead of the limit.
- A 429 response means you are out of quota — wait until retry_after before sending more requests.
- Use exponential backoff with random jitter when retrying after a 429.
- Replace polling loops with webhooks and use bulk endpoints to minimize request consumption.
Need higher rate limits for your integration? Upgrade your UniLink plan at app.unilink.us under Settings → Billing to unlock up to 10,000 requests per hour.