SITECORE EXPERIENCE EDGE 429 RATE LIMITS: Patterns That Actually Work in Production
INTRODUCTION
Across the projects I’ve worked on, this little log line shows up sooner or later on almost every XM Cloud build:
HTTP 429 Too Many Requests
X-Rate-Limit-Limit: 80

Experience Edge ships with a fair-use guardrail of 80 uncached requests per second per tenant, a deliberate design choice that keeps multi-tenant Edge fast for everyone. Apps that respect this guardrail run beautifully. Apps that don’t can see ISR revalidations queue up and Vercel logs grow noisy during traffic surges. The good news: the patterns to stay well under it are well-established.
This blog is the playbook I give every team when they hit this. Six patterns that actually work, with the exact code and configuration needed to ship them. Every pattern below is verified against the Sitecore Accelerate Cookbook and the official Experience Edge documentation.
The Bit Most Teams Miss
The Sitecore docs explain the 80 req/sec number plainly. What they don’t tell you is how that number actually behaves in real projects, which is what trips teams up. Three things I’ve learned to remind every team I work with:
- It is per tenant, not per page or per user. So a single popular page can soak the whole budget.
- It only counts uncached requests. The whole game is making sure as many of yours as possible are cached.
- The window resets every second. That means a burst of 200 calls in one second is far worse than 200 calls spread over five seconds. Steady throughput beats bursty traffic on this platform every time.
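Because the window resets every second, a small client-side budget tracker can smooth bursts before they ever reach Edge. This is a hypothetical sketch, not a Sitecore API: the EdgeBudget class and its per-second share are my own illustration. Callers that fail tryAcquire() would defer or queue the call instead of firing it.

```typescript
// Hypothetical per-second budget tracker for uncached Edge calls.
// `perSecond` is your app's chosen share of the 80 req/sec tenant limit;
// `now` is injectable so the window logic can be tested with a fake clock.
export class EdgeBudget {
  private windowStart = 0;
  private count = 0;

  constructor(
    private perSecond: number,
    private now: () => number = Date.now
  ) {}

  // Returns true if another uncached Edge call fits in the current 1s window.
  tryAcquire(): boolean {
    const t = this.now();
    if (t - this.windowStart >= 1000) {
      // New one-second window: reset the counter.
      this.windowStart = t;
      this.count = 0;
    }
    if (this.count >= this.perSecond) return false;
    this.count++;
    return true;
  }
}
```

The point of the sketch is the shape of the problem: a steady drip of calls always passes, while a burst inside one window gets pushed into the next second.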
Where my clients have hit this in practice:
- A popular page revalidates during a traffic surge, taking 50+ concurrent GraphQL calls with it.
- Wildcard routes (product details, article pages) fan out at build or during ISR.
- A sitemap with 10,000 URLs tries to refresh all at once.
With that context, here are the patterns.
Pattern 1: Start With SSG, Not SSR
If I have time for one conversation with a team about Edge cost, this is it. SSR is the single biggest reason teams burn through their request budget. Every request hits Edge afresh, no matter how identical the page output. SSG flips the script: render once, serve from cache until revalidation fires.
If your XM Cloud Next.js app is still using getServerSideProps or unconfigured route handlers, move to SSG first. Everything else in this playbook compounds on top of it.
```typescript
// Content SDK - App Router SSG with revalidation
// app/[[...path]]/page.tsx
import { sitecoreClient } from 'lib/sitecore-client';

export const revalidate = 3600;

export async function generateStaticParams() {
  const sitemap = await sitecoreClient.getSiteMap();
  return sitemap.map((entry) => ({
    path: entry.path.split('/').filter(Boolean),
  }));
}

export default async function Page({ params }) {
  const route = await sitecoreClient.getRouteData({
    path: '/' + (params.path?.join('/') ?? ''),
    language: 'en',
  });
  // Substitute your app's own rendering component here.
  return <Layout route={route} />;
}
```

Gotcha: generateStaticParams called against a 10k-entry sitemap at build time is itself a burst of Edge calls. Run your first build against a limited sitemap in a preview deployment before flipping production.
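One way to tame that build-time burst is to cap how many paths you pre-render and let the rest fall through to on-demand ISR. This is a hedged sketch: topLevelParams and the SitemapEntry shape are illustrative, modelled on a sitemap whose entries expose a path field; adjust to whatever your client actually returns.

```typescript
// Hypothetical helper: pre-render only the first `maxPrerendered` sitemap
// entries at build time; everything else renders on first request via ISR.
interface SitemapEntry {
  path: string;
}

export function topLevelParams(
  sitemap: SitemapEntry[],
  maxPrerendered: number
): { path: string[] }[] {
  return sitemap
    .slice(0, maxPrerendered)
    .map((entry) => ({ path: entry.path.split('/').filter(Boolean) }));
}
```

In generateStaticParams you would return topLevelParams(sitemap, 200) (or whatever budget your build can afford), which turns a 10k-call burst into 200 calls plus lazy rendering.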
Pattern 2: Consolidate Queries with GraphQL Aliases
Walk through your component tree and count the GraphQL fetches a single page makes. On most projects I audit, the answer is somewhere between five and twelve, just for the chrome (header, footer, nav, search, meta, breadcrumbs). Each of those counts as a separate request against your budget.
GraphQL aliases let you merge them into a single request. This pattern is in the Sitecore Accelerate Cookbook for a reason.
Before – five queries, five requests:
```graphql
query GetHeader($path: String!) { item(path: $path) { ... } }
query GetFooter($path: String!) { item(path: $path) { ... } }
query GetNav($path: String!) { item(path: $path) { ... } }
query GetSearch($path: String!) { item(path: $path) { ... } }
query GetMeta($path: String!) { item(path: $path) { ... } }
```

After – one query with aliases, one request:
```graphql
query GetPageGlobals(
  $headerPath: String!
  $footerPath: String!
  $navPath: String!
  $searchPath: String!
  $metaPath: String!
) {
  header: item(path: $headerPath) { ...HeaderFields }
  footer: item(path: $footerPath) { ...FooterFields }
  nav: item(path: $navPath) { ...NavFields }
  search: item(path: $searchPath) { ...SearchFields }
  meta: item(path: $metaPath) { ...MetaFields }
}

fragment HeaderFields on Item {
  id
  name
  fields { name value }
}

fragment FooterFields on Item {
  id
  name
  fields { name value }
}
```

Five separate fetches become one fetch. On a 50-component page, this alone can reduce your Edge calls by 70-80%.
Pattern 3: Use Vercel Data Cache with fetch + revalidate + Tags
If I had to pick the one pattern that has saved my projects the most Edge calls, this is it. Vercel’s Data Cache sits between your app and Experience Edge by default when you deploy XM Cloud there. Most teams miss the opportunity this creates: with the right revalidate and tags on every fetch, the same GraphQL call gets served from cache across thousands of page renders. You go from “every request hits Edge” to “the first request hits Edge, the next 9,999 don’t.”
```typescript
// lib/fetch-edge.ts - wrap raw Edge calls through this helper.
// For most data fetching in Content SDK apps, prefer the built-in
// SitecoreClient methods (getRouteData, getSiteMap, etc.) since they
// handle auth, retries, and caching for you.
export async function fetchEdge(args) {
  const res = await fetch(process.env.SITECORE_EDGE_URL, {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      // sc_apikey expects your Edge Delivery API key, not the Context ID.
      'sc_apikey': process.env.SITECORE_EDGE_API_KEY,
    },
    body: JSON.stringify({ query: args.query, variables: args.variables }),
    next: {
      revalidate: args.revalidate ?? 3600,
      tags: args.tags ?? [],
    },
  });
  if (!res.ok) throw new Error('Edge error ' + res.status);
  const json = await res.json();
  return json.data;
}
```

Heads up: sc_apikey and the Content SDK CONTEXT_ID are two different auth mechanisms. The context ID is how the SDK finds your Edge environment and resolves credentials internally. The sc_apikey is the explicit Delivery API key you generate yourself. Keep them separate in your env vars.
Usage – tag fetches so you can invalidate them surgically:
```typescript
const nav = await fetchEdge({
  query: NAV_QUERY,
  variables: { path: '/sitecore/content/site/navigation' },
  revalidate: 86400,
  tags: ['nav', 'globals'],
});

const news = await fetchEdge({
  query: NEWS_QUERY,
  revalidate: 300,
  tags: ['news'],
});
```

Now every component that needs navigation hits the cache on subsequent renders. Your Edge call count drops dramatically.
Pattern 4: Revalidate Tags on Publish via Webhook
Now we hit the question every editor will ask within a week of go-live: if I cache navigation for 24 hours, why aren’t my published changes showing up?
Answer: hook the Sitecore publish event to a Next.js revalidation route. This is one of the under-appreciated parts of XM Cloud’s webhook system. You can react to publishes, items being saved, almost anything.
```typescript
// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextResponse } from 'next/server';

export async function POST(request) {
  const secret = request.headers.get('x-revalidate-secret');
  if (secret !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ ok: false }, { status: 401 });
  }

  let body;
  try {
    body = await request.json();
  } catch {
    return NextResponse.json({ ok: false }, { status: 400 });
  }

  const tags = mapPayloadToTags(body);
  for (const tag of tags) revalidateTag(tag);
  return NextResponse.json({ ok: true, tags });
}

function mapPayloadToTags(payload) {
  return ['globals'];
}
```

Create a Webhook Event Handler in Sitecore at /sitecore/system/Settings/Webhooks/ pointing at /api/revalidate. When content publishes, the webhook fires, the matching tags invalidate, and the next visitor gets fresh content without you paying the Edge bill for everyone else.
Heads up: Set the secret header on both sides or your endpoint becomes a public cache-buster. I have seen this in production.
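The mapPayloadToTags stub in the route handler is where the real mapping logic lives. Here is a hypothetical sketch of what it might grow into, assuming the webhook body carries an itemPath field for the published item; that field name is my assumption, so inspect your actual payload before relying on this shape.

```typescript
// Hypothetical mapping from a published item's path to cache tags.
// The `itemPath` field is an assumed payload shape, not a documented one.
export function mapPayloadToTags(payload: { itemPath?: string }): string[] {
  const path = payload.itemPath ?? '';
  if (path.includes('/navigation')) return ['nav', 'globals'];
  if (path.includes('/news/')) return ['news'];
  // Safe catch-all: invalidate the global bucket rather than nothing.
  return ['globals'];
}
```

The important property is the fallback: an unrecognised path should still invalidate something, so editors never see a publish silently ignored.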
Pattern 5: Tune the Retry Strategy in sitecore.config.ts
Even with all the caching above, some requests will miss. Content SDK gives you a clean way to handle that gracefully. The older GRAPH_QL_SERVICE_RETRIES environment variable from JSS days is gone, replaced by a config-based retry strategy that lives in sitecore.config.ts. I prefer this because it’s versioned with the code, not buried in environment variables nobody reviews.
Simple form – just a retry count:
```typescript
// sitecore.config.ts
import { defineConfig } from '@sitecore-content-sdk/nextjs/config';

export default defineConfig({
  api: {
    edge: {
      contextId: process.env.CONTEXT_ID,
      edgeUrl: process.env.SITECORE_EDGE_URL,
    },
  },
  retries: 3,
  defaultSite: 'my-site',
  defaultLanguage: 'en',
});
```

Advanced form – custom status codes and back-off factor:
```typescript
// sitecore.config.ts
import { defineConfig } from '@sitecore-content-sdk/nextjs/config';
import { DefaultRetryStrategy } from '@sitecore-content-sdk/core';

export default defineConfig({
  api: {
    edge: {
      contextId: process.env.CONTEXT_ID,
      edgeUrl: process.env.SITECORE_EDGE_URL,
    },
  },
  retryStrategy: new DefaultRetryStrategy({
    statusCodes: [429, 502, 503, 504],
    factor: 2,
  }),
  defaultSite: 'my-site',
  defaultLanguage: 'en',
});
```

A three-retry exponential strategy recovers from transient 429s in the vast majority of cases. Anything more than 3 retries is usually a sign you should be fixing cache patterns, not backing off harder.
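For intuition, here is what a factor-2 schedule looks like across three retries. This helper is purely illustrative and says nothing about DefaultRetryStrategy's actual internals or base delay; it just shows why the total wait stays bounded and short.

```typescript
// Illustrative only: the delay sequence produced by exponential back-off
// with a given base delay and multiplier. Not a Content SDK API.
export function backoffDelaysMs(
  retries: number,
  baseMs: number,
  factor: number
): number[] {
  return Array.from({ length: retries }, (_, i) => baseMs * factor ** i);
}
```

With a 1-second base and factor 2, three retries wait 1s, 2s, then 4s: roughly seven seconds of total patience before giving up, which is plenty for a one-second rate window to reset.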
Pattern 6: Wildcard Pages for High-Fanout Routes
This one is a content-modelling decision more than a code one, and it’s where I’ve seen teams gain the most ground without writing a line of new code. For routes with thousands of URLs that all share the same layout (product detail, article, knowledge base), use a wildcard page. The layout response caches against the wildcard item once, and every concrete URL reuses that cached layout.
- Wildcard layout calls: cached against the wildcard item, free after the first hit
- Concrete item data: still fetched per URL, but at the datasource level, not the full layout
The net effect is that a site with 10,000 product pages uses roughly the same Edge budget as a site with 100.
Pitfalls I See on Almost Every Project
Pitfall 1: Forgetting to Tag Fetches
I have lost count of how many code reviews I’ve done where the fetchEdge helper is in place but every call has an empty tags array. Then a publish event comes in and the team has two bad options: nuke the entire cache (slow, costly) or let stale content sit for hours. Neither is good.
The fix: Tag every fetch. Even ['global'] on a catch-all is better than nothing.
Pitfall 2: Using the Same TTL for Everything
I see this on roughly half the projects I audit: a single revalidate: 3600 everywhere. News content goes stale while the home page keeps refreshing content that hasn’t actually changed. One TTL means you pay both costs at once.
The fix: Match TTL to actual publishing cadence per content type. Nav: 24 hours. News: 5 minutes. Marketing copy: 1 hour.
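One lightweight way to make those numbers a reviewed decision rather than folklore is a single TTL registry that every fetch reads from. A sketch; the content-type names and values here are illustrative defaults, not recommendations from Sitecore.

```typescript
// Hypothetical TTL registry: one reviewable place that records the
// revalidate value per content type, instead of a magic 3600 everywhere.
export const TTL = {
  nav: 86_400, // navigation: changes rarely, and publish webhooks invalidate it anyway
  news: 300, // news: editors expect roughly 5-minute freshness
  marketing: 3_600, // marketing copy: hourly is plenty
} as const;

export function ttlFor(type: keyof typeof TTL): number {
  return TTL[type];
}
```

A fetch then reads revalidate: ttlFor('news'), and a quarterly TTL review becomes a diff on one file instead of a grep across the codebase.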
Pitfall 3: Not Monitoring 429 Response Rates
The earliest signal that you are approaching the guardrail is 429 responses in your logs. Telemetry catches this well before users do. The X-Rate-Limit-Limit: 80 header confirms the limit on every response, so you can wire it into a dashboard from day one.
The fix: Instrument your Edge fetch wrapper to count responses by status code. Log every 429 with timestamp, route, and query name. Set a dashboard alert on any sustained 429 rate above zero in the last 15 minutes.
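A minimal sketch of that instrumentation, kept deliberately generic: the report callback stands in for whatever telemetry client you actually use, and you would call record(res.status) inside your Edge fetch wrapper.

```typescript
// Hypothetical status-code counter for an Edge fetch wrapper.
// `report` is a stand-in for your real telemetry client.
export function createStatusCounter(
  report: (status: number, count: number) => void
) {
  const counts = new Map<number, number>();
  return {
    record(status: number) {
      const next = (counts.get(status) ?? 0) + 1;
      counts.set(status, next);
      // Surface rate-limit hits immediately; everything else can batch.
      if (status === 429) report(status, next);
    },
    get(status: number): number {
      return counts.get(status) ?? 0;
    },
  };
}
```

Wire report into your dashboard and the "sustained 429 rate above zero" alert becomes a one-line rule rather than a log-grepping exercise.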
Pitfall 4: Treating revalidate as a Magic Number
When I ask a team why they chose revalidate: 3600, the honest answer is usually “that’s what the starter had.” That’s not a decision, it’s a default. Each data type deserves a TTL chosen on the basis of how fresh it actually needs to feel and how expensive a cache miss is.
The fix: Document the TTL decision per content type in your README or sitecore.config.ts comments. Review it every quarter as publishing cadence changes.
Key Takeaways
✓ The fair-use guardrail is 80 uncached GraphQL requests per second per tenant. Know this number and design for it from day one.
✓ SSG first. SSR hits Edge on every request. SSG + revalidation is the default for a reason.
✓ Consolidate queries with GraphQL aliases. Five queries become one. Do this everywhere it makes sense.
✓ Vercel Data Cache is your best friend. Wrap every Edge call through a fetchEdge helper with revalidate and tags, or use the Content SDK built-ins where you can.
✓ Invalidate via webhook on publish, not by shortening TTL. Longer TTLs + on-publish invalidation beats short TTLs every time.
✓ Use DefaultRetryStrategy with exponential back-off. Three retries covers most transient 429s. More than that means your cache strategy needs work.
✓ Wildcard pages for high-fanout routes. 10,000 product pages can share one layout cache entry.
✓ Instrument 429 rate monitoring. Count and alert on 429 responses in your Edge fetch wrapper so you catch issues before users do.
A 429 in production is rarely an Experience Edge problem. It is almost always a caching-strategy problem.
#saas #sitecore #SitecoreXMCloud #xmCloud


