Reduce Vercel edge requests by disabling Link prefetch and adding ISR caching#19
Conversation
Reduce Vercel edge requests by disabling Link prefetch and adding ISR caching:

- Disable Next.js Link prefetching on all internal links (RepoTable results, homepage examples, header). Each page with 100 results was generating 100+ prefetch requests to Vercel's edge, massively inflating request counts.
- Add ISR caching (revalidate=86400) to dynamic pages so rendered HTML is served from CDN cache for 24 hours instead of re-invoking functions.
- Expand robots.txt to block AI crawlers and SEO bots (GPTBot, CCBot, ClaudeBot, SemrushBot, AhrefsBot, etc.) and restrict Googlebot/Bingbot to homepage only.

https://claude.ai/code/session_01LNGEccgWmmiFxDbfpc5zN4
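The prefetch change described above amounts to passing `prefetch={false}` on each internal link. A minimal sketch, assuming the Next.js App Router (the wrapper component and its props are illustrative, not from the PR):

```typescript
// Sketch: a hypothetical internal-link wrapper. By default, next/link
// prefetches every link that scrolls into the viewport, so a results page
// with 100 rows can fire 100+ edge requests; prefetch={false} disables that.
import Link from "next/link";
import type { ReactNode } from "react";

export function InternalLink({ href, children }: { href: string; children: ReactNode }) {
  return (
    <Link href={href} prefetch={false}>
      {children}
    </Link>
  );
}
```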
Pull request overview
This PR aims to reduce Vercel edge request volume by preventing automatic client-side prefetch traffic, adding route-level ISR to cache rendered output, and discouraging crawler traffic via an expanded robots.txt.
Changes:
- Disable Next.js `<Link>` prefetching on internal navigation links to prevent large prefetch fan-out from result lists.
- Add `export const revalidate = 86400` to dynamic routes to enable 24h ISR for rendered output.
- Expand `public/robots.txt` to block various AI/SEO crawlers and limit major search engines' crawl scope.
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| public/robots.txt | Adds crawler blocks and new crawl restrictions; includes sitemap reference and revised allow/disallow rules. |
| app/page.tsx | Disables prefetch on example internal links from the homepage. |
| app/components/RepoTable.tsx | Disables prefetch on per-row internal “related repos” links to avoid N+1 prefetch requests. |
| app/components/Header.tsx | Disables prefetch on the header home link. |
| app/[org]/[repo]/page.tsx | Adds 24h ISR (revalidate) for repo result pages. |
| app/[...githubPath]/page.tsx | Adds 24h ISR (revalidate) for redirecting GitHub path pages. |
```
Sitemap: https://gitrelated.com/sitemap.xml
```
Sitemap: https://gitrelated.com/sitemap.xml points to a sitemap that doesn't exist in this repo (no public/sitemap.xml and no sitemap route). This is likely to create 404s for crawlers; either add/generate the sitemap at that path or remove/update the Sitemap URL.
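If the team opts to generate the sitemap rather than drop the reference, a minimal sketch using the App Router's metadata file convention (the entry list is an assumption; only the homepage is crawlable under the new rules):

```typescript
// app/sitemap.ts — sketch; Next.js serves the returned entries at /sitemap.xml
import type { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  // Only the homepage, since the new robots.txt restricts crawlers to it
  return [
    {
      url: "https://gitrelated.com/",
      lastModified: new Date(),
      changeFrequency: "daily",
      priority: 1,
    },
  ];
}
```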
```
Allow: /
Disallow: /*/*

User-agent: Bingbot
Allow: /
Disallow: /*/*
```
The Googlebot/Bingbot rules currently allow any single-segment path (e.g. /anything) because Allow: / combined with Disallow: /*/* only blocks paths containing a second /. If the intent is truly “homepage only”, tighten these rules (e.g., allow / only and disallow everything else, while explicitly allowing required assets like /robots.txt, /favicon.ico, and /_next/).
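The matching behavior can be illustrated with a toy matcher (a sketch only, following the wildcard semantics of Google's robots.txt specification, where `*` matches any run of characters and `$` anchors the end of the path):

```typescript
// Toy robots.txt rule matcher: '*' matches any run of characters,
// '$' anchors the end of the URL path (per Google's robots.txt semantics).
function ruleMatches(rule: string, path: string): boolean {
  const pattern = rule
    .split("")
    .map((ch) =>
      ch === "*" ? ".*" : ch === "$" ? "$" : ch.replace(/[.+?^{}()|[\]\\]/g, "\\$&")
    )
    .join("");
  return new RegExp("^" + pattern).test(path);
}

// Disallow: /*/* only matches paths containing a second slash…
ruleMatches("/*/*", "/anything"); // false — single-segment paths stay crawlable
ruleMatches("/*/*", "/org/repo"); // true  — two-segment repo pages are blocked
// …whereas Allow: /$ matches the homepage alone:
ruleMatches("/$", "/");           // true
ruleMatches("/$", "/anything");   // false
```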
Suggested change:

```diff
-Allow: /
-Disallow: /*/*
-
-User-agent: Bingbot
-Allow: /
-Disallow: /*/*
+Allow: /$
+Allow: /robots.txt
+Allow: /favicon.ico
+Allow: /_next/
+Disallow: /
+
+User-agent: Bingbot
+Allow: /$
+Allow: /robots.txt
+Allow: /favicon.ico
+Allow: /_next/
+Disallow: /
```
```
# Default: allow homepage and static assets, block all dynamic repo pages
User-agent: *
Allow: /$
```
This block says it “allow[s] homepage and static assets”, but with Disallow: / it only allows /, /robots.txt, and /favicon.ico (and the /$ pattern isn’t supported by all crawlers). Either update the comment/intent, or add explicit Allow rules for the static asset paths/extensions you want indexed (e.g. /_next/, images, etc.).
Suggested change:

```diff
-# Default: allow homepage and static assets, block all dynamic repo pages
-User-agent: *
-Allow: /$
+# Default: block all content for other user agents (except robots.txt and favicon)
+User-agent: *
```
```typescript
// Cache rendered pages for 24 hours via ISR to reduce edge function invocations
export const revalidate = 86400;
```
Consider avoiding the magic number 86400 here (and in other routes) by defining a local ONE_DAY_SECONDS constant (or sharing an exported constant) to make the TTL self-documenting and reduce the chance of inconsistent values later.
Suggested change:

```diff
-// Cache rendered pages for 24 hours via ISR to reduce edge function invocations
-export const revalidate = 86400;
+const ONE_DAY_SECONDS = 60 * 60 * 24;
+
+// Cache rendered pages for 24 hours via ISR to reduce edge function invocations
+export const revalidate = ONE_DAY_SECONDS;
```
```typescript
// Cache redirect responses for 24 hours to reduce edge function invocations
export const revalidate = 86400;
```
Consider avoiding the magic number 86400 by defining a ONE_DAY_SECONDS constant (or reusing a shared constant) so the caching TTL is self-documenting and stays consistent across routes.
- Add @opennextjs/cloudflare adapter and wrangler for Cloudflare Workers deployment
- Create wrangler.toml with nodejs_compat and static asset config
- Create open-next.config.ts with Cloudflare config
- Update next.config.ts with initOpenNextCloudflareForDev() for local dev
- Add preview and deploy scripts to package.json
- Add .open-next and .wrangler to .gitignore

To deploy: `npm run deploy`
To preview locally: `npm run preview`

https://claude.ai/code/session_01LNGEccgWmmiFxDbfpc5zN4
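The next.config.ts wiring mentioned above looks roughly like this (a sketch; the config body is an assumption, and `initOpenNextCloudflareForDev` comes from the @opennextjs/cloudflare package named in the commit):

```typescript
// next.config.ts — sketch wiring the OpenNext Cloudflare dev integration
import type { NextConfig } from "next";
import { initOpenNextCloudflareForDev } from "@opennextjs/cloudflare";

// Lets `next dev` resolve Cloudflare bindings locally during development
initOpenNextCloudflareForDev();

const nextConfig: NextConfig = {
  // existing options unchanged
};

export default nextConfig;
```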
The background bun install overwrote package.json, removing the Cloudflare scripts. Restoring them and committing the updated bun.lockb. https://claude.ai/code/session_01LNGEccgWmmiFxDbfpc5zN4