Edge Computing for Web Developers: Cloudflare & Vercel in 2026
Edge computing for web developers means running code at CDN edge locations — physically close to the user, instead of at a single origin server. In 2026, this has moved from infrastructure curiosity to standard web architecture. Cloudflare Workers, Vercel Edge Functions, and Deno Deploy handle everything from request routing and A/B testing to full API logic and AI inference, all executing within 50ms of the user regardless of where they are.
This guide is for front-end and full-stack developers who want to understand what edge computing practically means for their projects: what can run at the edge, what cannot, how it changes your architecture, and where it delivers measurable improvements versus where it is overengineered complexity.
What Runs at the Edge in 2026
Cloudflare Workers
Cloudflare's edge runtime runs JavaScript and WebAssembly at 300+ edge locations globally. Cold start times are under 5ms (compared to 200–500ms for traditional serverless functions). Workers handle:
- Request/response transformation: Modify headers, rewrite URLs, inject content before the response reaches the user
- API routing: Full HTTP API logic without an origin server
- Authentication: Token validation, session management, and access control at the edge
- Caching logic: Custom cache key generation, cache purging, and stale-while-revalidate patterns
- AI inference: Workers AI runs lightweight ML models at the edge for classification, embedding, and text generation
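The request/response transformation and routing cases above can be sketched as a minimal Worker. This is an illustrative example, not this site's actual code: the paths and header name are hypothetical.

```javascript
// Minimal Worker sketch: redirect a retired URL structure and
// synthesize a response entirely at the edge (no origin involved).
const worker = {
  async fetch(request) {
    const url = new URL(request.url);

    // Redirect a legacy path at the edge, without an origin round-trip
    if (url.pathname.startsWith('/blog/')) {
      url.pathname = url.pathname.replace('/blog/', '/guides/');
      return Response.redirect(url.toString(), 301);
    }

    // Otherwise answer directly from the edge with a custom header
    return new Response('ok', {
      headers: { 'x-served-from': 'edge' },
    });
  },
};
```

In a real project this object would be the Worker's `export default`; the same `fetch(request)` signature is what Cloudflare invokes for every incoming request.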
This site runs on Cloudflare Pages with Workers for edge-level processing. The daily rebuild cron that triggers publishing of scheduled guides (as described in the site architecture) is a Worker that posts to a Pages Deploy Hook.
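The cron-to-deploy-hook pattern looks roughly like the sketch below. The `DEPLOY_HOOK_URL` binding name is an assumption (the real hook URL would be stored as a secret), and the `fetchImpl` parameter exists only so the logic can be exercised without network access.

```javascript
// Hypothetical helper: POST to a Pages Deploy Hook URL to trigger a rebuild.
// fetchImpl is injected so the logic is testable without hitting the network.
async function triggerDeploy(hookUrl, fetchImpl = fetch) {
  const res = await fetchImpl(hookUrl, { method: 'POST' });
  return res.ok;
}

// Worker with a scheduled (cron) handler; the schedule itself is
// configured in wrangler.toml, e.g. a daily trigger.
const worker = {
  async scheduled(event, env, ctx) {
    // env.DEPLOY_HOOK_URL would be set as a secret binding
    ctx.waitUntil(triggerDeploy(env.DEPLOY_HOOK_URL));
  },
};
```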
Vercel Edge Functions
Vercel's edge runtime is built on similar principles. Edge Functions run in a V8 isolate (same as Workers) and support:
- Middleware for Next.js applications (request interception before page rendering)
- Edge API routes for serverless API logic
- A/B testing and feature flag evaluation without client-side JavaScript
- Geolocation-based content adaptation
What Can and Cannot Run at the Edge
Works well at the edge:
- Stateless request processing (no database needed)
- Simple data transformation (JSON manipulation, HTML rewriting)
- Authentication and authorisation checks
- Cache control and content negotiation
- Lightweight AI inference (< 50ms models)
- Redirects, rewrites, and header manipulation
Does not work well at the edge:
- Database queries (unless using edge-native databases like D1, Turso, or PlanetScale)
- Long-running computations (Workers have a 30-second CPU time limit on the paid plan)
- File system operations (edge runtimes have no persistent file system)
- Large language model inference (models over ~500MB exceed edge memory constraints)
Architecture Patterns
Static Site + Edge Enhancement
This is the pattern this site uses. Gatsby generates static HTML at build time. Cloudflare Pages serves it from the CDN. Workers add edge-level processing:
- Build time: Gatsby builds static pages, applying publish-date gating
- CDN delivery: Static HTML served from the nearest edge location
- Edge processing: Worker can modify responses — adding headers, rewriting content, implementing personalisation
- Cron trigger: A scheduled Worker triggers a deploy hook daily to rebuild and unlock newly published content
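The edge-processing step above amounts to copying the static response so its headers become mutable, then decorating it. A sketch, with a hypothetical header name:

```javascript
// Hypothetical helper: clone a response so headers are mutable, then decorate.
function enhance(response) {
  const out = new Response(response.body, response);
  out.headers.set('x-edge-enhanced', '1');
  return out;
}

// In a Worker, the pass-through fetch resolves to the static asset on Pages
const worker = {
  async fetch(request) {
    return enhance(await fetch(request));
  },
};
```

For HTML rewriting rather than header changes, Cloudflare's HTMLRewriter API streams transformations over the same pass-through response.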
The agentic websites guide covers how edge functions enable personalisation on static sites. The cache and performance guide covers the caching layer that makes static + edge architecture fast.
Edge Middleware for Frameworks
In Next.js, middleware runs at the edge before every request. Practical uses:
// middleware.ts — runs at the edge
import { NextResponse } from 'next/server';

export function middleware(request) {
  // A/B test assignment (assignBucket is an app-defined helper;
  // cookies.get() returns an object, so read .value)
  const bucket = request.cookies.get('ab-bucket')?.value ?? assignBucket();

  // Geolocation-based redirect
  const country = request.geo?.country;
  if (country === 'DE' && !request.nextUrl.pathname.startsWith('/de')) {
    return NextResponse.redirect(new URL('/de' + request.nextUrl.pathname, request.url));
  }

  // Feature flag evaluation
  const response = NextResponse.next();
  response.headers.set('x-ab-bucket', bucket);
  return response;
}
This runs in under 10ms at the nearest edge location. No origin server round-trip for routing decisions.
Edge API Routes
Full API logic at the edge, without an origin server:
// Cloudflare Worker
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === '/api/search') {
      const query = url.searchParams.get('q') ?? '';
      // Query edge-native database (D1 binding configured in wrangler.toml);
      // .all() returns an envelope, so destructure the rows
      const { results } = await env.DB.prepare(
        'SELECT title, path FROM pages WHERE title LIKE ?'
      ).bind(`%${query}%`).all();
      return new Response(JSON.stringify(results), {
        headers: { 'Content-Type': 'application/json' }
      });
    }
    return new Response('Not found', { status: 404 });
  }
};
With Cloudflare D1 (SQLite at the edge), the entire request — from user to API response — happens at the edge. No origin server. TTFB can be under 50ms globally.
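One detail worth handling in a search endpoint like the one above: `%` and `_` in user input are SQL LIKE wildcards, so a query for "100%" would match everything. A small escaping helper (hypothetical, paired with SQLite's `ESCAPE` clause) fixes this:

```javascript
// Hypothetical helper: escape LIKE wildcards in user input before binding.
// Use with: ... WHERE title LIKE ? ESCAPE '\'
function likePattern(query) {
  return '%' + query.replace(/[%_\\]/g, (ch) => '\\' + ch) + '%';
}
```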
Edge AI Inference
Cloudflare Workers AI and Vercel AI SDK both enable lightweight model inference at the edge. Practical applications for web developers:
- Content classification: Categorise user-submitted content for moderation
- Semantic search: Generate embeddings for search queries and match against pre-computed content embeddings
- Text summarisation: Generate page summaries on the fly for preview cards
- Image classification: Categorise uploaded images for automatic tagging
The constraint: edge AI models must be small (typically under 500MB) and fast (inference under 50ms). Large language models do not run at the edge — they run on GPU-equipped origin servers with edge functions acting as the API gateway.
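For the semantic search case, the edge-side work after the embedding model runs is just similarity ranking, which is cheap enough to do inline. A sketch, assuming the query embedding comes from an edge model such as a bge variant and page embeddings were pre-computed at build time:

```javascript
// Cosine similarity between two equal-length embedding vectors
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank pre-computed page embeddings against a query embedding
function rankPages(queryEmbedding, pages) {
  return [...pages]
    .map((p) => ({ ...p, score: cosineSimilarity(queryEmbedding, p.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```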
Performance Characteristics
Edge vs Origin Latency
| Scenario | Typical Latency |
|---|---|
| Edge function (no external calls) | 5–15ms |
| Edge function + edge database (D1) | 10–30ms |
| Edge function + origin API call | 50–200ms |
| Traditional serverless function (cold start) | 200–500ms |
| Traditional serverless function (warm) | 50–150ms |
| Static file from CDN | 5–20ms |
The key insight: edge functions are faster than origin serverless functions even when warm, and dramatically faster on cold starts. But if your edge function makes an API call to a single-region origin server, you lose the latency advantage.
Cost Characteristics
Edge computing is priced per-request, not per-uptime:
- Cloudflare Workers: Free tier includes 100,000 requests/day. Paid plan at $5/month for 10 million requests
- Vercel Edge Functions: Included in Pro plan with usage limits
- For static sites, edge processing on top of CDN delivery adds minimal cost
When Edge Computing Is Overkill
For many web projects, static generation with CDN delivery is sufficient. Adding edge logic introduces complexity:
- Edge code has platform-specific APIs and constraints (no Node.js standard library in Workers)
- Debugging edge functions requires understanding distributed systems behaviour
- Edge databases (D1, Turso) have limitations compared to traditional databases
If your site is content-driven, generates at build time, and does not need request-time logic, pure static delivery is simpler and just as fast. The green web design guide notes that static delivery is also the most energy-efficient architecture.
Checklist
- [ ] Identified which request-time logic benefits from edge execution
- [ ] Edge function cold start times verified (should be under 10ms)
- [ ] External API calls from edge functions measured for latency impact
- [ ] Edge-native databases considered for data that needs to be at the edge
- [ ] Fallback behaviour defined for edge function failures
- [ ] Cost model calculated based on expected request volume
- [ ] Edge code tested in multiple geographic regions (not just local development)
- [ ] Caching strategy accounts for edge function responses
- [ ] Monitoring and logging configured for edge function execution
- [ ] Complexity justified — static delivery sufficient for pure content sites
FAQ
Can I use npm packages in Cloudflare Workers?
Many npm packages work, but Workers use a V8 isolate runtime, not Node.js. Packages that depend on Node.js built-in modules (fs, net, child_process) will not work. The `nodejs_compat` compatibility flag enables some Node.js API compatibility, but coverage is not complete.
How do I handle state at the edge?
Cloudflare offers KV (key-value storage) and D1 (SQLite) at the edge. Vercel offers Edge Config for small configuration data. For larger datasets, use Turso or PlanetScale with edge-optimised connection patterns.
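A common state pattern is read-through caching against a KV-style binding. The helper below is a sketch: the `get`/`put` with `expirationTtl` interface matches Workers KV, but the helper itself and the loader convention are hypothetical.

```javascript
// Read-through cache: return the KV value if present, otherwise
// compute it with loader(), store it with a TTL, and return it.
async function getCached(kv, key, loader, ttlSeconds = 300) {
  const hit = await kv.get(key);
  if (hit !== null) return hit;
  const value = await loader();
  await kv.put(key, value, { expirationTtl: ttlSeconds });
  return value;
}
```

In a Worker this would be called with a configured namespace binding, e.g. `getCached(env.CACHE, 'page:/about', renderAbout)`.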
Should I move my entire API to the edge?
Only if your API is predominantly stateless or uses edge-native databases. APIs with complex database queries, transactions, or large datasets are better served from a single region close to the database, with edge caching on top.
Next Steps
- Review the cache and performance guide for CDN caching fundamentals
- Check the agentic websites guide for edge personalisation patterns
- Read the Core Web Vitals guide for performance targets that edge computing helps achieve
- Browse all guides for more implementation patterns
