Serverless Development: AWS Lambda, Vercel Functions, Cloudflare Workers, Edge
Serverless doesn't mean "without servers". Servers exist; you just don't manage them. Read it as "without server management": no OS patching, no nginx configuration, no disk space monitoring. A function receives an event, processes it, and returns a response. The provider decides where to run it.
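The whole model fits in one function signature. A minimal sketch of the event-in, response-out shape (types are simplified here, not the official aws-lambda definitions):

```typescript
// Minimal Lambda-style handler: an event comes in, a response goes out.
// These interfaces are simplified sketches of an HTTP API event.
interface ApiEvent {
  rawPath: string;
  queryStringParameters?: Record<string, string>;
}

interface ApiResponse {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export async function handler(event: ApiEvent): Promise<ApiResponse> {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ message: `hello, ${name}`, path: event.rawPath }),
  };
}
```

Everything else (scaling, routing, retries) is the provider's job; your code only sees the event.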
AWS Lambda: Power and Operational Complexity
Lambda is the most mature platform with the largest set of triggers: API Gateway, SQS, SNS, S3, DynamoDB Streams, EventBridge. This is important for complex event-driven architectures.
Cold start is the main pain point of Lambda on Node.js: from 200ms to 1.5s depending on bundle size and VPC configuration. Before 2019, a cold start inside a VPC could take up to 10 seconds; the improvements since then help, but VPC starts are still slower. For production functions with latency requirements: Provisioned Concurrency (keeps instances warm), SnapStart for Java, and minimizing the bundle via tree-shaking.
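A cheap complement to those techniques is to pay initialization costs once per cold start instead of once per request: anything created outside the handler survives warm invocations. A sketch, with `createDbPool` as a hypothetical stand-in for real client setup:

```typescript
// Module scope survives between warm invocations of the same instance:
// initialize expensive clients here, not inside the handler.
// `createDbPool` is a hypothetical placeholder for real connection setup.
let pool: { query: (sql: string) => Promise<unknown> } | null = null;
let coldStart = true;

function createDbPool() {
  // Placeholder: a real version would open connections, read config, etc.
  return { query: async (sql: string) => ({ sql }) };
}

export async function handler(event: { id: string }) {
  if (!pool) pool = createDbPool(); // runs only on a cold start
  const wasCold = coldStart;
  coldStart = false;
  await pool.query("SELECT * FROM items WHERE id = ?"); // bind event.id in real code
  return { id: event.id, coldStart: wasCold };
}
```

The same pattern applies to SDK clients, parsed config, and compiled templates.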
A practical case: a function that processes uploaded images (resize, WebP conversion, upload to S3). The bundle with sharp weighed 40MB because of its native binaries. The fix: move sharp into a Lambda Layer, shrinking the main function to 800KB. Cold start dropped from 3.2s to 400ms.
Lambda Layers are shared dependencies between functions: up to 5 layers per function, with a 250MB limit on the unzipped size of the function code and all its layers combined. Standard practice: one layer with heavy dependencies (sharp, puppeteer, ffmpeg), one with common business logic.
Infrastructure for Lambda goes through AWS CDK or Terraform. SAM is fine for getting started; CDK is better for serious projects that want type safety.
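In CDK, the function-plus-layer setup from the image case is a few typed declarations. A sketch of the infrastructure definition, assuming aws-cdk-lib is installed; construct IDs and asset paths here are illustrative:

```typescript
// CDK sketch: a resize function with sharp moved into a shared layer.
// Assumes aws-cdk-lib is installed; names and paths are illustrative.
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

export class ImagePipelineStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // Layer with the heavy native dependency.
    const sharpLayer = new lambda.LayerVersion(this, "SharpLayer", {
      code: lambda.Code.fromAsset("layers/sharp"),
      compatibleRuntimes: [lambda.Runtime.NODEJS_20_X],
    });

    // Small main bundle; sharp lives in the layer.
    new lambda.Function(this, "ResizeFn", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/resize"),
      layers: [sharpLayer],
      memorySize: 1024,
      timeout: cdk.Duration.seconds(30),
    });
  }
}
```

Type safety pays off exactly here: a typo in a runtime name or a missing property fails at compile time, not at deploy time.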
Vercel Functions and Edge Runtime
Vercel Functions are Lambda under the hood (us-east-1 by default), but with a minimal barrier to entry for Next.js projects: API Routes and Route Handlers deploy automatically. These are serverless functions on the Node.js runtime with a 300-second limit on Vercel Pro.
Edge Runtime is fundamentally different: the function runs on a V8 isolate at the Vercel CDN point nearest the user (120+ regions). There is effectively no cold start; an isolate spins up in roughly 0ms. But the limitations are strict: no Node.js APIs (no fs; crypto only through the Web Crypto API), no TCP database connections (HTTP APIs only), and a bundle size of up to 4MB.
Edge Runtime is ideal for middleware (auth checks, redirects, A/B tests), response transforms, geolocation logic, and reading Edge Config. It's not suitable for PostgreSQL access, heavy computation, or file system work.
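Those middleware use cases stay within Web APIs (Request, Response, URL), which is exactly why they fit an isolate. A sketch combining a geolocation redirect with a sticky A/B cookie; the geo header name is platform-specific, and `x-vercel-ip-country` is assumed here:

```typescript
// Edge-style middleware using only Web APIs, so it runs on V8 isolates
// without Node. The geo header name is platform-specific; the Vercel
// header `x-vercel-ip-country` is assumed in this sketch.
export default async function middleware(req: Request): Promise<Response> {
  const url = new URL(req.url);
  const country = req.headers.get("x-vercel-ip-country") ?? "US";

  // Geolocation redirect: send German visitors to the localized path.
  if (country === "DE" && !url.pathname.startsWith("/de")) {
    return Response.redirect(new URL(`/de${url.pathname}`, url).toString(), 307);
  }

  // A/B test: reuse the cookie if present, otherwise assign a variant.
  const match = (req.headers.get("cookie") ?? "").match(/ab-variant=(\w+)/);
  const variant = match ? match[1] : Math.random() < 0.5 ? "a" : "b";

  return new Response(`variant ${variant}`, {
    headers: { "set-cookie": `ab-variant=${variant}; Path=/` },
  });
}
```

In a real Vercel project this logic would live in middleware.ts and forward to the origin instead of returning a body directly; the constraint to illustrate is that nothing here touches a Node API.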
Cloudflare Workers: True Edge
Workers run on V8 isolates at 300+ Cloudflare points of presence, so requests are served from the data center nearest the user. Cold start is under 1ms.
Workers Durable Objects solve the state problem at the edge: each Durable Object is a single point of coordination, running in one location. Ideal for game rooms, real-time documents, and rate limiting without races.
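Rate limiting shows why the single coordination point matters: all requests for a given key reach the same object instance, so the counter can't race. A sketch of the fixed-window logic such an object would hold (in-memory state resets if the object is evicted; a real version would persist the counter via state.storage):

```typescript
// Sketch of a Durable-Object-style rate limiter: because every request
// for a key lands on the same instance, the Map below has no races.
// In-memory state is lost on eviction; persist via state.storage if needed.
export class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  private readonly limit = 10;        // requests allowed...
  private readonly windowMs = 60_000; // ...per one-minute window

  async fetch(request: Request): Promise<Response> {
    const key = new URL(request.url).pathname; // e.g. /client-id
    const now = Date.now();
    const entry = this.hits.get(key);

    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }); // new window
      return new Response("ok");
    }
    if (entry.count >= this.limit) {
      return new Response("rate limited", { status: 429 });
    }
    entry.count++;
    return new Response("ok");
  }
}
```

Doing the same on stateless edge functions would need an external store plus compare-and-swap; the Durable Object gets correctness from placement alone.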
Workers KV is eventually consistent storage: writes propagate to all regions in roughly 60 seconds. Not suitable for financial transactions; well suited to configs, feature flags, and caches.
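A feature flag is the canonical fit. A sketch of a Worker reading one from KV; the binding interface is structurally typed so it can be mocked, and in production `FLAGS` would come from the KV namespace bound in wrangler.toml (binding name assumed):

```typescript
// Worker sketch reading a feature flag from KV. The binding is typed
// structurally so it can be mocked; in production `env.FLAGS` is the
// KV namespace bound in wrangler.toml (the binding name is assumed).
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

const worker = {
  async fetch(req: Request, env: { FLAGS: KVNamespace }): Promise<Response> {
    // Eventual consistency is fine here: a flag flip taking ~60s to
    // propagate globally is acceptable for config, not for money.
    const raw = await env.FLAGS.get("new-checkout");
    const enabled = raw === "true";
    return new Response(JSON.stringify({ newCheckout: enabled }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

The structural typing also makes local testing trivial: a Map behind `get`/`put` is a complete stand-in.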
D1 is SQLite at the edge. Reads served from a nearby replica are fast; write latency depends on the distance to the primary region. For globally distributed write-heavy applications it's not the best choice.
Ecosystem: Hono.js is a minimalist router that runs on Workers, Deno, Bun, and Node.js, a good choice when you need a single codebase for both edge and server.
When Serverless Doesn't Fit
Long computations that exceed the platform timeout (15 minutes on Lambda, 300 seconds on Vercel Pro) need Fargate or a regular server. A stateful WebSocket server needs a persistent process, which serverless doesn't give you. Tasks with frequent disk access run into ephemeral storage (/tmp on Lambda: 512MB by default, configurable up to 10GB). And if a function is invoked thousands of times per second around the clock, EC2 or Fargate comes out cheaper.
Vendor lock-in is a real problem: Lambda-specific code (handler signature, Lambda context) is hard to port. Hono.js, Remix, or adapters like @hono/node-server help keep the logic portable.
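The same idea works without any framework: keep business logic in plain request/response functions and confine provider-specific shapes to thin adapters. A sketch under those assumptions (the `createOrder` core and its types are illustrative, not any library's API):

```typescript
// Portability pattern: provider-agnostic core + thin Lambda adapter.
// Only the adapter knows about event shapes; the core is plain TypeScript.
// `createOrder` and its types are illustrative names.
interface CoreRequest { path: string; body?: string }
interface CoreResponse { status: number; body: string }

// Core logic: testable and portable, no cloud runtime required.
export async function createOrder(req: CoreRequest): Promise<CoreResponse> {
  if (!req.body) return { status: 400, body: "missing body" };
  const order = JSON.parse(req.body);
  // "ord_1" is a placeholder id for the sketch.
  return { status: 201, body: JSON.stringify({ id: "ord_1", ...order }) };
}

// Lambda adapter: translates the event shape and nothing else.
export async function lambdaHandler(event: { rawPath: string; body?: string }) {
  const res = await createOrder({ path: event.rawPath, body: event.body });
  return { statusCode: res.status, body: res.body };
}
```

Moving to Workers or a plain Node server then means writing one more ten-line adapter, not touching the core.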
Observability
Without proper observability, serverless is a black box. The standard: AWS X-Ray or Powertools for AWS Lambda (structured logging, tracing, and metrics out of the box). For a multi-cloud stack, OpenTelemetry with export to Grafana Cloud or Honeycomb.
Distributed tracing is critical when function A calls function B through SQS: without a shared trace ID it's impossible to correlate the logs.
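The mechanics are simple: the producer attaches the trace ID as an SQS message attribute, the consumer pulls it out and stamps it on every log line. A sketch (the helper names are ours; the attribute shapes mirror SQS, which capitalizes fields on send but lowercases them in the Lambda event):

```typescript
// Trace propagation through SQS. Helper names are ours; the attribute
// shapes mirror SQS: capitalized on send (MessageAttributes/StringValue),
// lowercased in the Lambda consumer event (messageAttributes/stringValue).
export function withTrace(payload: object, traceId: string) {
  return {
    MessageBody: JSON.stringify(payload),
    MessageAttributes: {
      traceId: { DataType: "String", StringValue: traceId },
    },
  };
}

// Consumer side: recover the ID from an SQS event record.
export function traceIdFrom(record: {
  messageAttributes?: Record<string, { stringValue?: string }>;
}): string {
  return record.messageAttributes?.traceId?.stringValue ?? "unknown";
}

// Structured log line carrying the trace ID, ready for log aggregation.
export function logLine(traceId: string, msg: string): string {
  return JSON.stringify({ traceId, msg, ts: new Date().toISOString() });
}
```

With the ID in every JSON log line, one query in your log backend reconstructs the whole A-to-B request path.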
Work Process
We start by analyzing the load pattern: if traffic is unpredictable or infrequent, serverless saves money; if it's consistently high, serverless may cost more. We design function boundaries around the single-responsibility principle, develop locally with SST, Wrangler, or LocalStack, and treat CI/CD with preview deployments as mandatory.
Timeline
Serverless API for a startup (10–20 functions): 2–5 weeks. Migrating a monolithic Laravel/Node API to Lambda: 4–10 weeks depending on scope. Edge middleware + Workers for a global product: 2–4 weeks.