Hosting and Deployment: Vercel, Netlify, AWS, Docker, Nginx, Kubernetes
"Site is not opening" at 3 AM — and it turns out disk is full on VPS because nginx logs weren't rotated for six months. Or server went down under load on launch day of an advertising campaign, because shared hosting had a limit of 50 simultaneous connections. Choice of infrastructure is not about "where it's cheaper", it's about what happens when something goes wrong.
Vercel and Netlify: When It's the Right Choice
Vercel was created for Next.js: deploy in one push, preview deployments for each PR, automatic CDN, Edge Functions, ISR without configuration. For frontend projects and JAMstack it is optimal: no operational load, and time-to-deploy measured in minutes.
Real limitations: Vercel Serverless Functions run in us-east-1 (iad1) by default, adding 80–100 ms of latency for European users; the function timeout is 300 seconds on the Pro plan; bandwidth is capped at 1 TB/month on Pro. For a heavy backend you need workers or a separate server.
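The default region can be overridden per project in vercel.json. A minimal sketch, assuming a Frankfurt deployment; the `fra1` region, the glob pattern, and the timeout value are illustrative, and your plan's limits apply:

```json
{
  "regions": ["fra1"],
  "functions": {
    "api/**/*.ts": {
      "maxDuration": 300
    }
  }
}
```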
Netlify is closer to static sites, with Edge Functions built on Deno Deploy. Build minutes are the main limitation on the free tier.
Docker: Foundation of Predictable Deployment
"Works on my machine" — classic. Docker solves this through environment containerization. But bad Dockerfile creates new problems.
A typical mistake: copying everything into the image without a .dockerignore and getting an 800 MB image instead of 80 MB; node_modules alone accounts for most of that weight. The correct approach is a multi-stage build.
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Full install here: the build step needs devDependencies
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/package*.json ./
# Production-only deps in the final image (--omit=dev supersedes the deprecated --only=production)
RUN npm ci --omit=dev
COPY --from=builder /app/.next ./.next
EXPOSE 3000
CMD ["npm", "start"]
Final image: about 180 MB instead of 1.2 GB. CI build time also drops thanks to layer caching: if package.json hasn't changed, the npm ci layer is reused from cache.
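A .dockerignore to pair with it; without one, COPY . . drags the local node_modules and build artifacts into the build context. The entries below are typical assumptions for a Next.js project:

```
# dependencies and build output are produced inside the image
node_modules
.next
# VCS history, local secrets, and Docker files themselves
.git
.env*
Dockerfile
docker-compose.yml
npm-debug.log
```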
Docker Compose covers local development and simple production scenarios: application + PostgreSQL + Redis in one configuration. For production on a single server it is perfectly workable, as long as there is no horizontal scaling requirement.
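A sketch of such a configuration; service names, ports, and credentials are placeholders:

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
volumes:
  pgdata:
```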
Nginx as Reverse Proxy
Nginx in front of the application is the standard for VPS and dedicated servers. Its main jobs: SSL termination, gzip, serving static files, rate limiting, and upstream load balancing.
The configuration is often done wrong. worker_processes auto sets the number of worker processes to the number of CPUs. worker_connections 1024 means 1024 connections per worker process, so 4 CPUs with 1024 connections each gives 4096 simultaneous connections. A high-load site needs worker_connections 4096 and a tuned keepalive_timeout (65 seconds is a common value).
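The corresponding top-level nginx configuration, using the values from the paragraph above; tune them to your hardware:

```nginx
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per worker: 4 workers x 4096 = 16384 total
}

http {
    keepalive_timeout 65;       # seconds to hold idle keep-alive connections
    gzip on;
}
```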
For static assets with a hash in the filename:
location ~* \.(js|css|woff2|png|webp)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
immutable tells the browser not to revalidate the file even on a hard refresh. It is only safe with content-hashed filenames (which Vite and webpack produce by default).
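The rate-limiting and upstream-balancing roles can be sketched like this; the zone name, backend address, and limits are illustrative:

```nginx
# 10 req/s per client IP, tracked in a 10 MB shared memory zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

upstream app {
    server 127.0.0.1:3000;
    keepalive 32;               # reuse connections to the backend
}

server {
    location /api/ {
        # allow short bursts of 20 requests without queueing delay
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```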
AWS: Flexibility and Complexity
EC2 + Auto Scaling Group is the classic setup for horizontal scaling: an AMI with the application pre-installed, a Launch Template, an ASG with min/desired/max instances, and an Application Load Balancer. When CPU > 70% for 3 minutes, scale out; when CPU < 30% for 15 minutes, scale in. The ALB health check removes unhealthy instances from rotation.
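The CPU rule above can be approximated in Terraform with a target-tracking policy, which holds average CPU near 70% instead of wiring up explicit step alarms; the resource names are placeholders:

```hcl
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-70"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70
  }
}
```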
ECS Fargate runs containers without managing EC2. You push a Docker image, specify CPU and memory (512 CPU units = 0.5 vCPU, memory from 512 MB), and Fargate runs it. More expensive than Lambda, but with no cold starts and no timeout limits. A good fit for long-running processes, WebSocket servers, and heavy workers.
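An abridged Fargate task definition illustrating those CPU/memory units; the family name and image URI are placeholders, and a real one also needs an executionRoleArn to pull from ECR:

```json
{
  "family": "app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/app:latest",
      "portMappings": [{ "containerPort": 3000 }],
      "essential": true
    }
  ]
}
```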
RDS for PostgreSQL with Multi-AZ gives automatic failover in 1–2 minutes if the primary fails. Read Replicas scale reads. RDS Proxy handles connection pooling: Lambda functions can't maintain long-lived connections, so the proxy pools them on their behalf.
Kubernetes: When It's Justified
K8s adds significant operational complexity. It is justified when multiple teams deploy independent services, when fine-grained per-service resource tuning is needed, or when zero-downtime canary and blue/green deployments are a hard requirement.
AWS EKS, GKE, or managed k8s from Hetzner (cheaper). Helm charts for standard services. Horizontal Pod Autoscaler driven by CPU and custom metrics (e.g. RPS via Prometheus).
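A CPU-driven HPA as described above; the deployment name, replica bounds, and threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```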
For most startups and mid-sized projects, Kubernetes is overkill. ECS or Fly.io gives 80% of the capability at 20% of the operational complexity.
Monitoring and Alerting
A server without monitoring is an incident waiting to happen. Minimal stack: Prometheus + Grafana (or Grafana Cloud for a managed option), with alerts on disk > 80%, memory > 85%, CPU > 90% for 5 minutes, and error rate > 1%. Uptime checks via Better Uptime or Upptime (self-hosted).
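The disk alert from that list, expressed as a Prometheus rule; the 80% threshold follows the text, while the node_exporter label matchers and `for` duration are assumptions to adjust per host:

```yaml
groups:
  - name: host
    rules:
      - alert: DiskUsageHigh
        # used fraction = 1 - available / total, excluding pseudo-filesystems
        expr: |
          (1 - node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
             / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}) > 0.80
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Disk usage above 80% on {{ $labels.instance }}"
```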
Logs: Loki + Grafana or CloudWatch Logs Insights. Structured JSON logs (winston, pino) are mandatory; otherwise searching through logs becomes painful.
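What a structured log line looks like in practice; a dependency-free sketch for illustration, since pino and winston emit the same one-JSON-object-per-line shape with far better performance and features:

```javascript
// Minimal structured JSON logger: one object per line, machine-parseable,
// so Loki/CloudWatch can filter by field instead of grepping free text.
// Hand-rolled for illustration only; use pino or winston in real code.
function logEvent(level, msg, fields = {}) {
  const entry = { level, msg, time: new Date().toISOString(), ...fields };
  console.log(JSON.stringify(entry)); // one JSON object per line
  return entry;
}

logEvent("info", "request completed", {
  method: "GET",
  path: "/api/users",
  status: 200,
  durationMs: 42,
});
```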
Work Process
Audit of the current infrastructure → choice of target architecture, justified by load and budget → CI/CD pipeline setup (GitHub Actions, GitLab CI) → IaC via Terraform or Pulumi → monitoring and alerting setup → runbook documentation.
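A sketch of the CI/CD step for the VPS + Docker path; the registry, secret names, and the final deploy step are placeholders to adapt:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and push image
        run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
      - name: Deploy
        # placeholder: ssh into the server, docker pull, restart the service
        run: echo "deploy step goes here"
```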
Timeline
Basic deployment on a VPS with Docker + Nginx + CI/CD: 1–2 weeks. AWS infrastructure with Auto Scaling, RDS, and a CDN: 3–6 weeks. Migration to EKS from scratch: 6–12 weeks. Vercel/Netlify setup for JAMstack: 3–5 days.