Serverless
A cloud computing model in which the provider runs code on demand in response to events, abstracting server management away from the developer.
Also known as: serverless computing, Functions as a Service, FaaS
Serverless is a cloud computing model in which a provider executes code on demand in response to events, automatically managing the underlying servers, scaling, and infrastructure. The “serverless” name is somewhat misleading: servers still exist, but the developer does not provision or manage them.
The most common form of serverless is Functions as a Service (FaaS): small units of code that run in response to HTTP requests, scheduled events, file uploads, database changes, or other triggers.
How serverless works
In a typical serverless model:
- The developer writes a function (a small piece of code with a defined entry point)
- The function is deployed to a serverless platform (AWS Lambda, Cloudflare Workers, Vercel, Netlify, etc.)
- When a triggering event occurs (HTTP request, scheduled time, queue message), the platform allocates resources and runs the function
- After execution, resources are released
- Billing is typically based on invocation count and execution time, not on a continuously running server
Beyond high-level settings such as the runtime version, memory allocation, and timeout, the developer does not configure operating systems, server capacity, or scaling rules.
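The flow above can be sketched as a single function. This is a minimal example modeled on AWS Lambda's Python convention of a named entry point that receives the trigger payload (`event`) and platform metadata (`context`); the payload fields are illustrative, not any platform's required schema.

```python
import json

def handler(event, context):
    # The platform calls this entry point once per triggering event
    # (an HTTP request here), then releases or reuses the resources.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer deploys only this function; allocating compute for each invocation and scaling to concurrent events is the platform's job.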
Common serverless platforms
| Platform | Trigger types | Notable features |
|---|---|---|
| AWS Lambda | HTTP, S3, DynamoDB, SQS, schedule, more | Largest ecosystem, deepest AWS integration |
| Google Cloud Functions | HTTP, Pub/Sub, schedule | Tight Google Cloud integration |
| Azure Functions | HTTP, queues, schedule | Microsoft ecosystem integration |
| Cloudflare Workers | HTTP at the edge | Global distribution, V8 isolates |
| Vercel Functions | HTTP | Tight Next.js integration |
| Netlify Functions | HTTP | Integrates with Netlify hosting |
| Supabase Edge Functions | HTTP | Integrates with Supabase database |
| Deno Deploy | HTTP at the edge | Deno runtime, web standards |
Common use cases
- API endpoints. Backend logic for static sites or single-page applications
- Form submissions. Receiving form data and forwarding to email, databases, or third-party services
- Webhooks. Receiving and processing events from third-party services (Stripe, GitHub, Slack)
- Image processing. Generating thumbnails, format conversion
- Scheduled tasks. Running cleanup jobs, sending emails, syncing data
- Authentication callbacks. Handling OAuth flows
- CMS triggers. Rebuilding a static site when content changes
- Lightweight ETL. Moving and transforming data between systems
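The webhook use case has a recurring first step worth showing: verifying that the event really came from the third-party service. This is a hedged sketch of HMAC-SHA256 signature verification, the general scheme used by services such as GitHub and Stripe; the exact header names and signature formats vary by provider and are not reproduced here.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: bytes) -> bool:
    # Recompute the HMAC-SHA256 of the raw request body with the shared
    # secret, and compare in constant time to resist timing attacks.
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A serverless function receiving a webhook would typically run this check first and reject the event with a 401 response if it fails.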
Strengths
- No server management. No patching, scaling configuration, or capacity planning
- Automatic scaling. Handles variable load without manual intervention
- Pay per use. Costs scale with actual traffic; idle services cost nothing
- Fast deployment. A new function can be live within seconds
- Strong fit for event-driven workloads. Works naturally with the rest of a cloud ecosystem
Limitations
- Cold starts. The first invocation after idle time can be slow, especially for heavier runtimes such as the JVM or .NET
- Execution time limits. Most platforms cap function duration (15 minutes for Lambda, less for edge functions)
- Statelessness. Functions cannot reliably maintain in-memory state between invocations; durable state must be stored externally
- Vendor lock-in. Functions often use platform-specific APIs and triggers
- Debugging complexity. Distributed, event-driven systems can be harder to trace
- Cost at very high traffic. Per-invocation pricing can exceed the cost of dedicated infrastructure under sustained heavy load
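The statelessness limitation is easy to trip over. The sketch below shows the pitfall: module-level state survives only while the platform reuses the same warm instance, and resets on every cold start, so counters like this are unreliable in production (this is an illustrative anti-pattern, not a recommended design).

```python
# Resets whenever the platform cold-starts a fresh instance of the
# function; concurrent invocations may also land on different instances.
invocation_count = 0

def handler(event, context):
    global invocation_count
    invocation_count += 1  # only counts invocations on THIS instance
    return {"count_on_this_instance": invocation_count}
```

A reliable counter would instead increment a value in an external store such as a database or key-value service.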
Serverless vs traditional servers
| Aspect | Serverless | Traditional servers |
|---|---|---|
| Server management | Provider-handled | Developer-managed |
| Scaling | Automatic | Manual or auto-scaling configuration |
| Billing | Per invocation | Per server, regardless of use |
| Cold start | Possible | None (always running) |
| Long-running processes | Limited | Native |
| Stateful operations | Requires external state | Native (in-memory) |
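The billing difference in the table implies a cost crossover point. The arithmetic below makes it concrete with hypothetical prices (not any provider's actual rates): per-invocation billing wins at low volume, while a flat-rate server wins once monthly traffic is high enough.

```python
def serverless_cost(invocations: int,
                    per_million_requests: float = 0.20,
                    compute_per_invocation: float = 2.1e-6) -> float:
    # Monthly cost = request fee + compute fee, both scaling with traffic.
    return (invocations / 1_000_000 * per_million_requests
            + invocations * compute_per_invocation)

SERVER_COST = 50.0  # hypothetical flat monthly cost of a small dedicated server
```

With these illustrative numbers, 100,000 invocations per month cost well under a dollar on serverless, while 50 million invocations cost more than the flat-rate server.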
Serverless vs containers
Containers (Docker, Kubernetes) and serverless solve overlapping problems but at different levels of abstraction:
- Containers package an application with its environment; the developer still manages how and where it runs (often via Kubernetes or a container service)
- Serverless abstracts away the entire runtime; the developer provides only the function
Many modern systems combine both: long-running services in containers, event-driven and bursty work in serverless functions.
Serverless and static sites
Serverless functions complement static sites by handling dynamic features without requiring a full backend:
- Static marketing site + serverless form handler
- Static documentation + serverless search
- Static ecommerce + serverless cart and checkout
This pattern is central to the JAMstack approach.
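The static-site-plus-function pattern can be sketched end to end: the static page POSTs form data to a function, which validates it and forwards it on. This is a hedged illustration in the Lambda-style handler shape; the field names are made up, and a real handler would forward the data to an email service or database rather than just echoing it.

```python
import json

def form_handler(event, context):
    # Parse the form submission from the HTTP request body.
    data = json.loads(event.get("body", "{}"))
    email = data.get("email", "")
    if "@" not in email:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid email"})}
    # In a real deployment: send a notification email, write to a
    # database, or call a third-party service here.
    return {"statusCode": 200,
            "body": json.dumps({"received": email})}
```

The static site itself stays on a CDN; only this one dynamic feature needs compute, and only when someone actually submits the form.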
Common misconceptions
- “Serverless means there are no servers.” Servers still exist; they are just managed by the provider.
- “Serverless is always cheaper.” For low and bursty traffic, often yes. For sustained high traffic, dedicated infrastructure can be cheaper.
- “Serverless functions can do anything.” They are constrained by execution time, memory, and runtime restrictions; long-running or stateful work needs a different approach.