- HTTP File Server
+ Reverse proxy
- Caddy's file server is best-in-class.
+ Caddy has the most flexible general-purpose reverse proxy in the world, featuring advanced request and response handling, dynamic routing, health checking, load balancing, circuit breaking, and more.
+
+
+ What makes Caddy's proxy unique is its design. Only the client-facing side of the proxy needs to be HTTP; the transport underlying the roundtrip with the backend can be fulfilled with any protocol!
+
+
+ Moreover, our proxy can be programmed with highly dynamic upstreams. That is, the available upstreams can change during in-flight requests! If no backends are available, Caddy can hold onto the request until one is.
+
+
+
High-level proxy features
+
+
+
+
Transports
+
+ Transports are how Caddy gets the response from the backend. Caddy's proxy can be a front for protocols other than HTTP by using alternate transport modules. This allows Caddy to generate HTTP responses from backends that don't even speak HTTP!
+
+
+ - HTTP
+ - FastCGI
+ - NTLM
+
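+ As a rough sketch, here is how an alternate transport might be selected in the Caddyfile, assuming a PHP-FPM backend on a Unix socket (the site name, socket path, and web root are placeholders); in practice, the php_fastcgi directive wraps this pattern with sensible defaults:
+
+ ```
+ example.com {
+     root * /var/www/html
+     # Proxy to PHP-FPM using the FastCGI transport instead of HTTP.
+     reverse_proxy unix//run/php/php-fpm.sock {
+         transport fastcgi {
+             split .php
+         }
+     }
+ }
+ ```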
+
+
+
Load balancing
+
+ Selecting upstreams is a crucial function of any modern reverse proxy. Caddy has a variety of built-in load balancing policies to choose from to suit any production services. Some policies are extremely fast and lightweight; others provide upstream affinity based on properties of the client or request; others strive for even distribution by counting connections or using randomness and weights.
+
+
+ - Random
+ - Random Choose-N
+ - Least connections
+ - Round robin
+ - Weighted round robin
+ - First available
+ - Remote IP hash
+ - Client IP hash
+ - URI hash
+ - Query hash
+ - Header hash
+ - Cookie hash
+
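+ A minimal Caddyfile sketch of choosing a policy; the backend addresses and retry timings are placeholders for illustration:
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 app3:8080 {
+         # Send each request to the upstream with the fewest active requests.
+         lb_policy least_conn
+         # If no upstream is available, keep retrying for up to 5s.
+         lb_try_duration 5s
+         lb_try_interval 250ms
+     }
+ }
+ ```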
+
+
+
Circuit breaking
+
+ A circuit breaker module can temporarily mark a backend as down before it actually goes down, easing pressure on a struggling backend so it can recover instead of failing completely.
+
+
+ Latency-based
+
+
+
+
Health checking
+
+ Health checks detect when upstreams are unavailable. Passive health checks infer status from actual requests. Active health checks work in the background, out-of-band of client requests.
+
+
+
+
+
Observability
+
+ The admin API exposes an endpoint, /reverse_proxy/upstreams, that reports the current traffic count and health status of the proxy's upstreams.
+
+
+
+
Upstream sources
+
+ Caddy can get the list of upstreams in various ways. The most common is to write them into the configuration (static). Other ways are dynamic: a list of upstreams is returned for each request (with configurable caching to improve performance).
+
+
+ - Static
+ - Dynamic: A records
+ - Dynamic: SRV records
+ - Dynamic: Multiple sources combined
+
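+ A sketch of a dynamic upstream source, assuming the dynamic a module; the DNS name, port, and refresh interval are made up for illustration:
+
+ ```
+ example.com {
+     reverse_proxy {
+         # Resolve the upstream list from DNS A/AAAA lookups,
+         # caching results for the refresh interval.
+         dynamic a {
+             name backends.internal.example
+             port 8080
+             refresh 1m
+         }
+     }
+ }
+ ```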
+
+
+
Streaming
+
+ Responses can be streamed directly to the client or, for better wire performance, buffered slightly and flushed periodically.
+
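+ For example, a minimal Caddyfile sketch that disables buffering entirely, which suits server-sent events or long-polling (the backend address is a placeholder):
+
+ ```
+ example.com {
+     reverse_proxy localhost:8080 {
+         # -1 flushes the response to the client immediately, with no buffering.
+         flush_interval -1
+     }
+ }
+ ```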
+
+
+
Trusted proxies
+
+ HTTP headers such as X-Forwarded-For can't be trusted from just any client, so you can specify the IP ranges of downstream proxies whose headers should be honored.
+
+
+ - Static
+ - Dynamic: A records
+ - Dynamic: SRV records
+ - Dynamic: Multiple sources combined
+
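+ A minimal sketch using the static source in the Caddyfile global options; the CIDR ranges are placeholders for wherever your downstream proxies actually run:
+
+ ```
+ {
+     servers {
+         # Only honor forwarded headers sent from these proxy ranges.
+         trusted_proxies static 10.0.0.0/8 192.168.0.0/16
+     }
+ }
+ ```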
+
+
+
Header manipulation
+
+ Headers can be modified in the request going up to the backend and the response coming back down from the backend.
+
+
+ - Add
+ - Set (overwrite)
+ - Delete
+ - Substring replace
+
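+ A short Caddyfile sketch covering all four operations; the header names and values are illustrative only:
+
+ ```
+ example.com {
+     reverse_proxy app:8080 {
+         header_up +X-Request-Tag canary        # add
+         header_up Host {upstream_hostport}     # set (overwrite)
+         header_up -Cookie                      # delete
+         header_down Location http:// https://  # substring replace
+     }
+ }
+ ```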
+
+
+
+
Active health checks
+
+ Active health checks assume a backend is down by default until a health check confirms otherwise.
+
+
+
+
+
HTTP request parameters
+
+ Active health checks are performed against an HTTP endpoint on the upstream. You can customize the parameters for these HTTP requests to work for you.
+
+
+ - Path & query string
+ - Port
+ - Headers
+
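+ A Caddyfile sketch of customizing the health-check request; the path, port, and header are assumptions for illustration:
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 {
+         # Probe a dedicated endpoint on a separate port,
+         # sending an extra header with each check.
+         health_uri /healthz?full=true
+         health_port 8081
+         health_headers {
+             X-Health-Check caddy
+         }
+     }
+ }
+ ```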
+
+
+
Timing
+
+ You can customize the interval at which active health checks are performed.
+
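+ For instance (the interval is arbitrary):
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 {
+         health_uri /healthz
+         # Run the active check every 30 seconds.
+         health_interval 30s
+     }
+ }
+ ```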
+
+
+
Success criteria
+
+ Each active health check can be customized with a set of criteria to determine healthy or unhealthy status.
+
+
+ - Response timeout
+ - HTTP status code
+ - Regular expression match on body
+
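+ A sketch combining the three criteria; all values are placeholders, and the health_body regular-expression check may require a recent Caddy release:
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 {
+         health_uri /healthz
+         # Healthy only if the backend answers within 2s with a 200
+         # and the body matches the given regular expression.
+         health_timeout 2s
+         health_status 200
+         health_body OK
+     }
+ }
+ ```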
+
+
+
Failure safety
+
+ Backends that are experiencing bugs or difficulties may respond with unexpectedly large response bodies. Caddy lets you limit how much of the body it will read, to preserve proxy resources.
+
+
+ Limit response size
+
+
+
+
+
Passive health checks
+
+ Passive health checks assume a backend is up by default until failure criteria are met in the course of proxying requests.
+
+
+
+
+
Failure criteria
+
+ All passive health checks count connection failures. In addition, you can set further criteria that mark a backend as unhealthy while a request is being proxied.
+
+
+ - Concurrent request limit exceeded
+ - HTTP status code
+ - Latency
+
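+ A Caddyfile sketch of passive failure criteria; the thresholds are arbitrary examples, not recommendations:
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 {
+         # Treat 5xx responses or responses slower than 3s as failures,
+         # and mark a backend down while it has 100+ requests in flight.
+         unhealthy_status 5xx
+         unhealthy_latency 3s
+         unhealthy_request_count 100
+     }
+ }
+ ```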
+
+
+
Failure memory
+
+ You can customize how long failures are remembered and how many remembered failures are needed to consider a backend down.
+
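+ For example (both values are placeholders):
+
+ ```
+ example.com {
+     reverse_proxy app1:8080 app2:8080 {
+         # Remember failures for 30s; 3 failures within that window
+         # marks the backend as down.
+         fail_duration 30s
+         max_fails 3
+     }
+ }
+ ```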
+
+
+
+
+