API
Caddy is configured through an administration endpoint which can be accessed via HTTP using a REST API. You can configure this endpoint in your Caddy config.
Default address: localhost:2019
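If you want to change where this endpoint listens, here is a minimal sketch of what the admin block in Caddy's native JSON config might look like (the listen address shown is only an illustration):
{
    "admin": {
        "listen": "localhost:2019"
    }
}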
The latest configuration will be saved to disk after any changes (unless disabled). You can resume the last working config after a restart with caddy run --resume, which guarantees config durability in the event of a power cycle or similar.
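For example, a sketch of bringing the last active config back up after a reboot, assuming autosave was left enabled:
caddy run --resume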
To get started with the API, try our API tutorial or, if you only have a minute, our API quick-start guide.
- POST /load: Sets or replaces the active configuration
- POST /stop: Stops the active configuration and exits the process
- GET /config/[path]: Exports the config at the named path
- POST /config/[path]: Sets or replaces object; appends to array
- PUT /config/[path]: Creates new object; inserts into array
- PATCH /config/[path]: Replaces an existing object or array element
- DELETE /config/[path]: Deletes the value at the named path
- Using @id in JSON: Easily traverse into the config structure
- GET /reverse_proxy/upstreams: Returns the current status of the configured proxy upstreams
POST /load
Sets Caddy's configuration, overriding any previous configuration. It blocks until the reload completes or fails. Configuration changes are lightweight, efficient, and incur zero downtime. If the new config fails for any reason, the old config is rolled back into place without downtime.
This endpoint supports different config formats using config adapters. The request's Content-Type header indicates the config format used in the request body. Usually, this should be application/json, which represents Caddy's native config format. For another config format, specify the appropriate Content-Type so that the value after the forward slash / is the name of the config adapter to use. For example, when submitting a Caddyfile, use a value like text/caddyfile; or for JSON 5, use a value such as application/json5; etc.
If the new config is the same as the current one, no reload will occur. To force a reload, set Cache-Control: must-revalidate in the request headers.
Examples
Set a new active configuration:
curl -X POST "http://localhost:2019/load" \
-H "Content-Type: application/json" \
-d @caddy.json
Note: curl's -d flag removes newlines, so if your config format is sensitive to line breaks (e.g. the Caddyfile), use --data-binary instead:
curl -X POST "http://localhost:2019/load" \
-H "Content-Type: text/caddyfile" \
--data-binary @Caddyfile
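If you need to force a reload of an unchanged config (as described above), a sketch of the same request with the Cache-Control header added:
curl -X POST "http://localhost:2019/load" \
    -H "Content-Type: application/json" \
    -H "Cache-Control: must-revalidate" \
    -d @caddy.json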
POST /stop
Gracefully shuts down the server and exits the process. To only stop the running configuration without exiting the process, use DELETE /config/.
Example
Stop the process:
curl -X POST "http://localhost:2019/stop"
GET /config/[path]
Exports Caddy's current configuration at the named path. Returns a JSON body.
Examples
Export entire config and pretty-print it:
curl "http://localhost:2019/config/" | jq
{
    "apps": {
        "http": {
            "servers": {
                "myserver": {
                    "listen": [
                        ":443"
                    ],
                    "routes": [
                        {
                            "match": [
                                {
                                    "host": [
                                        "example.com"
                                    ]
                                }
                            ],
                            "handle": [
                                {
                                    "handler": "file_server"
                                }
                            ]
                        }
                    ]
                }
            }
        }
    }
}
Export just the listener addresses:
curl "http://localhost:2019/config/apps/http/servers/myserver/listen"
[":443"]
POST /config/[path]
Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is an array, POST appends; if an object, it creates or replaces.
As a special case, many items can be added to an array if:
- the path ends in /...
- the element of the path before /... refers to an array
- the payload is an array
In this case, the elements in the payload's array will be expanded, and each one will be appended to the destination array. In Go terms, this would have the same effect as:
baseSlice = append(baseSlice, newElems...)
Examples
Add a listener address:
curl -X POST \
-H "Content-Type: application/json" \
-d '":8080"' \
"http://localhost:2019/config/apps/http/servers/myserver/listen"
Add multiple listener addresses:
curl -X POST \
-H "Content-Type: application/json" \
-d '[":8080", ":5133"]' \
"http://localhost:2019/config/apps/http/servers/myserver/listen/..."
PUT /config/[path]
Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is a position (index) in an array, PUT inserts; if an object, it strictly creates a new value.
Example
Add a listener address in the first slot:
curl -X PUT \
-H "Content-Type: application/json" \
-d '":8080"' \
"http://localhost:2019/config/apps/http/servers/myserver/listen/0"
PATCH /config/[path]
Changes Caddy's configuration at the named path to the JSON body of the request. PATCH strictly replaces an existing value or array element.
Example
Replace the listener addresses:
curl -X PATCH \
-H "Content-Type: application/json" \
-d '[":8081", ":8082"]' \
"http://localhost:2019/config/apps/http/servers/myserver/listen"
DELETE /config/[path]
Removes Caddy's configuration at the named path. DELETE deletes the target value.
Examples
To unload the entire current configuration but leave the process running:
curl -X DELETE "http://localhost:2019/config/"
To stop only one of your HTTP servers:
curl -X DELETE "http://localhost:2019/config/apps/http/servers/myserver"
Using @id in JSON
You can embed IDs in your JSON document for easier direct access to those parts of the JSON.
Simply add a field called "@id" to an object and give it a unique name. For example, if you had a reverse proxy handler that you wanted to access frequently:
{
"@id": "my_proxy",
"handler": "reverse_proxy"
}
To use it, simply make a request to the /id/ API endpoint in the same way you would to the corresponding /config/ endpoint, but without the whole path. The ID takes the request directly into that scope of the config for you.
For example, to access the upstreams of the reverse proxy without an ID, the path would be something like /config/apps/http/servers/myserver/routes/1/handle/0/upstreams, but with an ID, the path becomes /id/my_proxy/upstreams, which is much easier to remember and write by hand.
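As a sketch, assuming the my_proxy handler above is part of your running config, you could read or modify it through its ID like this (the upstream address is just an example):
# Export only the tagged handler's config
curl "http://localhost:2019/id/my_proxy" | jq

# Replace its upstreams via the same ID (PATCH strictly replaces)
curl -X PATCH \
    -H "Content-Type: application/json" \
    -d '[{"dial": "10.0.1.1:80"}]' \
    "http://localhost:2019/id/my_proxy/upstreams"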
GET /reverse_proxy/upstreams
Returns the current status of the configured reverse proxy upstreams in a JSON document.
curl "http://localhost:2019/reverse_proxy/upstreams" | jq
[
{"address": "10.0.1.1:80", "healthy": true, "num_requests": 4, "fails": 2},
{"address": "10.0.1.2:80", "healthy": true, "num_requests": 5, "fails": 4},
{"address": "10.0.1.3:80", "healthy": true, "num_requests": 3, "fails": 3}
]
Each entry in the JSON array is a configured upstream stored in the global upstream pool.
- address is the dial address of the upstream. For SRV upstreams, this is the lookup_srv DNS name.
- healthy indicates whether Caddy knows the upstream to be healthy, based on the result of active health checks.
- num_requests is the number of active requests currently being handled by the upstream.
- fails is the number of failed requests remembered from passive health checks.
The healthy status only reflects the result of active health checks that have been performed against the backend. If you've enabled passive health checks for your proxies, then you need to also take the fails and num_requests amounts into consideration to determine whether an upstream is available. For accuracy, you should check that the fails amount is less than your configured maximum number of failures for your proxy (i.e. max_fails), and that num_requests is less than or equal to your configured maximum number of requests per upstream (i.e. unhealthy_request_count for the whole proxy, or max_requests for individual upstreams).
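As a rough sketch, you could combine those checks with jq; the max_fails threshold of 5 and max_requests threshold of 100 below are assumed values purely for illustration:
# List only upstreams that pass both the active and passive health criteria
curl -s "http://localhost:2019/reverse_proxy/upstreams" | \
    jq '[.[] | select(.healthy and .fails < 5 and .num_requests <= 100)]'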