Caddy is configured through an administration endpoint which can be accessed via HTTP using a [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) API. You can [configure this endpoint](/docs/json/admin/) in your Caddy config.
If you are running untrusted code on your server (yikes 😬), make sure you protect your admin endpoint by isolating processes, patching vulnerable programs, and configuring the endpoint to bind to a permissioned unix socket instead.
The latest configuration will be saved to disk after any changes (unless [disabled](/docs/json/admin/config/)). You can resume the last working config after a restart with [`caddy run --resume`](/docs/command-line#caddy-run), which guarantees config durability in the event of a power cycle or similar.
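For example, after an unexpected power cycle, you could bring Caddy back up with the last active config:

<pre><code class="cmd bash">caddy run --resume</code></pre>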
To get started with the API, try our [API tutorial](/docs/api-tutorial) or, if you only have a minute, our [API quick-start guide](/docs/quick-starts/api).
## POST /load

Sets Caddy's configuration, overriding any previous configuration. It blocks until the reload completes or fails. Configuration changes are lightweight, efficient, and incur zero downtime. If the new config fails for any reason, the old config is rolled back into place without downtime.
This endpoint supports different config formats using config adapters. The request's Content-Type header indicates the config format used in the request body. Usually, this should be `application/json` which represents Caddy's native config format. For another config format, specify the appropriate Content-Type so that the value after the forward slash / is the name of the config adapter to use. For example, when submitting a Caddyfile, use a value like `text/caddyfile`; or for JSON 5, use a value such as `application/json5`; etc.
If the new config is the same as the current one, no reload will occur. To force a reload, set `Cache-Control: must-revalidate` in the request headers.
<pre><code class="cmd bash">curl -X POST "http://localhost:2019/load" \
-H "Content-Type: application/json" \
-d @caddy.json</code></pre>
Note: curl's `-d` flag removes newlines, so if your config format is sensitive to line breaks (e.g. the Caddyfile), use `--data-binary` instead:
<pre><code class="cmd bash">curl -X POST "http://localhost:2019/load" \
-H "Content-Type: text/caddyfile" \
--data-binary @Caddyfile</code></pre>
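To force a reload even when the config has not changed, include the `Cache-Control: must-revalidate` header described above. For example, reusing the JSON config from the first example:

<pre><code class="cmd bash">curl -X POST "http://localhost:2019/load" \
-H "Content-Type: application/json" \
-H "Cache-Control: must-revalidate" \
-d @caddy.json</code></pre>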
## POST /stop
Gracefully shuts down the server and exits the process. To only stop the running configuration without exiting the process, use [DELETE /config/](#delete-configpath).
### Example
Stop the process:
<pre><code class="cmd bash">curl -X POST "http://localhost:2019/stop"</code></pre>
## GET /config/[path]
Exports Caddy's current configuration at the named path. Returns a JSON body.
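For example, to export the entire current config and pretty-print it (piping to `jq` is optional and assumes it is installed):

<pre><code class="cmd bash">curl "http://localhost:2019/config/" | jq</code></pre>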
## POST /config/[path]

Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is an array, POST appends; if an object, it creates or replaces.
As a special case, many items can be added to an array if:
1. the path ends in `/...`
2. the element of the path before `/...` refers to an array
3. the payload is an array
In this case, the elements in the payload's array will be expanded, and each one will be appended to the destination array. In Go terms, this would have the same effect as:
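<pre><code class="go">// illustrative names: dest is the existing array in the config, src is the request payload
dest = append(dest, src...)</code></pre>

For example, you could append two listener addresses in a single request; the path below is hypothetical and depends on how your servers are named in your config:

<pre><code class="cmd bash">curl -X POST "http://localhost:2019/config/apps/http/servers/myserver/listen/..." \
-H "Content-Type: application/json" \
-d '[":8080", ":8443"]'</code></pre>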
## PUT /config/[path]

Changes Caddy's configuration at the named path to the JSON body of the request. If the destination value is a position (index) in an array, PUT inserts; if an object, it strictly creates a new value.
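For example, to insert a listener address at the first position of a server's `listen` array (the server name `myserver` is hypothetical; adjust the path to your own config):

<pre><code class="cmd bash">curl -X PUT "http://localhost:2019/config/apps/http/servers/myserver/listen/0" \
-H "Content-Type: application/json" \
-d '":8080"'</code></pre>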
## Using `@id` in JSON

You can embed IDs in your JSON config for easier direct access to those parts of the config.

Simply add a field called `"@id"` to an object and give it a unique name. For example, if you had a reverse proxy handler that you wanted to access frequently:
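<pre><code class="json">{
	"@id": "my_proxy",
	"handler": "reverse_proxy"
}</code></pre>

(The ID `my_proxy` is just an illustrative name; any unique string will do.)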
To use it, simply make a request to the `/id/` API endpoint in the same way you would to the corresponding `/config/` endpoint, but without the whole path. The ID takes the request directly into that scope of the config for you.
For example, to access the upstreams of the reverse proxy without an ID, the path would be something like
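<pre><code>/config/apps/http/servers/myserver/routes/1/handle/0/upstreams</code></pre>

(the exact path depends on your config), but with the `my_proxy` ID from above, the path becomes simply

<pre><code>/id/my_proxy/upstreams</code></pre>

which is much easier to remember and write by hand. For example, you could export that part of the config with:

<pre><code class="cmd bash">curl "http://localhost:2019/id/my_proxy/upstreams"</code></pre>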
## GET /reverse_proxy/upstreams

Returns the current status of the configured reverse proxy upstreams (backends) as a JSON document.

Each entry in the JSON array is a configured [upstream](/docs/json/apps/http/servers/routes/handle/reverse_proxy/upstreams/) stored in the global upstream pool.
- **address** is the dial address of the upstream. For SRV upstreams, this is the `lookup_srv` DNS name.
- **healthy** reflects whether Caddy believes the upstream to be healthy or not. Note that "health" is a distinct concept from "availability". A healthy backend might not be available to proxy to, but an unhealthy backend will always be unavailable. Health is a global characteristic regardless of specific reverse proxy handler configuration, whereas availability is determined by the configuration of the specific reverse proxy handler. For example, a healthy backend would be unavailable if the handler is configured to only allow N requests at a time and it currently has N active requests. The "healthy" property does not reflect availability.
- **num_requests** is the number of active requests currently being handled by the upstream.
- **fails** is the current number of failed requests remembered, as configured by passive health checks.
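For example, querying the endpoint might return something like the following (the addresses and counts are illustrative):

<pre><code class="cmd bash">curl "http://localhost:2019/reverse_proxy/upstreams" | jq</code></pre>

<pre><code class="json">[
	{"address": "10.0.1.1:80", "num_requests": 4, "fails": 2},
	{"address": "10.0.1.2:80", "num_requests": 2, "fails": 0}
]</code></pre>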
If your goal is to determine a backend's _availability_, you will need to cross-check relevant properties of the upstream against the handler configuration you are using. For example, if you've enabled [passive health checks](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/) for your proxies, then you also need to take the `fails` and `num_requests` values into consideration to determine whether an upstream is considered available: check that `fails` is less than the maximum number of failures configured for your proxy (i.e. [`max_fails`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/max_fails/)), and that `num_requests` is less than or equal to the configured maximum number of requests per upstream (i.e. [`unhealthy_request_count`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/unhealthy_request_count/) for the whole proxy, or [`max_requests`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/upstreams/max_requests/) for individual upstreams).
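As a rough sketch, suppose you had configured a hypothetical `max_fails` of 5 and `max_requests` of 100; you could then filter for the upstreams that would currently be considered available with `jq`:

<pre><code class="cmd bash">curl -s "http://localhost:2019/reverse_proxy/upstreams" | \
jq '[.[] | select(.fails < 5 and .num_requests <= 100)]'</code></pre>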