2.5.2 docs (#247)

* docs: Changes for v2.5.2 release

* docs: Remove -X POST

Curl automatically uses POST for -d and --data-binary.

I intentionally left it in to be explicit to learners, but
maybe best practices are better.

* api: /adapt endpoint and Etag usage

* api: Minor tweaks

* Alright fine

* Clarify Etag usage

* CEL embedded matchers

* Oops

* Lots more adjustments

Co-authored-by: Francis Lavoie <lavofr@gmail.com>
Matt Holt 2022-07-12 12:52:18 -06:00 committed by GitHub
parent 522d1961d1
commit 7819a84e59
30 changed files with 529 additions and 164 deletions


@@ -42,6 +42,12 @@ To get started with the API, try our [API tutorial](/docs/api-tutorial) or, if y
- **[Using `@id` in JSON](#using-id-in-json)**
Easily traverse into the config structure
- **[Concurrent config changes](#concurrent-config-changes)**
Avoid collisions and data loss when making unsynchronized changes to config
- **[POST /adapt](#post-adapt)**
Adapts a configuration to JSON without running it
- **[GET /pki/ca/&lt;id&gt;](#get-pkicaid)**
Returns information about a particular [PKI app](/docs/json/apps/pki/) CA
@@ -64,13 +70,13 @@ If the new config is the same as the current one, no reload will occur. To force
Set a new active configuration:
<pre><code class="cmd bash">curl -X POST "http://localhost:2019/load" \
<pre><code class="cmd bash">curl "http://localhost:2019/load" \
-H "Content-Type: application/json" \
-d @caddy.json</code></pre>
Note: curl's `-d` flag removes newlines, so if your config format is sensitive to line breaks (e.g. the Caddyfile), use `--data-binary` instead:
<pre><code class="cmd bash">curl -X POST "http://localhost:2019/load" \
<pre><code class="cmd bash">curl "http://localhost:2019/load" \
-H "Content-Type: text/caddyfile" \
--data-binary @Caddyfile</code></pre>
@@ -152,14 +158,14 @@ baseSlice = append(baseSlice, newElems...)
Add a listener address:
<pre><code class="cmd bash">curl -X POST \
<pre><code class="cmd bash">curl \
-H "Content-Type: application/json" \
-d '":8080"' \
"http://localhost:2019/config/apps/http/servers/myserver/listen"</code></pre>
Add multiple listener addresses:
<pre><code class="cmd bash">curl -X POST \
<pre><code class="cmd bash">curl \
-H "Content-Type: application/json" \
-d '[":8080", ":5133"]' \
"http://localhost:2019/config/apps/http/servers/myserver/listen/..."</code></pre>
@@ -237,6 +243,47 @@ but with an ID, the path becomes
which is much easier to remember and write by hand.
## Concurrent config changes
<aside class="tip">
This section applies to all `/config/` endpoints. It is experimental and subject to change.
</aside>
Caddy's config API provides [ACID guarantees](https://en.wikipedia.org/wiki/ACID) for individual requests, but changes that involve more than a single request are subject to collisions or data loss if not properly synchronized.
For example, two clients may `GET /config/foo` at the same time, make an edit within that scope (config path), then call `POST|PUT|PATCH|DELETE /config/foo/...` at the same time to apply their changes, resulting in a collision: either one will overwrite the other, or the second might leave the config in an unintended state since it was applied to a different version of the config than it was prepared against. This is because the changes are not aware of each other.
Caddy's API does not support transactions spanning multiple requests, and HTTP is a stateless protocol. However, you can use the `Etag` trailer and `If-Match` header to detect and prevent collisions for any and all changes as a kind of optimistic concurrency control. This is useful if there is any chance that you are using Caddy's `/config/...` endpoints concurrently without synchronization. All responses to `GET /config/...` requests have an HTTP trailer called `Etag` that contains the path and a hash of the contents in that scope (e.g. `Etag: "/config/apps/http/servers 65760b8e"`). Simply set the `If-Match` header on a mutative request to the value of the `Etag` trailer from a previous `GET` request.
The basic algorithm for this is as follows:
1. Perform a `GET` request to any scope `S` within the config. Hold onto the `Etag` trailer of the response.
2. Make your desired change on the returned config.
3. Perform a `POST|PUT|PATCH|DELETE` request within scope `S`, setting the `If-Match` header to the recent `Etag` value.
4. If the response is HTTP 412 (Precondition Failed), repeat from step 1, or give up after too many attempts.
This algorithm safely allows multiple, overlapping changes to Caddy's configuration without explicit synchronization. It is designed so that simultaneous changes to different parts of the config don't require a retry: only changes that overlap the same scope of the config can possibly cause a collision and thus require a retry.
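As a concrete illustration, here is a minimal sketch of that loop in Go, using only the standard library (whose HTTP client exposes response trailers via `resp.Trailer` once the body has been fully read). The scope path, the replacement value, and the retry limit are illustrative choices, not anything prescribed by the API:

```go
// A sketch of the optimistic-concurrency loop described above.
// The scope, new value, and retry limit are arbitrary examples.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	const scope = "http://localhost:2019/config/apps/http/servers/myserver/listen"

	for attempt := 1; attempt <= 5; attempt++ {
		// Step 1: GET the scope. The Etag arrives as an HTTP trailer,
		// which is only populated after the body has been read.
		resp, err := http.Get(scope)
		if err != nil {
			panic(err)
		}
		current, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		etag := resp.Trailer.Get("Etag")

		// Step 2: make the desired change to the returned config here.
		_ = current
		newValue := `[":8080", ":8443"]` // hypothetical edit

		// Step 3: apply the change within the same scope, passing the
		// Etag from step 1 in the If-Match header.
		req, _ := http.NewRequest(http.MethodPatch, scope, bytes.NewBufferString(newValue))
		req.Header.Set("Content-Type", "application/json")
		req.Header.Set("If-Match", etag)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		res.Body.Close()

		// Step 4: 412 means another client changed this scope first; retry.
		if res.StatusCode == http.StatusPreconditionFailed {
			continue
		}
		fmt.Println("applied on attempt", attempt)
		return
	}
	fmt.Println("gave up after too many attempts")
}
```

Any HTTP client that can read trailers can follow the same pattern; the important parts are reusing the exact `Etag` value and retrying only on a 412 response.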
## POST /adapt
Adapts a configuration to Caddy JSON without loading or running it. If successful, the resulting JSON document is returned in the response body.
The `Content-Type` header is used to specify the configuration format, just as it is for [/load](#post-load). For example, to adapt a Caddyfile, set `Content-Type: text/caddyfile`.
This endpoint will adapt any configuration format as long as the associated [config adapter](/docs/config-adapters) is plugged in to your Caddy build.
### Examples
Adapt a Caddyfile to JSON:
<pre><code class="cmd bash">curl "http://localhost:2019/adapt" \
-H "Content-Type: text/caddyfile" \
--data-binary @Caddyfile</code></pre>
## GET /pki/ca/&lt;id&gt;
@@ -278,16 +325,15 @@ Returns the current status of the configured reverse proxy upstreams (backends)
<pre><code class="cmd"><span class="bash">curl "http://localhost:2019/reverse_proxy/upstreams" | jq</span>
[
{"address": "10.0.1.1:80", "healthy": true, "num_requests": 4, "fails": 2},
{"address": "10.0.1.2:80", "healthy": true, "num_requests": 5, "fails": 4},
{"address": "10.0.1.3:80", "healthy": true, "num_requests": 3, "fails": 3}
{"address": "10.0.1.1:80", "num_requests": 4, "fails": 2},
{"address": "10.0.1.2:80", "num_requests": 5, "fails": 4},
{"address": "10.0.1.3:80", "num_requests": 3, "fails": 3}
]</code></pre>
Each entry in the JSON array is a configured [upstream](/docs/json/apps/http/servers/routes/handle/reverse_proxy/upstreams/) stored in the global upstream pool.
- **address** is the dial address of the upstream. For SRV upstreams, this is the `lookup_srv` DNS name.
- **healthy** reflects whether Caddy believes the upstream to be healthy or not. Note that "health" is a distinct concept from "availability". An unhealthy backend will always be unavailable, but a healthy backend might not be available. Health is a global characteristic regardless of specific reverse proxy handler configuration, whereas availability is determined by the configuration of the specific reverse proxy handler. For example, a healthy backend would be unavailable if the handler is configured to only allow N requests at a time and it currently has N active requests. The "healthy" property does not reflect availability.
- **num_requests** is the number of active requests currently being handled by the upstream.
- **fails** is the current number of failed requests remembered, as configured by passive health checks.
-If your goal is to determine a backend's _availability_, you will need to cross-check relevant properties of the upstream against the handler configuration you are utilizing. For example, if you've enabled [passive health checks](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/) for your proxies, then you need to also take into consideration the `fails` and `num_requests` values to determine if an upstream is considered available: check that the `fails` amount is less than your configured maximum amount of failures for your proxy (i.e. [`max_fails`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/max_fails/)), and that `num_requests` is less than or equal to your configured amount of maximum requests per upstream (i.e. [`unhealthy_request_count`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/unhealthy_request_count/) for the whole proxy, or [`max_requests`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/upstreams/max_requests/) for individual upstreams).
+If your goal is to determine a backend's availability, you will need to cross-check relevant properties of the upstream against the handler configuration you are utilizing. For example, if you've enabled [passive health checks](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/) for your proxies, then you need to also take into consideration the `fails` and `num_requests` values to determine if an upstream is considered available: check that the `fails` amount is less than your configured maximum amount of failures for your proxy (i.e. [`max_fails`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/max_fails/)), and that `num_requests` is less than or equal to your configured amount of maximum requests per upstream (i.e. [`unhealthy_request_count`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/health_checks/passive/unhealthy_request_count/) for the whole proxy, or [`max_requests`](/docs/json/apps/http/servers/routes/handle/reverse_proxy/upstreams/max_requests/) for individual upstreams).
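For example, a small script (a sketch, not something provided by the API) can fold those handler settings into the `/reverse_proxy/upstreams` output to compute availability; the `maxFails` and `maxRequests` constants below are hypothetical stand-ins for whatever `max_fails` and `max_requests` values your own configuration uses:

```go
// A sketch that cross-checks upstream status against handler settings.
// Assumption: maxFails and maxRequests mirror the values in your own config.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type upstream struct {
	Address     string `json:"address"`
	NumRequests int    `json:"num_requests"`
	Fails       int    `json:"fails"`
}

func main() {
	const (
		maxFails    = 1   // your passive health check max_fails
		maxRequests = 100 // your per-upstream max_requests
	)

	resp, err := http.Get("http://localhost:2019/reverse_proxy/upstreams")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var upstreams []upstream
	if err := json.NewDecoder(resp.Body).Decode(&upstreams); err != nil {
		panic(err)
	}

	for _, u := range upstreams {
		// Available only if it has not exceeded the failure or request limits.
		available := u.Fails < maxFails && u.NumRequests <= maxRequests
		fmt.Printf("%s available=%v (fails=%d, num_requests=%d)\n",
			u.Address, available, u.Fails, u.NumRequests)
	}
}
```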